
Vision-Based Control for Flight Relative to Dynamic Environments

University of Florida Institutional Repository
Permanent Link: http://ufdc.ufl.edu/UFE0021231/00001

Material Information

Title: Vision-Based Control for Flight Relative to Dynamic Environments
Physical Description: 1 online resource (164 p.)
Language: english
Creator: Causey, Ryan S
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: autonomous, camera, estimation, homography, moving, tracking, vision, visual
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Aerospace Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. A state estimation scheme for moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit whereas the fourth implements a homing control design for the Autonomous Aerial Refueling (AAR) using target estimates as feedback.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Ryan S Causey.
Thesis: Thesis (Ph.D.)--University of Florida, 2007.
Local: Adviser: Lind, Richard C.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0021231:00001




VISION-BASED CONTROL FOR FLIGHT RELATIVE TO DYNAMIC ENVIRONMENTS


By
RYAN SCOTT CAUSEY



















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

































© 2007 Ryan Scott Causey



































To my lovely wife, Liza P. Causey, who has supported me every step of the way. Her love and

understanding through the years have brought my passion for life beyond boundaries.









ACKNOWLEDGMENTS

This work was supported jointly by NASA under NNDO4GRR13H with Steve Jacobson

and Joe Pahle as project managers along with the Air Force Research Laboratory and the Air

Force Office of Scientific Research under F49620-03-1-0381 with Johnny Evers, Neal Glassman,

Sharon Heise, and Robert Sierakowski as project monitors. Additionally, I thank Dr. Rick Lind

for his remarkable guidance and inspiration that will truly last a lifetime. Finally, I thank my

parents Sandra and James Causey for making this journey possible by providing me the guidance

and discipline needed to be successful.












TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

LIST OF TERMS

ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Motivation
  1.2 Problem Statement
  1.3 Potential Missions
  1.4 System Architecture
  1.5 Contributions

2 LITERATURE REVIEW
  2.1 Detection of Moving Objects
  2.2 State Estimation Using Vision Information
    2.2.1 Localization
    2.2.2 Mapping
    2.2.3 Target-Motion Estimation
  2.3 Modeling Object Motion
  2.4 Uncertainty in Vision Algorithms
  2.5 Control Using Visual Feedback in Dynamic Environments

3 IMAGE PROCESSING AND COMPUTER VISION
  3.1 Camera Geometry
  3.2 Camera Model
    3.2.1 Ideal Perspective
    3.2.2 Intrinsic Parameters
    3.2.3 Extrinsic Parameters
    3.2.4 Radial Distortion
  3.3 Feature Point Detection
  3.4 Feature Point Tracking
  3.5 Optic Flow
  3.6 Two-View Image Geometry
    3.6.1 Epipolar Constraint
    3.6.2 Eight-Point Algorithm
    3.6.3 Planar Homography
    3.6.4 Structure from Motion

4 EFFECTS ON STATE ESTIMATION FROM VISION UNCERTAINTY
  4.1 Feature Points
  4.2 Optical Flow
  4.3 Epipolar Geometry
  4.4 Homography
  4.5 Structure From Motion

5 SYSTEM DYNAMICS
  5.1 Dynamic States
    5.1.1 Aircraft
    5.1.2 Camera
  5.2 System Geometry
  5.3 Nonlinear Aircraft Equations
  5.4 Aircraft-Camera System
    5.4.1 Feature Point Position
    5.4.2 Feature Point Velocity
  5.5 System Formulation
  5.6 Simulating

6 DISCERNING MOVING TARGETS FROM STATIONARY TARGETS
  6.1 Camera Motion Compensation
  6.2 Classification

7 HOMOGRAPHY APPROACH TO MOVING TARGETS
  7.1 Introduction
  7.2 State Estimation
    7.2.1 System Description
    7.2.2 Homography Estimation

8 MODELING TARGET MOTION
  8.1 Introduction
  8.2 Dynamic Modeling of an Object
    8.2.1 Motion Models
    8.2.2 Stochastic Prediction

9 CONTROL DESIGN
  9.1 Control Objectives
  9.2 Controller Development
    9.2.1 Altitude Control
    9.2.2 Heading Control
    9.2.3 Depth Control

10 SIMULATIONS
  10.1 Example 1: Feature Point Generation
  10.2 Example 2: Feature Point Uncertainty
    10.2.1 Scenario
    10.2.2 Optic Flow
    10.2.3 The Epipolar Constraint
    10.2.4 Structure From Motion
  10.3 Example 3: Open-loop Ground Vehicle Estimation
    10.3.1 System Model
    10.3.2 Open-loop Results
  10.4 Example 4: Closed-loop Aerial Refueling of a UAV
    10.4.1 System Model
    10.4.2 Control Tuning
    10.4.3 Closed-loop Results
    10.4.4 Uncertainty Analysis

11 CONCLUSION

REFERENCES

BIOGRAPHICAL SKETCH











LIST OF TABLES

Table

3-1    Solutions for homography decomposition
10-1   States of the cameras
10-2   Limits on image coordinates
10-3   States of the feature points
10-4   Aircraft states
10-5   Image coordinates of feature points
10-6   Effects of camera perturbations on optic flow
10-7   Effects of camera perturbations on epipolar geometry
10-8   Effects of camera perturbations on structure from motion
10-9   Maximum variations in position due to parametric uncertainty
10-10  Maximum variations in attitude due to parametric uncertainty











LIST OF FIGURES

Figure

1-1    The UAV fleet
1-2    AeroVironment's MAV: The Black Widow
1-3    The UF MAV fleet
1-4    Refueling approach using the probe-drogue method
1-5    Tracking a pursuit vehicle using a vision equipped UAV
1-6    Closed-loop block diagram with visual state estimation
3-1    Mapping from environment to image plane
3-2    Image plane field of view (top view)
3-3    Radial distortion effects
3-4    Geometry of the epipolar constraint
3-5    Geometry of the planar homography
4-1    Feature point dependence on focal length
4-2    Feature point dependence on radial distortion
5-1    Body-fixed coordinate frame
5-2    Camera-fixed coordinate frame
5-3    Scenario for vision-based feedback
6-1    Epipolar lines across two image frames
6-2    FOE constraint on translational optic flow for static feature points
6-3    Residual optic flow for dynamic environments
7-1    System vector description
7-2    Moving target vector description
9-1    Altitude hold block diagram
9-2    Heading hold block diagram
10-1   Virtual environment for example 1
10-2   Feature point measurements for example 1
10-3   Optic flow measurements for example 1
10-4   Virtual environment for example 2
10-5   Feature points across two image frames
10-6   Uncertainty in feature point
10-7   Uncertainty results in optic flow
10-8   Nominal epipolar lines between two image frames
10-9   Uncertainty results for epipolar geometry
10-10  Nominal estimation using structure from motion
10-11  Uncertainty results for structure from motion
10-12  Vehicle trajectories for example 3
10-13  Position states of the UAV with on-board camera
10-14  Attitude states of the UAV with on-board camera
10-15  Position states of the reference vehicle
10-16  Attitude states of the reference vehicle
10-17  Position states of the target vehicle
10-18  Attitude states of the target vehicle
10-19  Norm error
10-20  Relative position states
10-21  Relative attitude states
10-22  Virtual environment
10-23  Inner-loop pitch to pitch command Bode plot
10-24  Pitch angle step response
10-25  Altitude step response
10-26  Inner-loop roll to roll command Bode plot
10-27  Roll angle step response
10-28  Heading response
10-29  Open-loop estimation of target's inertial position
10-30  Open-loop estimation of target's inertial attitude
10-31  Norm error for target state estimates
10-32  Closed-loop target position tracking
10-33  Position tracking error
10-34  Target attitude tracking
10-35  Tracking error in heading angle
10-36  Target's inertial position with uncertainty bounds
10-37  Target's inertial attitude with uncertainty bounds









LIST OF TERMS

a(t)               Acceleration of the target in E
{b1, b2, b3}       Body-fixed coordinate frame components
c                  Position vector of the camera center in the camera-fixed coordinate frame
d                  Radial distortion
d0                 Nominal radial distortion
{e1, e2, e3}       Earth-fixed coordinate frame components
f                  Focal length
f0                 Nominal focal length
h                  Altitude state
h                  Stacked column vector of the entries of the planar homography matrix
hc                 Altitude command
h0                 Nominal entries of the planar homography matrix
h(x)               Image motion model
{i1, i2, i3}       Camera-fixed coordinate frame components
kh                 Proportional gain on altitude error
kq                 Proportional gain on pitch rate
kp                 Proportional gain on roll rate
ky                 Proportional gain on the lateral position error
kyi                Integral gain on the lateral position error
kθ                 Proportional gain on pitch
kφ                 Proportional gain on roll
kψ                 Proportional gain on heading error
li                 Epipolar line in image i
mIF                Translation from camera-fixed to reference-fixed coordinates expressed in camera-fixed coordinates










mIT                Translation from camera-fixed to target-fixed coordinates expressed in camera-fixed coordinates
mVF                Translation from virtual to reference-fixed coordinates expressed in virtual coordinates
mVT                Translation from virtual to target-fixed coordinates expressed in virtual coordinates
oμ                 Vertical image offset from center to upper left corner in pixel units
oν                 Horizontal image offset from center to upper left corner in pixel units
p(t)               Position of the target in E
pVF                Image coordinates in the virtual camera of the reference vehicle
pVT                Image coordinates in the virtual camera of the target vehicle
q                  Stacked column vector of the entries of the essential matrix
q0                 Nominal entries of the essential matrix
sμ                 Vertical unit length to pixel scaling
sν                 Horizontal unit length to pixel scaling
sθ                 Image skew factor
u                  Time rate of change of (μ, ν)
v(t)               Velocity of the target in E
Vb = (u, v, w)     Velocity of the body-fixed frame (velocity of the aircraft in body-fixed coordinates)
vc = (uc, vc, wc)  Velocity of the camera-fixed frame along the {i1, i2, i3} axes
w(t)               Random vector
I                  Subset image specified by W
xIV                Translation from camera-fixed to virtual coordinates expressed in camera-fixed coordinates










z                  Depth components in two-view camera geometry
z0                 Nominal depth components in two-view camera geometry
A                  Two-view feature point matrix using structure from motion
A0                 Nominal two-view feature point matrix using structure from motion
B                  Body-fixed coordinate frame
C                  Two-view feature point matrix using epipolar methods
C0                 Nominal two-view feature point matrix
C                  Classification group of features to a focus of expansion
D                  Distance from plane to optical center
E                  Earth-fixed inertial coordinate frame
F                  Fundamental camera matrix
F                  Reference-fixed coordinate frame
Fb = (Fx, Fy, Fz)  Aerodynamic forces about the {b1, b2, b3} axes
G                  Image outer product summation
Gh                 Altitude compensator
H                  Planar homography matrix
(μ̇, ν̇)             Velocity vector of a feature in the image plane (optic flow)
(μ̇0, ν̇0)           Nominal image plane optic flow
K                  Intrinsic parameter matrix
I                  Camera-fixed coordinate frame
Iμ                 Image gradient in the vertical direction
Iν                 Image gradient in the horizontal direction
Mb = (L, M, N)     Aerodynamic moments about the {b1, b2, b3} axes
N                  Normal vector of the plane containing feature points expressed in I










P(x)               Probability density function
Q                  Essential matrix
R                  Relative rotation
RBI                Rotational transformation from body-fixed to camera-fixed coordinates
REB                Rotational transformation from Earth-fixed to body-fixed coordinates
REF                Rotational transformation from Earth-fixed to reference coordinates
RET                Rotational transformation from Earth-fixed to target coordinates
REV                Rotational transformation from Earth-fixed to virtual coordinates
RFV                Rotational transformation from reference-fixed to virtual coordinates
RTV                Rotational transformation from target-fixed to virtual coordinates
RIV                Rotational transformation from camera-fixed to virtual coordinates
T                  Relative translation
T                  Target-fixed coordinate frame
TBI = (xc, yc, zc) Position of the camera along the {b1, b2, b3} axes
TEB = (Xb, Yb, Zb) Position of the aircraft along the {e1, e2, e3} axes
U                  Control input vector
U                  Classification group of features to an independently moving object
V                  Virtual coordinate frame
W                  Search window in the image
X                  Vector of aircraft states
X0                 Vector of initial aircraft states
Y                  Feature point measurements in the image plane
αk                 Camera parameter of the k-th camera
γh                 Horizontal angle for field of view










γv                 Vertical angle for field of view
δf                 A variation in focal length
δd                 A variation in radial distortion
δμ                 A variation in μ
δν                 A variation in ν
(δμ̇, δν̇)           A variation in optic flow
δC                 A variation in the two-view feature point matrix
δq                 A variation in the entries of the essential matrix
δχ                 A variation in the two-view feature point matrix using the planar homography matrix
δh                 A variation in the entries of the planar homography matrix
δA                 A variation in the two-view feature point matrix using structure from motion
δz                 A variation to the depth components in two-view camera geometry
η                  Position vector of feature point relative to and expressed in camera coordinate frame I
ηF,n               Feature point location on reference vehicle relative to and expressed in camera-fixed coordinates
ηT,n               Feature point location on target vehicle relative to and expressed in camera-fixed coordinates
ηVF,n              Feature point location on reference vehicle relative to and expressed in virtual coordinates
ηVT,n              Feature point location on target vehicle relative to and expressed in virtual coordinates
μ                  Vertical coordinate in the image plane
μ(x)               Mean operator of a vector x










μ0                 Nominal μ
μfoe               Vertical component of the focus of expansion in image coordinates
μ′                 Vertical coordinate in the image plane in pixel units
μ′d                Vertical coordinate in the image plane with radial distortion in pixel units
(μ, μ̄)             Vertical minimum and maximum coordinates in the image plane
μ̇                  Vertical velocity in the image plane
μ̇i                 Vertical velocity in the image plane due to moving objects
μ̇r                 Rotational component of the vertical velocity in the image plane
μ̂̇r                 Estimated rotational component of the vertical velocity in the image plane
μ̇Res               Residual vertical component of optic flow
μ̇t                 Translational component of the vertical velocity in the image plane
ν                  Horizontal coordinate in the image plane
ν0                 Nominal ν
νfoe               Horizontal component of the focus of expansion in image coordinates
ν′                 Horizontal coordinate in the image plane in pixel units
ν′d                Horizontal image plane coordinate with radial distortion in pixel units
(ν, ν̄)             Horizontal minimum and maximum coordinates in the image plane
ν̇                  Horizontal velocity in the image plane
ν̇i                 Horizontal velocity in the image plane due to moving objects
ν̇r                 Rotational component of the horizontal velocity in the image plane
ν̂̇r                 Estimated rotational component of the horizontal velocity in the image plane
ν̇Res               Residual horizontal component of optic flow
ν̇t                 Translational component of the horizontal velocity in the image plane
ξ                  Position vector of feature point relative to and expressed in Earth-fixed coordinate frame E










ξF,n               Feature point location on reference vehicle relative to and expressed in Earth-fixed coordinates
ξT,n               Feature point location on target vehicle expressed in Earth-fixed coordinates
σ(x)               Variance operator of a vector x
τ                  Gradient threshold
(φ, θ, ψ)          Attitude of aircraft about the {b1, b2, b3} axes
(φc, θc, ψc)       Attitude of camera about the {i1, i2, i3} axes
φc                 Roll command
ψc                 Heading command
ω = (p, q, r)      Angular rates of aircraft about the {b1, b2, b3} axes
ωc = (pc, qc, rc)  Angular rates of camera about the {i1, i2, i3} axes
Δd                 Radial distortion uncertainty bound
Δf                 Focal length uncertainty bound
Δh                 Uncertainty bound in the entries of the planar homography matrix
Δq                 Uncertainty bound in the entries of the essential matrix
Δz                 Uncertainty bound in depth components
Δy                 Lateral deviation between vehicle and target
(Δμ̇, Δν̇)           Uncertainty bound in optic flow
Δμ                 Uncertainty bound in μ
Δν                 Uncertainty bound in ν
χ                  Two-view feature point matrix using the planar homography matrix
χ0                 Nominal two-view feature point matrix using the planar homography matrix









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

VISION-BASED CONTROL FOR FLIGHT RELATIVE TO DYNAMIC ENVIRONMENTS

By

Ryan Scott Causey

August 2007

Chair: Richard C. Lind
Major: Aerospace Engineering

The concept of autonomous systems has been considered an enabling technology for a

diverse group of military and civilian applications. The current direction for autonomous systems

is increased capabilities through more advanced systems that are useful for missions that require

autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission

capability, passive sensors, such as cameras, and complex software are added to the vehicle.

By incorporating an on-board camera, visual information can be processed to interpret the

surroundings. This information allows decision making with increased situational awareness

without the cost of a sensor signature, which is critical in military applications. The concepts

presented in this dissertation address the issues inherent in vision-based state estimation of

moving objects for a monocular camera configuration. The process consists of several stages

involving image processing such as detection, estimation, and modeling. The detection algorithm

segments the motion field through a least-squares approach and classifies motions not obeying

the dominant trend as independently moving objects. A state estimation scheme for moving

targets is derived using a homography approach. The algorithm requires knowledge of the

camera motion, a reference motion, and additional feature point geometry for both the target and

reference objects. The target state estimates are then observed over time to model the dynamics

using a probabilistic technique. The effects of uncertainty on state estimation due to camera

calibration are considered through a bounded deterministic approach. The system framework

focuses on an aircraft platform for which the system dynamics are derived to relate vehicle states










to image plane quantities. Control designs using standard guidance and navigation schemes

are then applied to the tracking and homing problems using the derived state estimation. Four

simulations are implemented in MATLAB that build on the image concepts presented in this

dissertation. The first two simulations deal with feature point computations and the effects of

uncertainty. The third simulation demonstrates the open-loop estimation of a target ground

vehicle in pursuit whereas the fourth implements a homing control design for the Autonomous

Aerial Refueling (AAR) using target estimates as feedback.










CHAPTER 1
INTRODUCTION

1.1 Motivation

Autonomous systems are an enabling technology to facilitate the needs of both military

and civilian applications. The usefulness of autonomous systems ranges from robotic assembly

lines for streamlining an operation to a rover exploring the terrain of a distant planet. The main

motivation behind these types of systems is the removal of a human operator which in many

cases reduces operational cost, human errors, and, most importantly, human risk. In particular,

military missions consistently place soldiers in hazardous environments but in the future could

be performed using an autonomous system. The federal sector is considering autonomous

vehicles, specifically, to play a more prominent role in several missions such as reconnaissance,

surveillance, border patrol, space and planet exploration over the next 30 years [1]. This increase

in capability for such complex tasks requires technology for more advanced systems to further

enhance the situational awareness.

Over the past several years, the interest and demand for autonomous systems have

grown considerably, especially from the Armed Forces. This interest has leveraged funding

opportunities to advance the technology into a state of realizable systems. Some technical

innovations that have emerged from these efforts, from a hardware standpoint, consist mainly

of increasingly capable microprocessors in the sensors, controls, and mission management

computers. The Defense Advanced Research Projects Agency (DARPA) has funded several

projects pertaining to the advancement of electronic devices through size reduction, improved

speed and performance. From these developments, the capability of autonomous systems has been

demonstrated on vehicles with strict weight and payload requirements. In essence, the current

technology has matured to a point where autonomous systems are physically achievable for

complex missions but not yet algorithmically capable.

The aerospace community has employed much of the research developed for autonomous

systems and applied it to Unmanned Aerial Vehicles (UAV). Many of these vehicles are currently










operational and have served reconnaissance missions during Operation Iraqi Freedom. The

Department of Defense (DoD) has recorded over 10,000 flight hours performed by UAV in

support of the war in Iraq since September 2004 and that number is expected to increase [1].

Future missions envision UAV conducting more complex tasks such as terrain mapping,

surveillance of possible threats, maritime patrol, bomb damage assessment, and eventually

offensive strike. These missions can span over various types of environments and, therefore,

require a wide range of vehicle designs and complex controls to accommodate the associated

tasks.

The requirements and design of UAV are considered to enable a particular mission

capability. Each mission scenario is the driving force behind these requirements, which are dictated

by range, speed, maneuverability, and operational environment. Current UAV range in size from

less than 1 pound to over 40,000 pounds. Some popular UAV that are operational, in testing

phase, and in the concept phase are depicted in Figure 1-1 to illustrate the various designs. The

two UAV on the left, Global Hawk and Predator, are currently in operation. Global Hawk is

employed as a high altitude, long endurance reconnaissance vehicle whereas the Predator is

used for surveillance missions at lower altitudes. Meanwhile, the remaining two pictures present

J-UCAS, which is a joint collaboration for both the Air Force and Navy. This UAV is described

as a medium altitude flyer with increased maneuverability over Global Hawk and the Predator

and is considered for various missions, some of which have already been demonstrated in flight,

such as weapon delivery and coordinated flight.

The advancements in sensors and computing technology, mentioned earlier, has facilitated

the miniaturization of these UAV, which are referred to as Micro Air Vehicles (MAV). The scale

of these small vehicles ranges from a few feet in wingspan down to a few inches. DARPA has

also funded the first successful MAV project through AeroVironment, as shown in Figure 1-2,

where basic autonomy was first demonstrated at this scale [2]. These small scales allow highly

agile vehicles that can maneuver in and around obstacles such as buildings and trees. This

capability enables UAV to operate in urban environments, below rooftop levels, to provide





























Figure 1-1. The UAV fleet

the necessary information which cannot be obtained at higher altitudes. Researchers are

currently pursuing MAV technology to accomplish the very same missions stated earlier for

the unique application of operating at low altitudes in cluttered environments. As sensor and

control technologies evolve, these MAV can be equipped with the latest hardware to perform

advanced surveillance operations where the detection, tracking, and classification of threats

are monitored autonomously online. Although a single micro air vehicle can provide distinct

information, targets may be difficult to monitor due to both flight path and sensor field of view

constraints. This limitation has motivated the idea of a cooperative network or a "swarm" of MAV

communicating and working together to accomplish a common task.















Figure 1-2. AeroVironment's MAV: The Black Widow












Currently, several universities have a research facility dedicated to the investigation of

MAV, including Brigham Young University (BYU), Stanford University, Georgia Institute

of Technology, and the University of Florida. The autonomous capabilities demonstrated by

BYU incorporated an autopilot system for waypoint navigation that integrated traditional IMU

sensors [3, 4]. Meanwhile, Stanford has examined motion planning strategies that optimize flight

trajectories to maintain sensor integrity for improved state estimation [5]. The work at Georgia

Tech and BYU has considered cooperative control of MAV for autonomous formation flying [6]

and consensus work for distributed task assignment [7]. Alternatively, vision based control has

also been the topic of interest at both Georgia Tech and UF. Control schemes using vision have

been demonstrated on platforms such as a helicopter at Georgia Tech [8], while UF implemented

a MAV that integrated vision based stabilization into a navigation architecture [9, 10]. The

University of Florida has also considered MAV designs that improve the performance and agility

of these vehicles through morphing technology [11-13]. Fabrication facilities at UF have enabled

rapid construction of design prototypes useful for both morphing and control testing. The fleet of

MAV produced by UF are illustrated in Figure 1-3 where the wingspan of these vehicles range

from 24 in down to 4 in.









Figure 1-3. The UF MAV fleet

There are a number of current difficulties associated with MAV due to their size. For

example, characterizing their dynamics under flight conditions at such low Reynolds numbers

is an extremely challenging task. The consequence of increased agility at this scale also gives

rise to erratic behavior and a severe sensitivity to wind gusts and other disturbances. Waszak

et al. [14] performed wind tunnel experiments on 6 inch MAV and obtained the required

stability derivatives for linear and nonlinear simulations. Another critical challenge toward MAV










implementation is their weight restrictions and limited payload capacity. More importantly,

this restriction places constraints on the types and amount of sensors and processors that can be

carried onboard. So sensor selection is a critical process of optimizing size and weight to the

amount of information the sensor provides. This debate has lead researchers toward using vision

as a primary sensor for the guidance and navigation of autonomous UAV and MAY

Vision as a primary sensor is a favorable direction for MAV and even for larger UAV

This sensor provides an enormous amount of information regarding the scene and the way

objects are moving in relation to that scene. Humans rely heavily on visual information, for

instance, being able to navigate through cluttered environments or distinguishing between objects

based on appearance. Although these qualities are to a large percentage visual, humans also

rely on past knowledge for recognition and classification of objects and their motion. So the

challenges that computer vision has faced is how to interpret this information to gain awareness

of the surroundings and, more importantly, perform efficiently for real-time implementation.

Specifically, for autonomous systems to operate in urban environments, vision algorithms are

required to detect objects, track objects, and provide state estimation to make decisions for

navigation.

The features of vision that are most relevant to UAV and MAV systems are compact

size, passive sensor qualities, low cost, and an abundance of useful data. The compact size of

these sensors enables MAV to carry on-board cameras with little expense toward weight and

payload capacity. The passive nature of these sensors is another desirable quality that increases

the stealth properties of the vehicle by removing emissions. This benefit has obvious advantages

over other sensors, such as sonar and radar, in operations where UAV with a low probability of

detection are required. Additionally, the low cost of vision sensors coupled with the large data

return provides the "more bang for your buck" approach.

The foundation of this dissertation stems from the ability to estimate motion parameters

through output images acquired from a camera system. The amount of information one can

estimate depends on the camera configuration. The two basic types considered in literature are










a single camera setup, known as monocular vision, and a two camera setup, known as stereo

vision. For monocular vision, a sequence of images is taken over time whereas stereo vision

uses two images taken by different cameras at the same time. Motion estimation using monocular

vision has been solved for the cases associated with movement of the camera relative to

stationary objects and the reverse problem involving movement of objects relative to a stationary

camera. The process of determining camera motion from stationary objects is commonly referred

to as localization. Conversely, determining the motion or position of an object in space from a

pair of images is known as structure from motion. For fixed objects, simultaneous localization

and mapping (SLAM) can be employed to estimate the camera motion in conjunction with the

object's locations. Meanwhile, the use of stereo vision allows one to estimate the motion of

objects while the camera is also moving. Solutions to these methods are well established in

the computer science community and the mathematical details regarding these techniques are

provided in Chapter 3. This dissertation will focus on the monocular camera configuration to

address the state estimation problem regarding moving targets.
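
As a concrete illustration of the structure-from-motion case described above, the following minimal MATLAB sketch triangulates a single stationary feature point from two views of one moving camera whose motion between the views is assumed known. The intrinsic matrix K, the rotation R, and the translation t below are illustrative values only, not parameters used elsewhere in this dissertation.

% Triangulate a stationary feature point from two views of a single moving
% camera with known motion (illustrative values, not from this dissertation).
K = [800 0 320; 0 800 240; 0 0 1];          % assumed intrinsic parameter matrix
R = [cos(0.1) 0 sin(0.1); 0 1 0; -sin(0.1) 0 cos(0.1)];  % known rotation between views
t = [1; 0; 0];                               % known translation between views

X_true = [2; 1; 10];                         % stationary 3D point used to synthesize data
P1 = K * [eye(3) zeros(3,1)];                % projection matrix of the first view
P2 = K * [R t];                              % projection matrix of the second view
x1 = P1 * [X_true; 1];  x1 = x1 / x1(3);     % pixel measurement in image 1
x2 = P2 * [X_true; 1];  x2 = x2 / x2(3);     % pixel measurement in image 2

% Linear triangulation: each view contributes two rows of a homogeneous system.
A = [x1(1)*P1(3,:) - P1(1,:);
     x1(2)*P1(3,:) - P1(2,:);
     x2(1)*P2(3,:) - P2(1,:);
     x2(2)*P2(3,:) - P2(2,:)];
[~, ~, V] = svd(A);
X_est = V(:, end) / V(4, end);               % recovered point; scale is set by the known baseline
disp(X_est(1:3)')                            % matches X_true in this noise-free example

The same construction breaks down when the feature point itself moves between the two views, which is precisely the situation this dissertation addresses.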

The advantage of these techniques becomes more apparent to UAV when applied to

guidance, navigation, and control. By mounting a camera on a vehicle, state estimation of

the vehicle and objects in the environment can be achieved in some instances through vision

processing. Once state estimates are known, they can then be used in feedback. Control

techniques can then be utilized for complex missions that require navigation, path planning,

avoidance, tracking, homing, etc. This general framework of vision processing and control has

been successfully applied to various systems and vehicles including robotic manipulators, ground

vehicles, underwater vehicles, and aerial vehicles, but there still exist some critical limitations.

The problematic issues with using vision for state estimation involve camera nonlinearities,

camera calibration, sensitivity to noise, large computational time, limited field of view, and

solving the correspondence problem. A particular set of these image processing issues will be

addressed directly in this dissertation to facilitate the control of autonomous systems in complex

surroundings.










1.2 Problem Statement

The problem addressed in this dissertation consists of target state estimation with unknown

stochastic motion for autonomous systems using a moving monocular camera. The application

for this type of research applies directly to the advancements of autonomous systems by

increasing their mission capabilities. The estimation of 3-dimensional points in space given

two perspective views relies heavily on camera configuration, accurate camera calibration,

and perfect image processing; however, the practical realizations in camera systems involve

limitations to configurations and significant uncertainties. In order to estimate the states of a

moving object in the presence of uncertainty, several key issues will be addressed. These include:

1. segmenting moving targets from stationary targets within the scene

2. classifying moving targets into deterministic and stochastic motions

3. coupling the vehicle dynamics into the sensor observations (i.e. images)

4. formulating the homography equations between a moving camera and the viewable targets

5. propagating the effects of uncertainty through the state estimation equations

6. establishing confidence bounds on target state estimation

The design and implementation of a vision-based controller is also presented in this dissertation

to verify and validate many of the concepts pertaining to tracking of moving targets.

1.3 Potential Missions

Various missions involving autonomous navigation will directly benefit from this research.

The estimates of where objects, both stationary and moving, are located in the environment

enable capabilities such as obstacle avoidance, target tracking, object mapping, and even vehicle

docking. NASA has been particularly interested in Autonomous Aerial Refueling (AAR),

where UAV autonomously dock with a tanker aircraft to replenish their fueling supply. This

capability will have enormous benefits to UAV by expanding the potential missions through

increasing range. Employing AAR systems will also have cost benefits in the design due to the

reduced weight in the vehicle caused by fuel. Meanwhile, return to base requests can be reduced










considerably as long as a tanker aircraft is available which will enable a quick and efficient

response to threats around the world.

There are two aerial refueling techniques implemented currently. The first, employed by the

Air Force, involves a remote pilot stationed on the tanker aircraft that manually controls the boom

to the target while the receiver aircraft maintains a fixed attitude and relative position. On the

other hand, the Navy employs the probe-and-drogue method. This method involves the receiver

pilot controlling the aircraft to a drogue basket that is attached to the tanker and requires a

relative position accuracy of 0.5 to 1.0 cm [15]. The drogue is designed in an aerodynamic shape

that permits the extension from the tanker without instability. The probe-and-drogue method is

considered the preferred method for AAR, mainly due to the high pilot workload in controlling

the boom [16]. Figure 1-4 illustrates the view observed by the receiver aircraft during the refueling

process where feature points have been placed on the drogue.



















Figure 1-4. Refueling approach using the probe-drogue method

Vision can be used to facilitate the AAR problem by augmenting traditional aircraft sensors

such as global positioning system (GPS) and inertial measurement unit (IMU). High precision

GPS/IMU sensors can provide relative information between the tanker and the receiver then

vision can be used to provide relative information on the drogue. The advantage to vision in

this case is its passive nature which eliminates sensor emissions during refueling over enemy air










space. Additionally, employing a vision system avoids placing sensors, such as GPS,

on the actual drogue itself considering that most sensors are unable to handle the aerodynamic

loads and provide the necessary update rate for refueling. Utilizing vision in the AAR problem

requires accurate estimation of the drogue which entails precise tracking of both vehicles

throughout the mission. This accuracy presents many challenges for a camera system considering

the variations in estimates due to noise, interference, calibration, and feature tracking errors.

Another main challenge is modeling the dynamics of the drogue to estimate a predicted value.

This step is extremely difficult due to the stochastic nature of the drogue's motion during flight

conditions but is needed for control purposes. The issues involved in the AAR problem can

be categorized into several vision processing tasks followed by two control tasks. The vision

processing tasks include detecting, classifying, tracking, and state estimation of moving objects

while the control tasks involve modeling and waypoint navigation.

Additionally, a potential civilian mission that could benefit from this technology is tracking

a high-speed police pursuit. Imagine a scenario where a high speed pursuit is underway through

a city highway. The police typically have several pursuit cars along with a helicopter for air

support. Ordinarily this situation results in a catastrophic accident where police and innocent

civilians are either hurt or killed. Employing vision-based UAV technology can help to avoid

unnecessary fatalities. During the pursuit, an officer several blocks away can release a small UAV

equipped with a camera and the necessary software and communication to provide every police

car with the criminal's 3D location along with direction and speed of travel. This technology

allows the police to back off the chase when speeds reach dangerous levels. Keeping a

safe distance from the chase vehicle naturally results in the suspect also reducing

speed, which can decrease the chances of fatal accidents. This technique is especially useful in

residential areas where most innocent fatalities occur. The overall mission involves tracking

and maybe even homing off the target vehicle to provide location information to officers on the

ground. Figure 1-5 illustrates in a simulated environment this scenario where a UAV observes the










pursuit from above to estimate the suspect's vehicle location. Similarly, this technology applies

directly to aiding the officials for border patrol.



















Figure 1-5. Tracking a pursuit vehicle using a vision equipped UAV

1.4 System Architecture

The system architecture presented in this dissertation was designed in a modular fashion

and is amenable to closed-loop control. The closed-loop block diagram is depicted in Figure 1-6,

where commands are sent to a vehicle based on the motions observed in the images. The vehicle

considered in this dissertation is predominantly assumed to be an autonomous UAV, but is generalized

for any dynamical system with position and orientation states. The blocks pertaining to this

dissertation are highlighted in Figure 1-6 within the image processing block and consist of

moving object detection, state estimation of a moving object, and classification of deterministic versus

stochastic motion. A brief discussion of each topic is described in this section, while the details

are covered in their respective chapters.

Distinguishing moving objects from stationary objects with a moving camera is a

challenging task in vision processing and is the first step in the state estimation process when

considering a dynamic scene. This information is extremely important for guidance, navigation,

and control of autonomous systems because it identifies objects that potentially could be in a

path for collision. For a stationary camera, moving objects in the scene can be extracted using














[Figure 1-6 shows the closed-loop architecture: a camera model and feature point tracker feed moving object detection, state estimation, and stochastic/deterministic classification, which drive motion modeling and prediction and, in turn, the controller.]

Figure 1-6. Closed-loop block diagram with visual state estimation

simple image differencing, where the stationary background is segmented out; however, this

approach does not apply to moving cameras. In the case of a moving camera, the background

is no longer stationary and it begins to change over time as the vehicle progresses through the

environment. Therefore, the images taken by a moving camera contain the motion due to the

camera, commonly called ego-motion, and the motion of the object. Techniques that involve

camera motion compensation or image registration have been proposed and work well when there

exist no stationary objects close to the camera that cause high parallax. This dissertation

will establish a technique to classify objects in the field of view as moving or stationary while

accounting for stationary objects with high parallax. Therefore, with a series of observations of a

particular scene, one can determine which objects are moving in the environment.
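
For reference, the stationary-camera case mentioned above reduces to a few lines of MATLAB; the frames and the threshold in this sketch are synthetic placeholders, and the method fails once the camera itself moves because the background no longer cancels.

% Image differencing for a stationary camera (synthetic example).
bg  = zeros(120, 160);                 % stationary background frame
frm = bg;  frm(40:50, 70:80) = 1;      % current frame containing a small moving object

d    = abs(frm - bg);                  % pixelwise difference against the background
mask = d > 0.2;                        % threshold segments the moving object
[r, c] = find(mask);                   % pixels classified as moving
fprintf('moving pixels: %d, centroid: (%.1f, %.1f)\n', numel(r), mean(r), mean(c));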

Knowing which objects are moving in the image dictates the type of image processing

required to accurately estimate the object's states. In fact, the estimation problem becomes

infeasible for a monocular system when both the camera and the object are moving. This

unattainable solution is caused by a number of factors including 1) the inability to decouple the

motions of the camera and target and 2) the failure to triangulate the depth estimate of the object.

For this configuration, relative information can be obtained and fused with additional information

for state estimation. First, decoupling the motion requires known information regarding motion

of the camera or the motion of the object, which could be obtained through other sensors such










as GPS and IMUs. Second, the depth estimate can be acquired if some information is known

regarding the target geometry (e.g. a fixed distance on the target). For the case of stereo vision,

depth estimates can be obtained for each time step which is suitable for estimating the states of a

moving object. Although this particular configuration addresses the depth estimation, additional

issues involving the correspondence solution emerge when introducing multiple cameras [5].

Furthermore, the accuracy of the state estimates becomes poor for small baseline configurations,

which occurs for MAV using stereo vision. These issues regarding target state estimation will be

considered in this dissertation to show both the capabilities and limitations toward autonomous

control and navigation.
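
As an example of the second point above, the depth ambiguity disappears when a length on the target is known a priori. The MATLAB sketch below assumes two target features separated by a known distance that lies roughly parallel to the image plane; the focal length and pixel coordinates are illustrative values, not quantities from this work.

% Depth from a known target dimension (monocular, single image).
f       = 800;                          % focal length in pixel units (assumed calibration)
L_known = 0.60;                         % known separation of two target features, meters
p1 = [310; 242];  p2 = [358; 240];      % measured pixel locations of those features
l_pix = norm(p1 - p2);                  % apparent separation in the image
Z = f * L_known / l_pix;                % approximate range to the target, meters
fprintf('estimated depth: %.2f m\n', Z) % valid only while the segment stays near-parallel to the image plane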

Another important task involved with target estimation is to determine a pattern (if

any) in the object's motion based on the time history. The objects can then be classified into

deterministic and stochastic motions according to past behavior. With this information, prediction

models can be made based on previous images to estimate the position of an object at a later time

with some level of confidence. The predicted estimates can then be used in feedback for tracking

or docking purposes. For stochastically classified objects, further concerns regarding docking or

AAR are imposed on the control problem.
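
One simple instance of such a prediction model, given here only as a sketch and not as the technique developed later in this dissertation, is a constant-velocity propagation of the target state whose covariance grows with the prediction horizon; the growth of the covariance is what supplies the confidence level attached to the predicted position. All numbers below are placeholders.

% Constant-velocity prediction of a target state with growing covariance.
dt = 0.1;                               % prediction step, seconds
A  = [1 dt; 0 1];                       % state transition for [position; velocity]
Q  = diag([0.01 0.05]);                 % assumed process noise covariance per step
x  = [5; 1.2];                          % current estimate: 5 m ahead, closing at 1.2 m/s
P  = diag([0.2 0.1]);                   % current estimate covariance

for k = 1:10                            % propagate one second ahead
    x = A * x;                          % predicted mean
    P = A * P * A' + Q;                 % predicted covariance; confidence degrades as P grows
end
fprintf('predicted position: %.2f m, 1-sigma bound: %.2f m\n', x(1), sqrt(P(1,1)));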

The primary task of state estimation, for both the vehicle and objects in the environment,

relies on accurate knowledge of the image measurements and the associated camera. Such

knowledge is difficult to obtain due to uncertainties in these measurements and the internal

components of the camera itself. For instance, the image measurements contain uncertainties

associated with the detection of objects in the image, in addition to noise corruption. These

drawbacks have prompted many robust algorithms to increase the accuracy of feature detection

while handling noise during the estimation process. Alternatively, many techniques have been

used to accurately estimate the internal parameters of the camera through calibration. The

parameters that describe the internal components of the camera are referred to as intrinsic

parameters and typically consist of focal length, radial distortion, skew factor, pixel size, and

optical center. This calibration process can become cumbersome for a large number of cameras










and incur cost and time delays. These additional expenses add complexity and eliminate the

attractiveness of low cost autonomous systems. Meanwhile, the current appeal of these systems

has been the use of low cost off-the-shelf components, such as cameras that are easily replaced.

Maintaining a low cost product is a goal for UAV that can be accomplished by considering a

vision system. If future operations require a stockpile of thousands of UAV or MAV ready to

deploy, then the capability to switch out or replace components in a timely fashion with little cost

is a tremendous functionality. Therefore, this dissertation describes a method that would enable

cameras to be replaced rapidly and without the need for extensive calibration.
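
For reference, the intrinsic parameters named above are commonly collected into a single matrix that maps normalized image-plane coordinates to pixel coordinates; radial distortion is applied separately before this linear map. The values below describe a hypothetical camera, not one calibrated for this work.

% Intrinsic parameter matrix of a hypothetical camera.
f  = 0.006;                     % focal length, meters
su = 80000;  sv = 80000;        % vertical and horizontal pixel scaling, pixels per meter
s0 = 0;                         % skew factor
ou = 320;    ov = 240;          % optical center offsets, pixels

K = [f*su  s0    ou;
     0     f*sv  ov;
     0     0     1];

m = [0.01; -0.005; 1];          % a normalized image-plane point (x/z, y/z, 1)
p = K * m;                      % corresponding homogeneous pixel coordinates
fprintf('pixel location: (%.1f, %.1f)\n', p(1)/p(3), p(2)/p(3));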

1.5 Contributions

The goal of the work presented in this dissertation is to establish a methodology that

estimates the states of a moving object using a monocular camera configuration in the presence

of uncertainty. The estimates will provide not only critical information regarding target-motion

estimation for autonomous systems but also retain confidence values through a distribution

around a target's estimate. Previous work has investigated many problems and issues related

to this topic but has neglected several key features. In particular, this thesis addresses (i) the

physical effects of camera nonlinearities on state estimates, (ii) a multi-layered classification

approach to object motion based on visual sensing that determines the confidence measure in the

estimates, and (iii) the relationships between vehicle and sensor constraints coupled with sensor

fusion in an autonomous system framework.

The main contribution of this dissertation is the development of a state estimation process of

a dynamic object using a monocular vision system for autonomous navigation. In addition to the

main contribution, there exists some secondary contributions solved in the process of facilitating

the main goal. The contributions presented in this dissertation consist of the following:

*A homography approach to state estimation of moving objects is developed through a virtual
camera to estimate the pose of the target relative to the true camera. This virtual
camera facilitates the estimation process by maintaining a constant homography relative to a
known reference object.










* A new approach to detecting moving objects in a sequence of images is developed. This
method computes estimates for the focus of expansion (FOE) and then classifies each
feature point into its respective motion through an iterative least-squares solution. The
decision scheme for classification maintains a cost function, which determines if a feature
point obeys a particular FOE, under a desired threshold. The dominant motion assumption is
then used to determine which FOE class corresponds to stationary objects in the environment
and which are associated with moving objects. A simplified, non-iterative sketch of this FOE
fit follows this list.

* The nonlinear dynamics for an aircraft-camera system are derived for a general camera
configuration and model. This structure allows multiple cameras with time varying positions
and orientations within the derivation to compute image plane quantities such as feature
point position and velocity.

* A new method for obtaining error bounds on the target state is established to provide a
region where the estimate can lie under the effects of uncertainty. This method can be
described as a deterministic framework that computes an upper bound on the uncertainty; it is
implemented to describe variations in the image plane coordinates, which are then propagated
through the vision-based algorithms. Although this upper bound or worst-case approach to
uncertainty is a conservative technique, it provides a fast implementation scheme to account
for inaccurate camera calibration.

* The implementation of the homography of a moving target along with a model prediction
scheme will be incorporated into a controls framework to enable closed-loop tracking of an
unknown moving object of interest.

The first chapter of this thesis describes the motivation for this research and some current objectives and limitations to address, followed by a summary of the contributions and descriptions of potential applications for this research.

Chapter 2 describes the related work and literature review that applies to this particular

research topic.

Chapter 3 introduces the foundation of computer vision and image processing. First the

camera geometry is described along with the projection model followed by the constraints used to

facilitate the estimation process. Lastly, traditional algorithms which estimate both the 3D motion

of the camera and the motion of targets are described.

Chapter 4 quantifies the effects of uncertainty in state estimation from variations in feature

point position caused from camera calibration and feature point tracking.










Chapter 5 derives the system dynamics for an aircraft-camera configuration by formulating

the differential equations and observations into a controls framework.

Chapter 6 describes a method that utilizes image processing techniques to detect and

segment moving objects in a sequence of images.

Chapter 7 formulates a homography technique that estimates relative position and

orientation with respect to a moving reference object. The method fuses traditional guidance

and navigation sensors with the computed homography to obtain relative state estimates of the

unknown object with respect to the moving camera. This process applies directly to solving a

significant portion of the AAR problem.

Chapter 8 summarizes a modeling technique for moving objects to predict the target's motion

in inertial space.

Chapter 9 discusses a control design scheme that exploits vision-based state estimates to

track and home on a desired target. The control framework will be generalized for many mission

scenarios involving autonomous UAV but will be discussed in the context of the AAR problem.

Chapter 10 implements the vision algorithms in MATLAB for both open-loop and closed-loop architectures to demonstrate and verify the proposed methods.

Chapter 11 discusses concluding remarks and proposes future research directions for this

work.









CHAPTER 2
LITERATURE REVIEW

The proliferation of autonomous systems is generating a demand for smarter, more complex

vehicles. The motivation behind these concept vehicles is to operate in urban environments

which requires a number of complex systems. Video cameras have been chosen as sensors to

facilitate topics such as obstacle detection and avoidance, target tracking and path planning.

These technologies have stemmed from two communities in the literature: (i) image processing

and computer vision and (ii) performance and control of autonomous vehicles. This chapter

will focus on the research applied to autonomous systems and describe the current state of this

research, problems that have been addressed, some difficulties associated with vision, and some

areas in need of contribution. In particular, the review will cover the topics of most relevance to

this dissertation and highlight the efforts toward autonomous UAVs.

The block diagram shown in Figure 1-6 illustrates the components of interest described

in this dissertation for state estimation and tracking control with respect to a moving object

which involves object motion detection, object state estimation, and object motion modeling and

prediction. The literature review of these topics is given in this section.

2.1 Detection of Moving Objects

In order to track and estimate the motion of objects in images using a monocular camera

system, a number of steps are required. A common first step in many image processing

algorithms is feature point detection and tracking. This step determines features of interest,

such as corners, in the image that usually correspond to objects of interest, such as windows, in

the environment. The famous feature point tracker proposed by Lucas and Kanade [17, 18] has

served as a foundation for many algorithms. This technique relies on a smoothness constraint

imposed on the optic flow that maintains a constant intensity across small base-line motion of

the camera. Many techniques have built upon this algorithm to increase robustness to noise and

outliers. Once feature tracking has been obtained, the next process involves segmenting the

image for moving objects. The need for such a classification is due to the fact that standard image










processing algorithms such as SFM are not valid for moving objects viewed by a moving camera.

This limitation is caused by the epipolar constraint no longer maintaining a coplanar property

across image sequences; consequently, research has evolved for the detection of moving objects

in a scene viewed by a non-stationary camera.

The detection of moving objects in an image sequence is an important step in image analysis.

For cases involving a stationary camera, simple image differencing techniques are sufficient in

determining moving objects [19-21]. Techniques for more realistic applications involve Kalman

filtering [22] to account for lighting conditions and background modeling techniques using

statistical approaches, such as expectation maximization and mixture of Gaussian, to account

for other variations in real-time applications [23-28]. Although these techniques work well for

stationary cameras, they are insufficient for the case of moving cameras due to the motion of the

stationary background.

Motion detection using a moving camera, as in the case of a camera mounted to a vehicle,

becomes significantly more difficult because the motion viewed in the image could result from

a number of sources. For instance, a camera moving through a scene will view motions in the

image caused by camera induced motion, referred to as egomotion, changes in camera intrinsic

parameters such as zoom, and independently moving objects. There are two classes of problems

considered in literature for addressing this topic. The first considers the scenario where the 3D

camera motion is known a priori then compensation can be made to account for this motion to

determine stationary objects through an appropriate transformation [29, 30]. The second class of

problems does not require knowledge of the camera motion and consists of a two stage approach

to the motion detection. The first stage involves camera motion compensation while the last stage

employs image differencing on the registered image [31] to retrieve non-static objects.

The transformation used to account for camera motion is commonly solved by assuming the

majority of the image consists of a dominant background that is stationary in Euclidean space [32,

33]. This solution is obtained through a least-squares minimization process [32] or with the

use of morphological filters [34]. The transformations obtained from these techniques typically










provide poor estimation if the motions of moving objects are not accounted for in the registration

process or if the image contains stationary objects close to the camera that result in high parallax.

A technique presented by Irani et al. [35] proposed a unified method to detect moving

objects. This proposed method handles various levels of parallax in the image through a

segmentation process that is performed in layers. The first layer extracts the background objects

which are far away from the camera and have low parallax through a general transformation

involving camera rotation, translation, and zoom through image differencing. The next layer

contains the object with high parallax consisting of both objects close to the camera and objects

that are moving independently of the camera. The parallax is then computed for the remaining

pixels and compared to one pixel. This process separates the objects within the image based

on their computed parallax. The selection may involve choosing a point on a known stationary

object that contains high parallax so any object not obeying this parallax is classified as a moving

object in the scene.

Optic flow techniques are also used to estimate moving target locations once ego-motion

has been estimated. A method that computes the normal image flow has been shown to obtain

motion detection [36]. Coordinate transformations are sometimes used to facilitate this approach

to detecting motion. For instance, a method using complex log mapping was shown to transform

the radial motions into horizontal lines upon which vertical motions indicate independent

motion [37]. Alternatively, spherical mapping was used geometrically to classify moving objects

by segmenting motions which do not radiate from the focus of expansion (FOE) [29].

2.2 State Estimation Using Vision Information

The types of state estimation that can be obtained from an on-board vision system

are (i) localization which estimates the camera motion between image frames from known

stationary feature points, (ii) mapping which estimates the location of 3D feature points using

reconstruction and structure from motion, and (iii) target-motion which estimates 3D feature points that have independent motion. The work related to these topics is described in this

section.









2.2.1 Localization

Localizing the camera position and orientation relative to a stationary surrounding has been

addressed using a number of methods. An early method presented by Longuet-Higgins [38, 39]

used the coplanarity constraint also known as the epipolar constraint. Meanwhile, the subspace

constraint has also been employed to localize camera motion [40]. These techniques have

been applied to numerous types of autonomous systems. The mobile robotic community has

applied these techniques for the development of navigation in various scenarios [41-45]. The

applications have also extended into the research of UAVs for aircraft state estimation. Gurfil and Rotstein [46] were the first to extend this application in the framework of a nonlinear aircraft

model. This approach used optical flow in conjunction with the subspace constraint to estimate

the angular rates of the aircraft and was extended in [47]. Webb et al. [48, 49] employed the

epipolar constraint to the aircraft dynamics to obtain vehicle states. The foundation for both

of these approaches is a Kalman filter in conjunction with a geometric constraint to estimate

the camera motion. Some applications for aircraft state estimation have involved missions for

autonomous UAV such as autonomous night landing [50] and road following [51].

2.2.2 Mapping

Location estimation of stationary targets using algorithms such as structure from motion

has been extensively researched for non-static cameras with successful results. The foundation

of these techniques still rely on the geometric constraints imposed on stationary targets. The

decoupling of structure from the motion has been characterized in a number of papers by

Soatto et al. [52-58]. These approaches employ the subspace constraint to reconstruct feature

point position through an extended Kalman filter. Several survey papers have been published

describing the current algorithms while comparing the performance and robustness [59-62].

Robust and adaptive techniques have been proposed that use an adaptive extended Kalman filter

to account for model uncertainties [63]. In addition, Qian et al. [64] designed a recursive H∞ filter

to estimate structure from motion in the presence of measurement and model uncertainties while










Weng et al. [65] investigated the optimal approaches to target state estimation and described the

effects of linear solutions on various noise distributions.

2.2.3 Target-Motion Estimation

The topic of target-motion estimation is described as the process of estimating the states

of a moving object from image sequences obtained from a moving camera system. For a stereo

camera configuration, the solution can be obtained using the standard epipolar constraint as long

as there exists a sufficient amount of baseline and the correspondence problem can be solved

accurately. For a stationary camera, full state estimates were achieved of a rigid object using a

statistically combined feature point/optical flow method with an extended Kalman filter [66].

This method extended the previous work of Broida et al. [67] that only considered a feature

point approach. For the case of a moving monocular camera configuration, the problem becomes

extremely difficult due to the additional motion of the camera. One approach used in literature

relevant to monocular camera systems is bearings-only-tracking. In this approach, there are

several assumptions made: (i) the vehicle has knowledge of its position, (ii) an additional range

sensor, such as sonar or laser range finder, is used to provide a bearing measurement, and (iii) an

image measurement is taken for an estimate of lateral position. The initial research has involved

the estimation process and design with improvements to the performance [68-72]. This approach

was implemented by Flew [5] to estimate the motion of a target within a computed covariance.

Guanghui et al. [73] provided a method for estimating the motion of a point target from known

camera motion.

The robotic community has examined the target-motion estimation problem from a visual

servo control framework. Tracking relative motion of a moving target has been shown using

homography-based methods. These methods have been demonstrated to control an autonomous

ground vehicle to a desired pose defined by a goal image, where the camera was mounted on

the ground vehicle [74]. Chen et al. [75, 76] regulated a ground vehicle to a desired pose using a

stationary overhead camera. Mehta et al. [77] extended this concept for a moving camera, where

a camera was mounted to a UAV and a ground vehicle was controlled to a desired pose.










Target-motion estimation has been demonstrated in simulation for applications toward the

AAR problem. Kimmett [15] applied a vision navigation algorithm called VisNAV that was

developed by Junkins et al. [78] to estimate the current relative position and orientation of the

target drogue through a Gaussian least-squares differential correction algorithm. This algorithm

has also been applied to spacecraft formation flying [79].

2.3 Modeling Object Motion

The modeling of objects in motion from position and/or velocity measurements has been a

topic of interest for many applications that employ vision systems. This additional information

can provide systems with knowledge for tracking, collision avoidance, and docking. For instance,

intelligent robots using vision have been considered for industrial and medical applications that

require tracking and grasping of a moving target. Houshangi et al. [80] demonstrated the control required to grab an unknown moving object with a robotic manipulator using an auto-regressive

(AR) model. This model predicts a future position of the target based on velocity estimates

computed from image sequences.

For aerial vehicles, detecting other aircraft in the sky is critical for collision avoidance.

NASA has considered vision in this scenario to aid pilots in detecting aircraft on a crossing

trajectory. A technique combining image and navigation data established a prediction method

through a Kalman filter approach to estimate the position and velocity of the target aircraft ahead

in time [34]. Similarly, the AAR problem requires some form of model prediction when docking

to a moving drogue. Kimmett et al. [15] utilized a discrete linear model for the prediction of the

drogue. The predicted states used for control were computed using the discrete model, the current

states, and light turbulence as input to the drogue dynamics. Successful docking was simulated

for only light turbulence and with low frequency dynamics imposed on the drogue. NASA is

extremely interested in the AAR problem and currently has a project on this topic. Flight tests have

been conducted by NASA in an attempt to model the drogue dynamics [81]. In this study, the

aerodynamic effects from both the receiver aircraft and the tanker aircraft were examined on the










drogue, especially moments before the docking phase. The aerodynamic data acquired in these

experiments confirmed several dependencies on turbulence, flight conditions, and geometry.

2.4 Uncertainty in Vision Algorithms

The location of environmental features can be obtained using structure from motion. The

basic concepts are mature but their application to complex problems is relatively limited due to

complexities of real-time implementation. In particular, the noise issues involved with camera

calibration and feature tracking cause considerable difficulties in reconstructing 3-dimensional

states. A sampling-based representation of uncertainty was introduced to investigate robustness

of state estimation [82]. Robustness was also analyzed using a least-squares solution to obtain an

expression for the error in terms of the motion variables [83].

The uncertainty in vision-based feedback is often chosen as variations within feature

points; however, uncertainty in the camera model may actually be an underlying source of

those variations. Essentially, the uncertainty may be associated with the image processing

to extract feature points or with the camera parameters that generated the image. The proper

characterization of camera uncertainty may be critical to determine a realistic level of feature

point uncertainty.

The analysis of camera uncertainty is typically addressed in a probabilistic manner. A

linear technique was presented that propagates the covariance matrix of the camera parameters

through the motion equations to obtain the covariance of the desired camera states [84]. An

analysis was also conducted for the epipolar constraint based on the known covariance in the

camera parameters to compute the motion uncertainty [85]. A sequential Monte Carlo technique

demonstrated by Qian et al. [86] proposed a new structure from motion algorithm based on

random sampling to estimate the posterior distributions of motion and structure estimation. The

experimental results in this paper revealed significant challenges toward solving for the structure

in the presence of errors in calibration, feature point tracking, feature occlusion, and structure

ambiguities.










2.5 Control Using Visual Feedback in Dynamic Environments

The control applications considered for autonomous systems include collision avoidance,

target tracking, surveillance, and docking. These missions can be categorized into reactive

control, tracking control, and homing control schemes. The goals of each control scheme are

diverse and rely on vision information in different ways. In reactive control, the purpose is to

make fast decisions based on image measurements of the environment and respond quickly with an appropriate control action. Alternatively, tracking control attempts to maintain a target of interest within

the field of view for a desired amount of time. Strategies involving homing control use vision to

command the vehicle to a desired location either from image location or state estimation.

The ability to avoid obstacles and moving objects in an unfamiliar surrounding is a key

feature for autonomous navigation. A considerable amount of research in the vision community

has been established to facilitate a variety of autonomous vehicles for control purposes. For

instance, a number of detection and avoidance approaches have been applied to scenarios such

as pedestrian avoidance [87] in traffic situations, low altitude flight of a rotorcraft [88], avoiding

obstacles in the flight path of an aircraft [34], and navigating underwater vehicles [89]. Optical

flow techniques have also been utilized as a tool for avoidance by steering away from areas with

high optic flow which indicate regions of close obstacles [90].

Target tracking is another desired capability for autonomous systems. In particular, the

military is interested in this topic for surveillance missions both in the air and on the ground.

The common approaches to target tracking occur in both feature point and optical flow

techniques. The feature point method typically constrains the target motion in the image to

a desired location by controlling the camera motion [91, 92]. Meanwhile, Frezza et al. [93]

imposed a nonholonomic constraint on the camera motion and used a predictive output-feedback

control strategy based on the recursive tracking of the target with feasible system trajectories.

Alternatively, optical flow based techniques have been presented for robotic hand-in-eye

configuration to track targets of unknown 2D velocities where the depth information is










known [94]. Adaptive solutions presented in [91, 95-97] have shown control solutions for

target tracking with uncertain camera parameters while estimating depth information.

The homing control problem has numerous applications toward autonomous systems such

as autonomous aerial refueling, spacecraft docking, missile guidance, and object retrieval

using a robotic manipulator. Kimmett et al. [15, 98] developed a candidate autonomous

probe-and-drogue aerial refueling controller that uses a command generator tracker (CGT) to

track time-varying motions of a non-stationary drogue. The CGT is an explicit model following

control technique and was demonstrated in simulation for a moving drogue with known dynamics

subject to light turbulence. Tandale et al. [16] extended the work of Kimmett and Valasek by

developing a reference observer based tracking controller (ROTC) which does not require a

drogue model or presumed knowledge of the drogue position. This system consists of a reference

trajectory generation module that sends commands to an observer that estimates the desired

states and control for the plant. The input to this controller is the relative position between the

receiver aircraft and the drogue measured by the vision system. A similar vision approach to

aerial refueling is also presented in [99], where models of the tanker and drogue are used in

conjunction with an infrared camera. The drogue model used in this paper was taken from [100]

that uses a multi-segment approach to deriving the dynamics of the hose. Meanwhile, Houshangi

et al. [80] considered grasping a moving target by adaptively controlling a robot manipulator

using vision interaction. The adaptive control scheme was used to account for modeling errors in

the manipulator. In addition, this paper considered unknown target dynamics. An auto-regressive

model approach was used to predict the target's position based on past visual information

and an estimated target velocity. Experimental test cases are documented that show tracking

convergence.










CHAPTER 3
IMAGE PROCESSING AND COMPUTER VISION

Image processing and computer vision refers to the process of acquiring and interpreting

2-dimensional visual data to achieve awareness of the surrounding environment. This information

is used to infer spatial properties of the environment that are necessary to perform essential tasks

such as guidance and navigation through unfamiliar environments. An important breakthrough in

computer vision occurred when algorithms were able to detect, track, and estimate locations of

features in the environment.

This dissertation relies on feature points as the foundation for any vision-based feedback.

The term "features" allows one to establish a relationship between the scene geometry and

the measured image. These points generally correlate to items in the environment of special

significance. Some examples of items that often constitute feature points are corners, edges

and light sources. Such feature points can provide information about the overall object in the

sense that a set of corners can outline a building. Feature points do not necessarily provide

enough information to completely describe an environment but, in practice, they usually provide

sufficient information for target tracking and position estimation. To understand the algorithms

that use feature points, an establishment of the fundamental equations governed by the physics of

a camera will be described.

3.1 Camera Geometry

A camera effectively maps the 3-dimensional environment onto a 2-dimensional image

plane. This image plane is defined as the plane normal to the camera's central axis located a focal

length, f, away from the origin of the camera basis. The geometry provided by a pin-hole camera

lens is described in Figure 3-1. The vector, η, represents the vector between the camera and a feature point in the environment relative to a defined camera-fixed coordinate system, as defined by I. This vector and its components are represented in Equation 3-1.

$$\eta = \begin{bmatrix} \eta_x & \eta_y & \eta_z \end{bmatrix}^T \qquad (3\text{-}1)$$










The components of this vector are decomposed along the camera's coordinate frame

to compute the image projection. The projection onto the image plane is where this vector

penetrates the focal plane. Here the offset of the feature from the center is given by the image

plane coordinates, μ and ν, which will turn out to be a function of the components of η and the

focal length for a pin-hole camera model.


Figure 3-1. Mapping from environment to image plane


A major constraint placed on this sensor is the camera's field of view (FOV). Here the FOV

can be described as the 3D region for which feature points are visible to the camera; hence,

features outside the FOV will not appear in the image. The three physical parameters that define

this constraint are the field of depth, the horizontal angle and the vertical angle. A top view

illustration of the FOV can be seen in Figure 3-2, where the horizontal FOV is defined by the

half angle, γ_h, and the distance to the image plane is of length f. Likewise, a similar plot can be shown to illustrate the vertical angle, which can be defined as γ_v.


Figure 3-2. Image plane field of view (top view)










Expressions for these angles are shown in Equation 3-2, where r_{h,v} is defined as the largest spatial extension in the horizontal and vertical directions.

$$\gamma_{h,v} = \arctan\!\left(\frac{r_{h,v}}{f}\right) \qquad (3\text{-}2)$$


Feature points within the field of view must have a horizontal coordinate in the image plane which lies between minimum and maximum values. These values, given as $\underline{\nu}$ and $\bar{\nu}$ respectively, are determined by the tangential angle between the depth component, η_z, and the horizontal component, η_y, of the vector between the camera and the feature point. The range of image coordinates is given in Equation 3-3 for the horizontal component.

$$[\underline{\nu},\ \bar{\nu}] = [-f\tan(\gamma_h),\ f\tan(\gamma_h)] \qquad (3\text{-}3)$$


A similar relationship is computed for the vertical component of the field of view. The minimum coordinate, $\underline{\mu}$, and the maximum coordinate, $\bar{\mu}$, for the image plane are computed using the vertical component, η_x, of the vector connecting the camera and the feature point. This range is given in Equation 3-4 for the vertical angle.

$$[\underline{\mu},\ \bar{\mu}] = [-f\tan(\gamma_v),\ f\tan(\gamma_v)] \qquad (3\text{-}4)$$
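The field-of-view constraint in Equations 3-2 through 3-4 is straightforward to evaluate numerically. The following Python sketch is only an illustration (the function name, parameter values, and the assumption that the feature must lie at positive depth are mine, not the dissertation's): it projects a relative position vector η through the pin-hole model of Equations 3-7 and 3-8 and tests whether the result falls inside the half angles γ_h and γ_v.

import numpy as np

def in_field_of_view(eta, f, gamma_h, gamma_v):
    """Project a camera-frame position vector and test the FOV constraint."""
    eta_x, eta_y, eta_z = eta
    if eta_z <= 0:              # assume the feature must lie in front of the camera
        return False
    mu = f * eta_x / eta_z      # vertical image coordinate (Eq. 3-7)
    nu = f * eta_y / eta_z      # horizontal image coordinate (Eq. 3-8)
    return abs(nu) <= f * np.tan(gamma_h) and abs(mu) <= f * np.tan(gamma_v)

# Example: a feature two units ahead and slightly off-axis
print(in_field_of_view([0.1, 0.3, 2.0], f=1.0,
                       gamma_h=np.radians(30), gamma_v=np.radians(25)))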

3.2 Camera Model

3.2.1 Ideal Perspective

A geometric relationship between the camera properties and a feature point is required

to determine the image plane coordinates. This relationship is made by first separating the

components of rl that are parallel to the image plane into two directions. The image plane

coordinates are then computed from a tangent relationship of similar triangles between the

vertical and horizontal directions and the depth with a scale factor of focal length. This

relationship establishes the standard 2D image plane coordinates referred to as the pin-hole

camera model [101, 102]. Equations 3-5 and 3-6 represent a general pin-hole projection model









in terms of the relative position with a lens offset, c, relative to the camera frame.

$$\mu = f\,\frac{\eta_x - c_x}{\eta_z - c_z} \qquad (3\text{-}5)$$

$$\nu = f\,\frac{\eta_y - c_y}{\eta_z - c_z} \qquad (3\text{-}6)$$

If the origin of the camera frame is placed at the lens, (i.e., c = 0), Equations 3-5 and 3-6

reduce to the very common pin-hole camera model, represented by Equations 3-7 and 3-8.

$$\mu = f\,\frac{\eta_x}{\eta_z} \qquad (3\text{-}7)$$

$$\nu = f\,\frac{\eta_y}{\eta_z} \qquad (3\text{-}8)$$

This projection is commonly written as a map Π:

$$\Pi : \mathbb{R}^3 \to \mathbb{R}^2; \quad \eta \mapsto x \qquad (3\text{-}9)$$


The ideal perspective projection given in Equations 3-7 and 3-8 can be expressed in

homogeneous coordinates and is shown in Equation 3-10.

$$\eta_z \begin{bmatrix} \mu \\ \nu \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \eta_x \\ \eta_y \\ \eta_z \end{bmatrix} \qquad (3\text{-}10)$$


3.2.2 Intrinsic Parameters

The image plane that is acquired from physical cameras is more complicated than the ideal

projection given in Equation 3-10. First, the image plane is discretized into a set of pixels,

corresponding to the resolution of the camera. This discretization is based on scale factors that

relate real-world length measures into pixel units for both the horizontal and vertical directions.

These scaling terms are defined as s_μ and s_ν, which have units of pixels per length, where the length could be in feet or meters. In general, these terms are different, but when the pixels are square then s_μ = s_ν. Second, the origin of the image plane is translated from the center of the









image where the optical axis penetrates the image plane to the upper left hand corner of the

image. This translation is done using the terms o_μ and o_ν, given in units of pixels. The skew factor is another intrinsic parameter which accounts for pixels that are not rectangular and is defined as s_θ. The ideal perspective transformation now takes the general form given in Equation 3-11, where pixel mapping, origin translation, and skewness are all considered.

$$\eta_z \begin{bmatrix} \mu' \\ \nu' \\ 1 \end{bmatrix} = \begin{bmatrix} f s_\mu & f s_\theta & o_\mu \\ 0 & f s_\nu & o_\nu \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \eta_x \\ \eta_y \\ \eta_z \\ 1 \end{bmatrix} \qquad (3\text{-}11)$$

The perspective transformation obtained in Equation 3-11 is rewritten as Equation 3-12.

$$\eta_z\, x' = K\, \Pi_0\, \bar{\eta} \qquad (3\text{-}12)$$

The 3 x 3 matrix K is called the intrinsic parameter matrix or the calibration matrix, the 3 x 4 constant matrix Π_0 defines the perspective projection, x' = [μ', ν', 1]^T represents the homogeneous image coordinates that contain pixel mapping and skew, and η̄ = [η_x, η_y, η_z, 1]^T is the homogeneous feature point position.
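To make the mapping of Equations 3-10 through 3-12 concrete, the short Python sketch below forms K and Π_0 and projects a camera-frame feature point to homogeneous pixel coordinates. The numeric intrinsic values in the example are assumptions chosen only for illustration.

import numpy as np

def project_point(eta, f, s_mu, s_nu, s_theta, o_mu, o_nu):
    """Map a camera-frame feature point to homogeneous pixel coordinates (Eq. 3-12)."""
    K = np.array([[f * s_mu, f * s_theta, o_mu],
                  [0.0,      f * s_nu,    o_nu],
                  [0.0,      0.0,         1.0]])
    Pi0 = np.hstack([np.eye(3), np.zeros((3, 1))])   # 3 x 4 projection matrix
    eta_bar = np.append(eta, 1.0)                     # homogeneous feature point
    x_prime = K @ Pi0 @ eta_bar                       # eta_z * [mu', nu', 1]^T
    return x_prime / x_prime[2]                       # normalize by the depth eta_z

# Example with assumed intrinsics: unit focal length, square pixels, no skew
print(project_point(np.array([0.2, -0.1, 3.0]),
                    f=1.0, s_mu=800, s_nu=800, s_theta=0.0, o_mu=320, o_nu=240))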
3.2.3 Extrinsic Parameters

In order to achieve this transformation to image coordinates, both intrinsic and extrinsic

parameters must be known or estimated a priori through calibration. The extrinsic parameters

of the camera can be described as the geometric relationship between the camera frame and the

inertial frame. This relationship consists of the relative position, T, and orientation, R, of the

camera frame to an inertial frame. By defining the position vector of a feature point relative to

an inertial frame as ξ = [ξ_x, ξ_y, ξ_z]^T, transformations can map the expression found in Equation 3-12 to obtain a general equation that maps feature points in the inertial frame to coordinates in the image plane for a calibrated camera.

$$\eta_z \begin{bmatrix} \mu' \\ \nu' \\ 1 \end{bmatrix} = \begin{bmatrix} f s_\mu & f s_\theta & o_\mu \\ 0 & f s_\nu & o_\nu \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \xi \\ 1 \end{bmatrix} \qquad (3\text{-}13)$$









3.2.4 Radial Distortion

Other nonlinear camera effects that are not accounted for in the pin-hole model, such as

radial distortion, can be addressed through additional terms. A standard lens distortion model is

considered to account for such nonlinearities in the camera. The general distortion term, given in

Equation 3-14, requires an infinite series of terms to approximate the value.


$$d = d_1 r^2 + d_2 r^4 + d_3 r^6 + \cdots + \mathrm{H.O.T.} \qquad (3\text{-}14)$$


The distortion model, shown in Equations 3-15 and 3-16, maps an undistorted image,

(μ', ν'), which is not measurable on a physical camera, into a distorted image, (μ'_d, ν'_d), which

is observable [104]. This distortion model only considers the first term in the infinite series to

describe radial distortion and excludes tangential distortion. This approximation in distortion has

been used to generate an accurate description of real cameras without additional terms [105],


$$\mu'_d = \mu'(1 + d\,r^2) \qquad (3\text{-}15)$$

$$\nu'_d = \nu'(1 + d\,r^2) \qquad (3\text{-}16)$$

where $r^2 = (\mu' - c_1)^2 + (\nu' - c_2)^2$ and d is the radial distortion parameter of the camera. Assuming the origin of the camera frame is placed at the lens, then this term becomes $r^2 = \mu'^2 + \nu'^2$.
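The first-order distortion model of Equations 3-15 and 3-16 is simple to evaluate. The following Python sketch (an illustrative helper with an assumed distortion coefficient) applies the radial distortion to ideal image coordinates with the camera origin placed at the lens.

import numpy as np

def apply_radial_distortion(mu, nu, d):
    """First-order radial distortion (Eqs. 3-15 and 3-16) with the origin at the lens."""
    r2 = mu**2 + nu**2          # squared radial distance from the image center
    factor = 1.0 + d * r2       # only the first term of the series is retained
    return mu * factor, nu * factor

# Example: a feature near the image edge with a small negative (barrel) distortion
print(apply_radial_distortion(mu=0.4, nu=-0.3, d=-0.05))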

In addition, the radial distortion parameter, d, which is not described in Figure 3-1, attempts

to model the curvature of the lens during the image plane mapping. This distortion in the image

plane varies in a nonlinear fashion based on position. This effect demonstrates an axisymmetric

mapping that increases radially from the image center. An example can be seen in Figure 3-3B

and 3-3C which illustrates how radial distortion changes feature point locations of a fixed

pattern in the image by comparing it to a typical pin-hole model shown in Figure 3-3A. Notice

the distorted images seem to take on a convex or concave shape depending on the sign of the

distortion.














Figure 3-3. Radial distortion effects on a fixed feature point pattern: A) pin-hole model with d = 0, B) negative radial distortion, C) positive radial distortion

The camera is effectively modeled using the focal length and radial distortion along with the other terms described in Equation 3-11. As such, these parameters are termed the intrinsic parameters and are found through calibration. A feature point must be analyzed with respect to these intrinsic parameters to ensure proper state estimation. The radial distance from a feature point to the center of the image is dependent on both the relative positions of the camera and the feature along with the focal length. This radial distance is also related via a nonlinear relationship to the radial distortion. Clearly, any analysis of the feature points requires estimation of the camera parameters. Chapter 4 will discuss a technique that considers bounded uncertainty toward the intrinsic parameters and establishes a bounded condition on the feature point positions.


3.3 Feature Point Detection

The first step in the estimation problem requires the ability to detect interesting features in an image. Detection algorithms typically use a gradient-based criterion to locate regions of the image with large intensity changes, such as corners, edges,






and curves. These features usually correlate to objects of interest in the environment such as

buildings, vehicles, bridges, etc. Although this gradient-based criterion is good at detecting

these features, it also produces a large number of detections from highly textured surfaces that









are not as interesting. For example, grassy areas, trees, and shrubbery are problematic under

this criterion due to the noisy images they produce. These additional detections can be limited

through simple smoothing filters and thresholding techniques.

The gradient-based corner detection has been a common algorithm for selecting strong

features in an image. These methods require the computation of the image gradient, which can be

done by convolving a 2-dimensional derivative filter with the image. This derivative is realized by

approximating the ideal derivative with sampled Gaussian filters defined as g[μ] and g[ν]. Therefore, an approximation of the image gradients is expressed in Equations 3-17 and 3-18 [102, 103]. The image coordinates (μ, ν) in these expressions are computed using either Equation 3-10 or

Equation 3-11 depending on the camera model.


$$I_\mu[\mu,\nu] = I[\mu,\nu] * g'[\mu] * g[\nu] \qquad (3\text{-}17)$$

$$I_\nu[\mu,\nu] = I[\mu,\nu] * g[\mu] * g'[\nu] \qquad (3\text{-}18)$$

Once the image gradients are computed, then the algorithm proceeds to compute the

summation of the outer product of the gradients within a user-specified window, W, which is given in Equation 3-19 [102, 103]. The pixel locations within the search window are denoted $\tilde{x}$.

$$G(x) = \sum_{\tilde{x} \in W(x)} \nabla I(\tilde{x})\, \nabla I^T(\tilde{x}) \qquad (3\text{-}19)$$

This computation is performed to check if the gradient is above some specified threshold,

z, that meets a feature point requirement. This criterion is met if the smallest singular value of G is greater than the threshold, z, as shown in Equation 3-20. If Equation 3-20 is satisfied then this is a valid feature point based on the user's criterion [102, 103]. This selection is

a function of both the window size, W, and the threshold, z.


$$\sigma_{\min}(G) > z \qquad (3\text{-}20)$$
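As a rough illustration of Equations 3-17 through 3-20, the Python sketch below tests a single pixel with the minimum-singular-value criterion. The finite-difference gradients, window size, and threshold are assumptions standing in for the sampled Gaussian derivative filters and the user-specified values described above.

import numpy as np

def is_feature_point(image, row, col, half_window=3, threshold=1e4):
    """Minimum-singular-value feature test (Eqs. 3-17 to 3-20) at one pixel."""
    # Finite-difference gradients stand in for the Gaussian derivative filters;
    # axis 0 is treated as the nu direction and axis 1 as the mu direction here.
    I_nu, I_mu = np.gradient(image.astype(float))
    win = np.s_[row - half_window:row + half_window + 1,
                col - half_window:col + half_window + 1]
    g_mu, g_nu = I_mu[win].ravel(), I_nu[win].ravel()
    # Sum of gradient outer products over the window W (Eq. 3-19)
    G = np.array([[g_mu @ g_mu, g_mu @ g_nu],
                  [g_mu @ g_nu, g_nu @ g_nu]])
    # Accept the pixel if the smallest singular value exceeds the threshold (Eq. 3-20)
    return np.linalg.svd(G, compute_uv=False).min() > threshold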









A commonly used algorithm that employed these equations with slight variations is the

Harris corner detector [106]. This method can be extended to edge detection by considering

the structure of the singular values of G. An example of this algorithm is the Canny edge

detector [107].

3.4 Feature Point Tracking

Feature tracking or feature correspondence is the next step in the general state estimation

problem. The correspondence problem is described as the association of a feature point between

two or more images. In other words, the solution to this problem determines that a feature point

in two or more images corresponds to the same physical point in 3D space. The most common

approach to discerning how points are moving between images is the use of intensity or color

matching. This brightness matching is typically performed over a small window, W(x), centered

around the point of interest, as opposed to only matching a single brightness value, which could

have numerous false solutions. The vector of brightness values over a small window set, $l(x)$,

contained in the image is shown in Equation 3-21.


$$l(x) = \{\, I(\tilde{x}) \mid \tilde{x} \in W(x) \,\} \qquad (3\text{-}21)$$


This brightness vector can be compared across images, $I_1$ and $I_2$, and optimized to find the minimum error. If a feature point of interest is located at $x_1 = [\mu_1, \nu_1]^T$ in image 1, $I_1$, then a simple translational model of the same scene can be used as an image matching constraint. This relationship is shown in Equation 3-22,


$$I_1(x_1) = I_2(h(x_1)) + n(h(x_1)) \qquad (3\text{-}22)$$


where h(x) defines a general motion transformation to the proceeding image and n(xl) is additive

noise caused by ambiguities such as variations in lighting, reflections, and view point.

Therefore, the correspondence solution is cast as a minimization problem that computes the

best intensity match over a small window by minimizing the intensity error. An equation for the

translation estimate can then be found from this minimization process through Equation 3-23,









subject to Equation 3-22. One important limitation of this criterion occurs when the window in

both images contains relatively constant intensity values. This results in the aperture problem

where a number of solutions for h are obtained. Therefore, during the feature selection process

it's beneficial to choose features that contain unique information in this window.


$$\hat{h} = \arg\min_{h} \sum_{\tilde{x} \in W(x_1)} \left\| I_1(\tilde{x}) - I_2(h(\tilde{x})) \right\|^2 \qquad (3\text{-}23)$$


There are two common techniques to solve Equation 3-23 for small baseline tracking: (1)

using the brightness consistency constraint and (2) applying the sum of squared differences

(SSD) approach. Each of these techniques employs a translational model to describe the image

motion. Therefore, if one assumes a simple translational model then the general transformation is

shown in Equation 3-24.

$$h(x) = x + \Delta x \qquad (3\text{-}24)$$

The brightness consistency constraint is derived by substituting Equation 3-24 into

Equation 3-22 while initially neglecting the noise term. Applying the Taylor series expansion

to this expression about the point of interest, x, while retaining only the first term in the series

results in Equation 3-25.
$$\frac{\partial I}{\partial \mu}\frac{d\mu}{dt} + \frac{\partial I}{\partial \nu}\frac{d\nu}{dt} + \frac{\partial I}{\partial t} = 0 \qquad (3\text{-}25)$$

This equation relates the spatial-temporal gradients to the pixel motion assuming the brightness remains constant across images. Rewriting Equation 3-25 in matrix form results in Equation 3-26,

$$\nabla I^T u + I_t = 0 \qquad (3\text{-}26)$$

where $u = \left[ \frac{d\mu}{dt},\ \frac{d\nu}{dt} \right]^T$.

Equation 3-26 constitutes 1 equation with 2 unknown velocities; therefore, another

constraint is needed to solve this problem. A unique solution for the velocities can be determined

by enforcing an additional constraint on the problem, which entails restraining regions to a local

window that moves at constant velocity. Upon these assumptions, one can minimize the error









function given in Equation 3-27.


$$E_1(u) = \sum_{\tilde{x} \in W(x)} \left[ \nabla I^T(\tilde{x}, t)\, u(x) + I_t(\tilde{x}, t) \right]^2 \qquad (3\text{-}27)$$

The minimum of this function is obtained by setting VE1 = 0 to obtain Equation 3-28,


$$\begin{bmatrix} \sum I_\mu^2 & \sum I_\mu I_\nu \\ \sum I_\mu I_\nu & \sum I_\nu^2 \end{bmatrix} u + \begin{bmatrix} \sum I_\mu I_t \\ \sum I_\nu I_t \end{bmatrix} = 0 \qquad (3\text{-}28)$$

or, rewritten in matrix form, results in the following

$$G u + b = 0 \qquad (3\text{-}29)$$


where G(x) was derived in Equation 3-19 and b contains the summed products of the spatial and temporal gradients over the window.

The final solution for the pixel velocity is found through a least-squares estimate given in

Equation 3-30. These image velocities are also referred to as the optic flow. Once the optic flow

is computed for a feature point then the image displacement for feature tracking is trivial to find.


$$u = -G^{-1} b \qquad (3\text{-}30)$$
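A compact Python version of the flow estimate in Equations 3-26 through 3-30 is sketched below. The finite-difference gradients, the simple frame difference used for the temporal gradient, and the window size are illustrative assumptions rather than the dissertation's implementation.

import numpy as np

def optic_flow_at(I1, I2, row, col, half_window=7):
    """Estimate the pixel velocity u at one point from two frames (Eqs. 3-26 to 3-30)."""
    I1, I2 = I1.astype(float), I2.astype(float)
    # Spatial gradients of the first frame and a simple temporal gradient
    I_nu, I_mu = np.gradient(I1)
    I_t = I2 - I1
    win = np.s_[row - half_window:row + half_window + 1,
                col - half_window:col + half_window + 1]
    g_mu, g_nu, g_t = I_mu[win].ravel(), I_nu[win].ravel(), I_t[win].ravel()
    # Normal equations G u + b = 0 (Eqs. 3-28 and 3-29)
    G = np.array([[g_mu @ g_mu, g_mu @ g_nu],
                  [g_mu @ g_nu, g_nu @ g_nu]])
    b = np.array([g_mu @ g_t, g_nu @ g_t])
    # Least-squares flow estimate (Eq. 3-30)
    return -np.linalg.lstsq(G, b, rcond=None)[0]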


On the other hand, the method using SSD, shown in Equation 3-24, attempts to estimate

the displacement Δx while not requiring the computation of image gradients. This approach also employs the translational model over a windowed region. The method considers the possible range that the window could move, dμ and dν, in the time, dt. This consistency constraint then leads to a

problem of minimizing the error over the possible windows within the described range. This error

function is described mathematically in Equation 3-31.


$$E_2(d\mu, d\nu) = \sum_{W(\mu,\nu)} \left[ I(\mu + d\mu,\ \nu + d\nu,\ t + dt) - I(\mu, \nu, t) \right]^2 \qquad (3\text{-}31)$$
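An exhaustive-search version of Equation 3-31 can be written directly. The Python sketch below is illustrative only, with an assumed search range and window size, and it assumes the search region stays inside the image.

import numpy as np

def ssd_track(I1, I2, row, col, half_window=7, search=10):
    """Brute-force SSD search (Eq. 3-31) for the displacement of one feature point."""
    I1, I2 = I1.astype(float), I2.astype(float)
    template = I1[row - half_window:row + half_window + 1,
                  col - half_window:col + half_window + 1]
    best, best_err = (0, 0), np.inf
    for d_row in range(-search, search + 1):
        for d_col in range(-search, search + 1):
            r, c = row + d_row, col + d_col
            candidate = I2[r - half_window:r + half_window + 1,
                           c - half_window:c + half_window + 1]
            err = np.sum((candidate - template) ** 2)   # SSD error of Eq. 3-31
            if err < best_err:
                best, best_err = (d_row, d_col), err
    return best   # displacement of the window center between the two images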

The solutions obtained are the displacement components, dμ and dν, of the specified window that correlate to the translation of the center pixel. This technique is the foundation for the Lucas-Kanade tracker [17]. For large baseline tracking, simple translational models










begin to falter due to the drastic changes in lighting conditions and the large view point change.

Therefore, a more general motion transformation is used such as an affine transformation.

Normalized cross-correlation techniques are also used for large baseline configurations to handle

considerable changes in lighting conditions.

An extremely important concern is the accuracy of these algorithms and how variations in

feature point tracking can affect the final state estimation. These concerns will be addressed in

detail in Chapter 4.

3.5 Optic Flow

The next metric of interest in the image plane is an expression for the velocity of a feature

point. This expression is found simply by taking the time derivative of the feature point position

defined in Equations 3-7 and 3-8. The velocity expressions, shown in Equations 3-32 and 3-33,

describe the movement of feature points in the image plane and are commonly referred to in the literature as the optic flow.

$$\dot{\mu} = f\,\frac{\dot{\eta}_x \eta_z - \eta_x \dot{\eta}_z}{\eta_z^2} \qquad (3\text{-}32)$$

$$\dot{\nu} = f\,\frac{\dot{\eta}_y \eta_z - \eta_y \dot{\eta}_z}{\eta_z^2} \qquad (3\text{-}33)$$

Likewise, the feature point velocity with radial distortion can be computed by differentiating

Equations 3-15 and 3-16 while assuming c = 0, as follows

$$\dot{\mu}_d = \dot{\mu}(1 + d\,r^2) + 2\mu\, d\, r\, \dot{r} \qquad (3\text{-}34)$$

$$\dot{\nu}_d = \dot{\nu}(1 + d\,r^2) + 2\nu\, d\, r\, \dot{r} \qquad (3\text{-}35)$$

where

$$\dot{r} = \frac{\mu\dot{\mu} + \nu\dot{\nu}}{r} \qquad (3\text{-}36)$$


3.6 Two-View Image Geometry

The two-view image geometry relates the measured image coordinates to the 3D scene.

The camera configuration could be either two images taken over time of the same scene, as










in the monocular case, or two cameras simultaneously capturing two images of the same

scene, as in the stereo vision case. This section will describe the geometry and establish the

mathematical equations for estimating (i) the camera's pose between frames which consists of

relative translation and rotation, and (ii) the position of feature points in 3D space. First, the

geometry of the two-view configuration will generate the epipolar constraint to allow for the

computation of the camera pose from tracked feature points. Second, the 3D scene reconstruction

will be formulated based on the two-view geometry. Lastly, the limitations on feature points will

be discussed based on the type of camera configuration exploited to obtain a feasible solution.

3.6.1 Epipolar Constraint

The implicit relationship between camera and environment throughout this dissertation is

the epipolar constraint or, alternatively, the essential or coplanarity constraint. This constraint

requires position vectors, which describe a feature point relative to the camera at two instants

in time, to be coplanar with the translation vector and the origins of the camera frames. This

geometric relationship is illustrated in Figure 3-4 where η_1 and η_2 denote the position vectors of the feature point, P, in the camera reference frames. Also, the values of x_1 and x_2 represent

the position vectors projected onto the focal plane while T indicates the translation vector of the

origin of the camera frames.

A geometric relationship between the vectors in Figure 3-4 is expressed by introducing R

as a rotation matrix. This rotation matrix includes the roll, pitch and yaw angles that transform

the camera frames between measurements. The resulting epipolar constraint is expressed in

Equation 3-37.

$$\eta_2 \cdot (T \times R\,\eta_1) = 0 \qquad (3\text{-}37)$$

The relationship can also be written in terms of coordinates within the image plane. The

relationship, given in Equation 3-38, assumes a pin-hole camera which is colinear with its

projection into the focal plane.

$$x_2 \cdot (T \times R\,x_1) = 0 \qquad (3\text{-}38)$$
































Figure 3-4. Geometry of the epipolar constraint

The expressions in Equation 3-37 and Equation 3-38 reflect that the scalar triple product

of three coplanar vectors is zero, which forms a plane in space. These relationships can be

expanded using linear algebra [102, 103] to generate a standard form of the epipolar geometry

as in Equation 3-39. This new form indicates a relationship between the rotation and translation,

written as the essential matrix denoted as Q, to the intrinsic parameters of the camera and

associated feature points. In this case, the equation is derived for a single feature point that is

correlated between the frames,


$$\begin{bmatrix} \mu_2 & \nu_2 & f \end{bmatrix} Q \begin{bmatrix} \mu_1 \\ \nu_1 \\ f \end{bmatrix} = 0 \qquad (3\text{-}39)$$


where $Q = [T]_\times R$ and $[T]_\times$ is defined as the skew-symmetric matrix form of the translation T.
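The epipolar constraint of Equations 3-37 through 3-39 can be verified numerically for any candidate motion. The Python sketch below is only an illustrative check: it forms Q = [T]_x R and evaluates the residual for a pair of corresponding homogeneous image points; the sample motion and coordinates are assumed values.

import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(x1, x2, R, T):
    """Evaluate x2^T Q x1 with Q = [T]_x R (zero for an ideal correspondence)."""
    return float(x2 @ (skew(T) @ R) @ x1)

# Example: pure translation along the camera x axis, feature at depth 5
x1 = np.array([0.1, 0.2, 1.0])
x2 = np.array([0.3, 0.2, 1.0])
print(epipolar_residual(x1, x2, np.eye(3), np.array([1.0, 0.0, 0.0])))   # ~0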

The geometric relationship formed by this triangular plane is also seen in the epipolar lines

of each image. The 3D plane formed through this triangle constrains a feature point in one image

to lie on the epipolar line in the other image. These constraints can be mathematically expressed









as in Equation 3-40 with $\ell_1$ and $\ell_2$ representing the epipolar lines in image 1 and image 2, respectively, each proportional to a product involving the essential matrix.

$$\ell_2 \sim Q\, x_1, \qquad \ell_1 \sim Q^T x_2 \qquad (3\text{-}40)$$


To extend this analysis to the general case of uncalibrated cameras, Equations 3-38

and 3-40 are rewritten in terms of the fundamental matrix, F, and are shown in Equations 3-41

and 3-42,

$$x_2'^{\,T} K^{-T} [T]_\times R\, K^{-1} x_1' = 0 \qquad (3\text{-}41)$$

where $F = K^{-T} [T]_\times R\, K^{-1}$ and $x_i = K^{-1} x_i'$, and the calibration matrix, K, was defined in Equation 3-12.

$$\ell_2 \sim F x_1', \qquad \ell_1 \sim F^T x_2' \qquad (3\text{-}42)$$

In the uncalibrated case, the ability to decompose F into R and T is infeasible. The decomposition of F admits an infinite number of matrix pairs that satisfy the constraint, which limits its practicality.

3.6.2 Eight-Point Algorithm

The eight-point algorithm is a linear solution to Equation 3-39 which solves for the entries

of the essential matrix. This algorithm was developed by Longuet-Higgins [39] and is described

in this section.

The expression in Equation 3-39 can actually be expressed as in Equation 3-43 using

additional arguments from linear algebra [102, 103]. The vector, $q \in \mathbb{R}^9$, contains the stacked

columns of the essential matrix Q.


$$a^T q = \begin{bmatrix} \mu_1\mu_2 & \nu_1\mu_2 & \mu_2 & \mu_1\nu_2 & \nu_1\nu_2 & \nu_2 & \mu_1 & \nu_1 & 1 \end{bmatrix} q = 0 \qquad (3\text{-}43)$$


Finally, a set of constraints must be formulated that introduce an expression, given in

Equation 3-43, for each feature point, where the entries of the essential matrix are stacked in the

vector q. A set of row vectors are stacked to form a matrix, C, of n matched feature points and










is related to q as in Equation 3-44. The matrix C, shown in Equation 3-45, is an n × 9 matrix of stacked feature point correspondences matched between two views.
stacked feature points matched between two views.


Cq = 0 (3-44)


$$C = \begin{bmatrix}
\mu_{1,1}\mu_{2,1} & \nu_{1,1}\mu_{2,1} & \mu_{2,1} & \mu_{1,1}\nu_{2,1} & \nu_{1,1}\nu_{2,1} & \nu_{2,1} & \mu_{1,1} & \nu_{1,1} & 1 \\
\mu_{1,2}\mu_{2,2} & \nu_{1,2}\mu_{2,2} & \mu_{2,2} & \mu_{1,2}\nu_{2,2} & \nu_{1,2}\nu_{2,2} & \nu_{2,2} & \mu_{1,2} & \nu_{1,2} & 1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\mu_{1,n}\mu_{2,n} & \nu_{1,n}\mu_{2,n} & \mu_{2,n} & \mu_{1,n}\nu_{2,n} & \nu_{1,n}\nu_{2,n} & \nu_{2,n} & \mu_{1,n} & \nu_{1,n} & 1
\end{bmatrix} \qquad (3\text{-}45)$$
A unique solution for Equation 3-44 exists using a linear least-squares approach only if

the number of matched features in each frame is at least 8 such that rank(C) = 8. Additionally,

more feature points will obviously generate more constraints and, presumably, increase accuracy

of the solution due to the residuals of the least-squares. In practice, the least-squares solution to

Equation 3-44 will not exist due to noise, therefore, a minimization is used to find an estimate of

the essential matrix, as shown in Equation 3-46.


$$\min_q \|C q\|, \qquad \text{subject to } \|q\| = 1 \qquad (3\text{-}46)$$


Once an estimate of the essential matrix is found, the next step is to decompose this matrix

into its translational and rotational components. This decomposition is obtained through singular

value decomposition (SVD) of the essential matrix, and is shown in Equation 3-47.


$$Q = U \Sigma V^T \qquad (3\text{-}47)$$

where $\Sigma = \mathrm{diag}\{\sigma_1, \sigma_2, \sigma_3\}$ contains the singular values. In general, this solution is corrupted from noise and needs to be projected onto the essential space. This projection is performed by normalizing the singular values to $\Sigma = \mathrm{diag}\{1, 1, 0\}$ and adjusting the corresponding U

and V. The motion decomposition can now be obtained through Equation 3-48, where the

translation T is found up to a scaling factor. These four solutions, which consist of all possible

combinations of R and $[T]_\times$, are checked to verify which combination generates a positive depth










during reconstruction.


R = UR ( + VT) T = URz( +\ EUT (3-48)


0 +1 0

whr Ry"2+0- 41 0 0

0 01
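The linear core of the eight-point algorithm reduces to two singular value decompositions. The Python sketch below is a bare-bones, calibrated-camera illustration of Equations 3-43 through 3-47; it omits coordinate normalization and the four-fold disambiguation of R and T in Equation 3-48, and the exact column ordering of the constraint matrix is an implementation choice.

import numpy as np

def estimate_essential_matrix(x1, x2):
    """Eight-point estimate of Q from n >= 8 correspondences.

    x1, x2 : n x 3 arrays of homogeneous image coordinates for views 1 and 2.
    """
    # Each row enforces x2^T Q x1 = 0 for one correspondence (Eqs. 3-43 and 3-45)
    C = np.column_stack([x2[:, i] * x1[:, j] for i in range(3) for j in range(3)])
    # Minimize ||C q|| subject to ||q|| = 1 (Eq. 3-46): smallest right singular vector
    _, _, Vt = np.linalg.svd(C)
    Q = Vt[-1].reshape(3, 3)
    # Project onto the essential space by forcing the singular values to {1, 1, 0}
    U, _, Vt = np.linalg.svd(Q)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt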

The eight-point algorithm fails with a non-unique solution when all points in 3D space lie on

the same 2D plane [102, 103]. When this situation occurs one must use the planar homography

approach, which is the topic of the next section.

3.6.3 Planar Homography

The homography approach can be used to solve the degenerate cases of the eight-point

algorithm. For instance, a very common case where the feature points of interest all lie on the

same 2D plane in 3D space causes the algorithm to produce nonunique solutions. This case, in

particular, is a crucial part of enabling autonomous systems to navigate in urban environments.

Manmade structures such as buildings, roads, bridges, etc. all contain planar characteristics

associated with their geometry. This characteristic also applies especially to aerial imagery at

high altitudes where objects on the ground are essentially viewed as planar objects. Therefore,

this section describes the planar case for estimating motion from two images of the same scene as

shown in Ma et al. [102, 103]. Figure 3-5 depicts the geometry involved with planar homography.

The fundamental relationship expressing a point feature in 3D space across a set of images is

given through a rigid body transformation shown in Equation 3-49.


$$\eta_2 = R\,\eta_1 + T \qquad (3\text{-}49)$$


Recall that $\eta_2$ and $\eta_1$ are relative position vectors describing the same feature point in space with respect to camera 2 and camera 1, respectively, and R and T are the relative rotation and

translation motion between frames.


































Figure 3-5. Geometry of the planar homography

If an assumption is made that the feature points are contained on the same plane, then a new

constraint involving the normal vector can be established. Denote $N = [n_1, n_2, n_3]^T$ as the normal vector of the plane containing the feature points relative to camera frame 1. Then the projection onto the unit normal is shown in Equation 3-50, where D is the projected distance to the plane.

$$N^T \eta_1 = n_1\eta_{1,x} + n_2\eta_{1,y} + n_3\eta_{1,z} = D \qquad (3\text{-}50)$$


Substituting Equation 3-50 into Equation 3-49 results in Equation 3-51,

$$\eta_2 = \left( R + \frac{1}{D}\,T N^T \right) \eta_1 \qquad (3\text{-}51)$$

where the planar homography matrix is defined to be the following

$$H = R + \frac{1}{D}\,T N^T \qquad (3\text{-}52)$$










The relationship shown in Equation 3-51 can be extended to image coordinates through

Equation 3-53.

$$x_2 = H x_1 \qquad (3\text{-}53)$$

A similar approach as used in the eight-point algorithm can be used to solve for the entries of H. Multiplying both sides of Equation 3-53 with the skew-symmetric matrix $[x_2]_\times$ results in the planar homography constraint shown in Equation 3-54.

$$[x_2]_\times H x_1 = 0 \qquad (3\text{-}54)$$


Since H is linear, linear algebra techniques can be used to stack the entries of H as a column

vector h and, therefore, Equation 3-54 can be rewritten to Equation 3-55,


$$a^T h = 0 \qquad (3\text{-}55)$$

where a is the Kronecker product of $x_1$ and $[x_2]_\times$. Each feature point correspondence between frames provides two constraints in determining the entries of H. Therefore, to solve for a unique solution of H, Equation 3-55 requires at least four feature point correspondences. These additional constraints can be stacked to form a new constraint matrix $\chi$, as shown in Equation 3-56.

$$\chi = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_n \end{bmatrix}^T \qquad (3\text{-}56)$$

Rewriting Equation 3-55 in terms of the new constraint matrix results in Equation 3-57.


$$\chi\, h = 0 \qquad (3\text{-}57)$$

The standard least-squares estimation can be used to recover H up to a scale factor. Improvements can be made to the solution when more than four feature point correspondences are used in the least-squares solution. The scale factor is then determined as the second largest singular value of the least-squares solution for H [102, 103], shown in Equation 3-58 for the unknown scalar λ.

$$|\lambda| = \sigma_2(H) \qquad (3\text{-}58)$$








The homography solution is then decomposed into its rotational and translational

components through a similar technique used in the eight-point algorithm. This approach

uses SVD to rewrite the homography matrix, as shown in Equation 3-59.


$$H^T H = V \Sigma V^T \qquad (3\text{-}59)$$

The matrix $\Sigma = \mathrm{diag}\{\sigma_1^2, \sigma_2^2, \sigma_3^2\}$ and the columns of V are the orthonormal eigenvectors corresponding to the singular values of $\Sigma$. The columns of the matrix V can be written as $V = [v_1, v_2, v_3]$. Two other unit-length vectors, shown in Equation 3-60, are defined that are preserved in the homography mapping and will facilitate the decomposition process.

$$u_1 = \frac{v_1 + v_3}{\sqrt{2}}, \qquad u_2 = \frac{v_1 - v_3}{\sqrt{2}} \qquad (3\text{-}60)$$


Furthermore, defining the matrices shown in Equation 3-61 will establish a homography

solution expressed in terms of these known variables.


$$\begin{aligned}
U_1 &= [\,v_2,\ u_1,\ \hat{v}_2 u_1\,], & W_1 &= [\,H v_2,\ H u_1,\ \widehat{H v_2}\, H u_1\,] \\
U_2 &= [\,v_2,\ u_2,\ \hat{v}_2 u_2\,], & W_2 &= [\,H v_2,\ H u_2,\ \widehat{H v_2}\, H u_2\,]
\end{aligned} \qquad (3\text{-}61)$$

where $\hat{(\cdot)}$ denotes the skew-symmetric matrix form of a vector.

The four solutions are shown in Table 3-1 in terms of the matrices given in Equations 3-61,

3-60 and the columns of the matrix V. Notice the translation component is estimated only up to a scale factor. This is the same scale ambiguity associated with the eight-point algorithm, which is

caused by the loss of depth during the image plane transformation.

Table 3-1. Solutions for homography decomposition

Solution 1: R_1 = W_1 U_1^T,   N_1 = \hat{v}_2 u_1,   T_1 = (H - R_1) N_1
Solution 2: R_2 = W_2 U_2^T,   N_2 = \hat{v}_2 u_2,   T_2 = (H - R_2) N_2
Solution 3: R_3 = R_1,          N_3 = -N_1,            T_3 = -T_1
Solution 4: R_4 = R_2,          N_4 = -N_2,            T_4 = -T_2

A unique solution for the homography is then found by imposing the positive depth

constraint, which is associated with the physically possible solution. This imposition involves









checking the condition that $N^T e_3 = n_3 > 0$, where $e_3$ is in the direction of the optical axis normal

to the image plane.

3.6.4 Structure from Motion

Structure from motion (SFM) is a technique to estimate the location of environmental

features in 3D space. This technique utilizes the epipolar geometry in Figure 3-4 and assumes

that the rotation, R, and translation, T, between frames is known. Given that, the coordinates of

rll and r12 can be computed. Recall, the fundamental relationship repeated here in Equation 3-62.


$$\eta_2 = R\,\eta_1 + T \qquad (3\text{-}62)$$


The location of environmental features is obtained by first noting the relationships

between feature points and image coordinates given in Equation 3-7 and Equation 3-8. These

relationships allow the components η_x and η_y to be written in terms of μ and ν, which are known from the images. Thus, the only unknowns are the depth components, $\eta_{1,z}$ and $\eta_{2,z}$, for each image. The resulting system can be cast as Equation 3-63 and solved using a least-squares approach.

$$\begin{bmatrix} \frac{\mu_2}{f} & -\left( R_{11}\frac{\mu_1}{f} + R_{12}\frac{\nu_1}{f} + R_{13} \right) \\ \frac{\nu_2}{f} & -\left( R_{21}\frac{\mu_1}{f} + R_{22}\frac{\nu_1}{f} + R_{23} \right) \\ 1 & -\left( R_{31}\frac{\mu_1}{f} + R_{32}\frac{\nu_1}{f} + R_{33} \right) \end{bmatrix} \begin{bmatrix} \eta_{2,z} \\ \eta_{1,z} \end{bmatrix} = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix} \qquad (3\text{-}63)$$

This equation can be written in a compact form as shown in Equation 3-64 using $z = [\eta_{2,z},\ \eta_{1,z}]^T$ as the desired vector of depths.

Az = T (3-64)


The least-squares solution to Equation 3-64 obtains the depth estimates of a feature point

relative to both camera frames. This information along with the image plane coordinates can be

used to compute (η_{1,x}, η_{1,y}) and (η_{2,x}, η_{2,y}) by substituting these values back into Equations 3-7 and 3-8. The resulting components of η_1 can then be converted to the coordinate frame of the second image, where they should exactly match η_2. These values will never match perfectly due to










noise and unknown camera parameters so, in practice, an averaging process is often used to

estimate the feature coordinates.
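The least-squares depth recovery of Equations 3-63 and 3-64 can be illustrated with a short sketch. The code below is Python/NumPy (a stand-in for the MATLAB implementation; the function name, argument order, and default focal length are assumptions) and triangulates one feature point given the relative rotation and translation between the two frames.

    import numpy as np

    def triangulate_depths(mu1, nu1, mu2, nu2, R, T, f=1.0):
        """Solve A z = T (Equation 3-64) for the two depths of one feature point."""
        # direction of the feature point in each camera frame at unit depth
        x1 = np.array([mu1 / f, nu1 / f, 1.0])
        x2 = np.array([mu2 / f, nu2 / f, 1.0])
        # eta_2 = R eta_1 + T with eta_i = eta_{i,z} * x_i gives [x2, -R x1] z = T
        A = np.column_stack((x2, -R @ x1))
        z, residual, rank, sv = np.linalg.lstsq(A, T, rcond=None)
        eta2_z, eta1_z = z
        # back-substitute the depths to recover full coordinates (Equations 3-7 and 3-8)
        eta1 = eta1_z * x1
        eta2 = eta2_z * x2
        return eta1, eta2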

There are two fundamental issues regarding the obtained solution. First, by relying on

the solution provided by the eight-point algorithm, the translation is only determined up

to a scaling factor. The SFM solution will therefore be corrupted from this scale factor unless

an alternative method is used to obtain this scaling. Second, the uncertainty due to intrinsic

parameters, feature detection, feature tracking, along with the uncertainty in the solution of the

eight-point algorithm contributes to large variations in the SFM solution. The solution obtained

from Equation 3-64 is very sensitive to these uncertainties. Chapter 4 will discuss a method to

obtain uncertainty bounds on the SFM estimates based on the sources described.









CHAPTER 4
EFFECTS ON STATE ESTIMATION FROM VISION UNCERTAINTY

The image processing techniques commonly used today for aiding navigation require the

detection of feature points in the image to describe the environment. The concept of feature point

detection and tracking fundamentally relies on the accuracy of the camera intrinsic parameters,

as seen in Chapter 3. Once feature points are located and tracked across images, a number of state

estimation algorithms, such as optic flow, epipolar constraint, and structure from motion, can be

employed. Although camera calibration techniques have proven to provide accurate estimates

of the intrinsic parameters, the process can be cumbersome and time consuming when using a

large quantity of low quality cameras. This chapter describes quantitatively the effects on feature

point position due to uncertainties in the camera intrinsic parameters and how these variations are

propagated through the state estimation algorithms. This deterministic approach to uncertainty

is an efficient method that determines a level of bounded variations on state estimates and can be

used for camera characterization. In other words, the maximum allowable state variation in the

system will then determine the accuracy required in the camera calibration step.

4.1 Feature Points

The locations of feature points within the image plane are computed using the geometry

of Figure 3-1. The resulting values are repeated in Equations 4-1 and 4-2 as a function of focal

length, f, and radial distortion, d, in terms of the components of η.

μ = f (η_x/η_z) [1 + d f² (η_x² + η_y²)/η_z²]   (4-1)

ν = f (η_y/η_z) [1 + d f² (η_x² + η_y²)/η_z²]   (4-2)

The camera is effectively modeled using the focal length and radial distortion. As such,

these parameters are termed the intrinsic parameters and are found through calibration. A

feature point must be analyzed with respect to these intrinsic parameters to ensure proper

state estimation. The radial distance from a feature point to the center of the image, as shown












in Figure 4-1, is dependent on both the relative positions of the camera and the feature. This

radial distance, as shown in Figure 4-2, is also related via a nonlinear relationship to the radial

distortion. The analysis of the feature points will require estimation of the camera parameters.


Figure 4-1. Feature Point Dependence on Focal Length for A) f = 0.5 and B) f = 0.25

Figure 4-2. Feature Point Dependence on Radial Distortion for A) d = -0.0001 and B) d = -0.0005


The intrinsic parameters, given as focal length and radial distortion, cannot be exactly known; instead, they should be considered uncertain variables. This chapter uses a sector

bounded approach wherein each parameter is constrained to lie within a set. The set is centered

around a nominal value and extends to a desired norm bound. The expression for focal length,


given in Equation 4-3, shows the range of values that must be considered for a nominal estimate,









f_0, and uncertainty bounded in size by Δ_f ∈ R. A similar expression in Equation 4-4 presents the range of values for radial distortion.

f = {f_0 + δ_f : ‖δ_f‖ ≤ Δ_f}   (4-3)

d = {d_0 + δ_d : ‖δ_d‖ ≤ Δ_d}   (4-4)

The variations of feature points due to the camera uncertainties can be directly computed.

The uncertain parameters given in Equation 4-3 and Equation 4-4 are substituted into the

camera model of Equation 4-1 and Equation 4-2. The resulting expressions for feature points are

presented in Equations 4-5 and 4-6.

μ = (f_0 + δ_f)(η_x/η_z) + (d_0 f_0³ + 3d_0 f_0²δ_f + 3d_0 f_0 δ_f² + d_0 δ_f³ + f_0³δ_d + 3f_0²δ_f δ_d + 3f_0 δ_f²δ_d + δ_f³δ_d) η_x(η_x² + η_y²)/η_z³   (4-5)

ν = (f_0 + δ_f)(η_y/η_z) + (d_0 f_0³ + 3d_0 f_0²δ_f + 3d_0 f_0 δ_f² + d_0 δ_f³ + f_0³δ_d + 3f_0²δ_f δ_d + 3f_0 δ_f²δ_d + δ_f³δ_d) η_y(η_x² + η_y²)/η_z³   (4-6)

These equations demonstrate a complicated relationship between uncertainty in feature points and uncertainty in camera parameters. The feature points actually vary linearly with uncertainty in focal length for a camera without radial distortion; however, the inclusion of radial distortion introduces higher-order terms in both δ_f and δ_d. The feature point locations can therefore be written as uncertain sets with nominal values, μ_0 and ν_0, and variations, δ_μ and δ_ν, as shown in Equation 4-7 and Equation 4-8.

μ = {μ_0 + δ_μ : ‖δ_μ‖ ≤ Δ_μ}   (4-7)

ν = {ν_0 + δ_ν : ‖δ_ν‖ ≤ Δ_ν}   (4-8)










The uncertainties δ_μ and δ_ν are norm bounded but are not simple to describe. The range of values for δ_μ and δ_ν must be computed by evaluating their nonlinear relationship to δ_f and δ_d. This range also depends on the relative position between camera and feature, as given by η_x and η_y, so the range of uncertainty will actually vary for each feature point. The norm bounds can be expressed for a given vector, η, using Equation 4-9 and Equation 4-10.



Δ_μ =   max   ‖ δ_f (η_x/η_z) + (3d_0 f_0²δ_f + 3d_0 f_0 δ_f² + d_0 δ_f³ + f_0³δ_d + 3f_0²δ_f δ_d + 3f_0 δ_f²δ_d + δ_f³δ_d) η_x(η_x² + η_y²)/η_z³ ‖   (4-9)
      ‖δ_f‖ ≤ Δ_f, ‖δ_d‖ ≤ Δ_d

Δ_ν =   max   ‖ δ_f (η_y/η_z) + (3d_0 f_0²δ_f + 3d_0 f_0 δ_f² + d_0 δ_f³ + f_0³δ_d + 3f_0²δ_f δ_d + 3f_0 δ_f²δ_d + δ_f³δ_d) η_y(η_x² + η_y²)/η_z³ ‖   (4-10)
      ‖δ_f‖ ≤ Δ_f, ‖δ_d‖ ≤ Δ_d
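Because these bounds are nonlinear in δ_f and δ_d, a practical way to evaluate them is to sample the admissible parameter set and record the worst case. The sketch below is a gridded approximation in Python/NumPy (not the dissertation's MATLAB code; the function name and grid resolution are illustrative) for a single feature point η.

    import numpy as np

    def feature_point_bounds(eta, f0, d0, df_max, dd_max, n=41):
        """Approximate Delta_mu and Delta_nu (Equations 4-9 and 4-10) by gridding."""
        ex, ey, ez = eta
        r2 = (ex**2 + ey**2) / ez**2                     # squared radial term

        def project(f, d):
            # camera model of Equations 4-1 and 4-2
            mu = f * (ex / ez) * (1.0 + d * f**2 * r2)
            nu = f * (ey / ez) * (1.0 + d * f**2 * r2)
            return mu, nu

        mu0, nu0 = project(f0, d0)                       # nominal feature location
        dmu, dnu = 0.0, 0.0
        for df in np.linspace(-df_max, df_max, n):
            for dd in np.linspace(-dd_max, dd_max, n):
                mu, nu = project(f0 + df, d0 + dd)
                dmu = max(dmu, abs(mu - mu0))
                dnu = max(dnu, abs(nu - nu0))
        return dmu, dnu                                  # Delta_mu, Delta_nu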



4.2 Optical Flow


The optic flow describes the apparent motion of feature points in the image when that motion is not purely along the line of sight of the camera. The optic flow can be computed from the time derivatives of the feature point positions using Equation 3-32 and Equation 3-33. In practice, the velocities are computed by subtracting the locations of feature points across a pair of images taken at different times. Such an approach assumes the set of feature points can be tracked and correlated between these frames. The optic flow is then given as J using Equation 4-11 for a feature point at μ_1 and ν_1 in one frame and μ_2 and ν_2 in another frame.

J = [ μ_2 − μ_1 ]
    [ ν_2 − ν_1 ]   (4-11)









The expressions for feature points, given in Equation 4-7 and Equation 4-8, can

be substituted into Equation 4-11 to introduce uncertainty. The resulting expression in

Equation 4-12 separates the known from unknown elements.


J = [ μ_{0,2} − μ_{0,1} ] + [ δ_{μ2} − δ_{μ1} ]
    [ ν_{0,2} − ν_{0,1} ]   [ δ_{ν2} − δ_{ν1} ]   (4-12)

A range of variations is allowed for optic flow due to the uncertainty in feature points. The expression for J can thus be written using nominal, J_0, and uncertain, δ_J, terms as in Equation 4-13 where the uncertainty is bounded by Δ_J ∈ R.

J = {J_0 + δ_J : ‖δ_J‖ ≤ Δ_J}   (4-13)

The amount of uncertainty in optic flow depends on the uncertainty in each feature point. The maximum variation in velocity for a given point, determined by η, is given in Equation 4-14. The actual bounds on the feature points, as noted in Equation 4-9 and Equation 4-10, vary depending on the location of each feature point, so bounds of Δ_{μ1} and Δ_{μ2} are given for each vertical component and Δ_{ν1} and Δ_{ν2} are given for each horizontal component. As such, the bound on variation is noted in Equation 4-14 as specific to the η_1 and η_2 used to gather feature points in each image.

Δ_J =   max   ‖ [ δ_{μ2} − δ_{μ1} ; δ_{ν2} − δ_{ν1} ] ‖   (4-14)
      ‖δ_{μi}‖ ≤ Δ_{μi}, ‖δ_{νi}‖ ≤ Δ_{νi}




4.3 Epipolar Geometry

State estimation using epipolar geometry, computed as a solution to Equation 3-44, requires

a pin-hole camera whose intrinsic parameters are exactly known. Such a situation is obviously

not realistic so the effect of uncertainty can be determined. A non-ideal camera will lose the









collinearity and coplanarity between the images, so the computed solution, q, will not agree with

the true value.

Uncertainty in the constraint matrix, C, will result from variations in the feature points,

as noted in Equation 4-7 and Equation 4-8, which are actually caused by uncertainty in the

camera parameters as noted in Equation 4-3 and Equation 4-4. The constraint matrix from

Equation 3-44 can then be written as a nominal component, Co, plus some uncertainty, Sc, as in

Equation 4-15.

C = C_0 + δ_C   (4-15)

The matrix δ_C can be directly computed in terms of uncertainty in the feature points by substituting Equation 4-7 and Equation 4-8. The ith row of this matrix can then be written as Equation 4-16.

δ_{C,i} = [ μ_1 δ_{μ2} + μ_2 δ_{μ1} + δ_{μ1}δ_{μ2},   ν_1 δ_{μ2} + μ_2 δ_{ν1} + δ_{ν1}δ_{μ2},   δ_{μ2},
            μ_1 δ_{ν2} + ν_2 δ_{μ1} + δ_{μ1}δ_{ν2},   ν_1 δ_{ν2} + ν_2 δ_{ν1} + δ_{ν1}δ_{ν2},   δ_{ν2},
            δ_{μ1},   δ_{ν1},   0 ]   (4-16)

A solution to Equation 3-44, when including the uncertainty matrix in Equation 4-15,

will exist; however, that solution will differ from the true solution or the nominal solution.

Essentially, the solution can be expressed as the nominal solution, q_0, and an uncertainty, δ_q, as in Equation 4-17. This perturbed system can now be solved using a linear least-squares approach for the entries of the essential matrix.

(C_0 + δ_C)(q_0 + δ_q) = 0   (4-17)


The solution vector, q = q_0 + δ_q, for Equation 4-17 has a variation which will be norm bounded by Δ_q as in Equation 4-18, which indicates the worst-case variation imposed on the entries of q.

q = {q_0 + δ_q : ‖δ_q‖ ≤ Δ_q}   (4-18)










The size of this uncertainty, which reflects the size of error in the state estimation, can

be bounded using Equation 4-19. This bound uses the relationship between uncertainties in

Equation 4-16 through the constraint in Equation 4-17. Also, the size of this uncertainty depends

on the location of each feature point, so the bound is noted as specific to the η_1 and η_2 obtained from Figure 3-4.

Δ_q =   max   ‖ (C_0 + δ_C)⁻¹ δ_C q_0 ‖   (4-19)
      ‖δ_{μi}‖ ≤ Δ_{μi}, ‖δ_{νi}‖ ≤ Δ_{νi}




The maximum variation of the entries of q = q_0 + Δ_q, determined through Equation 4-19,

can then be used directly to compute the variation in state estimates. The entries of q are first

arranged back into matrix form to construct the new essential matrix that includes parameter

variations. This new essential matrix is then decomposed using SVD techniques described in

Section 3.6.1.

4.4 Homography

A similar approach can be used to describe the variations to the entries of the homography

matrix, H, where the system equation was shown in Equation 3-57. Substituting Equation 4-7

and Equation 4-8 into Equation 3-57 results in a variation in the system matrix Ψ. Likewise, the new system matrix with uncertain intrinsic parameters can be written as a nominal matrix, Ψ_0, plus some variation, δ_Ψ, as shown in Equation 4-20.

Ψ = Ψ_0 + δ_Ψ   (4-20)


As in the epipolar solution, the matrix δ_Ψ can be directly computed in terms of uncertainty in the feature points by substituting Equation 4-7 and Equation 4-8. Correspondingly, the ith row of this matrix can then be written as Equation 4-21.

δ_{Ψ,i} = [ μ_1 δ_{μ2} + μ_2 δ_{μ1} + δ_{μ1}δ_{μ2},   ν_1 δ_{μ2} + μ_2 δ_{ν1} + δ_{ν1}δ_{μ2},   δ_{μ2},
            μ_1 δ_{ν2} + ν_2 δ_{μ1} + δ_{μ1}δ_{ν2},   ν_1 δ_{ν2} + ν_2 δ_{ν1} + δ_{ν1}δ_{ν2},   δ_{ν2},
            δ_{μ1},   δ_{ν1},   0 ]   (4-21)


A solution to Equation 3-57, when including the uncertain matrix in Equation 4-20, will exist; however, that solution will differ from the true solution. Essentially, the solution can be expressed as the nominal solution, h_0, and an uncertainty, δ_h, as in Equation 4-22.

(Ψ_0 + δ_Ψ)(h_0 + δ_h) = 0   (4-22)

The solution vector, h = h_0 + δ_h, for Equation 4-22 has a variation which will be norm bounded by Δ_h as in Equation 4-23.

h = {h_0 + δ_h : ‖δ_h‖ ≤ Δ_h}   (4-23)


The size of this uncertainty, which reflects the size of error in the state estimation, can

be bounded using Equation 4-24. This bound uses the relationship between uncertainties in

Equation 4-21 through the constraint in Equation 4-22. Also, the size of this uncertainty depends on the location of each feature point, so the bound is noted as specific to the η_1 and η_2 obtained from Figure 3-4.

Δ_h =   max   ‖ (Ψ_0 + δ_Ψ)⁻¹ δ_Ψ h_0 ‖   (4-24)
      ‖δ_{μi}‖ ≤ Δ_{μi}, ‖δ_{νi}‖ ≤ Δ_{νi}


The maximum variation of the entries of h = h_0 + Δ_h, determined through Equation 4-24,

can then be used directly to compute the variation in state estimates. The entries of h are first

arranged back into matrix form to construct the new homography matrix that includes parameter









variations. This new homography matrix is then decomposed using SVD techniques described in

Section 3.6.3.

4.5 Structure From Motion

Any uncertainty in the camera will result in uncertainty in the feature points and,

consequently, create uncertainty in the matrix used in Equation 3-64 for the structure from

motion relationship. As such, the matrix should be written in terms of a nominal value, Ao, and

an uncertain perturbation, δ_A, as in Equation 4-25.

A = A_0 + δ_A   (4-25)


The uncertain perturbation can actually be computed by substituting the uncertain

expression in Equation 4-7 and Equation 4-8 into Equation 3-64. The perturbation is then

written as Equation 4-26.


δ_A = [ δ_{μ2}   −(R_11 δ_{μ1} + R_12 δ_{ν1}) ]
      [ δ_{ν2}   −(R_21 δ_{μ1} + R_22 δ_{ν1}) ]   (4-26)
      [ 0        −(R_31 δ_{μ1} + R_32 δ_{ν1}) ]


The solution to Equation 3-64 when considering Equation 4-25 will obviously result in a

depth estimate that differs from the correct value. Define zo as the actual depths that would be

computed using the known parameters of the nominal camera and 8z as the corresponding error

in the actual solution. The least-squares problem can then be written as Equation 4-27 and solved

using a pseudo-inverse approach.


(A_0 + δ_A)(z_0 + δ_z) = T   (4-27)


The solution, z_0 + δ_z, will have a range of values bounded by Δ_z as in Equation 4-28. This range of solutions will lie within the bounded range determined from the worst-case bound.

z = {z_0 + δ_z : ‖δ_z‖ ≤ Δ_z}   (4-28)








The bound on error, Az, can be expressed using Equation 4-29. This bound notes that the
bound on variations in feature points, and ultimately the bound on solutions to structure from
motion, depends on the location of those feature points.


Az n


|| (Ao + BA)-1( T -(Ao + BA) o) ||


(4-29)


m~ax
Is1, P la
8#2 I < 2,
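The worst-case depth variation of Equation 4-29 can be approximated by sampling the admissible feature point perturbations, rebuilding the matrix A, and recording the largest deviation from the nominal depth solution. The sketch below is Python/NumPy (a sampled rather than exact maximization; the builder function and argument layout are hypothetical placeholders for the structure-from-motion matrix of Equation 3-64).

    import numpy as np

    def depth_uncertainty(A_builder, T, bounds, n_samples=2000, seed=0):
        """Approximate Delta_z of Equation 4-29 by random sampling.

        A_builder : function mapping the four feature perturbations to the matrix A
        T         : translation between the two camera frames
        bounds    : (Delta_mu1, Delta_nu1, Delta_mu2, Delta_nu2)
        """
        rng = np.random.default_rng(seed)
        z0, *_ = np.linalg.lstsq(A_builder(0.0, 0.0, 0.0, 0.0), T, rcond=None)
        delta_z = 0.0
        for _ in range(n_samples):
            d = rng.uniform(-1.0, 1.0, 4) * np.asarray(bounds)   # bounded perturbation
            A = A_builder(*d)
            z, *_ = np.linalg.lstsq(A, T, rcond=None)
            delta_z = max(delta_z, np.linalg.norm(z - z0))
        return delta_z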









CHAPTER 5
SYSTEM DYNAMICS

The previous chapters described techniques to (i) compute image coordinates, and (ii) the

effects of uncertainty for inertial estimations. Future chapters will discuss (i) detecting and tracking moving objects in a scene, (ii) obtaining state estimates of moving objects, and (iii) classifying and modeling these objects as deterministic or stochastic motion. These topics each

build upon a commonality of feature points. As a result, this chapter describes the feature point

dependence on vehicle dynamics for a camera mounted system. The vehicle-camera relationships

presented will focus on the nonlinear aircraft dynamics and how they relate to feature points for a

camera-aircraft system. Although this description concentrates on aircraft dynamics, the modular form of these equations applies to any dynamical system.

5.1 Dynamic States

The formulation that describes feature points in the image plane starts by considering the

vector geometry involved in an aircraft-camera setup. The geometry can be described through

a number of coordinate frames. This section will utilize the camera geometry described in

Chapter 3 to derive the system equations.

5.1.1 Aircraft

The kinematics of an aircraft-camera system in flight are derived by first defining the

required coordinate frames. The standard measurements for an aircraft are based in either the

Earth-fixed coordinate system or the body-fixed coordinate system. Each of these coordinate

systems use a right-handed axes framework that obeys cross-product rules. A pictorial

representation of these axes is given in Figure 5-1 along with the respective origins.

The body-fixed coordinate system has the origin located at the center of gravity of the

aircraft. The axes are oriented such that b_1 aligns out the nose and b_2 aligns out the right wing with b_3 pointed out the bottom. The movement of the aircraft, which includes accelerating, will

obviously affect the coordinate system; consequently, the body-fixed coordinate system is not an

inertial reference frame.















Figure 5-1. Body-fixed coordinate frame

The orientation angles of the aircraft are of particular interest for modeling a vision-based

sensor. The roll angle, φ, describes rotation about b_1, the pitch angle, θ, describes rotation about b_2, and the yaw angle, ψ, describes rotation about b_3.

The transformation from a vector represented in the Earth-fixed coordinate system to

the body-fixed coordinate system is required to relate on-board measurements to inertial

measurements. This transformation, given in Equation 5-1, uses R_EB, which is composed of Euler rotations of roll, pitch, and yaw [29, 108],

[ b_1 ]          [ e_1 ]
[ b_2 ]  = R_EB  [ e_2 ]   (5-1)
[ b_3 ]_B        [ e_3 ]_E


where R_EB is the relative rotation between frames E and B, which can be decomposed as a sequence of single-axis Euler rotations as seen in Equation 5-2. The order of this matrix multiplication must be maintained for correct computation.


R_EB = [e_1(φ)][e_2(θ)][e_3(ψ)]   (5-2)










where the individual single-axis rotations e_1(φ), e_2(θ), and e_3(ψ) are commonly referred to as the 3-2-1, or roll-pitch-yaw, sequence, respectively. The full rotation matrix is represented by Equation 5-3.

R_EB = [ cosθ cosψ    sinφ sinθ cosψ − cosφ sinψ    cosφ sinθ cosψ + sinφ sinψ ]
       [ cosθ sinψ    sinφ sinθ sinψ + cosφ cosψ    cosφ sinθ sinψ − sinφ cosψ ]   (5-3)
       [ −sinθ        sinφ cosθ                     cosφ cosθ                  ]
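A direct numerical statement of this 3-2-1 sequence is sketched below in Python/NumPy (the dissertation's simulations are in MATLAB, so this function is illustrative only); the matrix is arranged exactly as in Equation 5-3.

    import numpy as np

    def euler_321(phi, theta, psi):
        """Roll-pitch-yaw rotation matrix composed per Equations 5-2 and 5-3."""
        cph, sph = np.cos(phi), np.sin(phi)
        cth, sth = np.cos(theta), np.sin(theta)
        cps, sps = np.cos(psi), np.sin(psi)
        return np.array([
            [cth*cps, sph*sth*cps - cph*sps, cph*sth*cps + sph*sps],
            [cth*sps, sph*sth*sps + cph*cps, cph*sth*sps - sph*cps],
            [-sth,    sph*cth,               cph*cth              ]])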

The rates of change of these orientation angles also require a coordinate transformation. The roll rate, p, is the angular velocity about b_1, the pitch rate, q, describes rotation about b_2, and the yaw rate, r, describes rotation about b_3. The vector, ω_b, is given in Equation 5-4 to represent these rates.

ω_b = p b_1 + q b_2 + r b_3   (5-4)

5.1.2 Camera

The camera is also described using a right-handed coordinate system defined using

orthonormal basis vectors. The axes, as shown in Figure 5-2, use the traditional choice of i_3 aligning through the center of view of the camera. The remaining axes are usually chosen with i_2 aligned right of the view and i_1 aligned out the top, although some variation in these choices

is allowed as long as the resulting axes retain the right-handed properties. The direction of the

camera basis vectors are defined through the camera's orientation relative to the body-fixed

frame. This framework is noted as the camera-fixed coordinate system because the origin is

always located at a fixed point on the camera and moves in the same motion as the camera.

The camera is allowed to move along the aircraft through a dynamic mounting which

admits both rotation and translation. This functionality enables the tracking of features while

the vehicle moves through an environment. The origin of the camera-fixed coordinate system is

attached to this moving camera, consequently, the camera-fixed frame is not an inertial reference.

A 6 degree-of-freedom model of the camera is assumed which admits a full range of motion.

Figure 5-2 also illustrates the camera's sensing cone which describes both the image plane and

the field of view constraint.


























Figure 5-2. Camera-fixed coordinate frame

Similar to the body-fixed coordinate frame, a transformation can be defined for the mapping

between the body-fixed frame, B and the camera frame, I as seen in Equation 5-5

[ i_1 ]          [ b_1 ]
[ i_2 ]  = R_BI  [ b_2 ]   (5-5)
[ i_3 ]_I        [ b_3 ]_B

where R_BI is the relative rotation between frames B and I. This transformation

is analogous to the aircraft's roll-pitch-yaw, where now these rotation angles define the roll,

pitch and yaw of the camera relative to the aircraft's orientation. The coordinate rotation

transformation, RBI, can be decomposed as a sequence of single-axis Euler rotations as seen in

Equation 5-6, similar to the body-fixed rotation matrix. The orientation angles of the camera are

required to determine the imaging used for vision-based feedback. The roll angle, φ_c, describes rotation about i_3, the pitch angle, θ_c, describes rotation about i_2, and the yaw angle, ψ_c, describes rotation about i_1.

R_BI = [e_1(φ_c)][e_2(θ_c)][e_3(ψ_c)]   (5-6)

The matrix RBI in Equation 5-6 will transform a vector in body-fixed coordinates to
camera-fixed coordinates. This transformation is required to relate camera measurements to
on-board vehicle measurements from inertial sensors. The matrix again depends on the angular










differences between the axes in each coordinate system and the sequence of single-axis rotations.

In particular, the rotation order used for this transformation was a 3-2-1 sequence.

R_BI = [ cosθ_c cosψ_c    sinφ_c sinθ_c cosψ_c − cosφ_c sinψ_c    cosφ_c sinθ_c cosψ_c + sinφ_c sinψ_c ]
       [ cosθ_c sinψ_c    sinφ_c sinθ_c sinψ_c + cosφ_c cosψ_c    cosφ_c sinθ_c sinψ_c − sinφ_c cosψ_c ]   (5-7)
       [ −sinθ_c          sinφ_c cosθ_c                           cosφ_c cosθ_c                        ]

The rates of change of these orientation angles are again required for coordinate frame

transformations. The roll rate, p_c, is the angular velocity about i_3, the pitch rate, q_c, describes rotation about i_2, and the yaw rate, r_c, describes rotation about i_1. The vector, ω_c, is given in Equation 5-8 to represent these rates.

ω_c = r_c i_1 + q_c i_2 + p_c i_3   (5-8)


5.2 System Geometry

The fundamental scenario involves an aircraft-mounted camera and a feature point in the

environment. This scenario, as outlined in Figure 5-3, thus relates the camera and the aircraft to

the feature point along with some inertial origin.




Figure 5-3. Scenario for vision-based feedback










The sensor modeling for vision-based feedback has to carefully account for the various

coordinate systems utilized in the scenario. The location of the aircraft and the feature point, as

given in Equation 5-9 and Equation 5-10 are typically represented in the inertial reference frame

relative to the Earth-axis origin.


T_EB = x_b e_1 + y_b e_2 + z_b e_3   (5-9)

ξ = ξ_x e_1 + ξ_y e_2 + ξ_z e_3   (5-10)

The location of the camera, as given in Equation 5-11, is typically given with respect to

the body-axis origin. This choice of coordinate systems reflects that the camera is intrinsically

affected by any aircraft motion.


T_BI = x_c b_1 + y_c b_2 + z_c b_3   (5-11)


The remaining vector, η, was defined in Equation 3-1 to describe the relative position

between the camera and the feature point. Recall, this vector was given in the camera-fixed

coordinate system to note the resulting image is directly related to properties relative to the

camera. The representation of η is repeated here in Equation 5-12 for completeness.


η = η_x i_1 + η_y i_2 + η_z i_3   (5-12)


Applying the two rotational and translational concepts described in this chapter one

can transform vectors across all three coordinate frames. To fully describe a vector in the

camera-fixed frame a transformation defined in Equation 5-13 is used. This expression

incorporates the translations involved with the origins of each coordinate frame through a

series of single-axis rotations until the correct frame is reached.

[ i_1 ]               [ e_1 ]
[ i_2 ]  = R_BI R_EB  [ e_2 ]  + R_BI T_EB + T_BI   (5-13)
[ i_3 ]               [ e_3 ]










5.3 Nonlinear Aircraft Equations

The equations of motion of an aircraft can be represented in several different fashions.

The most general form of the aircraft equations are the nonlinear, highly coupled equations of

motion. These equations of motion are the standard equations which have been derived in a

typical aircraft mechanics book [108-110] and are repeated in Equations 5-14 to 5-26 for overall

completeness.

F_x − mg sinθ = m(u̇ + qw − rv)   (5-14)

F_y + mg cosθ sinφ = m(v̇ + ru − pw)   (5-15)

F_z + mg cosθ cosφ = m(ẇ + pv − qu)   (5-16)

L = I_x ṗ − I_xz ṙ + qr(I_z − I_y) − I_xz pq   (5-17)

M = I_y q̇ + rp(I_x − I_z) + I_xz(p² − r²)   (5-18)

N = −I_xz ṗ + I_z ṙ + pq(I_y − I_x) + I_xz qr   (5-19)

p = φ̇ − ψ̇ sinθ   (5-20)

q = θ̇ cosφ + ψ̇ cosθ sinφ   (5-21)

r = ψ̇ cosθ cosφ − θ̇ sinφ   (5-22)

θ̇ = q cosφ − r sinφ   (5-23)

φ̇ = p + q sinφ tanθ + r cosφ tanθ   (5-24)

ψ̇ = (q sinφ + r cosφ) secθ   (5-25)

[ ẋ_b ]   [ C_θ C_ψ    S_φ S_θ C_ψ − C_φ S_ψ    C_φ S_θ C_ψ + S_φ S_ψ ] [ u ]
[ ẏ_b ] = [ C_θ S_ψ    S_φ S_θ S_ψ + C_φ C_ψ    C_φ S_θ S_ψ − S_φ C_ψ ] [ v ]   (5-26)
[ ż_b ]   [ −S_θ       S_φ C_θ                  C_φ C_θ               ] [ w ]

The shorthand notation S_ψ ≡ sinψ, C_ψ ≡ cosψ, S_θ ≡ sinθ, C_θ ≡ cosθ, S_φ ≡ sinφ, and C_φ ≡ cosφ is used in Equation 5-26.

The aircraft states of interest for the camera motion system consist of the position and

velocity of the aircraft's center of mass, T_EB and v_b, the angular velocity, ω_b, and the orientation










angles of the aircraft, (φ, θ, ψ). The velocity of the aircraft's center of mass is v_b and is defined in

Equation 5-27. As stated in Equation 5-27, the aircraft's velocity is expressed in the body-fixed

coordinate frame. Each of these parameters will appear explicitly in the aircraft-camera

equations.

v_b = u b_1 + v b_2 + w b_3   (5-27)

The first six equations represent the force and moment equations, while the remaining

equations are kinematic relationships. The aerodynamic parameters consist of both the

aerodynamic forces, {F_x, F_y, F_z}, on the aircraft and the aerodynamic moments, {L, M, N},

which are all contained in the force and moment equations. Although these equations do not

contain control inputs explicitly, the aerodynamic parameters are directly affected by the position

of the control surfaces on the aircraft. In other words, when the control surface deflections are

changed the flow over that surface also changes. This flow change over a surface results in

changes of the aerodynamic forces, such as lift and drag, which directly produce forces and

moments that roll, pitch, and yaw the aircraft and are described by the stability derivatives for

each aircraft. Therefore, controlled maneuvers are accomplished by changing these aerodynamic

parameters through the control surfaces.

An alternative approach to solving the nonlinear equations is to linearize these equations

about a trim condition using a Taylor series expansion. By linearizing these equations about a

level flight condition, the aircraft equations become decoupled into two planar motions. This set

of equations, although easy to solve, has limitations outside the chosen trim state, especially for

smaller more maneuverable aircraft. The choice of what set of aircraft equations to use depends

primarily on the aircraft and the application.

5.4 Aircraft-Camera System

The preliminary definitions established in the previous sections will now be used to

formulate the aircraft-camera system by using the systems described in this chapter. Here the

dependence of image plane position and velocity on the aircraft states along with the kinematic










states of the camera are shown. This derivation is shown here for one camera but is easily

extended to multiple cameras at various locations on the aircraft, as shown in the next section.

Meanwhile, this section obtains a result for feature points as a function of camera location and

aircraft states.

5.4.1 Feature Point Position

The fundamental results regarding the aircraft-camera system that relates 3D motion

to image plane motion starts simply by the vector summation of the defined positions. This

relationship is illustrated in Figure 5-3 for a feature point relative to the inertial frame. Therefore,

the vector sum can be used to solve for the relative position between the camera and a 3D feature

point. After making the proper coordinate transformations by using Equations 5-5 and 5-13, this

relative position can be expressed in camera frame, I, as shown in Equation 5-28.


η = R_BI R_EB (ξ − T_EB) − R_BI T_BI   (5-28)


In summary, the resulting expression allows the position of each feature point in space

to be characterized by its position in the image plane. By substituting the components of

Equation 5-28 into Equations 3-7 and 3-8 an image can be constructed as a function of aircraft

states. The major assumption of these equations is prior knowledge of the feature point location

relative to the inertial frame, which may be provided by GPS maps. Furthermore, the image

results obtained can also be passed through Equations 3-15 and 3-16 to add the effects of radial

distortion. The distorted image will provide a more accurate description of an image seen by a

physical camera, assuming the intrinsic parameters of the camera are known.
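A compact way to exercise Equation 5-28 together with the pinhole projection of Equations 3-7 and 3-8 is sketched below. The code is Python/NumPy rather than the MATLAB used in the dissertation, the camera is assumed distortion-free, and the function and variable names are illustrative.

    import numpy as np

    def project_feature(xi, T_EB, T_BI, R_EB, R_BI, f):
        """Image plane location of a feature point xi (Equation 5-28).

        xi   : feature point position in the Earth frame
        T_EB : aircraft position in the Earth frame, T_BI : camera position in the body frame
        R_EB : Earth-to-body rotation, R_BI : body-to-camera rotation, f : focal length
        """
        eta = R_BI @ R_EB @ (xi - T_EB) - R_BI @ T_BI   # relative position (Equation 5-28)
        if eta[2] <= 0.0:
            return None                                 # feature is behind the camera
        mu = f * eta[0] / eta[2]                        # Equation 3-7
        nu = f * eta[1] / eta[2]                        # Equation 3-8
        return mu, nu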

5.4.2 Feature Point Velocity

A feature point in the focal plane can be further characterized by deriving its velocity vector.

The velocity of a feature point in the image plane can be found by taking the time derivative of

Equation 5-28 with respect to the inertial frame, as shown in Equation 5-29.

Ed/dt(η) = Ed/dt(ξ) − Ed/dt(T_EB) − Ed/dt(T_BI)   (5-29)









For a stationary feature point in space, the position vector, 5, is constant in magnitude and

direction and is expressed in the inertial frame; therefore, this time derivative is zero. Likewise,

the position vector of the aircraft's center of mass, TEB, is also expressed in the inertial basis and,

therefore, the time derivative just becomes Ṫ_EB. Meanwhile, the Derivative Theorem is employed

on such terms as rl and TBI to express these terms in the moving frame. By applying this theorem

and solving for feature point velocity with respect to the camera frame, Equation 5-29 can now

be rewritten to Equation 5-30 for a non-stationary feature point.


Id/dt(η) = ξ̇ − Ṫ_EB − Bd/dt(T_BI) − ω_b × T_BI − EωI × η   (5-30)

This equation can be reduced further if the cameras are constrained to have no translation

relative to the aircraft so Bd/dt(T_BI) = 0. Alternatively, this term is retained in the derivation to allow this degree of freedom in the camera setup. The angular velocity, EωI, can be further decomposed

using the Addition Theorem. The final step implements Equations 5-5 and 5-13 to transform

each term into the camera frame. After some manipulation, the expression for the velocity of a

feature point relative to the camera frame results in Equation 5-31.


η̇ = R_BI R_EB (ξ̇ − Ṫ_EB) − R_BI Ṫ_BI − R_BI (ω_b × T_BI) − ((R_BI ω_b + ω_c) × η)   (5-31)

The image plane velocity of a feature point relative to the camera frame is finalized by

substituting both equations for position and velocity derived in Equation 5-28 and 5-31 into

Equations 3-32 and 3-33. This result will provide a description of the optical flow for each

feature point formed by either the camera traveling through the environment or the motion of the

feature points themselves. To incorporate radial distortion effects into the optic flow computation

requires the additional substitution into Equations 3-34 and 3-35.

5.5 System Formulation

The derivation of the aircraft-camera equations can be easily extended to systems with

multiple cameras all of which have their own position and orientation relative to the aircraft while

acquiring numerous feature points in each image. Although this adds computational complexity,










typical solutions to most vision-based problems require multiple views of the environment

in addition to having an adequate number of feature points. The freedom to translate and rotate

cameras also gives the ability to track a particular target or region which reduces the amount

of aggressive maneuvers required by an aircraft to keep the target in the FOV. This additional

capability is extended by treating each camera and feature point separately when computing

Equations 5-28, 5-31, 3-15, 3-16, 3-34, and 3-35. Arranging the parameters for the kth camera

into a single vector, as shown in Equation 5-32, results then in the formulation of a generic

aircraft-camera system with k cameras all having independent motion that track n feature points

is obtained.

α_k^T(t) = {x_{c,k}, y_{c,k}, z_{c,k}, φ_{c,k}, θ_{c,k}, ψ_{c,k}, f_k, d_k}   (5-32)

The focal length, radial distortion, position, and orientation are now represented for each

camera present in the system, where the parameters of the kth camera are explicitly shown in

Equation 5-32. This vector can be extended to include other camera features such as CCD array

misalignment, skewness, etc.

The focal plane positions can then be assembled into a vector of observations as shown in

Equation 5-33, where n number of feature points are obtained. Likewise, the states of the aircraft

can be collected and represented as a state vector as shown in Equation 5-34. In addition, the

initial states of the vehicle are defined as Xo.


Y^T = {(μ_1, ν_1), (μ_2, ν_2), ..., (μ_n, ν_n)}   (5-33)

X^T(t) = {u, v, w, p, q, r, x_b, y_b, z_b, φ, θ, ψ}   (5-34)

The coupled aircraft-camera system can now be formulated as a control problem by

incorporating the aircraft's equations of motion, the states, observations, and the kinematics of

the camera given in Equations 5-28 and 5-31. The observations used in this dissertation consist

of measureable images shown in Equations 3-15 and 3-16 which capture nonlinearities such as

radial distortion. This system, which measures image plane position, is described mathematically









through Equations 5-35 to 5-37


Ẋ(t) = f(X(t), U(t), α(t), t)   (5-35)

X(0) = X_0   (5-36)

Y(t) = g(X(t), α(t), η, t)   (5-37)


where U(t) is defined as a set of control inputs to the aircraft and α(t) is a vector containing the camera parameters α^T = {α_1, α_2, ..., α_k} for k cameras. These equations that utilize feature position will be referred to as the Control Theoretic Form of the governing

camera-aircraft equations.

Alternatively, if the image plane velocities are employed instead of the image plane

positions, as seen in Equation 5-37, then a different set of equations can be obtained which will

be referred to as the Optic Flow Form of the governing aircraft-camera equations of motion. This

system is given in Equation 5-40, which uses the optic flow expression given in Equations 3-34
and 3-35 as the observations.


Ẋ(t) = f(X(t), U(t), α(t), t)   (5-38)

X(0) = X_0   (5-39)

J(t) = M(X(t), α(t), η, t)   (5-40)


The two system equations just described both have applications to missions involving

unmanned aerial vehicles. The Control Theoretic Form primarily applies to missions involving

target tracking and surveillance such as aerial refueling and automated visual landing.

Meanwhile, the Optic Flow Form is useful for guidance and navigation through unknown

environments. The information provided by optic flow reveals magnitude and direction of each

feature point in the image which gives a sense of objects in close proximity. Incorporating

this information, along with some logic, a control system can be designed to avoid unforeseen

obstacles throughout the desired path.










5.6 Simulating

The inclusion of vision-based feedback into a flight simulator is a critical application of

these sensor models. A straightforward procedure is outlined that allows any flight simulator to

be augmented with vision-based feedback. Such a simulator can be augmented with additional

algorithms for image processing and synthetic vision to generate situational awareness.

Several requirements must be met to implement this approach. First, a flight simulator, using

either linear or nonlinear equations of motion, must allow access to all vehicle states. Second, a

mounting system with known dynamics must be available to describe the camera states. Third,

a virtual environment must exist as a database of 3-dimensional coordinates along with inherent

states of any time-varying features. Given these baseline tools, an algorithm is outlined to

compute vision-based feedback.

Algorithm 1.

for every time value of t {
    compute aircraft states
    compute camera states
    compute environment states
    for every feature in environment {
        compute η between camera and feature
        compute μ and ν
        eliminate if outside field of view
        eliminate if occluded
    }
    assemble image from pairs of (μ, ν) for features
}
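A minimal rendering of Algorithm 1 in code form is sketched below. It is written in Python; the simulator, camera, environment, and field-of-view interfaces are placeholders for whatever flight simulator is being augmented, and project_feature is the projection routine sketched in Section 5.4.1.

    def vision_feedback(t, simulator, environment, camera, fov):
        """One pass of Algorithm 1: build the synthetic image at time t."""
        X = simulator.aircraft_states(t)            # compute aircraft states
        R_EB, T_EB = X.R_EB, X.T_EB
        R_BI, T_BI = camera.states(t)               # compute camera states
        image = []
        for feature in environment.features(t):     # compute environment states
            result = project_feature(feature.xi, T_EB, T_BI, R_EB, R_BI, camera.f)
            if result is None:
                continue                            # behind the camera
            mu, nu = result
            if abs(mu) > fov.mu_max or abs(nu) > fov.nu_max:
                continue                            # eliminate if outside field of view
            if environment.occluded(feature, T_EB, T_BI, t):
                continue                            # eliminate if occluded
            image.append((mu, nu))
        return image                                # assembled image of (mu, nu) pairs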



This algorithm will directly augment a simulator with vision-based feedback; however, it is

not an optimal formulation to minimize computational cost. Various subroutines, such as feature

selection and database reduction, may be included to increase the efficiency of the simulator.









CHAPTER 6
DISCERNING MOVING TARGETS FROM STATIONARY TARGETS

Classifying objects in a scene as stationary or moving is an essential task for autonomous

systems navigating through unknown terrain. The need for such a classification is due to the fact

that standard image processing algorithms, such as SFM reconstruction, do not hold for moving

objects since the epipolar constraint is violated when both the camera and the objects in the

scene are moving in space. Although the image processing algorithms for fixed features were introduced in Chapter 3, and their relation to the states of an aircraft was described in Chapter 5, the effects of independently moving objects need to be handled in a different manner.

For cases involving a stationary camera, such as in surveillance applications, simple filtering

and image differencing techniques are employed to determine independently moving objects.

Although these techniques work well for stationary cameras, a direct implementation to moving

cameras will not suffice. For a moving camera, the apparent image motion is caused by a number

of sources, such as camera induced motion (i.e. ego-motion) and the motion due to independently

moving objects. A common approach to detecting moving objects considers a two-stage process

that includes (i) a compensation routine to account for camera motion and (ii) a classification

scheme to detect independently moving objects.

6.1 Camera Motion Compensation

Camera motion estimation is a critical first step in the process of detecting moving objects.

The goal of camera motion estimation is to find a transformation that maps one image into

another given a sequence of images of the same scene. Two common approaches in the literature for this problem are to (i) employ the epipolar constraint with known feature point correspondence and/or

(ii) reliably compute image flow to decouple the apparent motion. The first technique assumes

feature point correspondence across image frames and uses the epipolar constraint to estimate the

relative rotation and translation between frames, as shown in Chapter 3.6. The second approach

uses the smoothness constraint in an attempt to minimize the sum of square differences (SSD) over

either a select number of features or the entire flow field. This approach assumes the stationary










background exists as the "dominant" motion in the image which is not always true for urban

scenarios. For this dissertation a feature point solution will be employed which follows the

material presented in Chapter 3.6.

The epipolar constraint can be used to relate feature points across image sequences through

a rigid body transformation. The epipolar lines of a static environment are computed using

Equation 3-40 or Equation 3-42 depending if the essential matrix or the fundamental matrix

is required. An illustration of the computed epipolar lines is depicted in Figure 6-1 for a static

environment observed by a moving camera. Notice for this static case, the feature points in the

second image (the right image containing the overlaid epipolar lines) are shown to lie directly on

the epipolar lines.





Figure 6-1. Epipolar Lines Across Two Image Frames: A) Initial Feature Points and B) Final
Feature Points with Overlaid Epipolar Lines

Once camera motion estimation has been found, the epipolar lines can be used as an

indication of moving objects in the image. For instance, the feature points corresponding to the

stationary background will lie on the epipolar lines while the feature points corresponding to

moving objects will violate this constraint.

Similarly, the computation of optical flow can also be used for detecting independently

moving objects. In computing the optical flow, the motion induced by the camera along with

moving objects is fused together in the measured image. Recall, the optic flow expressions









are given in Equations 3-32 and 3-33, or Equations 3-34 and 3-35 with radial distortion. Decomposing the optical flow into its components of camera rotation (μ̇_r, ν̇_r) and translation (μ̇_t, ν̇_t) and independently moving objects (μ̇_i, ν̇_i) facilitates the detection problem. Therefore, the components of the optical flow can be written as in Equation 6-1.

[ μ̇ ]   [ μ̇_r ]   [ μ̇_t ]   [ μ̇_i ]
[ ν̇ ] = [ ν̇_r ] + [ ν̇_t ] + [ ν̇_i ]   (6-1)

An expression for the optical flow induced by camera motion only can be rewritten in terms of the aircraft states defined in Chapter 5: the translational velocity [u, v, w]^T and the angular velocity [p, q, r]^T of the camera. The resulting expressions are shown in Equations 6-2 and 6-3 and apply only to features stationary in the environment. The details describing the substitution of the camera motion states are described in Chapter 5.

[ μ̇_r ]       [ μν          −(1 + μ²)    ν  ] [ p ]
[ ν̇_r ] = f   [ (1 + ν²)    −μν         −μ  ] [ q ]   (6-2)
                                               [ r ]

[ μ̇_t ]   1   [ −f    0    μ ] [ u ]
[ ν̇_t ] = ——  [  0   −f    ν ] [ v ]   (6-3)
          η_z                   [ w ]


It was shown by Kehoe et al. [111] that the rotational states [p, q, r] can be estimated
accurately for a static environment through a nonlinear minimization procedure for n features
where n > 6. The approach used a vector-valued flow field J (x) and is given in Equation 6-4,



91 "x, + x?2 -912 14 23 \115 +12)r6)
J (x) = I (6-4)

(n xt x3n Un 4~r( np2)X 5 Vn 6)
x(6+n) x6n









where the vector x in Equation 6-5 is composed of unknown vehicle states and depth parameters.

x = [u  v  w  p  q  r  η_{z,1}  ...  η_{z,n}]^T   (6-5)

The estimated vector, x̂, is found through solving the optimization problem posed in Equation 6-6 that minimizes the magnitude of the cost function.

x̂ = arg min (1/2) ‖J(x)‖²   (6-6)

The same approach is taken here with caution. Recall that the measured optical flow also

contains motion due to independently moving objects in addition to the induced optical flow

caused by the camera motion. In general, these variations in the measured optical flow will

introduce error into the [p, q, r] estimates. If some assumptions are made regarding the relative

optical flow between the static environment and moving objects, then errors in the state estimates
can have minimal effect. For instance, if the static portion of the scene is assumed to be the

dominant motion in the optical flow then the estimates will contain minimal errors. Employing

this assumption, estimates for the angular velocities [p̂, q̂, r̂] of the camera/vehicle are obtained.
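The minimization in Equation 6-6 can be prototyped with a standard nonlinear least-squares solver. The sketch below uses Python with NumPy and SciPy; the flow model follows Equations 6-2 and 6-3 as written above, and the function names and argument layout are illustrative rather than the dissertation's actual implementation.

    import numpy as np
    from scipy.optimize import least_squares

    def flow_residual(x, mu, nu, flow_meas, f):
        """Residual J(x) of Equation 6-4 for n tracked feature points.

        x         : [u, v, w, p, q, r, eta_z_1, ..., eta_z_n]
        flow_meas : 2n array of measured optic flow, stacked as (mu_dot, nu_dot)
        """
        u, v, w, p, q, r = x[:6]
        eta_z = x[6:]
        # rotational component (Equation 6-2)
        mu_dot_r = f * (mu * nu * p - (1 + mu**2) * q + nu * r)
        nu_dot_r = f * ((1 + nu**2) * p - mu * nu * q - mu * r)
        # translational component (Equation 6-3)
        mu_dot_t = (-f * u + mu * w) / eta_z
        nu_dot_t = (-f * v + nu * w) / eta_z
        model = np.concatenate((mu_dot_r + mu_dot_t, nu_dot_r + nu_dot_t))
        return flow_meas - model

    def estimate_rates(mu, nu, flow_meas, f, x0):
        """Solve Equation 6-6 for the state estimate x_hat."""
        sol = least_squares(flow_residual, x0, args=(mu, nu, flow_meas, f))
        return sol.x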

Substituting these estimates into Equation 6-2 results in estimates for the rotational portion

of the optical flow, as shown in Equation 6-7.



[ μ̇̂_r ; ν̇̂_r ] = f [ μν, −(1 + μ²), ν ; (1 + ν²), −μν, −μ ] [ p̂ ; q̂ ; r̂ ]   (6-7)

Eliminating the rotational effects of the camera motion from the optical flow results in Equation 6-8. The residual optical flow, (μ̇_Res, ν̇_Res), contains only the components of the camera translation and independently moving objects. From this expression, constraints can be employed on the camera induced motion to detect independently moving objects.

[ μ̇_Res ; ν̇_Res ] = [ μ̇ ; ν̇ ] − [ μ̇̂_r ; ν̇̂_r ] = [ μ̇_t ; ν̇_t ] + [ μ̇_i ; ν̇_i ]   (6-8)




For feature points that are stationary in the environment, the translational optic flow

induced by the camera motion is constrained to radial lines emanating from the FOE, as shown

in Figure 6-2. Consequently, feature points that violate this condition can be classified as

independently moving objects. This characteristic observed from static features will be the basis

for the classification scheme.




Figure 6-2. FOE constraint on translational optic flow for static feature points


The residual optical flow may contain independently moving objects within the environment

that radiate from their own FOE. An example of a simple scenario is illustrated in Figure 6-3 for

a single moving object on the left and a simulation with synthetic data of two moving vehicles

on the right. Notice the two probable FOEs in the picture on the left, one pertaining to the static

environment and the other describing the moving object. In addition, the epipolar lines of the two

distinct FOEs intersect at discrete points in the image. These properties of moving objects are

also verified in the synthetic data shown in the plot on the right. Thus, a classification scheme

must be designed to handle these scenarios to detect independently moving objects. The next










section examines the motion detection problem through the residual optical flow to further

classify static objects from dynamic objects in the field of view.








Figure 6-3. Residual optic flow for dynamic environments

6.2 Classification

The classification scheme proposed in this dissertation is an iterative approach to computing

the FOE of the static environment using the residual optical flow given in Equation 6-8. An

approximation for the potential location of the FOE is found by extending the translational

optical-flow vectors to form the epipolar lines, as illustrated in Figure 6-3, and obtaining all

possible points of intersection. As mentioned previously, the intersection points obtained will

constitute a number of potential FOEs; however, only one will describe the static background

while the rest are due to moving objects. The approach considered for this classification that

essentially groups the intersection data together through a distance criterion is an iterative

least-squares solution for the potential FOEs.

The iteration procedure tests all intersection points as additional features are introduced

to the system of equations each of which involves 2 unknown image plane coordinates of the

FOE, (μ_foe, ν_foe). The process starts by considering 2 feature points and their FOE intersection









for the first iteration. It is assumed for the first iteration that the two features are static. The

least-squares solution is then given in Equation 6-9 for the FOE coordinates (μ_foe, ν_foe) (for the first iteration a least-squares solution is not necessary because two lines intersect at a single point).

[ μ̂_foe ; ν̂_foe ] = arg min ‖ M [ μ_foe ; ν_foe ] − b ‖²   (6-9)

where

M = [ m_1, −1 ; m_2, −1 ; ... ; m_{i+1}, −1 ]   (6-10a)

b = [ m_1 μ_1 − ν_1 ; m_2 μ_2 − ν_2 ; ... ; m_{i+1} μ_{i+1} − ν_{i+1} ]   (6-10b)

and m_j denotes the slope of the epipolar line extended from the jth feature point along its residual optic flow.


The next iteration adds another feature into the system of equations and a new potential FOE

point is obtained. If the new feature point is a static feature, then the new estimated FOE will be

near the static FOE, which is found in the first iteration, causing a small residual. Alternatively,

if the feature point is due to a moving object, then the epipolar line will not intersect the static FOE and will shift the solution, causing a large residual. Define the new FOE coordinates as (μ_{foe,i}, ν_{foe,i}). A cost function is then checked to verify if the new feature point contains a similar

motion to that of the static background by checking the residual. This residual is defined as the

Euclidean distance from the two FOE solutions found before and after adding the next feature.

If the cost is higher than some maximum threshold J_max, then the feature point is discarded into a set of points classified as moving, Π; else, the feature point is classified into the static FOE solution, C. This process is repeated until all n feature points have been checked using this cost function, which is shown in Equation 6-11 for the ith iteration. Mathematically, the classification scheme for the ith iteration is given in Equations 6-12 and 6-13.

J_i(μ_i, ν_i) = √[(μ_{foe,i} − μ_{foe,i−1})² + (ν_{foe,i} − ν_{foe,i−1})²]   (6-11)










C_i = {(μ_i, ν_i) if J_i(μ_i, ν_i) ≤ J_max}   (6-12)

else

Π_i = {(μ_i, ν_i) if J_i(μ_i, ν_i) > J_max}   (6-13)

After all n feature points have been examined under this criterion, a set of m feature points

are classified to the static background, C. Meanwhile, a set of n − m feature points are classified as objects disobeying the static trend, Π, and are considered moving objects. The class of moving

objects can be further classified into distinct objects through a clustering method. This method

removes all static features and uses the intersections of the epipolar lines pertaining to moving

objects as data points in the clustering algorithm. The resulting data will produce distinct clusters

around the FOEs pertaining to moving objects. The threshold J_max is a design parameter that

segments the feature points into their respective classes and needs to be tuned to account for

measurement noise.
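The iterative scheme of Equations 6-9 through 6-13 can be outlined in code as below. The sketch is Python/NumPy (not the dissertation's MATLAB implementation), the epipolar lines are represented by their slopes as in Equations 6-10a and 6-10b, and the helper names and the assumption that the first two features are static are illustrative.

    import numpy as np

    def classify_features(points, flows, J_max):
        """Separate static (C) from moving (Pi) feature points via the FOE.

        points : (n, 2) array of image coordinates (mu, nu)
        flows  : (n, 2) array of residual translational optic flow
        J_max  : residual threshold of Equations 6-12 and 6-13
        """
        def solve_foe(idx):
            # each feature contributes the line  m*mu_foe - nu_foe = m*mu - nu
            m = flows[idx, 1] / flows[idx, 0]              # epipolar line slopes
            M = np.column_stack((m, -np.ones(len(idx))))
            b = m * points[idx, 0] - points[idx, 1]
            foe, *_ = np.linalg.lstsq(M, b, rcond=None)
            return foe

        static = [0, 1]                                    # assume first two features are static
        moving = []
        foe_prev = solve_foe(static)
        for i in range(2, len(points)):
            foe_new = solve_foe(static + [i])
            if np.linalg.norm(foe_new - foe_prev) <= J_max:    # Equation 6-11
                static.append(i)
                foe_prev = foe_new
            else:
                moving.append(i)
        return static, moving, foe_prev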









CHAPTER 7
HOMOGRAPHY APPROACH TO MOVING TARGETS

7.1 Introduction

Autonomous vehicles have gained significant roles and assisted the military on the

battlefield over the last decade by performing missions such as reconnaissance, surveillance,

and target tracking with the aid of humans. These vehicles are now being considered for more

complex missions that involve increased autonomy and decision making to operate in cluttered

environments with less human interaction. One critical component that autonomous vehicles

need for a successful mission is the ability to estimate the location and movement of other objects

or vehicles within the scene. This capability, from a controls standpoint, enables autonomous

vehicles to navigate in complex surroundings for tracking or avoidance purposes.

Target state estimation is an attractive capability for many autonomous systems over a

broad range of applications and is the focus of this dissertation. In particular, unmanned aerial

vehicles (UAV) have shown a great need for this technology. With UAV becoming more prevalent

in the aerospace community, researchers are striving to extend their capabilities while making

them more reliable. The key applications of interest for future UAV regarding target estimation

pertain to both civilian and military tasks. These tasks range from police car pursuits and border

patrol to locating and pursuing enemy vehicles during lethal engagement. A major limitation

to small UAV are their range, payload constraints and fuel capacity. These limitations generate

the need for autonomous aerial refueling (AAR) to extend the vehicle's operational area. Target

state estimation facilitates a portion of the AAR problem by estimating the receptacle's current

position and orientation during approach. Therefore, the purpose of this chapter is to demonstrate

a method that estimates the motion of a target using an on-board monocular camera system to

address these applications.

Most techniques for vision-based feedback share some commonality; namely, a sequence of

image processing and vision processing are performed on an image or a set of images to extract

information which is then analyzed to make a decision. The basic unit of information from an










image is a feature point which indicates some pixel of particular interest due to, for example,

color or intensity gradient near that pixel. These intensity variations correlate well to physical

features in the environment such as corners and edges which describe the character of buildings

and vehicles within a scene as described in Chapter 3. Among the techniques that utilize feature

points, the approach related to this chapter involves epipolar geometry [39, 112]. The purpose of

this technique is to estimate relative motion based on a set of pixel locations. This relative motion

can describe either motion of the camera between two images or the relative distance of two

objects of the same size from a single image.

The 3D scene reconstruction of a moving target can be determined from the epipolar

geometry through the homography approach described in Chapter 3. For the case described in

this chapter, a moving camera attached to a vehicle observes a known moving reference object

along with an unknown moving target object. The goal is to employ a homography vision-based

approach to estimate the relative pose and translation between the two objects. Therefore, a

combination of vision and traditional sensors such as a global positioning system (GPS) and

an inertial measurement unit (IMU) are required to facilitate this problem for a single camera

configuration. For example in the AAR case, GPS and IMU measurements are available for both

the receiver and tanker aircraft.

In general, a single moving camera alone is unable to reconstruct the 3D scene containing

moving objects. This restriction is due to the loss of the epipolar constraint, where the plane

formed by the position vectors relative to two camera positions in time to a point of interest

and the translation vector is no longer valid. Techniques have been formulated to reconstruct

moving objects viewed by a moving camera with various constraints [35, 113-116]. For instance,

a homography based method that segments background from moving objects and reconstructs

the target's motion has been achieved [117]. Their reconstruction is done by computing a virtual

camera which fixes the target's position in the image and decomposes the homography solution

into motion of the camera and motion caused by the target. This decomposition is done using a

planar translation constraint which restricts the target's motion to a ground plane. Similarly, Han










and Kanade [115] proposed an algorithm that reconstructs 3D motion of a moving object using

a factorization-based algorithm with the assumption that the object moves linearly with constant

speeds. A nonlinear filtering method was used to solve the process model which involved both

the kinematics and the image sequences of the target [118] This technique requires knowledge

of the height above the target which was done by assuming the target traveled on the ground

plane. This assumption allowed other sensors, such as GPS, to provide this information. The

previous work of Mehta et al. [77] showed that a moving monocular camera system could

estimate the Euclidean homographies for a moving target in reference to a known stationary

object.

The contribution of this chapter is to cast the formulation shown in Mehta et al. to a

more general problem where both target and reference vehicles have general motion and are

not restricted to planar translations. This proposed approach incorporates a known reference

motion into the homography estimation through a transformation. Estimates of the relative

motion between the target and reference vehicle are computed and related back through known

transformations to the UAV. Relating this information with known measurements from GPS

and IMU, the reconstruction of the target's motion can be achieved regardless of its dynamics;

however, the target must remain in the image at all times. Although the formulation can be

generalized for n cameras with independent position, orientation, translations, and rotation this

chapter describes the derivation of a single camera setup. Meanwhile, cues on both the target

and reference objects are achieved through LED lights or markers placed in a known geometric

pattern of the same size. These markers facilitate the feature detection and tracking process by

placing known features that stand out from the surroundings while the geometry and size of the

pattern allows for the computation of the unknown scale factor that is customary to epipolar and

homography based approaches.

This chapter builds on the theory developed in Chapters 3 and 5 while relying on the moving

object detection algorithm to isolate moving objects within an image. Recall the flow of the

overall block diagram shown in Figure 1-6. The process started by computing features in the










image relative to an aircraft and then employing the moving object detection algorithm shown in

Chapter 6. Once moving objects in the image are detected, the homography estimation algorithm

proposed in this chapter is implemented for target state estimation.

7.2 State Estimation

7.2.1 System Description

The system described in this paper consists of three independently moving vehicles or

objects containing 6-DOF motion. To describe the motion of these vehicles a Euclidean space is

defined with five orthonormal coordinate frames. The first frame is an Earth-fixed inertial frame,

denoted as E, which represents the global coordinate frame. The remaining four coordinate

frames are moving frames attached to the vehicles. The first vehicle contains two coordinate

frames, denoted as B and I, to represent the vehicle's body frame and camera frame, as described

in Chapter 5 in Figure 5-1. This vehicle is referred to as the chase vehicle and is instrumented

with an on-board camera and GPS/IMU sensors for position and orientation. The second vehicle,

denoted as F, is considered a reference vehicle that also contains GPS/IMU sensors and provides

its states to the chase vehicle through a communication link. Lastly, the third vehicle, denoted

as T, is the target vehicle of interest in which unknown state information is to be estimated. In

addition, a fictitious coordinate frame will be used to facilitate the estimation process and is

defined as the virtual coordinate system, V.

The coordinates of this system are related through transformations containing both rotational

and translational components. The rotational component is established using a sequence of

Euler rotations in terms of the orientation angles to map one frame into another. Let the relative

rotation matrices R_EB, R_BI, R_EF, R_EV, R_IV, R_FV, R_TV, and R_ET denote the rotations from E to B,
B to I, E to F, E to V, I to V, F to V, T to V, and E to T. Secondly, the translations are defined
as T_EB, x_F, x_V, x_T, η_F,n, η_T,n, T_BI, x_IV, m_IF, m_IT, η_IF,n, η_IT,n, m_VF, m_VT, η_VF,n, and η_VT,n, which
denote the respective translations from E to B, E to F, E to V, E to T, E to the nth feature point
on the reference and target vehicles (all expressed in E), B to I (expressed in B), I to V, I to
F, I to T, I to the nth feature point on the reference and target vehicles (expressed in I), V to F, V









to T, V to the nth feature point on the reference and target vehicles expressed in V. This vector

geometry relating the coordinate frames is illustrated in Figure 7-1 for a camera on board a UAV

while the vectors relating the feature points to both the real and virtual cameras are depicted in

Figure 7-2. The estimated quantities computed from the vision algorithm are defined as RTB and

xTB which are the relative rotation and translation from T to B expressed in B.














Figure 7-1. System vector description

The camera is modeled through a transformation that maps 3-dimensional feature points

onto a 2-dimensional image plane as described in Chapter 3. This transformation is a geometric

relationship between the camera properties and the position of a feature point. The image plane

coordinates are computed based on a tangent relationship from the components of η_n. The
camera relationship used in this chapter is referred to as the continuous pinhole camera model
and is given in Equations 3-7 and 3-8 for a zero lens offset, where f is the focal length of the
camera and η_x,n, η_y,n, η_z,n are the (x, y, z) components of the nth feature point.

This pinhole model is a continuous mapping that can be further extended to characterize

properties of a physical camera. Some common additions to this model include skewness, radial

















Figure 7-2. Moving target vector description relative to A) camera I and B) virtual camera V

distortion, discrete mapping into pixels, and field of view constraints, which are further
specified in Chapter 3. Each extension to the model adds another parameter that must be known for the
estimation problem, and each can introduce uncertainty and large errors in the estimation result.

Therefore, this chapter will only consider the field of view constraint and leave the nonlinear

terms and the effects on estimation for future work. Recall the field of view constraints given in

Chapter 3. These constraints can be represented as lower and upper bounds in the image plane

and are dependent on the half angles (γ_h, γ_v), which are unique to each camera. Mathematically,
these bounds are shown in Equations 7-1 and 7-2 for the horizontal and vertical directions.

$[\underline{\mu}, \bar{\mu}] = [-f \tan\gamma_h,\; f \tan\gamma_h]$    (7-1)

$[\underline{\nu}, \bar{\nu}] = [-f \tan\gamma_v,\; f \tan\gamma_v]$    (7-2)
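As an illustration only (the dissertation's simulations are implemented in MATLAB), the following Python sketch evaluates these bounds for an assumed focal length and half angles; the function name and default values are hypothetical.

    import numpy as np

    def in_field_of_view(mu, nu, f=1.0, gamma_h=np.radians(32.0), gamma_v=np.radians(28.0)):
        # Equations 7-1 and 7-2: the image coordinates must satisfy
        # |mu| <= f*tan(gamma_h) and |nu| <= f*tan(gamma_v)
        return abs(mu) <= f * np.tan(gamma_h) and abs(nu) <= f * np.tan(gamma_v)

    # With f = 1 and half angles of 32 deg and 28 deg, the bounds evaluate to
    # roughly +/-0.62 and +/-0.53, matching the limits listed later in Table 10-2.
    print(in_field_of_view(0.5, 0.5))   # True
    print(in_field_of_view(0.7, 0.0))   # False: outside the horizontal bound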


7.2.2 Homography Estimation

The implicit relationship between camera and environment is known as the epipolar

constraint or, alternatively, the homography constraint. This constraint notes that position vectors
describing a feature point, η_n, at two instances in time are coplanar with the camera's translation










vector [102]. The same constraint holds for the image coordinates as well but also introduces an

unknown scale factor. Employing this constraint, estimates of relative motion can be acquired

for both camera-in-hand and fixed camera configurations. This dissertation deals with the

camera-in-hand configuration while assuming a perfect feature point detection and tracking

algorithm. This assumption enables the performance of the vision based state estimation to be

tested before introducing measurement errors and noise.

The homography constraint requires a few assumptions based on the quantity and the

structure of the feature points. The algorithm first requires a minimum of four planar feature

points to describe each vehicle. This requirement enables a unique solution to the homography

equation based on the number of unknown quantities. The reference vehicle will have a minimum

of four pixel values in each image, which will be defined as p_F,n = [μ_F,n, ν_F,n] ∀n feature points.
Likewise, the target vehicle will have four pixel values and will be defined as p_T,n = [μ_T,n, ν_T,n] ∀n
feature points. This array of feature point positions is computed at 30 Hz, which is typical for
standard cameras, and the frame count is denoted by i. The final requirement is a known distance

for both the reference and target vehicle. One distance represents the position vector to a feature

on the reference vehicle in Euclidean space relative to the local frame F and the second distance

represents the position vector to a feature on the target vehicle in Euclidean space relative to

the local frame T. In addition, the lengths of these vectors must also be equal, which allows the
unknown scale factor to be determined. The vector describing the reference feature point will be
denoted as s_F expressed in F, while the vector describing the target feature point is referred to as

sT expressed in T. These feature point position vectors are also illustrated in Figure 7-2.

The feature points are first represented by position vectors relative to the camera frame,

I. The expressions for both the reference and target feature points are given in Equations 7-3

and 7-4. These vector components are then used to compute the image coordinates given in

Equations 7-1 and 7-2. The computation in Equation 7-4 requires information regarding the

target which is done solely to produce image measurements that normally would be obtained

from the sensor. Remaining computations, regarding the homography, will only use sensor










information provided only by the camera vehicle, the reference vehicle and the images acquired

from the camera.

$\eta_{IF,n} = R_{BI} R_{EB} \left( x_F - T_{EB} \right) - R_{BI} T_{BI} + R_{BI} R_{FB}\, s_F$    (7-3)

$\eta_{IT,n} = R_{BI} R_{EB} \left( x_T - T_{EB} \right) - R_{BI} T_{BI} + R_{BI} R_{TB}\, s_T$    (7-4)

The variables R_FB and R_TB in Equations 7-3 and 7-4 are the true rotation matrices from F to B
and T to B, respectively, and are shown in Equations 7-5 and 7-6.

$R_{FB} = R_{EB} R_{EF}^T$    (7-5)

$R_{TB} = R_{EB} R_{ET}^T$    (7-6)

For state estimation of a moving target using a moving camera the homography approach

requires the reference vehicle to be stationary in the image [77]. In this case, both the reference

and target vehicles are in motion and are being viewed by a moving camera. Therefore, the next

step is to transform the camera to a virtual configuration that observes the reference vehicle

motionless in the image over two frames. In other words, this approach computes a Euclidean

transformation that maps the camera's states at i − 1 to a virtual camera that maintains the relative

position and orientation between frames to fix the feature points of the reference vehicle. This

transformation is done by making use of the previous image frame and state information at
i − 1 from both the camera and the reference vehicle. After the virtual camera is established, the

homography equations can be employed for state estimation.

To compute the location and pose of the virtual camera at i, the relative position and
orientation from I to F at i − 1 are required. This relative motion is computed through known
measurements from GPS/IMU, and the expressions are shown in Equations 7-7 and 7-8 for
translation and rotation at i − 1, respectively.

$x_{IF}(i-1) = x_F(i-1) - T_{EB}(i-1)$    (7-7)

$R_{IF}(i-1) = R_{EF}(i-1)\, R_{EB}^T(i-1)\, R_{BI}^T(i-1)$    (7-8)










Once the relative motion is determined, the position and orientation of the virtual camera

relative to E can be computed. These relationships are shown in Equations 7-9 and 7-10 for the

current frame i.

$x_V(i) = x_F(i) + R_{EB}^T(i-1)\, T_{BI}(i-1) - x_{IF}(i-1)$    (7-9)

$R_{EV}(i) = R_{BI}(i-1)\, R_{EB}(i-1)\, R_{EF}^T(i-1)\, R_{EF}(i)$    (7-10)
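A minimal Python sketch of this construction is given below. It assumes the reconstructed forms of Equations 7-7, 7-9, and 7-10 and the convention that a rotation matrix such as R_EB maps E-frame vectors into B; the function and argument names are illustrative and are not the dissertation's MATLAB implementation.

    import numpy as np

    def virtual_camera_pose(x_F_prev, x_F_curr, T_EB_prev, T_BI_prev,
                            R_EB_prev, R_BI_prev, R_EF_prev, R_EF_curr):
        # Relative position from the camera vehicle to the reference at i-1 (Eq. 7-7)
        x_IF_prev = x_F_prev - T_EB_prev
        # Virtual camera position at frame i (Eq. 7-9): the camera-to-reference
        # offset from frame i-1 is preserved while the reference moves
        x_V = x_F_curr + R_EB_prev.T @ T_BI_prev - x_IF_prev
        # Virtual camera orientation at frame i (Eq. 7-10): the old camera attitude
        # relative to the reference is carried to the reference's new attitude
        R_EV = R_BI_prev @ R_EB_prev @ R_EF_prev.T @ R_EF_curr
        return x_V, R_EV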

The virtual camera position and orientation is then used to update the image coordinates for

both the reference and target vehicles. This update requires computing new η vectors in terms

of the virtual camera's position and orientation followed by a substitution of those components

into the pin-hole camera model given in Equations 3-7 and 3-8. The expressions for the new

vectors η_VF,n and η_VT,n in terms of the virtual camera are given in Equations 7-11 and 7-12 for

the reference and target vehicles.


$\eta_{VF,n} = R_{EV} \left( x_F - x_V \right) + R_{FV}\, s_F$    (7-11)

$\eta_{VT,n} = R_{EV} \left( x_T - x_V \right) + R_{TV}\, s_T$    (7-12)

Equations 7-11 and 7-12 are one way to compute image coordinates for the virtual camera,

but there are unknown terms in Equation 7-12 that are not measurable or computable in this

case. Therefore, an alternative method must be used to compute image values of the target

in the virtual camera. Using the position and orientation of the virtual camera, as given in

Equations 7-9 and 7-10, the relative motion is computed from camera I to camera V while using

epipolar geometry to compute the new pixel locations. This relative camera motion is given in

Equations 7-13 and 7-14 where the translation is expressed in I.


$x_{IV} = R_{BI} R_{EB} \left( x_V - T_{EB} - R_{EB}^T T_{BI} \right)$    (7-13)

$R_{IV} = R_{EV}\, R_{EB}^T\, R_{BI}^T$    (7-14)

Using this relative motion and the pixel locations obtained from camera I, the pixel

coordinates are computed of the target in the image plane seen by the virtual camera V. The










epipolar constraint enables the relationship between image frames to compute the features of the

target through Equation 7-15

$p_{VT,n}^T \left( \hat{x}_{IV}\, R_{IV} \right) p_{T,n} = 0$    (7-15)

where $\hat{x}_{IV}$ is the skew-symmetric representation of the relative translation from I to V expressed
in I, and the new pixel coordinates determined from the virtual camera are denoted as p_VF,n =
[μ_VF,n, ν_VF,n] ∀n for the reference vehicle and p_VT,n = [μ_VT,n, ν_VT,n] ∀n for the target vehicle. As

a result of the virtual camera, the desired property is obtained: pixels of the reference
vehicle computed from the camera at i − 1 are equal to the pixels generated by the virtual camera

at i. Mathematically, this property is expressed in Equation 7-16 which relies on the relative

motion remaining constant to maintain the reference stationary in the image.


$p_{F,n}(i-1) = p_{VF,n}(i)$    (7-16)


With this virtual camera in place and the reference pixels stationary, the computation of

the homography between the reference and target vehicles is considered. First, the geometric

relationships are established relative to the virtual camera of both the reference and target

vehicles by denoting their feature point positions in Euclidean coordinates. The time varying

position of a feature point on the reference vehicle expressed in V is given in Equation 7-17.

Likewise, the time varying position of a feature point on the target vehicle expressed in V is given

in Equation 7-18.

$\eta_{VF,n} = m_{VF} + R_{FV}\, s_F$    (7-17)

$\eta_{VT,n} = m_{VT} + R_{TV}\, s_T$    (7-18)

The components of these Euclidean coordinates are defined in Equations 7-19 and 7-20 and are

relative to the virtual camera frame.


$\eta_{VF,n}(t) = \begin{bmatrix} x_{VF,n}(t) & y_{VF,n}(t) & z_{VF,n}(t) \end{bmatrix}^T$    (7-19)

$\eta_{VT,n}(t) = \begin{bmatrix} x_{VT,n}(t) & y_{VT,n}(t) & z_{VT,n}(t) \end{bmatrix}^T$    (7-20)









After some manipulation, an expression for the relative translation and rotation between the

reference vehicle and the target vehicle can be written as shown in Equation 7-21.


$\eta_{VT,n} = x + R\, \eta_{VF,n}$    (7-21)


The relative translation, x, expressed in V and rotation, R, are defined in Equations 7-22 and 7-23

which describe the relative motion between the reference and target objects.


$x = m_{VT} - R \left( m_{VF} + R_{FV} \left( s_F - s_T \right) \right)$    (7-22)

$R = R_{TV}\, R_{FV}^T$    (7-23)

By employing some known quantities and assumptions regarding the feature points, the

unknown scale factor in the homography equation can be determined. Recall, the virtual camera

location is known through Equation 7-9 and the reference vehicle location is known through

GPS along with the feature point locations, therefore, a projected distance can be computed that

scales the depth of the scene. To compute this distance, the normal vector, n, that defines the plane
on which the reference feature points lie is required and can be computed from known information.
Ultimately, the projective distance can be obtained and is defined in Equation 7-24 through the
use of the reference position.

$D(t) = n^T \eta_{VF,n}$    (7-24)

Substituting Equation 7-24 into Equation 7-21 results in an intermediate expression for the

Euclidean homography and is shown in Equation 7-25.


$\eta_{VT,n} = \left( R + \frac{x}{D}\, n^T \right) \eta_{VF,n}$    (7-25)

To facilitate the subsequent development, the normalized Euclidean coordinates are used and

defined in Equations 7-26 and 7-27.
$\bar{\eta}_{VF,n} = \frac{\eta_{VF,n}}{z_{VF,n}}$    (7-26)

$\bar{\eta}_{VT,n} = \frac{\eta_{VT,n}}{z_{VT,n}}$    (7-27)









From Equations 7-25, 7-26, and 7-27 the normalized Euclidean homography is established

which relates the translation and rotation between coordinate frames F and T. This homography

expression is shown in Equation 7-28 in terms of the normalized Euclidean coordinates.

$\bar{\eta}_{VT,n} = \alpha \underbrace{\left( R + x_h\, n^T \right)}_{H} \bar{\eta}_{VF,n}$    (7-28)

In Equation 7-28, α(t) denotes the depth ratio, H(t) denotes the Euclidean homography, and
x_h(t) denotes the scaled translation defined in Equation 7-29.

$x_h = \frac{x}{D}$    (7-29)

The Euclidean homography can now be expressed in terms of image coordinates or pixel

values through the ideal pin-hole camera model given in Equations 3-7 and 3-8. This expression
is obtained by first rewriting the camera model into matrix form, which is referred to as the camera

calibration matrix, K. Substituting the camera mapping into Equation 7-28 and using the camera

calibration matrix, K, the homography in terms of pixel coordinates is obtained and given in

Equation 7-30. This final expression relates the rotation and translation of the two vehicles F

and T in terms of their image coordinates. Therefore, to obtain a solution from this homography
expression, both vehicles need to be viewable in the image frame.

$p_{VT,n} = \alpha \underbrace{\left( K H K^{-1} \right)}_{G} p_{VF,n}$    (7-30)


The matrix G(t) is denoted as the projective homography in Equation 7-30, which defines a set
of equations that can be solved up to a scale factor using a linear least-squares approach. Once
the components of the homography matrix are estimated, the matrix needs to be decomposed

into translational and rotational components to obtain xh and R. This decomposition is

accomplished using techniques such as singular value decomposition and generates four possible

solutions [119, 120]. To determine a unique solution some physical characteristics of the problem









can be used. First, two solutions can be eliminated by using the positive depth constraint. The

decision regarding the remaining two solutions is more difficult to decipher unless the normal

vector is known or can be estimated, which in this case is known. Recall the normal vector,

n, describes the plane containing the feature points of the reference vehicle. As a result, the

homography solution is determined uniquely.
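To illustrate the least-squares step, the Python sketch below estimates the projective homography G up to scale from the pixel correspondences using a standard direct linear transform; it is not the dissertation's MATLAB code, and the subsequent decomposition into R and x_h (for which the cited techniques [119, 120] apply) is not repeated here.

    import numpy as np

    def estimate_projective_homography(p_ref, p_tgt):
        # Solve p_tgt ~ G p_ref (Eq. 7-30) up to scale by linear least squares.
        # p_ref, p_tgt: (n, 2) arrays of matched pixel coordinates, n >= 4.
        A = []
        for (u, v), (up, vp) in zip(np.asarray(p_ref, float), np.asarray(p_tgt, float)):
            A.append([-u, -v, -1, 0, 0, 0, up * u, up * v, up])
            A.append([0, 0, 0, -u, -v, -1, vp * u, vp * v, vp])
        A = np.asarray(A)
        # The homography is the right singular vector with the smallest singular value
        _, _, Vt = np.linalg.svd(A)
        G = Vt[-1].reshape(3, 3)
        # Normalize the arbitrary scale (assumes the last entry is not near zero)
        return G / G[2, 2]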

The final step in this development is to use the homography solution to solve for the relative

translation and rotation from T to B. The resulting equation for the rotation uses a sequence of

transformations and is shown in Equation 7-31.


$R_{TB} = R_{BI}\, R_{EB}\, R_{EV}^T\, R\; R_{BI}(i-1)\, R_{EB}(i-1)\, R_{EF}^T(i-1)$    (7-31)


The translation is found through a series of scalings followed by a vector sum. The relative

translation, x_h, is first multiplied by D to remove the scaling given in Equation 7-29 and obtain
x. Secondly, x is then divided by α to remove the depth ratio, resulting in the final x expressed in I.
This result, in conjunction with R, is then used in Equation 7-22 to solve for m_VT. The next step is
to compute the relative translation from I to V, which is given in Equation 7-32.

$m_{IV} = R_{EV} \left( x_V - T_{EB} - R_{EB}^T T_{BI} \right)$    (7-32)


The relative translation from T to B is then given in Equation 7-33.


$x_{TB} = R_{EB}\, R_{EV}^T \left( m_{VT} - m_{IV} \right)$    (7-33)


In conclusion, Equations 7-31 and 7-33 represent the relative motion between the camera

vehicle and the target vehicle. This information is valuable for the control tasks described earlier

involving both tracking and homing applications. The next section will implement this algorithm

in simulation to verify the state estimator for the noise free case.









CHAPTER 8
MODELING TARGET MOTION

8.1 Introduction

Once state estimation of a moving target has been obtained, the next step is to record these
estimates over time to learn the object's general motion. Understanding
these motions is useful for prediction and allows for closed-loop control in applications such as
autonomous docking and AAR. In essence, this prediction step provides the tracking vehicle with

future state information of the target which assists the controller in both the tracking and docking

missions. This chapter describes a probabilistic method that employs the time history estimates

of the target's motion to determine future locations. In addition to providing state predictions, the

modeling scheme also provides position updates when features are outside the field of view.

Linear modeling is not sufficient for prediction in this situation, where the motion is

stochastic. Linear techniques that estimate a transfer function, such as ARX, require that the

inputs and outputs of the system are known. Although this is the case for many systems, it

doesn't apply in this scenario because the inputs (i.e. the forces) on the target are assumed to be

unknown. For example, in the AAR mission the target, or drogue, interacts with a flow field that

is potentially turbulent due to the effects of the surrounding aircraft (i.e. tanker and receiver)

and difficult to model. The drogue is also tethered by a flexible boom that applies reaction forces

which are dictated from the tanker aircraft and the aerodynamic forces on the boom. These

factors make the modeling task challenging to accurately represent the motion of a general target

with unmodeled dynamics and disturbances. Therefore, the method considered in this dissertation

will consist of a probabilistic approach to account for general motions with stochastic behavior.

8.2 Dynamic Modeling of an Object

There are numerous modeling schemes in the research community. The probabilistic

approaches can be separated into two main categories consisting of supervised and unsupervised

learning algorithms. Supervised algorithms require training data that determines trends a priori
and classifies the motion under consideration according to the trends observed during training.










Alternatively, unsupervised learning requires no explicit training. Instead, these algorithms

rely on data clustering to determine the natural trends of the motion. The approach taken in

this dissertation is an unsupervised technique presented by Zhu [121] that employs a Hidden

Markov Model to predict the motion of moving objects. The benefits in using a Hidden Markov

Model include a time dependence framework incorporated into the probabilistic model as well

as the ability to handle stochastic processes. The underlying concept of a Hidden Markov
Model is the probability of a process sequentially transitioning from one state to another. This

sequential property provides the necessary framework for time dependence modeling, which is an

attractive approach for the applications considered, where the time history data is a critical piece

of information included in the modeling.

8.2.1 Motion Models

The selection of motion models that can be used for predicting the location of a target

contains infinite possibilities due to the various types of motions. A target's motion generally
involves a single model but can also contain various models that comprise the overall motion.

These motions can exist at different periods throughout the trajectory. Incorporating this

logic into the prediction scheme, models are chosen based on the current acceleration of the

target which is determined by the time history of the position estimates. Constant velocity and

stochastic acceleration models are two general types of motions considered.

The constant velocity model is derived by assuming the acceleration of the target is zero, as

shown in Equation 8-1. Therefore, the velocity and position are updated through Equations 8-2

and 8-3. Although this model is limited, it describes a foundation for modeling target motion and
covers the basic case of constant velocity.

$a(t) = 0$    (8-1)

$v(t) = s$    (8-2)

$p(x, y, t + \Delta t) = p(x, y, t) + s \Delta t$    (8-3)









The next model considered involves a random motion model. The assumed acceleration is

shown in Equation 8-4 and is characterized by a random vector, w(t), scaled by a constant,
ρ. The velocity corresponding to this acceleration is described in Equation 8-5. This model
attempts to capture the stochastic behaviors by utilizing a probabilistic distribution function.

$a(t) = \rho\, w(t)$    (8-4)

$v(t) = v(t - \Delta t) + \rho \int_{t-\Delta t}^{t} w(\tau)\, d\tau$    (8-5)

Alternatively, the model shown in Equation 8-4 can be modified to incorporate some

dependence on the previous acceleration value. This dependence is achieved by weighting the

previous acceleration in the model, as shown in Equation 8-6. This type of
model, as opposed to Equation 8-4, requires some knowledge of the target; namely, that the target
cannot achieve large abrupt changes in acceleration. The resulting velocity expression for this
model is given in Equation 8-7.

$a(t) = \rho_0\, a(t - \Delta t) + \rho\, w(t)$    (8-6)

$v(t) = v(t - \Delta t) + \rho_0\, a(t - \Delta t)\, \Delta t + \rho \int_{t-\Delta t}^{t} w(\tau)\, d\tau$    (8-7)
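To make the role of the weighting parameters concrete, the sketch below propagates the three candidate motion models over one discrete step of length Δt with a zero-mean Gaussian draw for w(t); the parameter names rho and rho0 stand for ρ and ρ_0, and the default values are arbitrary placeholders rather than values from the dissertation.

    import numpy as np

    def propagate(p, v, a_prev, dt, model="weighted", rho=1.0, rho0=0.8, rng=None):
        # One-step propagation of the motion models in Equations 8-1 through 8-7
        rng = np.random.default_rng() if rng is None else rng
        w = rng.standard_normal(3)                 # random disturbance vector w(t)
        if model == "constant_velocity":           # Eqs. 8-1 to 8-3: zero acceleration
            a = np.zeros(3)
        elif model == "random":                    # Eq. 8-4: purely stochastic acceleration
            a = rho * w
        else:                                      # Eq. 8-6: weighted previous acceleration
            a = rho0 * a_prev + rho * w
        v_new = v + a * dt                         # Eqs. 8-5 / 8-7 in discrete form
        p_new = p + v * dt + 0.5 * a * dt**2
        return p_new, v_new, a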

8.2.2 Stochastic Prediction

The image sequence obtained from the camera is processed by the homography to obtain

the position estimates of the target. These position estimates are then used to compute a velocity

profile of the target, as shown in Equation 8-8 for the ith target and N image frames. The velocity

profile is computed using a backwards difference method and is given in Equation 8-9.


$\left[ v_i(t-1),\, v_i(t-2),\, \ldots,\, v_i(t-N+1),\, v_i(t-N) \right]$    (8-8)

$v_i(t-j) = p_i(t-j) - p_i(t-j-1)$    (8-9)

Similarly, an acceleration profile, defined in Equation 8-10, is obtained from the velocity

profile given in Equation 8-8. The same backwards difference method is used to compute this









profile and is provided in Equation 8-11. This acceleration time history is computed implicitly

through the position estimates obtained from the homography algorithm.


$\left[ a_i(t-1),\, a_i(t-2),\, \ldots,\, a_i(t-N+2),\, a_i(t-N+1) \right]$    (8-10)

$a_i(t-j) = v_i(t-j) - v_i(t-j-1)$    (8-11)
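The backwards differences of Equations 8-9 and 8-11 can be written compactly as follows, assuming the position estimates are stored newest first and a unit frame interval; the function name is illustrative.

    import numpy as np

    def motion_profiles(positions):
        # positions: (N+1, 3) array [p(t-1), p(t-2), ..., p(t-N-1)], newest first.
        # Returns the velocity profile (Eq. 8-9) and acceleration profile (Eq. 8-11).
        p = np.asarray(positions, float)
        v = p[:-1] - p[1:]        # v(t-j) = p(t-j) - p(t-j-1)
        a = v[:-1] - v[1:]        # a(t-j) = v(t-j) - v(t-j-1)
        return v, a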

The motion profiles given in Equations 8-8 and 8-10 provide the initial motion state

description that propagates the Markov transition probability function. The form of the Markov

transition probability function is assumed to be a Gaussian density function that only requires

two parameters for its representation. The parameters needed for this function include the mean

and variance vectors for the acceleration profile, given in Equation 8-12. Note that throughout this chapter
μ(·) is the mean operator and not an image plane coordinate. Likewise, σ²(·) is
referred to as the variance operator.

$\left[ \mu\left( a_i(t+j) \right),\; \sigma^2\left( a_i(t+j) \right) \right] \quad j = 0, 1, \ldots, k$    (8-12)

The Markov transition function is defined in Equation 8-13, where the arguments consist of

the mean and variance pertaining to the estimated acceleration.


$P\left( a_i(t+j) \right) = \mathcal{N}\left( \mu\left( a_i(t+j) \right),\; \sigma^2\left( a_i(t+j) \right) \right)$    (8-13)

The initial mean and variance for acceleration are computed in Equations 8-14 and 8-15
for the transition function. The functions f_μ and f_σ are chosen based on the desired weighting of

the time history and can simply be a weighted linear combination of the arguments. These initial

statistical parameters are used in the prediction step and updated once a new measurement is

obtained.

$\mu\left( a_i(t) \right) = f_\mu\left( a_i(t-1),\, a_i(t-2),\, \ldots,\, a_i(t-N) \right)$    (8-14)

$\sigma^2\left( a_i(t) \right) = f_\sigma\left( a_i(t-1),\, a_i(t-2),\, \ldots,\, a_i(t-N) \right)$    (8-15)









Finally, the Markov transition probability function is given explicitly in Equation 8-16

as a three-dimensional Gaussian density function and is uniquely determined by the mean and
variance.

$P\left( a_i(t) \right) = \frac{1}{\sqrt{2\pi}\, \sigma\left( a_i(t) \right)} \exp\!\left( -\frac{\left( a_i(t) - \mu\left( a_i(t) \right) \right)^2}{2\, \sigma^2\left( a_i(t) \right)} \right)$    (8-16)
The probability function is then extended over the entire time interval [t, t + k] to estimate

the prediction probability. Mathematically, this extension is expressed in Equation 8-17.


$P\left( a_i(t+j) \right) = P\left( a_i(t+j-1) \right) = \cdots = P\left( a_i(t) \right)$    (8-17)


Employing Equations 8-16 and 8-17, the predictive probability for object i at time t + k

is given as Equation 8-18. This framework enables the flexibility of computing the predicted

estimates at any desired time in the future with the notion that further out in time the probability

diminishes.
$\mathrm{Prob}\left( a_i(t+k) \right) = \prod_{j=0}^{k-1} P\left( a_i(t+j) \right)$    (8-18)
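A direct transcription of Equations 8-16 through 8-18 for a scalar acceleration component is sketched below; extending it to the three-dimensional case amounts to taking the product over the components. The function names are illustrative.

    import numpy as np

    def transition_prob(a, mean, var):
        # Gaussian Markov transition probability of Eq. 8-16
        return np.exp(-(a - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    def predictive_prob(a_sequence, mean, var):
        # Predictive probability of Eq. 8-18: product of the transition probabilities
        # over the horizon j = 0 ... k-1, with the same mean and variance at every
        # step as stated by Eq. 8-17.
        return float(np.prod([transition_prob(a, mean, var) for a in a_sequence]))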
A similar process is considered for computing the Markov transition functions for both

velocity and position. First, the mean and variance vectors for velocity and position are defined in

Equations 8-19 and 8-20 for the entire time interval.


$\left[ \mu\left( v_i(t+j) \right),\; \sigma^2\left( v_i(t+j) \right) \right] \quad j = 0, 1, \ldots, k$    (8-19)

$\left[ \mu\left( p_i(t+j) \right),\; \sigma^2\left( p_i(t+j) \right) \right] \quad j = 0, 1, \ldots, k$    (8-20)

The initial mean and variance expressions for the velocity are given in Equations 8-21
and 8-22.

$\mu\left( v_i(t) \right) = \mu\left( v_i(t-1) + a_i(t-1) \right) = v_i(t-1) + \mu\left( a_i(t-1) \right)$    (8-21)

$\sigma^2\left( v_i(t) \right) = \sigma^2\left( v_i(t-1) + a_i(t-1) \right)$    (8-22)









Meanwhile, the expressions for the mean and variance for the position are given in

Equations 8-23 and 8-24.

$\mu\left( p_i(t) \right) = \mu\left( p_i(t-1) + v_i(t-1) + \tfrac{1}{2} a_i(t-1) \right) = p_i(t-1) + \mu\left( v_i(t-1) \right) + \tfrac{1}{2} \mu\left( a_i(t-1) \right)$    (8-23)

$\sigma^2\left( p_i(t) \right) = \sigma^2\left( p_i(t-1) + v_i(t-1) + \tfrac{1}{2} a_i(t-1) \right) = \sigma^2\left( v_i(t-1) \right) + \tfrac{1}{4} \sigma^2\left( a_i(t-1) \right)$    (8-24)

Lastly, the probability functions for velocity and position are used to compute the predictive

probabilities for object i that are given in Equations 8-25 and 8-26 for velocity and position,

respectively.

$\mathrm{Prob}\left( v_i(t+k) \right) = \prod_{j=0}^{k-1} P\left( v_i(t+j) \right)$    (8-25)

$\mathrm{Prob}\left( p_i(t+k) \right) = \prod_{j=0}^{k-1} P\left( p_i(t+j) \right)$    (8-26)
Therefore, the probability given in Equation 8-26 is the probability that target i is located in

position p(x, y, z). The overall process is thus an iterative method that uses the motion models,
given in Section 8.2.1, to provide guesses for position and velocity in an attempt to maximize
the probability functions given in Equations 8-25 and 8-26. The position that maximizes
Equation 8-26 is the most likely location of the target at t + k with a known probability.
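One way to realize this maximization, sketched below under the simplifying assumptions that candidate positions come only from the motion models of Section 8.2.1 and that the position probability reduces to a single Gaussian evaluation per candidate, is to score each candidate and keep the best one; the helper names are hypothetical.

    import numpy as np

    def most_likely_position(candidates, mean_p, var_p):
        # candidates: list of candidate positions p_i(t+k) generated by the motion models.
        # Returns the candidate that maximizes the position probability (Eq. 8-26),
        # approximated here by one componentwise Gaussian evaluation per candidate.
        mean_p, var_p = np.asarray(mean_p, float), np.asarray(var_p, float)
        def prob(p):
            p = np.asarray(p, float)
            return float(np.prod(np.exp(-(p - mean_p) ** 2 / (2.0 * var_p)) /
                                 np.sqrt(2.0 * np.pi * var_p)))
        return max(candidates, key=prob)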









CHAPTER 9
CONTROL DESIGN

The control strategy considered in this dissertation uses the computed relative states found

between a moving camera and a moving target of interest as shown in Chapter 7. Effectively,

these quantities are the error signals used for control to track the moving camera toward a desired

location based on the motion of the target. The framework presented here will use aircraft and

UAV navigation schemes for the aerial missions described in Chapter 1. Therefore, the control

design described in this chapter focuses on the homing mission to facilitate the AAR problem,

which involves tracking the position states computed from the homography.

Various types of guidance controllers can be implemented for these types of tasks once

the relative position and orientation are known. Depending on the control objectives and how

fast the dynamics of the moving target are, low pass filtering or a low gain controller may be

required to avoid high rate commands to the aircraft. In the AAR problem, the success of the

docking controller will directly rely on several components. The first component is the accuracy
of the estimated target location, which during AAR needs to be precise. Secondly, the dynamics
of the drogue are stochastic. This makes it impractical for the modeling task to replicate
real life, so the controller is limited to the models considered in the design. In addition, the

drogue's dynamics may not be dynamically feasible for the aircraft to track which may further

reduce performance. Lastly, the controller ideally should make position maneuvers in stages by

considering the altitude as one stage, the lateral position as another stage, and the depth position

as the final stage. In close proximity, the controller should implement only small maneuvers to

help maintain the vehicles in the FOV.

9.1 Control Objectives

The control objectives for the AAR mission is to track and home on the target drogue and

successfully dock with the receptacle. This controller is designed using a tracking methodology

that regulates the relative distance to within a specified tolerance. For example, the tolerance

required for aerial refueling is on the centimeter scale [15].










9.2 Controller Development

The control architecture chosen for this mission consisted of a Proportional, Integral and

Derivative (PID) framework for waypoint tracking given in Stevens and Lewis [110]. The

standard design approach was used by considering the longitudinal and lateral states separately as

in typical waypoint control schemes. This approach separated the control into three segments: 1)

Altitude control, 2) Heading Control and 3) Depth Control.

9.2.1 Altitude Control

The first stage considered in the control design to home on a target is the altitude tracking.

This stage considers the longitudinal states of the aircraft using the elevator as the control

effector. The homography generates the altitude command necessary to track and dock with the

refueling receptacle. The architecture for the altitude tracking system is shown in Figure 9-1. The

first portion of this system is described as the inner-loop where pitch and pitch rate are used in

feedback to stabilize and track a pitch command. Meanwhile, the second portion is referred to as

the outer-loop which generates pitch commands for the inner-loop based on the current altitude

error. The inner-loop design enables the tracking of a pitch command through proportional















Figure 9-1. Altitude hold block diagram

control. This pitch command in turn will affect altitude through the changes in forces on the

horizontal tail from the elevator position. The two signals used for this inner-loop are pitch and

pitch rate. The pitch rate feedback helps with short period damping and allows for rate variations

in the transient response. A lead compensator was designed in Stevens et al. [110] to raise the










loop gain and to achieve good gain and phase margins for the pitch command to pitch transfer

function.

The outer-loop design involved closing the loop in altitude. The altitude error signal is

generated by the difference in current altitude and the commanded altitude computed by the

estimation algorithm. The compensator designed for the inner-loop pitch is augmented to

maintain the high loop gain and is defined as G, in Figure 9-1. This structure will provide good

disturbance rejection during turbulent conditions. In addition, bounds were placed on the pitch

command to alleviate any aggressive maneuvers during the refueling process.
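The inner/outer-loop structure can be summarized in a few lines. The Python sketch below is a simplified discrete version with placeholder gains and a pitch-command bound; it is not the compensator design of Stevens and Lewis [110], and the sign conventions depend on the airframe model.

    import numpy as np

    def altitude_hold(h_cmd, h, theta, q, kh=0.01, ktheta=2.0, kq=0.5,
                      theta_limit=np.radians(10.0)):
        # Outer loop: altitude error mapped to a bounded pitch command
        theta_cmd = np.clip(kh * (h_cmd - h), -theta_limit, theta_limit)
        # Inner loop: proportional pitch and pitch-rate feedback to the elevator
        elevator_cmd = ktheta * (theta_cmd - theta) - kq * q
        return elevator_cmd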

9.2.2 Heading Control

The next stage in the control design consists of the turn, or heading, coordination. This
aspect involves the lateral-directional states of the aircraft. The control surfaces that affect these

states are ailerons and rudder. Similar to the altitude controller, the homography estimates a

heading command that steers the aircraft in the desired direction toward the target of interest. The

control architecture that accomplishes this objective is depicted in Figure 9-2. The inner-loop





















Figure 9-2. Heading hold block diagram

component of Figure 9-2 deals with roll tracking. The feedback signals include both roll and

roll rate through proportional control to command a change in aileron position. The inner-loop










stabilization design also included a roll-to-elevator interconnect to help counteract the altitude loss
during a turn.

The outer-loop is completed by simply closing the loop around the roll tracker using a
proportional gain to follow the desired heading. In addition, command limits of ±60 deg were
placed on roll to regulate aggressive turns, and a yaw damper was also implemented that included
an aileron-rudder interconnect, which helps the turn in a number of ways. The aileron-rudder
interconnect helps to raise the nose up during a turn. Meanwhile, the yaw damper is employed to
damp oscillations from the Dutch-roll mode during a heading maneuver. The design of the yaw
damper is provided in Stevens et al. [110]. Consequently, the turn is smoother and contains fewer
oscillations.

Tracking heading is not sufficient to track the lateral position with the level of accuracy

needed for the refueling task. The final loop was added to account for any lateral deviation

accumulated over time due to the delay in heading from position. This delay is mainly due to the

time delay associated with sending a roll command and producing a heading change. Therefore,

this loop was added to generate more roll for compensation. The loop commanded a change in

aileron based on the error in lateral position. This deviation, referred to as Δy, was computed

based on two successive target locations provided by the estimator. The current and previous

(x, y) positions of the target were used to compute a line in space to provide a reference of its

motion. The perpendicular distance from the vehicle's position to this line was considered the

magnitude of the lateral command. In addition, the sign of the command was needed to assign

the correct direction. This direction was determined from the relative y position, expressed in the

body-fixed frame, that was found during estimation. Once the lateral deviation was determined,

that signal was passed through a PI structure, as shown in Figure 9-2. The gains corresponding to

the proportional gain, k_yp, and the integral gain, k_yi, were applied and the resulting terms summed to compute the final roll
command. The complete expression for the roll command is shown in Equation 9-1.

$\phi_{cmd} = k_\psi \left( \psi_{cmd} - \psi \right) + k_{yp}\, \Delta y + k_{yi} \int \Delta y\, dt$    (9-1)
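A literal transcription of Equation 9-1 is sketched below, with the integral of the lateral deviation accumulated discretely and the roll limit noted above applied to the result; the gain values are placeholders and are not the values used in the dissertation.

    import numpy as np

    class RollCommand:
        # Roll command of Eq. 9-1 with a discretely accumulated integral term
        def __init__(self, k_psi=1.0, k_yp=0.02, k_yi=0.005, limit=np.radians(60.0)):
            self.k_psi, self.k_yp, self.k_yi, self.limit = k_psi, k_yp, k_yi, limit
            self.int_dy = 0.0

        def update(self, psi_cmd, psi, dy, dt):
            self.int_dy += dy * dt
            phi_cmd = (self.k_psi * (psi_cmd - psi)
                       + self.k_yp * dy + self.k_yi * self.int_dy)
            # Apply the +/-60 deg roll limit described above
            return float(np.clip(phi_cmd, -self.limit, self.limit))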










9.2.3 Depth Control

The last stage in the homing mission is the depth position or axial position to the tracked

target. Once the altitude and lateral position are aligned using the two previous controllers then

the depth control is engaged. The error to regulate for this last stage is the axial position in

the body-fixed frame. There are many challenges when approaching this control design due to

the reliance of vision-based feedback. One particular problem associated with state estimation

algorithms that employ vision is the breakdown when features exit the field of view. For

instance, during approach the objects within the image become larger which makes them harder

to maintain within the field of view. Therefore, to account for this drawback the controller should

be restricted to very slow steady maneuvers and avoid sudden changes in orientation.

The design approach taken for this control loop is to increase velocity while maintaining

altitude and restrict large changes in pitch angle. Once the lateral position and altitude are aligned

then the axial position is regulated to zero. The control architecture chosen for this loop was
proportional, where the error is multiplied by the gain factor, k_s, which generates a change in

thrust command. During thrust changes, the aircraft tends to climb or descend due to the change

in airspeed. This resulting altitude change is counteracted by adjusting the elevator through the

altitude controller designed in the beginning of this chapter.
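A minimal sketch of the depth loop follows, assuming the axial error is already expressed in the body-fixed frame; the gain k_s, the throttle trim, and the saturation limits are placeholders rather than values from the design.

    import numpy as np

    def depth_control(x_axial_error, ks=0.05, throttle_trim=0.4):
        # Proportional thrust command that drives the body-frame axial error to zero;
        # the throttle output is simply saturated to [0, 1]
        return float(np.clip(throttle_trim + ks * x_axial_error, 0.0, 1.0))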

Meanwhile, any adjustments that are made to maintain altitude are governed by the pitch

angle, which directly affects the field of view. As a result, the pitch was limited to maintain

features within the image when the receiver is within a specified distance. This methodology

will still not guarantee features will stay in the image. For example, the homography requires a

minimum of four distinct feature on each vehicle. The closer in proximity the receiver gets the

larger the objects get in the image. At close distances the object can fill the entire image causing

the feature points to leave the field of view even if the object is centered in the image. This

creates a dead zone in the measurable space that can either be fixed by estimating a prediction of

the target over time or by customizing the camera parameters to correspond with camera position
and orientation, along with the size of the objects, to ensure features remain in the image.










The modeling scheme presented in Chapter 8 provides a method to estimate targets in

Euclidean space when features do exit the image. This method works well for short periods of

time after the target has left; however, the trust in the predicted value degrades tremendously

as time increases. Consequently, when a feature leaves the image the controller can rely on the

predicted estimates to steer the aircraft initially but may resort to alternative approaches beyond a

specified time. As a last resort, the controller can command the aircraft to slow down and regain a

broader perspective of the scene to recapture the target.










CHAPTER 10
SIMULATIONS

10.1 Example 1: Feature Point Generation

A simulation of vision-based feedback is presented to demonstrate the implementation,

and resulting information, associated with sensor models and aircraft dynamics. This simulation

utilizes a nonlinear model of the flight dynamics of an F-16 [110]. A baseline controller is

implemented that allows the vehicle to follow waypoints based entirely on feedback from inertial

sensors.

Images are obtained from a set of cameras mounted on the aircraft. These cameras include

a stationary camera mounted at the nose and pointing along the nose, a translating camera under

the centerline that moves from the right wing to the left wing, and a pitching camera mounted

under the center of gravity. The parameters for these cameras are given in Table 10-1 in values

relative to the aircraft frame and functions of time given as t in seconds.

Table 10-1. States of the cameras
                       position (ft)              orientation (deg)
  camera        x         y         z         φ         θ         ψ
  1             24        0         0         0         90        0
  2             -10       15-3t     0         0         45        0
  3             0         0         3         0         45-9t     0


The camera parameters are chosen as similar to an existing camera that has been flight

tested [111]. The focal length is normalized so f = 1. Also, the field of view for this model

correlates to angles of γ_h = 32 deg and γ_v = 28 deg. The resulting limits on image coordinates are

given in Table 10-2.

Table 10-2. Limits on image coordinates
  coordinate      minimum      maximum
  μ               -0.62        0.62
  ν               -0.53        0.53


A virtual environment is established with some characteristics similar to an urban

environment. This environment includes several buildings along with a moving car and a










moving helicopter. In actuality, these features are simply represented by sets of points and any

motion results from simple kinematic motion. A specific point, positioned as in Table 10-3, is

associated with each feature for direct identification in the camera images.

Table 10-3. States of the feature points
                              position (ft)
  feature point    north           east                altitude
  1                3500            200                 -1500
  2                1000+200t       500                 -500
  3                6000            200cos(t)+1000      200sin(t)-1000


The flight path through this environment is shown in Figure 10-1 along with the features.

The aircraft initially flies straight and level toward the North but then turns somewhat towards the

East and begins to descend in a dive maneuver.



Figure 10-1. Virtual Environment for Example 1: A) 3D View and B) Top View

Images are taken at several points throughout the flight as indicated in Figure 10-1 by

markers along the trajectory. The states of the aircraft at these instances are given in Table 10-4.

The image plane coordinates (μ, ν) are plotted in Figure 10-2 for the three cameras at

t = 2 sec. This computation is accomplished by using Equation 5-28 in conjunction with

Equations 3-5 and 3-6 while applying the field of view constraint shown in Equations 3-4 and

3-3. All three cameras contain some portion of the environment along with distinct views of the

feature points of interest. For example, camera 1 contains a forward looking view of a stationary










Table 10-4. Aircraft states
  Time     North      East       Down       u         v         w
  (s)      (ft)       (ft)       (ft)       (ft/s)    (ft/s)    (ft/s)
  2        1196.9     0.44       -2174.8    573.52    56.79     -126.46
  4        2112.7     143.04     -1645.4    527.37    -54.94    17.77
  6        2989.8     353.63     -1100.7    528.30    4.26      45.57
  Time     φ          θ          ψ          p         q         r
  (s)      (deg)      (deg)      (deg)      (deg/s)   (deg/s)   (deg/s)
  2        -13.92     -22.43     1.81       -13.56    -36.82    1.38
  4        -39.21     -37.90     22.79      32.31     28.41     0.04
  6        6.98       -14.85     11.93      7.63      9.34      -1.15


point on the corner of a building as well as the moving helicopter. Meanwhile, cameras 2 and 3

observe a top view of the moving ground vehicle traveling forward down a road. These image

measurements provide a significant amount of data and allow for more advanced algorithms for

state estimation and reconstruction.


" ~
.t


A


B


C


Figure 10-2. Feature point measurements at t = 2 sec for A) camera 1, B) camera 2, and C)
camera 3


Figure 10-3 depicts the optic flow computed for the same data set shown in Figure 10-2.

This image measurement gives a sense of relative motion in magnitude and direction caused

by camera and feature point motion. The expressions required to compute optic flow consisted

of Eqs. 5-28, 5-31, 3-5, 3-6, 3-32, 3-33, 3-4 and 3-3. In this example, the optic flow has

many components contributing to the final value. For instance, the aircraft's velocity and angular

rates contribute a large portion of the optic flow because of their large magnitudes. In addition,

the smaller components in this example are caused from vehicle and camera motion which are

smaller in magnitude but have a significant effect on direction. Comparing cameras 1 and 2, there










are slight differences in direction due to the translating camera. Likewise, the optic flow observed

by camera 3 is different due to the camera's orientation.









Figure 10-3. Optic flow measurements at t = 2 sec for A) camera 1, B) camera 2, and C) camera 3


A summary of the resulting image plane quantities, position and velocity, is given in

Table 10-5 for the feature points of interest as listed in Table 10-3. The table is organized by the

time at which the image was taken, which camera took the image, and which feature point is

observed. This type of data enables autonomous vehicles to gain awareness of their surroundings

for more advanced applications involving guidance, navigation and control.

Table 10-5. Image coordinates of feature points
  Time (s)    Camera    Feature Point    μ        ν        μ̇        ν̇
2 1 1 0.157 0.162 0.610 0.044
2 1 3 0.051 0.267 0.563 -0.012
2 2 2 -0.308 0.075 0.464 -0.254
2 3 2 0.011 0.077 0.583 -0.235
4 2 2 -0.279 -0.243 -0.823 0.479
4 3 2 0.365 -0.248 -0.701 0.603
6 1 3 0.265 -0.084 0.267 -0.015


10.2 Example 2: Feature Point Uncertainty

10.2.1 Scenario

Feature point uncertainty is demonstrated in this section by extending the previous example.

This simulation will examine the uncertainty effects on vision processing algorithms using

simulated feature points and perturbed camera intrinsic parameters.









Similarly to the previous example, vision-based feedback is generated using a flight

simulation. The overall setup of this example is the same where a nonlinear model of an F-16 is

used to fly through a cluttered environment while capturing images from an on-board camera.

Camera settings, such as focal length and field of view, are kept the same from the previous

example. The actual environment has been normalized based on the aircraft velocity so units are

not presented.

A set of obstacles, each with a feature point, are randomly distributed throughout the

environment and are not the same as the previous example. This environment is shown in

Figure 10-4 along with a pair of points indicating the locations at which images will be captured.

The aircraft is initially straight and level then translates forward while rolling 4.0 deg and yawing

1.5 deg at the final location.




Figure 10-4. Virtual environment of obstacles (solid circles) and imaging locations (open circles)
A) 3D view and B) top view

A single camera is simulated at the center of gravity of the aircraft with line of sight aligned

to the nose of the aircraft. The intrinsic parameters are chosen such that f_0 = 1.0 and d_0 = 0.0

for the nominal values. The images for the nominal camera associated with the scenario in

Figure 10-4 are presented in Figure 10-5 to show the variation between frames.

The vision-based feedback is computed for a set of perturbed cameras. These perturbations

range as δf ∈ [-0.2, 0.2] and δd ∈ [-0.02, 0.02]. Obviously the feature points in Figure 10-5 will

vary as the camera parameters are perturbed. The amount of variation will depend on the feature

















Figure 10-5. Feature points for A) initial and B) final images

point, as noted in Equations 4-9 and 4-10, but the effect can be normalized. The variation in

feature point given nominal values of μ_0 = ν_0 = 1 is shown in Figure 10-6 for variation in both

focal length and radial distortion. This surface can be scaled accordingly to consider the variation

at other feature points. The perturbed surface shown in Figure 10-6 is propagated through three

main image processing techniques for analysis.











Figure 10-6. Variation in feature point for perturbations to focal length and radial distortion

10.2.2 Optic Flow

A representative comparison of optic flow for
the nominal camera and a set of perturbed cameras is shown in Figure 10-7.















Figure 10-7. Optical flow for nominal (black) and perturbed (red) cameras for A) f = 1.1 and
d = 0, B) f = 1.0 and d = 0.01, and C) f = 1.1 and d = 0.01

The vectors in Figure 10-7 indicate several effects of camera perturbations noted in

Equations 4-5 and 4-6. The perturbations to focal length scale the feature points so the

magnitude of optic flow is uniformly scaled. The perturbations to radial distortion have a larger
effect as the feature point moves away from the center of the image, so the optic flow vectors

are altered in direction. The combination of perturbations clearly changes the optic flow in both

magnitude and direction and demonstrates the feedback variations that can result from camera

variations.
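The scaling and distortion effects described above can be reproduced with a short sketch. The perturbation model assumed here, a focal-length offset δf and a single radial-distortion coefficient δd applied to the nominal projection, is only a plausible reading of the perturbations used in this example and is not taken from the original code.

    import numpy as np

    def perturbed_projection(mu, nu, df=0.0, dd=0.0, f=1.0):
        # Apply focal-length and radial-distortion perturbations to nominal image
        # coordinates (mu, nu).  A larger radius from the image center increases
        # the distortion contribution, consistent with the behavior noted for
        # Figure 10-7; the exact perturbation form is an assumption.
        r2 = mu ** 2 + nu ** 2
        scale = (f + df) / f
        mu_p = scale * mu * (1.0 + dd * r2)
        nu_p = scale * nu * (1.0 + dd * r2)
        return mu_p, nu_p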

The optic flow is computed for images captured by each of the perturbed cameras. The

change in optic flow for the perturbed cameras as compared to the nominal camera is represented
as δγ and is bounded in magnitude, as derived in Equation 4-14, by Δγ. The greatest value of δγ
presented by these camera perturbations is compared to the upper bound in Table 10-6. These

numbers indicate the variations in optic flow are indeed bounded by the theoretical bound derived

in Chapter 4 and indicate the level of flow variations induced from the variations in camera

parameters.

Table 10-6. Effects of camera perturbations on optic flow
  Perturbation                    Analysis with δf only    Analysis with δd only    Analysis with δf and δd
  set                             value     bound          value     bound          value     bound
  δf = -0.2 and δd = -0.02        0.0476    0.0476         0.0040    0.0040         0.0496    0.0543
  δf = -0.1 and δd = -0.01        0.0238    0.0476         0.0020    0.0040         0.0252    0.0543
  δf =  0.1 and δd =  0.01        0.0238    0.0476         0.0020    0.0040         0.0264    0.0543
  δf =  0.2 and δd =  0.02        0.0476    0.0476         0.0040    0.0040         0.0543    0.0543










10.2.3 The Epipolar Constraint

State estimation is performed by considering the epipolar constraint to relate the pair of

images. The evaluation of images generated using the nominal camera for this simulated case is

able to estimate the correct states. An investigation of the epipolar lines shown in Figure 10-8

shows the quality of the estimation. Essentially, the epipolar geometry requires a feature point

in one image to lie along the epipolar line. This epipolar line is constructed by the intersection

between the plane formed by the epipolar constraint and the image plane at the last measurement.

The data in Figure 10-8 show the features in the second image do indeed lie exactly on the

epipolar lines.
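This consistency check can be expressed algebraically: for an essential matrix E relating the two frames, a matched pair should satisfy p2ᵀ E p1 = 0, and l = E p1 is the epipolar line in the second image. The sketch below computes both quantities for homogeneous image coordinates; the function name is illustrative.

    import numpy as np

    def epipolar_residuals(E, pts1, pts2):
        # pts1, pts2: (n, 2) matched image coordinates in the first and second frames.
        # Returns the epipolar lines l = E p1 and the residuals p2^T E p1, which are
        # exactly zero for noise-free, correctly estimated geometry.
        p1 = np.hstack([np.asarray(pts1, float), np.ones((len(pts1), 1))])
        p2 = np.hstack([np.asarray(pts2, float), np.ones((len(pts2), 1))])
        lines = (E @ p1.T).T
        residuals = np.sum(p2 * lines, axis=1)
        return lines, residuals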





Figure 10-8. Epipolar lines between two image frames: A) initial frame and B) final frame with
overlayed epipolar lines for nominal camera

The introduction of uncertainty into the epipolar constraint will cause variations in the

essential matrix which will also propagate through the computation of the epipolar line. These

variations in the epipolar line are visual clues of the quality of the estimate in the essential

matrix. These variations can occur as changes in the slope and the location of the epipolar line.

Figure 10-9 illustrates the epipolar variations due to perturbations of δf = 0.1 and δd = 0.01 to
the camera parameters. The feature points with uncertainty and the corresponding epipolar lines
were plotted along with the nominal case to illustrate the variations. The key point in this figure
is the small variations in the slope of the epipolar lines and the significant variations in feature
point locations that occur along these lines. The small variations in the slope would suggest
reasonable estimation of the rotational component of the essential matrix; however, the variation
along the epipolar lines would indicate a sensitivity to focal length variations due to the scaling
effects revealed in the plots.



































Figure 10-9. Uncertainty results for epipolar geometry: A) initial frame and B) final frame with
overlayed epipolar lines for cameras with f = 1.0 and d = 0.0 (black) and f = 1.1
and d = 0.01 (red)

The essential matrix is computed for the images taken using a set of camera models.

Each model is perturbed from the nominal condition using the variations in Figure 10-6. The

change in estimated states between nominal and perturbed cameras is given by δq over the
uncertainty range and is bounded, as derived in Equation 4-19, by Δq. The value of δq for
a specific perturbation is shown in comparison to the upper bound in Table 10-7, which also
indicates the variation in entries of the essential matrix that propagates to the camera states.

Table 10-7. Effects of camera perturbations on epipolar geometry
  Perturbation                    Analysis with δf only    Analysis with δd only    Analysis with δf and δd
  set                             value     bound          value     bound          value     bound
  δf = -0.2 and δd = -0.02        293.14    293.14         4.45      4.45           288.75    297.34
  δf = -0.1 and δd = -0.01        122.26    293.14         2.19      2.19           288.75    297.34
  δf =  0.1 and δd =  0.01        90.48     293.14         2.11      2.19           288.75    297.34
  δf =  0.2 and δd =  0.02        159.31    293.14         4.15      4.45           288.75    297.34















10.2.4 Structure From Motion

The images taken during the simulation are analyzed using structure from motion to

determine the location of the environmental features. The initial analysis used the nominal

camera to ensure the approach is able to correctly estimate the locations in the absence of

unknown perturbations. The actual and estimated locations are shown in Figure 10-10 to indicate

that all errors were less than 10^-6.


Figure 10-10. Nominal estimation using structure from motion

The depths are also estimated using structure from motion to analyze images from the

perturbed cameras. A representative set of these estimates are shown in Figure 10-11 as having

clear errors. An interesting feature of the results is the dependence on sign of the perturbation to

focal length. Essentially, the solution tends to estimate a depth larger than actual when using a

positive perturbation and a depth smaller than actual when using a negative perturbation. Such a

relationship is a direct result of the scaling effect that focal length has on the feature points.

Estimates are computed for each of the perturbed cameras and compared to the nominal

estimate. The worst-case errors in estimation are compared to the theoretical bound given in
Equation 4-29. The numbers shown in Table 10-8 indicate the variation in

structure from motion depends on the sign of the perturbation. The approach is actually seen

to be less sensitive to positive perturbations, which causes a larger estimate in depth, than to

negative perturbations. Also, the theoretical bound was greater than, or equal to, the error caused

by each camera perturbation.
















Figure 10-11. Estimation using structure from motion for nominal (black) and perturbed (red)
cameras with A) f = 1.1 and d = 0, B) f = 1.0 and d = 0.01, and C)
f = 1.1 and d = 0.01

Table 10-8. Effects of camera perturbations on structure from motion
  Perturbation                    Analysis with δf only    Analysis with δd only    Analysis with δf and δd
  set                             value     bound          value     bound          value     bound
  δf = -0.2 and δd = -0.02        4679.8    4679.8         75.02     75.02          4903.5    4903.5
  δf = -0.1 and δd = -0.01        1045.6    4679.8         36.90     75.02          1076.6    4903.5
  δf =  0.1 and δd =  0.01        485.80    4679.8         35.73     75.02          498.76    4903.5
  δf =  0.2 and δd =  0.02        1092.4    4679.8         70.34     75.02          1092.5    4903.5


10.3 Example 3: Open-loop Ground Vehicle Estimation

An open-loop simulation was executed in Matlab and replayed in a virtual environment

to test the state estimation algorithm. The scenario envisioned in Chapter 1 involving a police

pursuit is demonstrated through this simulation. The setup consisted of three vehicles: a UAV
flying above with a mounted camera, electronics, and communication, a reference ground vehicle
which is considered the police pursuit car, and a target vehicle describing the suspect's vehicle.

The goal of this mission is for the UAV to track both vehicles in the image, while receiving

position updates from the reference vehicle, and estimate the target's location using the proposed

estimation algorithm.

The camera setup considered in this problem consists of a single downward pointing camera

attached to the UAV with fixed position and orientation. While in flight the camera measures and

tracks feature points on both the target vehicle and the reference vehicle for use in the estimation

algorithm. This simulation assumes perfect camera calibration, feature point extraction, and










tracking so that the state estimation algorithm can be verified. As stated in Chapter 7, the
geometry of the feature points is prescribed and a known distance is provided for each

vehicle. A further description of this assumption is given in Section 7.2.2. Future work will

examine more realistic aspects of the camera system to reproduce a more practical scenario and

try to alleviate the limitations imposed on the feature points.

10.3.1 System Model

The motion of the vehicles was generated to cover a vast range of situations to test the

algorithm. The UAV's motion was generated in open-loop from a nonlinear aircraft model in

trimmed flight. Meanwhile, the reference vehicle and the target vehicle exhibited a standard car

model with similar velocities. Sinusoidal disturbances were added to the target's position and

heading to add some complexity to its motion and to replicate swerving. The three trajectories

are plotted in Euclidean space, as shown in Figure 10-12, for illustration. The initial frame for

this simulation is located at the aircraft's position when the simulation starts. The velocities of the

ground vehicles were scaled up to the aircraft's velocity which resulted in large distances but also

helped to maintain the vehicles in the image.



Figure 10-12. Vehicle trajectories for example 3: A) 3D view and B) top view

The position and orientation states of the three vehicles are plotted in Figures 10-13 through 10-18
and all are represented in the inertial frame, E. The positions indicate that all three vehicles












initially travel north until the target vehicle makes a left turn and heads west and is subsequently

followed by the pursuit vehicles.








Figure 10-13. Position states of the UAV with on-board camera: A) North, B) East, and C) Down



Figure 10-14. Attitude states of the UAV with on-board camera: A) Roll, B) Pitch, and C) Yaw









Figure 10-15. Position states of the reference vehicle (pursuit vehicle): A) North, B) East, and C)
Down


10.3.2 Open-loop Results

The homography was computed for this simulation to find the relative rotation and

translation between the ground vehicles. These results are then used to find the relative

























Figure 10-16. Attitude states of the reference vehicle (pursuit vehicle): A) Roll, B) Pitch, and C)
Yaw


Figure 10-17. Position states of the target vehicle (chase vehicle): A) North, B) East, and C) Down


Figure 10-18. Attitude states of the target vehicle (chase vehicle): A) Roll, B) Pitch, and C) Yaw

motion from the UAV to the target of interest. The norm errors of this motion are depicted in


Figure 10-19. These results indicate that, with synthetic images and perfect tracking of the vehicles, nearly perfect motion can be extracted. Once noise in the image or tracking is introduced, the estimates of the target deteriorate quickly, even with minute noise. In addition, image artifacts such as interference and dropouts will also have an adverse effect on homography estimation.
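For reference, the norm errors plotted in Figure 10-19 can be computed directly from the estimated and true relative motion histories. The following MATLAB sketch assumes the estimated and true rotations and translations are already available; the argument names are hypothetical.

    function [errT, errR] = relativeMotionError(T_est, R_est, T_true, R_true)
    % Norm error between estimated and true relative motion at each image
    % index. T_* are 3xN translation histories, R_* are 3x3xN rotation
    % matrices, all expressed in the same frame (hypothetical names).
    N = size(T_true, 2);
    errT = zeros(N, 1);
    errR = zeros(N, 1);
    for k = 1:N
        errT(k) = norm(T_est(:, k) - T_true(:, k));              % translation norm error
        errR(k) = norm(R_est(:, :, k) - R_true(:, :, k), 'fro'); % rotation Frobenius error
    end
    end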


















Figure 10-19. Norm error for A) relative translation and B) relative rotation

Figures 10-20 and 10-21 show the relative translation and rotation decomposed into their

respective components and expressed in the body frame, B. These components reveal the relative

information needed for feedback to track or home in on the target of interest.




Figure 10-20. Relative position states: A) X, B) Y, and C) Z




Figure 10-21. Relative attitude states: A) Roll, B) Pitch, and C) Yaw












The simulation was then played in a virtual environment to enhance the graphics and

illustrate the application of this algorithm. To add the vehicles to the virtual environment, the velocities of each vehicle had to be scaled down to practical values that fit the scene. Snapshots

are shown in Figure 10-22 of the camera view depicting the vehicles and the surrounding scene.

The red vehicle was designated as the reference whereas the grey vehicle was the target vehicle.

The next step in this process is to implement an actual feature tracking algorithm that follows the vehicles in the synthetic images. This modification alone will degrade the homography results considerably due to the error-prone characteristics of a feature point tracker.
























Figure 10-22. Virtual environment

10.4 Example 4: Closed-loop Aerial Refueling of a UAV

A closed-loop simulation was executed in Matlab to replicate an autonomous aerial

refueling task. As Chapter 1 described the motivation and the benefits of AAR, this section will

demonstrate it by combining the control design given in Chapter 9 with the homography result

in Chapter 7 to form a closed-loop visual servo control system. The vehicles involved in this

simulation include a receiver UAV instrumented with a single camera, a tanker aircraft, also










referred to as the reference vehicle, and the target drogue, also referred to as the target vehicle.

Ultimately, the goal of this task is to mate, in flight, the receptacle probe on the receiver aircraft to the drogue that is tethered to the tanker aircraft.

The camera setup used for this simulation was a forward-looking camera located at the nose of the aircraft with fixed position and orientation. The field of view angles used in this example were γh = 35 deg and γv = 35 deg, along with a focal length of f = 1 and radial distortion set to d = 0. To facilitate feature point tracking, cues were painted on both the tanker and drogue

with an identical pattern and size. A square shape was chosen for this simulation with a length

of 4 feet on all sides. The same assumptions given in the previous example regarding feature

point tracking were applied to this example as well, including the assumption that both the

tanker and drogue remain in the field of view at all times. An additional assumption made to

facilitate the estimation was that data communication between the tanker and receiver was in

place to allow transmission of the tanker's position and orientation. Once the homography

estimation was computed, the relative position between the receptacle and the drogue was found.

Finally, the relative position was used in conjunction with the receiver's position to find the

inertial coordinates of the drogue. These inertial coordinates were then used in feedback for the

controller, similar to a waypoint structure except these inertial points are moving.
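A minimal MATLAB sketch of this final conversion is given below. It assumes the homography stage has already produced the drogue position relative to the receiver, expressed in the receiver body frame; the function and variable names are hypothetical.

    function p_drogue_E = drogueInertialPosition(p_rel_B, p_rcvr_E, phi, theta, psi)
    % Convert the estimated drogue position relative to the receiver,
    % expressed in the receiver body frame (p_rel_B, 3x1), into inertial
    % coordinates using the receiver's inertial position p_rcvr_E (3x1)
    % and Euler angles phi/theta/psi (rad). Standard 3-2-1 body-to-inertial rotation:
    R_EB = [cos(theta)*cos(psi), sin(phi)*sin(theta)*cos(psi)-cos(phi)*sin(psi), cos(phi)*sin(theta)*cos(psi)+sin(phi)*sin(psi);
            cos(theta)*sin(psi), sin(phi)*sin(theta)*sin(psi)+cos(phi)*cos(psi), cos(phi)*sin(theta)*sin(psi)-sin(phi)*cos(psi);
           -sin(theta),          sin(phi)*cos(theta),                            cos(phi)*cos(theta)];
    p_drogue_E = p_rcvr_E + R_EB * p_rel_B;   % inertial waypoint passed to the controller
    end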

10.4.1 System Model

The aircraft model used in this development was a high fidelity nonlinear F-16 model

constructed by the University of Minnesota. The aircraft was trimmed at sea level traveling 500

ft/s straight and level. The details of the aircraft model are extensive and include aerodynamic

tables, actuator models, leading edge flap models, and position and rate limits on all actuation.

States for this model are the standard aircraft states given in Equations 5-26 with additional states such as V, α, and β, the acceleration terms, Mach number, and dynamic pressure. Although the controller will not use all states, the assumption of full state feedback was made to make all states accessible to the controller. The controller uses these states of the aircraft along with the

estimated results to compute actuator commands around the specified trim condition.










The same model was also used for the tanker or reference vehicle. The tanker was exactly

trimmed at the same conditions and airspeed as the receiver aircraft and given a specified

trajectory to follow. Initially the tanker's position was offset from the receiver's position at the

start of the simulation. The values of this offset are described relative to the receiver's coordinate frame and are as follows: 500 ft in front (+X direction), 20 ft to the side (+Y direction), and 100 ft above (-Z direction). The trajectory generated for the tanker aircraft prior to the

simulation was a straight and level flight with a slight drift toward the East direction. This lateral

variation was added to the trajectory to incorporate all three dimensions into the motion to test in

all directions.

On the other hand, the drogue is much more difficult to model and is of much interest in the research community. The stochastic nature of its motion is what makes

the modeling so challenging. The flow field affecting the drogue consists of many nonlinear

excitations including turbulence due to wake effects and vortex shedding from the tanker aircraft.

For this drogue model an offset trajectory of the tanker's motion was used as the drogue's general

motion. The offset of the drogue is initially 200 ft in front (+X direction), 0 ft to the side (+Y direction), and 70 ft above (-Z direction) relative to the receiver aircraft. More complicated

motions of the drogue were considered during testing but resulted in a diverging trajectory for

the receiver. This deviation from the desired path was due to high-rate commands saturating the actuators. Low-pass filtering can be incorporated to alleviate this behavior.
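As an illustration, the drogue path used here amounts to the tanker path shifted by a constant offset, and a simple first-order low-pass filter is one way such high-rate commands could be smoothed. The offset, cutoff, and command signal below are illustrative placeholders, not values from the simulation.

    % Sketch: drogue trajectory as a constant offset of the tanker trajectory,
    % plus first-order low-pass filtering of a noisy, high-rate command signal.
    dt = 0.02;  t = (0:dt:40)';
    p_tanker = [500*t, 5*t, -100*ones(size(t))];       % straight flight with slight East drift (ft)
    offset   = [-300, -20, 30];                        % drogue offset from tanker (ft), placeholder
    p_drogue = p_tanker + repmat(offset, numel(t), 1); % offset trajectory

    cmd_raw = 10*sin(2*pi*1.5*t) + 2*randn(size(t));   % hypothetical noisy command
    tau   = 0.5;                                       % filter time constant, s
    alpha = dt/(tau + dt);
    cmd_filt = filter(alpha, [1, alpha-1], cmd_raw, (1-alpha)*cmd_raw(1));
    plot(t, cmd_raw, ':', t, cmd_filt); xlabel('Time (sec)'); legend('raw', 'filtered')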

10.4.2 Control Tuning

The control architecture described in Chapter 9 is integrated and tuned for the nonlinear

F-16 model to accomplish this simulation. It was assumed that all aircraft states, including position, were measurable for full state feedback. The units used in this simulation are ft and deg, which means the gains determined in the control loops were also found based on these units.

First, the pitch tracking loop for the altitude controller is considered. The inner-loop gains for this controller are given as kθ = -3 and kq = -2.5. The Bode diagram from pitch command to pitch angle is depicted in Figure 10-23 for the specified gains. This diagram reveals the damping












achieved in the phugoid mode. In addition, a 12.4 dB gain margin at 8.15 rad/s and a 157 deg phase margin at 0.381 rad/s were achieved. These margins indicate robustness of the loop gain with respect to increases in gain and phase shifts.



Figure 10-23. Inner-loop pitch to pitch command Bode plot


The step response for the pitch controller is given in Figure 10-24 and shows acceptable


performance. The outer-loop control will now be designed using this controller to track altitude.





Figure 10-24. Pitch angle step response










The outer-loop that connects altitude to pitch commands is considered. The gains for the

inner-loop pitch tracking remained the same while the gain in altitude error was set to k = 1.25.

The final compensation filter is given in Equation 10-1 and was designed in Stevens et al. [110].

A step response for this controller is illustrated in Figure 10-25, which shows a steady climb with no overshoot and a steady-state error of 2 ft. This response is realistic for an F-16 but not ideal for an autonomous refueling mission where tolerances are at the centimeter level. The altitude transition is

slow due to the compensator but one may consider more aggressive maneuvers for missions such

as target tracking that may require additional agility.

G(s) = (s^2 + 0.35 s + 0.015) / (s^2 + 2.41 s + 0.024)                    (10-1)
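A minimal MATLAB sketch of this compensator, assuming the Control System Toolbox functions tf and bode, is given below; the altitude-error gain is the value quoted above.

    % Sketch: altitude-error compensation filter of Equation 10-1.
    s   = tf('s');
    Gc  = (s^2 + 0.35*s + 0.015) / (s^2 + 2.41*s + 0.024);   % Equation 10-1
    k_h = 1.25;                                              % altitude-error gain quoted above
    bode(k_h*Gc); grid on
    % The slow poles of Gc (approximately s = -2.40 and s = -0.01) are consistent
    % with the slow, overshoot-free altitude transition noted for Figure 10-25.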





Figure 10-25. Altitude step response

The next stage tuned in the control design was the heading controller. The inner-loop gains were chosen to be kφ = -5.7 and kp = -1.6 for the roll tracker. The Bode diagram for this controller, from roll command to roll angle, is shown in Figure 10-26, which shows attenuation in the lower frequency range. This attenuation removes any high-frequency response from the aircraft, which is desired during a refueling mission, especially in close proximity.

Meanwhile, the coupling between lateral and longitudinal states during a turn was counteracted











by an aileron-to-elevator interconnect. This connection involved a proportional gain of k = 0.35 multiplied by the roll angle and added to the elevator position.


Figure 10-26. Inner-loop roll to roll command Bode plot

The step response for this bank controller is illustrated in Figure 10-27. The tracking

performance is acceptable based on a rise time of 0.25 sec, an overshoot of 6%, and less than a

3% steady-state error.

The outer-loop tuning for the heading controller consisted of first tuning the gain on heading error. A gain of kψ = 1.5 was chosen for this mission, which demonstrated acceptable performance. Figure 10-28 shows the heading response using this controller for a right turn. The response reveals a steady rise time, no overshoot, and a steady-state error of less than 2 deg. Finally, the loop pertaining to lateral deviation was tuned to ky = 0.5 and kyi = 0.025, which produced reasonable tracking and steady-state error for lateral position.

The final stage of the controller involves the axial position. This stage was designed to

increase thrust based on a velocity command once the lateral and altitude states were aligned.

A proportional gain was tuned based on velocity error to achieve a slow steady approach speed












Figure 10-27. Roll angle step response


Figure 10-28. Heading response

to the target. A gain of ks = 3.5 was determined for this loop, which generates the desired approach. Lastly, to help limit the number of times the feature points exit the field of view, a limit was imposed on the pitch angle. This limit was enforced when the approach achieved a specified distance. For this example, the distance was set to within 75 ft in the axial position of

the body-fixed frame which was determined experimentally from the target's size.
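To summarize the tuned structure in one place, a schematic MATLAB fragment of the outer-loop command generation is given below. It is a sketch of the logic described in this section, not the exact implementation; the signal names, the lateral-deviation definition, and the pitch-limit value are hypothetical, while the gains are those quoted above.

    function [phi_cmd, theta_cmd, dthrottle] = outerLoopCommands( ...
        err_N, err_E, err_h, err_y_int, psi, V, V_cmd, dist_axial)
    % Schematic outer-loop command generation for the refueling approach.
    k_h   = 1.25;                 % altitude-error gain (feeds the Eq. 10-1 filter)
    k_psi = 1.5;                  % heading-error gain
    k_y   = 0.5;  k_yi = 0.025;   % lateral-deviation proportional/integral gains
    k_s   = 3.5;                  % axial (velocity-error) gain

    psi_des = atan2(err_E, err_N);                           % point toward the moving waypoint
    err_psi = rad2deg(mod(psi_des - psi + pi, 2*pi) - pi);   % wrapped heading error, deg
    err_y   = -err_N*sin(psi) + err_E*cos(psi);              % lateral deviation, ft

    phi_cmd   = k_psi*err_psi + k_y*err_y + k_yi*err_y_int;  % bank command, deg
    theta_cmd = k_h*err_h;                                   % pitch command before compensation
    if dist_axial < 75                                       % pitch limit near the target
        theta_cmd = max(min(theta_cmd, 5), -5);              % placeholder +/-5 deg limit
    end
    dthrottle = k_s*(V_cmd - V);                             % throttle change from velocity error
    end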

10.4.3 Closed-loop Results

The state estimation performance of the target drogue during this simulation was similar to

the previous simulation regarding the tracking of a ground vehicle. The estimated target states











were plotted against the true values in Figure 10-29 for position and Figure 10-30 for orientation

and revealed an accurate fit. This result demonstrates the functionality of the estimator with an accuracy on the order of 10^-9. This error was plotted in Figure 10-31 for both position and

orientation.



Figure 10-29. Open-loop estimation of target's inertial position: A) North, B) East, and C)
Altitude






Figure 10-30. Open-loop estimation of target's inertial attitude: A) Roll, B) Pitch, and C) Yaw


Furthermore, the closed-loop results for this simulation were plotted in Figures 10-32

and 10-34 for position and orientation of both the receiver aircraft and the target drogue relative

to the earth-fixed frame. The tracking of this controller showed reasonable performance for the

desired position and heading signals. The remaining orientation angles were not considered in

feedback but estimated for the purpose of making sure the drogue's pitch and roll are within the

desired values before docking. As seen in Figure 10-32, the receiver was able to track the gross

motion of the drogue while having some difficulty tracking the precise motion.













Figure 10-31. Norm error for target state estimates A) translation and B) rotation




Figure 10-32. Closed-loop target position tracking: A) North, B) East, and C) Altitude

The components of the position error between the receiver and drogue are shown in

Figure 10-33 to illustrate the performance of the tracking controller. These plots depict the initial offset error decaying over time, which indicates the receiver's relative distance is decreasing. The altitude showed a quick climb response whereas the response in axial position was a slow steady approach, which was desired to limit large changes in altitude and angle of attack. The lateral position is stable for the time period but contains oscillations due to the roll-to-heading lag.

The orientation angles shown in Figure 10-34 indicate the Euler angles for the

body-fixed transformations corresponding to the body-fixed frame of the receiver and the

body-fixed frame of the drogue. Recall, the only signal being tracked in the control design was

heading. This selection allowed the aircraft to steer and maintain a flight trajectory similar to the

drogue without aligning roll and pitch. The receiver should fly close to a trim condition rather

than matching the full orientation of the drogue, as illustrated in Figure 10-34 for pitch angle.




















Figure 10-33. Position tracking error: A) North, B) East, and C) Altitude

The error in heading is depicted in Figure 10-35 which shows acceptable tracking performance

over the time interval.


Figure 10-34. Target attitude tracking: A) Roll, B) Pitch, and C) Yaw




Figure 10-35. Tracking error in heading angle

The results shown in these plots indicate that the tracking in the lateral position and altitude

is nearly sufficient for the refueling task. The simulation reveals bounded errors in these










dimensions of 3 ft. The main issues occur in the axial position during the approach stage. The

state estimator seems to have trouble during approach when the vehicles in the image reach the

bounds of the field of view. Without four features on each vehicle the estimator cannot function

and is unable to provide updates of the relative states. The performance in axial position

achieved during this simulation was a relative distance of 7 ft until the first feature left the image.

Once the features are out of view the estimator no longer provides updates and the controller

commands the aircraft to fly at a straight and level trim condition at a slower airspeed in an attempt to regain features.

Overall, the simulation results shown here are insufficient to achieve a successful aerial refueling mission based on the requirement of centimeter-level precision. The reason this mission was not achieved in these results is that the controller is unable to maintain the vehicle features within the image at close proximity during the approach. The estimation task is able to function with good accuracy but has the drawback of requiring a minimum of four features on each vehicle in every frame. Although this drawback is common in vision processing, there have been techniques used to estimate, with some variance, the position of a feature that has left the image. This modeling approach can serve two purposes: 1) to predict where the features are when they leave the image, and 2) to help predict where the features might be going in future steps. Implementing the modeling task presented in Chapter 8 will help to aid the controller, or at least help determine a region where the features most likely have traveled.
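As a simple illustration of such a predictor, a constant-velocity extrapolation of each feature's image-plane track could provide a coarse region estimate once the feature leaves the view. This is a generic sketch, not the Chapter 8 model, and the function and argument names are hypothetical.

    function [mu_pred, cov_pred] = predictFeature(track, n_ahead, dt)
    % Constant-velocity extrapolation of an image-plane feature track.
    % track  : Kx2 history of pixel coordinates [u v] before the feature was lost
    % n_ahead: number of frames to predict ahead, dt: frame period (s)
    % Returns the predicted mean location and a crude covariance grown from
    % the scatter of the recent velocity estimates.
    vel    = diff(track) / dt;                     % finite-difference velocities
    v_mean = mean(vel, 1);
    v_cov  = cov(vel);
    mu_pred  = track(end, :) + n_ahead*dt*v_mean;  % extrapolated location
    cov_pred = (n_ahead*dt)^2 * v_cov;             % uncertainty grows with horizon
    end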

10.4.4 Uncertainty Analysis

The estimated target states in this simulation were computed from the homography estimation task given in Chapter 7. To see what levels of variation exist in these results, an uncertainty analysis was performed. Chapter 4 derived a method to compute worst-case bounds on state estimates from the homography approach using visual information, and that technique was used for this uncertainty analysis.

The target estimates for absolute position and orientation along with upper and lower

bounds were computed for this simulation and are shown in Figures 10-36 and 10-37. These











plots contain error bars computed at 0.5 Hz for three levels of parametric uncertainty. The three levels consist of 1) focal length uncertainty, 2) radial distortion uncertainty, and 3) combined focal length and radial distortion uncertainty. The nominal camera parameters were set to f0 = 1 and d0 = 0, and the perturbed set consisted of δf = [-0.1 : 0.1] and δd = [-0.05 : 0.05]. These plots describe the worst-case bounds for each state. The computations confirm that the maximum state variations occur at the maximum level of uncertainty, where both focal length and radial distortion are at their maximum perturbations. The trend observed in these plots indicates an increase in uncertainty as features move closer to the camera.
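The bounds in these figures could be reproduced by sweeping the perturbation set and keeping the extreme estimates at each sample time. The MATLAB sketch below assumes a hypothetical function estimateTargetState that wraps the homography estimator for a given focal length and radial distortion and returns one state history as a column vector.

    % Sketch: worst-case bounds on a target state estimate over the camera
    % parameter uncertainty set (estimateTargetState is hypothetical).
    f0 = 1;  d0 = 0;                                  % nominal camera parameters
    df = linspace(-0.1, 0.1, 5);                      % focal-length perturbations
    dd = linspace(-0.05, 0.05, 5);                    % radial-distortion perturbations

    x_nom = estimateTargetState(f0, d0);              % N x 1 state history at nominal values
    x_lo = x_nom;  x_hi = x_nom;
    for i = 1:numel(df)
        for j = 1:numel(dd)
            x = estimateTargetState(f0 + df(i), d0 + dd(j));
            x_lo = min(x_lo, x);                      % elementwise worst-case bounds
            x_hi = max(x_hi, x);
        end
    end
    k = (1:numel(x_nom))';
    errorbar(k, x_nom, x_nom - x_lo, x_hi - x_nom)    % nominal estimate with bound bars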






Figure 10-36. Target's inertial position with uncertainty bounds: A) North, B) East, and C)
Altitude




Figure 10-37. Target's inertial attitude with uncertainty bounds: A) Roll, B) Pitch, and C) Yaw

The maximum uncertainties in target position relative to the earth-fixed frame are

summarized in Table 10-9. Meanwhile, Table 10-10 contains the maximum uncertainties in

target orientation. The three levels of uncertainty are included in these tables. This comparison

helps to verify that the maximum state variation corresponds to the maximum camera parameter









variation for all states. These state variations indicate levels of uncertainty greater than the allowable tolerance for autonomous refueling, which would indicate a need for improved performance in camera calibration. Iterations can be made on the uncertainty set involving the camera parameters to find the range which meets the allowable tolerance for safe refueling.

This information can provide a method for determining the accuracy needed during camera

calibration for a task that requires such precise estimation.

Table 10-9. Maximum variations in position due to parametric uncertainty
uncertainty parameter north (ft) east (ft) altitude (ft)
f 2.79 4.10 20.54
d 5.66 10.53 14.40
f and d 9.61 15.09 30.82


Table 10-10. Maximum variations in attitude due to parametric uncertainty
uncertainty parameter    φ (deg)    θ (deg)    ψ (deg)
f                        0          0          0
d                        0.06       4.48       2.29
f and d                  0.10       7.94       3.48










CHAPTER 11
CONCLUSION

Vision-based feedback can be an important tool for autonomous systems and is the primary

focus of this dissertation in the context of an unmanned air vehicle. This dissertation describes

a methodology for a vehicle, such as a UAV, to observe features within the environment and

estimate the states of a moving target using various camera configurations. The complete

equations of motion of an aircraft-camera system were derived in a general form that allows multiple cameras. Camera models were summarized and the effects of uncertainty regarding the intrinsic parameters were discussed. Expressions for worst-case bounds were derived for various vision processing algorithms at a conservative level. A classification scheme was summarized

to discern between stationary and moving objects within the image using a focus of expansion

threshold method. The homography derivation proposed was the main contribution of this

dissertation where the states of a moving target were formulated based on visual information.

Some underlying assumptions were imposed on the features and the system to obtain feasible estimates. The two critical assumptions imposed on the features were the planar constraint and the requirement that the distance to a feature on the reference and target vehicles be known and

equal. An additional assumption was placed on the system which involved a communication link

that allows the vehicle to have access to the states of the reference vehicle. The modeling of the

target position attempted to anticipate future locations to enable a predictive capability for the

controller and to provide estimates when the features are outside the field of view. The approach

summarized here consisted of a Hidden Markov method, which has limitations for general 6-DOF motion due to incomplete motion models. Lastly, a standard control design was tuned for an aircraft performing waypoint navigation for use in closed-loop control where commands are generated from the state estimator.

Simulations were presented to validate the proposed algorithms and to demonstrate the applications for autonomous vehicles. The first simulation verified the feature point and optic flow computation for an aircraft-camera system containing multiple cameras with time-varying










position and orientation. The second simulation illustrated the effects of uncertainty on image processing algorithms due to camera intrinsic parameters and showed the conservative nature of this approach. The next simulation confirmed the homography expressions proposed for target state estimation. This was demonstrated in an open-loop fashion for the scenario involving a fictitious

police pursuit that employed a camera equipped UAV. The results presented in this simulation

revealed accurate estimation under ideal conditions. The final simulation incorporated the

target state estimates in feedback for closed-loop control to accomplish a docking task for the

aerial refueling mission. The target state estimator provided commands to a waypoint tracking

controller in an attempt to regulate the relative position between the receiver and the basket drogue.

Simulating the system produced results that were reasonable but inadequate for the requirements

necessary for refueling. The system was capable of homing in on the drogue to within 7 ft in

the axial direction and within 3 ft in both the lateral and altitude directions. The main cause

of this shortcoming is due to the difficulty of maintaining features within the image at close

proximity. Even the slightest shift in orientation can eliminate features from the image causing

a breakdown in target estimation and overall performance. Error bounds on the state estimates

were also computed for this simulation to examine the effects of uncertain camera parameters.

A worst-case bound was found to exist for the case when both focal length and radial distortion were at their maximum variations. Analyzing the worst-case bounds, one can determine the

accuracy needed during calibration to obtain a level of confidence in the target estimates.

Future work of this project will examine more realistic aspects of the camera system to

reproduce a more practical scenario. Typical artifacts seen in real images including noise, pixel

quantization, and feature point tracking errors should all be incorporated into the simulation.

Although this will inevitably degrade the estimation results, additional filtering techniques may

be used to improve reconstruction. The next step is to alleviate the limitations imposed on the

feature points. For example, the restriction that the distance of a feature on both the reference

vehicle and the target are known and equal may limit the usefulness in certain applications.

Meanwhile, the aerial refueling simulation requires realistic dynamics of a drogue in flight to









test the system under practical conditions. Additionally, incorporating the modeling scheme

presented in Chapter 8 into the refueling simulation will help the controller by providing state estimates when the target exits the field of view.










REFERENCES


[1] Secretary of Defense, "Unmanned Aircraft Systems Roadmap 2005-2030," website:
http://uav.navair.navy.mil/roadmap05/roadmap.htm

[2] Grasmeyer, J. M., and Keennon, M. T., "Development of the Black Widow Micro-Air
Vehicle," 39th Aerospace Sciences Meeting and Exhibit, AIAA 2001-0127, Reno, NV,
January 2001.

[3] Beard, R., Kingston, D., Quigley, M., Snyder, D., Christiansen, R., Johnson, W., Mclain,
T., and Goodrich, M., "Autonomous Vehicle Technologies for Small Fixed Wing UAVs,"
AIAA Journal of Aerospace Computing, Information, and Communication, Vol. 2, No. 1,
January 2005, pp. 92-108.

[4] Kingston, D., Beard, R., McLain, T., Larsen, M., and Ren, W., "Autonomous Vehicle
Technologies for Small Fixed Wing UAVs," AIAA 2nd Unmanned Unlimited Systems,
Technologies, and Operations-Aerospace, Land, and Sea Conference, and Workshop and
Exhibit, AIAA-2003-6559, San Diego, CA, September 2003.

[5] Frew, E., "Observer Trajectory Generation for Target-Motion Estimation Using Monocular
Vision," PhD Dissertation, Stanford University, August 2003.

[6] Sattigeri, R., Calise, A. J., Soo Kim, B., Volyanskyy, K., and Nakwan, K., "6-DOF
Nonlinear Simulation of Vision-based Formation Flight," AIAA Guidance, Navigation and
Control Conference and Exhibit, AIAA-2005-6002, San Francisco, CA, August 2005.

[7] Beard, R., Mclain, T., Nelson, D., and Kingston, D., "Decentralized Cooperative Aerial
Surveillance using Fixed-Wing Miniature UAVs," IEEE Proceedings: Special Issue on
Multi-Robot Systems, Vol. 94, Issue 7, July 2006, pp. 1306-1324.

[8] Wu, A. D., Johnson, E. N., and Proctor, A. A., "Vision-Aided Inertial Navigation for
Flight Control," AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA
2005-5998, San Francisco, CA, August 2005.

[9] Ettinger S. M., Nechyba, M. C., Ifju, P. G., and Waszak, M., "Vision-Guided Flight
Stability and Control for Micro Air Vehicle," IEEE/RSJ International Conference on
Intelligent Robots and Systems, Vol. 3, September/October 2002, pp. 2134-2140.

[10] Kehoe, J., Causey, R., Abdulrahim, M., and Lind, R., "Waypoint Navigation for a Micro
Air Vehicle using Vision-Based Attitude Estimation," AIAA Guidance, Navigation, and
Control Conference and Exhibit, AIAA-2005-6400, San Francisco, CA, August 2005.

[11] Abdulrahim, M., and Lind, R., "Control and Simulation of a Multi-Role Morphing
Micro Air Vehicle," AIAA Guidance, Navigation and Control Conference and Exhibit,
AIAA-2005-6481, San Francisco, CA, August 2005.

[12] Abdulrahim, M., Garcia, G., Ivey, G. F., and Lind, R., "Flight Testing of a Micro Air
Vehicle Using Morphing for Aeroservoelastic Control," AIAA Structures, Structural
Dynamics, and Materials Conference, AIAA-2004-1674, Palm Springs, CA, April 2004.










[13] Garcia, H., Abdulrahim, M., and Lind, R., "Roll Control for a Micro Air Vehicle Using
Active Wing Morphing," AIAA Guidance, Navigation and Control Conference and Exhibit,
AIAA-2003-5347, Austin, TX, August 2003.

[14] Waszak, M. R., Jenkins, L. N., and Ifju, P. G., "Stability and Control Properties of
an Aeroelastic Fixed Wing Micro Air Vehicle," AIAA Atmospheric Flight Mechanics
Conference and Exhibit, AIAA 2001-4005, Montreal, Canada, August 2001.

[15] Kimmett, J., Valasek, J., and Junkins J. K., "Vision Based Controller for Autonomous
Aerial Refueling," IEEE International Conference on Control Applications, Glasgow,
Scotland, U.K., September 2002, pp. 1138-1143.

[16] Tandale, M. D., Bowers, R., and Valasek, J., "Robust Trajectory Tracking Controller
for Vision Based Probe and Drogue Autonomous Aerial Refueling," AIAA Guidance,
Navigation, and Control Conference and Exhibit, AIAA 2005-5868, San Francisco, CA,
August 2005.

[17] Lucas, B., and Kanade, T., "An Iterative Image Registration Technique with an Application
to Stereo Vision," Proceedings of the DARPA Image Understanding Workshop, 1981,
pp. 121-130.

[18] Tomasi, C., and Kanade, T., "Detection and Tracking of Point Features," Tech. Report
CMU-CS-91-132, Carnegie Mellon University, April 1991.

[19] Kanade, T., Collins, R., Lipton, A., Burt, P., and Wixson, L., "Advances in Cooperative
Multi-Sensor Video Surveillance," Proceedings of DARPA Image Understanding Workshop,
Vol. 1, November 1998, pp. 3-24.

[20] Piccardi, M., "Background Subtraction Techniques: A Review," IEEE International
Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, October
2004.

[21] Schunck, B. G., "Motion Segmentation by Constraint Line Clustering," IEEE Workshop
on Computer Vision: Representation and Control, 1984, pp. 58-62.

[22] Ridder, C., Munkelt, O., and Kirchner, H., "Adaptive Background Estimation and
Foreground Detection using Kalman-Filtering," International Conference on Recent
Advances in Mechatronics, Istanbul, Turkey, June 1995, pp. 193-199.

[23] Bailo, G., Bariani, M., Ijas, P., and Raggio, M., "Background Estimation with Gaussian
Distribution for Image Segmentation, a Fast Approach," IEEE International Workshop on
Measurement Systems for Homeland Security, Contraband Detection and Personal Safety,
Orlando, FL, March 2005.

[24] Friedman, N., and Russel, S., "Image Segmentation in Video Sequences: A Probabilistic
Approach," Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence (UAI), Providence, RI, August 1997.










[25] Sheikh, Y., and Shah, M., "Bayesian Object Detection in Dynamic Scenes," IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, San Diego,
CA, June 2005.

[26] Stauffer, C., and Grimson, W. E. L., "Adaptive Background Mixture Models for Real-Time
Tracking," IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins,
CO, June 1999, pp. 246-252.

[27] Toyama, K., Krumm, J., Brumitt, B., and Meyers, B., "Wallflower: Principles and Practice
of Background Maintenance," International Conference on Computer Vision, Corfu,
Greece, September 1999.

[28] Zhou, D., and Zhang, H, "Modified GMM Background Modeling and Optical Flow for
Detection of Moving Objects," IEEE International Conference on Systems, Man, and
Cybernetics, Big Island, Hawaii, October 2005.

[29] Nelson, R. C., "Qualitative Detection of motion by a Moving Observer," International
Journal of Computer Vision, Vol. 7, No. 1, 1991, pp. 33-46.

[30] Thompson, W. B., and Pong, T. G., "Detecting Moving Objects," International Journal of
Computer Vision, Vol. 4, 1990, pp. 39-57.

[31] Odobez, J. M., and Bouthemy, P., "Detection of Multiple Moving Objects Using
Multiscale MRF With Camera Motion Compensation," IEEE International Conference on
Image Processing, Austin, TX, November 1994, pp. 245-249.

[32] Irani, M., Rousso, B., and Peleg, S., "Detecting and Tracking Multiple Moving Objects
Using Temporal Integration," European Conference on Computer Vision, Santa Margherita
Ligure, Italy, May 1992, pp. 282-287.

[33] Torr, P. H. S., and Murray, D. W., "Statistical Detection of Independent Movement from a
Moving Camera," Image and Vision Computing, Vol. 11, No. 4, May 1993, pp. 180-187.

[34] Gandhi, T., Yang, M. T., Kasturi, R., Camps, O., Coraor, L., and McCandless, J.,
"Detection of Obstacles in the Flight Path of an Aircraft," IEEE Transactions on
Aerospace and Electronic Systems, Vol. 39, No. 1, January 2003, pp. 176-191.

[35] Irani, M., and Anandan, P., "A Unified Approach to Moving Object Detection in 2D and
3D Scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20,
No. 6, June 1998.

[36] Sharma, R., and Aloimonos, Y., "Early Detection of Independent Motion from Active
Control of Normal Flow Patterns," IEEE Transactions on Systems, Man, and Cybernetics,
Vol. 26, No. 1, February 1996.

[37] Frazier, J., and Nevatia, R., "Detecting Moving Objects from a Moving Platform," IEEE
International Conference on Robotics and Automation, Nice, France, May 1992, pp.
1627-1633.










[38] Liu, Y., Huang, T. S., and Faugeras, O. D., "Determination of Camera Location from 2D to
3D Line and Point Correspondence," IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 12, No. 1, January 1990, pp. 28-37.

[39] Longuet-Higgins, H. C., "A Computer Algorithm for Reconstructing a Scene from Two
Projections," Nature, Vol. 293, September 1981, pp. 133-135.

[40] Heeger, D. J., and Jepson, A. D., "Subspace Method for Recovering Rigid Motion 1:
Algorithm and Implementation," International Journal of Computer Vision, Vol. 7, No. 2,
January 1992.

[41] Gutmann, J. S., and Fox, D., "An Experimental Comparison of Localization Methods
Continued," IEEE/RSJ International Conference on Intelligent Robots and Systems,
Lausanne, Switzerland, October 2002.

[42] Martin, M. C., and Moravec, H., "Robot Evidence Grids," Technical Report CMU-RI-TR-
96-06, Robotics Institute, Carnegie Mellon University, March 1996.

[43] Olson, C. F., "Selecting Landmarks for Localization in Natural Terrains," Autonomous
Robots, Vol. 12, 2002, pp. 201-210.

[44] Olson, C. F., and Matthies, L. H., "Maximum Likelihood Rover Localization by Matching
Range Maps," IEEE International Conference on Robotics and Automation, Leuven,
Belgium, May 1998, pp. 272-277.

[45] Volpe, R., Estlin, T., Laubach, S., Olson, C., and Balaram, J., "Enhanced Mars Rover
Navigation Techniques," IEEE International Conference on Robotics and Automation, San
Francisco, CA, April 2000, pp. 926-931.

[46] Gurfil, P., and Rotstein, H., "Partial Aircraft State Estimation from Visual Motion Using
the Subspace Constraints Approach," Journal of Guidance, Control, and Dynamics, Vol. 24,
No. 5, September-October 2001, pp. 1016-1028.

[47] Markel, M. D., Lopez, J., Gebert, G., and Evers, J., "Vision-Augmented GNC: Passive
Ranging from Image Flow," AIAA Guidance, Navigation, and Control Conference and
Exhibit, AIAA 2002-5026, Monterey, CA, August 2002.

[48] Webb, T. P., Prazenica, R. J., Kurdila, A. J., and Lind, R., "Vision-Based State Estimation
for Autonomous Micro Air Vehicles," AIAA Guidance, Navigation, and Control Confer-
ence and Exhibit, Providence, RI, August 2004.

[49] Webb, T. P., Prazenica, R. J., Kurdila, A. J., and Lind, R., "Vision-Based State Estimation
for Uninhabited Aerial Vehicles," AIAA Guidance, Navigation, and Control Conference
and Exhibit, AIAA 2005-5869, San Francisco, CA, August 2005.

[50] Chatterji, G. B., "Vision-Based Position and Attitude Determination for Aircraft
Night Landing," AIAA Guidance, Navigation and Control Conference and Exhibit,
AIAA-96-3821, July 1996.










[51] Silveira, G. F., Carvalho, J. R. H., Madirid, M. K., Rives, P., and Bueno, S. S., "A Fast
Vision-Based Road Following Strategy Applied to the Control of Aerial Robots," IEEE
Proceedings of XIV Brazilian Symposium on Computer Graphics and Image Processing,
1530-1834/01, Florianopolis, Brazil, October 2001, pp. 226-231.

[52] Soatto, S., and Perona, P., "Dynamic Visual Motion Estimation from Subspace
Constraints," IEEE, 0-8186-6950-0/94, 1994, pp. 333-337.

[53] Soatto, S., Frezza, R., and Perona P., "Motion Estimation via Dynamic Vision," IEEE
Transactions on Automatic Control, Vol. 41, No. 3, March 1996, pp. 393-413.

[54] Soatto, S., and Perona, P., "Recursive 3-D Motion Estimation Using Subspace
Constraints," International Journal of Computer Vision, Vol. 22, No. 3, 1997, pp. 235-259.

[55] Soatto, S., and Perona, P., "Reducing 'Structure from Motion'," IEEE, 1063-6919/96,
1996, pp. 825-832.

[56] Soatto, S., and Perona, P., "Visual Motion Estimation from Point Features: Unified View,"
IEEE International Conference on Image Processing, Vol. 3, October 1995, pp. 21-24.

[57] Soatto, S., and Perona, P., "Reducing 'Structure from Motion': A General Framework for
Dynamic Vision Part 1: Modeling," IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 20, No. 9, September 1998, pp. 933-942.

[58] Soatto, S., and Perona, P., "Reducing 'Structure from Motion': A General Framework
for Dynamic Vision Part 2: Implementation and Experimental Assessment," IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 9, September
1998, pp. 943-960.

[59] Erol, A., Bebis, G., Nicolescu, M., Boyle, R. D., and Twombly, X., "A Review on
Vision-Based Full DOF Hand Motion Estimation," IEEE Computer Society International
Conference on Computer Vision and Pattern Recognition, San Diego, CA, June 2005.

[60] Huang, T. S., and Netravali, A. N., "Motion and Structure from Feature Correspondences:
A Review," Proceedings of the IEEE, Vol. 82, No. 2, February 1994, pp. 252-268.

[61] Stewart, C. V., "Robust Parameter Estimation in Computer Vision," Society for Industrial
and Applied Mathematics, Vol. 41, No. 3, 1999, pp. 513-537.

[62] Weng, J., Huang, T. S., and Ahuja, N., "Motion and Structure from Two Perspective
Views: Algorithms, Error Analysis, and Error Estimation," IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 11, No. 5, May 1989, pp. 451-476.

[63] Jianchao, Y., "A New Method for Passive Location Estimation from Image Sequence
Using Adaptive Extended Kalman Filter," International Conference on Signal Processing,
Beijing, China, October 1998, pp. 1002-1005.

[64] Qian, G., Kale, G., and Chellappa, R., "Robust Estimation of Motion and Structure using a
Discrete H-infinity Filter," IEEE 0- 7803-6297-7/00, 2000, pp. 616-619.










[65] Weng, J., Ahuja, N., and Huang, T. S., "Optimal Motion and Structure Estimation," IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, September
1993, pp. 864-884.

[66] Blostein, S. D., Zhao, L., and Chann, R. M., "Three-Dimensional Trajectory Estimation
from Image Position and Velocity," IEEE Transactions on Aerospace and Electronic
Systems, Vol. 36, No. 4, October 2000, pp. 1075-1089.

[67] Broida, T. J., Chandrashekhar, S., and Chellappa, R., "Recursive 3-D Motion Estimation
from a Monocular Image Sequence," IEEE Transactions on Aerospace and Electronic
Systems, Vol. 26, No. 4, 1990, pp. 639-656.

[68] Aidala, V. J., "Kalman Filter Behavior in Bearings-Only Tracking Applications," IEEE
Transactions on Aerospace and Electronic Systems, Vol. 15, No. 1, 1979, pp. 29-39.

[69] Bolger, P. L., "Tracking a Maneuvering Target Using Input Estimation," IEEE Transac-
tions on Aerospace and Electronic Systems, Vol. 23, No. 3, 1987, pp. 298-310.

[70] Gavish, M., and Fogel, E., "Effect of bias on Bearings-Only Target Location," IEEE
Transactions on Aerospace and Electronic Systems, Vol. 26, No. 1, January 1990, pp.
22-25.

[71] Peach, N., "Bearings-Only Tracking Using a Set of Range-Parameterized Extended
Kalman Filters," IEE Proceedings-Control Theory and Applications, Vol. 142, No. 1,
January 1995, pp. 73-80.

[72] Taff, L. G., "Target Localization from Bearings-Only Observations," IEEE Transactions on
Aerospace and Electronic Systems, Vol. 33, No. 1, January 1997, pp. 2-9.

[73] Guanghui, O., Jixiang, S., Hong, L., and Wenhui, W., "Estimating 3D Motion and Position
of a Point Target," Proceedings of SPIE, Vol. 3173, 1997, pp. 386-394.

[74] Fang, Y., Dawson, D. M., Dixon, W. E., and Queiroz, M. S. de, "2.5D Visual Servoing
of Wheeled Mobile Robots," IEEE Conference on Decision and Control, Las Vegas, NV,
December 2002, pp. 2866-2871.

[75] Chen, J., Dixon, W. E., Dawson, D. M., and Chitrakaran V. K., "Visual Servo Tracking
Control of a Wheeled Mobile Robot with a Monocular Fixed Camera," IEEE Conference
on Control Applications, Taipei, Taiwan, September 2004, pp. 1061-1066.

[76] Chen, J., Dawson, D. M., Dixon, W. E., and Behal, A., "Adaptive Homography-Based
Visual Servo Tracking for Fixed and Camera-in-Hand Configurations," IEEE Transactions
on Control Systems Technology, Vol. 13, No. 5, September 2005, pp. 814-825.

[77] Mehta, S. S., Dixon, W. E., MacArthur, D., and Crane, C. D., "Visual Servo Control of an
Unmanned Ground Vehicle via a Moving Airborne Monocular Camera," IEEE American
Control Conference, Minneapolis, MN, June 2006.










[78] Junkins, J. L., and Hughes, D., "Vision-Based Navigation for Rendezvous, Docking and
Proximity Operations," AAS Guidance and Control Conference, Breckenridge, CO,
February 1999.

[79] Alonso, R., Crassidis, J. L., and Junkins, J. L., "Vision-Based Relative Navigation for
Formation Flying of Spacecraft," AIAA Guidance, Navigation and Control Conference and
Exhibit, AIAA-2000-4439, Denver, CO, August 2000.

[80] Houshangi, N., "Control of a Robotic Manipulator to Grasp a Moving Target
using Vision," IEEE International Conference on Robotics and Automation,
CH2876-1/90/0000/0604, Cincinnati, Ohio, 1990, pp. 604-609.

[81] Hansen, J. L., Murry, J. E., and Campos, N. V., "The NASA Dryden AAR Project: A
Flight Test Approach to an Aerial Refueling System," AIAA Atmospheric Flight Mechanics
Conference and Exhibit, Providence, Rhode Island, August 2004.

[82] Chang, P., and Hebert, P., "Robust Tracking and Structure from Motion through Sampling
Based Uncertainty Representation," International Conference on Robotics and Automa-
tion, Washington D.C., May 2002.

[83] Oliensis, J., "Exact Two-Image Structure From Motion," IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 24, No. 12, December 2002, pp. 1618-1633.

[84] Svoboda, T., and Sturm, P., "Badly Calibrated Camera in Ego-Motion Estimation -
Propagation of Uncertainty," International Conference on Computer Analysis of Images and
Patterns, Kiel, Germany, September 1997, pp. 183-190.

[85] Zhang, Z., "Determining the Epipolar Geometry and its Uncertainty: A Review," Interna-
tional Journal of Computer Vision, Vol. 27, No. 2, 1998, pp. 161-195.

[86] Qian, G., and Chellappa, R., "Structure From Motion Using Sequential Monte Carlo
Methods," International Conference on Computer Vision, Vancouver, Canada, July 2001,
pp. 614-621.

[87] Franke, U., and Heinrich, S., "Fast Obstacle Detection for Urban Traffic Situations," IEEE
Transactions on Intelligent Transportation Systems, Vol. 3, No. 3, September 2002, pp.
173-181.

[88] Bhanu, B., Das, S., Roberts, B., and Duncan, D., "A System for Obstacle Detection During
Rotorcraft Low Altitude Flight," IEEE Transactions on Aerospace and Electronic Systems,
Vol. 32, No. 3, July 1996, pp. 875-897.

[89] Huster, A., Fleischer, S. D., and Rock, S. M., "Demonstration of a Vision-Based
Dead-Reckoning System for Navigation of an Underwater Vehicle," OCEANS '98
Conference Proceedings, 0-7803-5045-6/98, Vol. 1, September 1998, pp. 326-330.

[90] Roderick, A., Kehoe, J., and Lind, R., "Vision-Based Navigation using Multi-Rate
Feedback from Optic Flow and Scene Reconstruction," AIAA Guidance, Navigation, and
Control Conference and Exhibit, San Francisco, CA, August 2005.










[91] Papaikolopoulos, N. P., Nelson, B. J., and Khosla, P. K., "Six Degree-of-Freedom
Hand/Eye Visual Tracking with Uncertain Parameters," IEEE Transactions on Robotics
and Automation, Vol. 11, No. 5, October 1995, pp. 725-732.

[92] Sznaier, M., and Camps, O. I., "Control Issues in Active Vision: Open Problems and Some
Answers," IEEE Conference on Decision and Control, Tampa, FL, December 1998, pp.
3238-3244.

[93] Frezza, R., Picci, G., and Soatto, S., "Non-holonomic Model-based Predictive Output
Tracking of an Unknown Three-dimensional Trajectory," IEEE Conference on Decision
and Control, Tampa, FL, December 1998, pp. 3731-3735.

[94] Papaikolopoulos, N. P., Khosla, P. K., and Kanade, T., "Visual Tracking of a Moving
Target by a Camera Mounted on a Robot: A Combination of Control and Vision," IEEE
Transactions on Robotics and Automation Vol. 9, No. 1, February 1993, pp. 14-35.

[95] Papaikolopoulos, N. P., and Khosla, P. K., "Adaptive Robotic Visual Tracking: Theory and
Experiments," IEEE Transactions on Automatic Control, Vol. 38, No. 3, March 1993, pp.
429-445.

[96] Zanne, P., Morel, G., and Plestan, F., "Robust Vision Based 3D Trajectory Tracking using
Sliding Mode Control," IEEE International Conference on Robotics and Automation, San
Francisco, CA, April 2000, pp. 2088-2093.

[97] Zergeroglu, E., Dawson, D. M., de Queiroz, M. S., and Behal, A., "Vision-Based
Nonlinear Tracking Controllers with Uncertain Robot-Camera Parameters," IEEE/ASME
International Conference on Advanced Intelligent Mechatronics, Atlanta, GA, September 1999, pp.
854-859.

[98] Valasek, J., Kimmett, J., Hughes, D., Gunnam, K., and Junkins, J. L., "Vision Based
Sensor and Navigation System for Autonomous Aerial Refueling," AIAA's 1st Technical
Conference and Workshop on Unmanned Aerospace Vehicles, Portsmouth, Virginia, May
2002.

[99] Pollini, L., Campa, G., Giulietti, F., and Innocenti, M., "Virtual Simulation Set-Up for
UAVs Aerial Refueling," AIAA Modeling and Simulation Technologies Conference, Austin,
TX, August 2003.

[100] No, T. S., and Cochran, J. E., "Dynamics and Control of a Tethered Flight Vehicle,"
Journal of Guidance, Control, and Dynamics, Vol. 18, No. 1, January 1995, pp. 66-72.

[101] Forsyth, D. A., and Ponce, J., "Computer Vision: A Modern Approach," Prentice-Hall
Publishers, Upper Saddle River, NJ, 2003.

[102] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S. S., "An Invitation to 3-D Vision: From Images
to Geometric Models," Springer-Verlag Publishing, New York, NY, 2004.

[103] Faugeras, O., "Three-Dimensional Computer Vision," The MIT Press, Cambridge
Massachusetts, 2001.










[104] Castro, G. J., Nieto, J., Gallego, L. M., Pastor, L., and Cabello, E., "An Effective Camera
Calibration Method," IEEE 0-7803-4484-7/98, 1998, pp. 171-174.

[105] Tsai, R. Y., "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE Journal of Robotics
and Automation, Vol. RA-3, No. 4, August 1987, pp. 323-344.

[106] Harris, C., and Stephens, M., "A Combined Corner and Edge Detector," Proceedings of the
Alvey Vision Conference, 1988, pp. 147-151.

[107] Canny, J. F., "A Computational Approach to Edge Detection," IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, November 1986, pp. 679-698.

[108] Etkin, B., and Reid, L. D., Dynamics of Flight: Stability and Control, John Wiley & Sons,
New York, 1996.

[109] Nelson, R. C., Flight Stability and Automatic Control, McGraw-Hill, New York, 1989.

[110] Stevens, B. L., and Lewis, F. L., Aircraft Control and Simulation, John Wiley & Sons, Inc.,
New York, 1992.

[111] Kehoe, J. J., Causey, R. S., Arvai, A., and Lind, R., "Partial Aircraft State Estimation from
Optical Flow using Non-Model-Based Optimization," IEEE American Control Conference,
Minneapolis, MN, June 2006.

[112] Kaminski J. Y., and Teicher, M., "A General Framework for Trajectory Triangulation,"
Journal of Mathematical Imaging and Vision, Vol. 21, 2004, pp. 27-41.

[113] Avidan S., and Shashua A., "Trajectory Triangulation: 3D Reconstruction of Moving
Points from a Monocular Image Sequence," IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 22, No. 4, 2000, pp. 348-357.

[114] Fitzgibbon, A. W., and Zisserman, A., "Multibody Structure and Motion: 3D
Reconstruction of Independently Moving Objects," European Conference on Computer
Vision, Dublin, Ireland, July 2000, Vol. 1, pp. 891-906.

[115] Han, M., and Kanade, T., "Reconstruction of a Scene with Multiple Linearly Moving
Objects," International Journal of Computer Vision, Vol. 59, No. 3, 2004, pp. 285-300.

[116] Ozden, K. E., Comelis, K., Van Eycken, L., and Van Gool, L., "Reconstructing 3D
Trajectories of Independently Moving Objects using Generic Constraints," Journal of
Computer Vision and Image Understanding, Vol. 96, No. 3, 2004, pp. 453-471.

[117] Yuan C., and Medioni, G., "3D Reconstruction of Background and Objects Moving on
Ground Plane Viewed from a Moving Camera," IEEE Conference on Computer Vision and
Pattern Recognition, New York, NY, June 2006.










[118] Dobrokhodov, V. N., Kaminer, I. I., Jones, K. D., and Ghabcheloo, R., "Vision-Based
Tracking and Motion Estimation for Moving Targets using Small UAVs," American
Control Conference, Minneapolis, MN, June 2006.

[119] Faugeras, O., and Lustman, F., "Motion and Structure From Motion in a Piecewise Planar
Environment," International Journal of Pattern Recognition and Artificial Intelligence,
Vol. 2, No. 3, pp. 485-508, 1988.

[120] Zhang, Z., and Hanson, A. R., "Scaled Euclidean 3D Reconstruction Based on Externally
Uncalibrated Cameras," IEEE Symp. on Computer Vision, 1995, pp. 37-42.

[121] Zhu, Q., "A Stochastic Algorithm for Obstacle Motion Prediction in Visual Guidance of
Robot Motion," IEEE International Conference on Systems Engineering, Pittsburgh, PA,
August 1990.










BIOGRAPHICAL SKETCH

Ryan Scott Causey was born in Miami, Florida, on May 10, 1978. He grew up in a stable

family with one brother in a typical suburban home. During his teenage years and into early adulthood, Ryan built and maintained a small business providing lawn care to the local

neighborhood. The tools acquired from this work carried over into his college career. After

graduating from Miami Killian Senior High School in 1996, Ryan attended Miami Dade

Community College for three years and received an Associate in Arts degree. A transfer student

to the University of Florida, Ryan was prepared to tackle the stresses of a university aside from

the poor statistics on transfer students. A few years later, he received a Bachelor of Science in

Aerospace Engineering with honors in 2002 and was considered in the top three of his class.

Ryan soon after chose to attend graduate school back at the University of Florida under Dr. Rick

Lind in the Dynamics and Controls Laboratory. During the summertime, Ryan interned twice at

Honeywell Space Systems as a Systems Engineer in Clearwater, FL and once at The Air Force

Research Laboratory in Dayton, OH. Vision-based control of autonomous air vehicles became

his interest and he is now pursuing a doctorate degree on this topic. Ryan was awarded a NASA

Graduate Student Research Program (GSRP) fellowship in 2004 for his proposed investigation on

this research.





PAGE 1

1

PAGE 2

2

PAGE 3

3

PAGE 4

ThisworkwassupportedjointlybyNASAunderNND04GR13HwithSteveJacobsonandJoePahleasprojectmanagersalongwiththeAirForceResearchLaboratoryandtheAirForceOfceofScienticResearchunderF49620-03-1-0381withJohnnyEvers,NealGlassman,SharonHeise,andRobertSierakowskiasprojectmonitors.Additionally,IthankDr.RickLindforhisremarkableguidanceandinspirationthatwilltrulylastalifetime.Finally,IthankmyparentsSandraandJamesCauseyformakingthisjourneypossiblebyprovidingmetheguidanceanddisciplineneededtobesuccessful. 4

PAGE 5

page ACKNOWLEDGMENTS .................................... 4 LISTOFTABLES ....................................... 8 LISTOFFIGURES ....................................... 9 LISTOFTERMS ........................................ 12 ABSTRACT ........................................... 19 CHAPTER 1INTRODUCTION .................................... 21 1.1Motivation ...................................... 21 1.2ProblemStatement ................................. 27 1.3PotentialMissions .................................. 27 1.4SystemArchitecture ................................. 30 1.5Contributions .................................... 33 2LITERATUREREVIEW ................................. 36 2.1DetectionofMovingObjects ............................ 36 2.2StateEstimationUsingVisionInformation ..................... 38 2.2.1Localization ................................. 39 2.2.2Mapping ................................... 39 2.2.3Target-MotionEstimation .......................... 40 2.3ModelingObjectMotion .............................. 41 2.4UncertaintyinVisionAlgorithms .......................... 42 2.5ControlUsingVisualFeedbackinDynamicEnvironments ............ 43 3IMAGEPROCESSINGANDCOMPUTERVISION .................. 45 3.1CameraGeometry .................................. 45 3.2CameraModel .................................... 47 3.2.1IdealPerspective .............................. 47 3.2.2IntrinsicParameters ............................. 48 3.2.3ExtrinsicParameters ............................. 49 3.2.4RadialDistortion .............................. 50 3.3FeaturePointDetection ............................... 51 3.4FeaturePointTracking ............................... 53 3.5OpticFlow ..................................... 56 5

PAGE 6

............................. 56 3.6.1EpipolarConstraint ............................. 57 3.6.2Eight-PointAlgorithm ............................ 59 3.6.3PlanarHomography ............................. 61 3.6.4StructurefromMotion ............................ 65 4EFFECTSONSTATEESTIMATIONFROMVISIONUNCERTAINTY ........ 67 4.1FeaturePoints .................................... 67 4.2OpticalFlow ..................................... 70 4.3EpipolarGeometry ................................. 71 4.4Homography .................................... 73 4.5StructureFromMotion ............................... 75 5SYSTEMDYNAMICS .................................. 77 5.1DyanmicStates ................................... 77 5.1.1Aircraft ................................... 77 5.1.2Camera ................................... 79 5.2SystemGeometry .................................. 81 5.3NonlinearAircraftEquations ............................ 83 5.4Aircraft-CameraSystem .............................. 84 5.4.1FeaturePointPosition ............................ 85 5.4.2FeaturePointVelocity ............................ 85 5.5SystemFormulation ................................. 86 5.6Simulating ...................................... 89 6DISCERNINGMOVINGTARGETFROMSTATIONARYTARGETS ........ 90 6.1CameraMotionCompensation ........................... 90 6.2Classication .................................... 95 7HOMOGRAPHYAPPROACHTOMOVINGTARGETS ................ 98 7.1Introduction ..................................... 98 7.2StateEstimation ................................... 101 7.2.1SystemDescription ............................. 101 7.2.2HomographyEstimation .......................... 103 8MODELINGTARGETMOTION ............................ 111 8.1Introduction ..................................... 111 8.2DynamicModelingofanObject .......................... 111 8.2.1MotionModels ............................... 112 8.2.2StochasticPrediction ............................ 113 9CONTROLDESIGN ................................... 117 9.1ControlObjectives ................................. 117 6

        9.2.1 Altitude Control
        9.2.2 Heading Control
        9.2.3 Depth Control

10 SIMULATIONS
    10.1 Example 1: Feature Point Generation
    10.2 Example 2: Feature Point Uncertainty
        10.2.1 Scenario
        10.2.2 Optic Flow
        10.2.3 The Epipolar Constraint
        10.2.4 Structure From Motion
    10.3 Example 3: Open-loop Ground Vehicle Estimation
        10.3.1 System Model
        10.3.2 Open-loop Results
    10.4 Example 4: Closed-loop Aerial Refueling of a UAV
        10.4.1 System Model
        10.4.2 Control Tuning
        10.4.3 Closed-loop Results
        10.4.4 Uncertainty Analysis

11 CONCLUSION

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table
3-1   Solutions for homography decomposition
10-1  States of the cameras
10-2  Limits on image coordinates
10-3  States of the feature points
10-4  Aircraft states
10-5  Image coordinates of feature points
10-6  Effects of camera perturbations on optic flow
10-7  Effects of camera perturbations on epipolar geometry
10-8  Effects of camera perturbations on structure from motion
10-9  Maximum variations in position due to parametric uncertainty
10-10 Maximum variations in attitude due to parametric uncertainty

LIST OF FIGURES

Figure
1-1  The UAV fleet
1-2  AeroVironment's MAV: The Black Widow
1-3  The UF MAV fleet
1-4  Refueling approach using the probe-drogue method
1-5  Tracking a pursuit vehicle using a vision-equipped UAV
1-6  Closed-loop block diagram with visual state estimation
3-1  Mapping from environment to image plane
3-2  Image plane field of view (top view)
3-3  Radial distortion effects
3-4  Geometry of the epipolar constraint
3-5  Geometry of the planar homography
4-1  Feature point dependence on focal length
4-2  Feature point dependence on radial distortion
5-1  Body-fixed coordinate frame
5-2  Camera-fixed coordinate frame
5-3  Scenario for vision-based feedback
6-1  Epipolar lines across two image frames
6-2  FOE constraint on translational optic flow for static feature points
6-3  Residual optic flow for dynamic environments
7-1  System vector description
7-2  Moving target vector description
9-1  Altitude hold block diagram
9-2  Heading hold block diagram
10-1 Virtual environment for example 1
10-2 Feature point measurements for example 1

10-4  Virtual environment for example 2
10-5  Feature points across two image frames
10-6  Uncertainty in feature point
10-7  Uncertainty results in optic flow
10-8  Nominal epipolar lines between two image frames
10-9  Uncertainty results for epipolar geometry
10-10 Nominal estimation using structure from motion
10-11 Uncertainty results for structure from motion
10-12 Vehicle trajectories for example 3
10-13 Position states of the UAV with on-board camera
10-14 Attitude states of the UAV with on-board camera
10-15 Position states of the reference vehicle
10-16 Attitude states of the reference vehicle
10-17 Position states of the target vehicle
10-18 Attitude states of the target vehicle
10-19 Norm error
10-20 Relative position states
10-21 Relative attitude states
10-22 Virtual environment
10-23 Inner-loop pitch to pitch command Bode plot
10-24 Pitch angle step response
10-25 Altitude step response
10-26 Inner-loop roll to roll command Bode plot
10-27 Roll angle step response
10-28 Heading response
10-29 Open-loop estimation of target's inertial position

10-31 Norm error for target state estimates
10-32 Closed-loop target position tracking
10-33 Position tracking error
10-34 Target attitude tracking
10-35 Tracking error in heading angle
10-36 Target's inertial position with uncertainty bounds
10-37 Target's inertial attitude with uncertainty bounds

1 ].Thisincreaseincapabilityforsuchcomplextasksrequirestechnologyformoreadvancedsystemstofurtherenhancethesituationalawareness.Overthepastseveralyears,theinterestanddemandforautonomoussystemshasgrownconsiderably,especiallyfromtheArmedForces.Thisinteresthasleveragedfundingopportunitiestoadvancethetechnologyintoastateofrealizablesystems.Sometechnicalinnovationsthathaveemergedfromtheseefforts,fromahardwarestandpoint,consistmainlyofincreasinglycapablemicroprocessorsinthesensors,controls,andmissionmanagementcomputers.TheDefenseAdvancedResearchProjectsAgency(DARPA)hasfundedseveralprojectspertainingtotheadvancementofelectronicdevicesthroughsizereduction,improvedspeedandperformance.Fromthesedevelopments,thecapabilityofautonomoussystemhasbeendemonstratedonvehicleswithstrictweightandpayloadrequirements.Inessence,thecurrenttechnologyhasmaturedtoapointwhereautonomoussystemsarephysicallyachievableforcomplexmissionsbutnotyetalgorithmicallycapable.TheaerospacecommunityhasemployedmanyoftheresearchdevelopedforautonomoussystemsandappliedittoUnmannedAerialVehicles(UAV).Manyofthesevehiclesarecurrently 21


1 ].FuturemissionsenvisionUAVtoconductmorecomplextasksuchasterrainmapping,surveillanceofpossiblethreats,maritimepatrol,bombdamageassessment,andeventuallyoffensivestrike.Thesemissionscanspanovervarioustypesofenvironmentsand,therefore,requireawiderangeofvehicledesignsandcomplexcontrolstoaccommodatetheassociatedtasks.TherequirementsanddesignofUAVareconsideredtoenableaparticularmissioncapability.Eachmissionscenarioisthedrivingforceoftheserequirementsandaredictatedbyrange,speed,maneuverability,andoperationalenvironment.CurrentUAVrangeinsizefromlessthan1poundtoover40,000pounds.SomepopularUAVthatareoperational,intestingphase,andintheconceptphasearedepictedinFigure 1-1 toillustratethevariousdesigns.ThetwoUAVontheleft,GlobalHawkandPredator,arecurrentlyinoperation.GlobalHawkisemployedasahighaltitude,longendurancereconnaissancevehiclewhereasthePredatorisusedforsurveillancemissionsatloweraltitudes.Meanwhile,theremainingtwopicturespresentJ-UCAS,whichisajointcollaborationforboththeAirForceandNavy.ThisUAVisdescribedasamediumaltitudeyerwithincreasedmaneuverabilityoverGlobalHawkandthePredatorandisconsideredforvariousmissions,someofwhichhavealreadybeendemonstratedinight,suchasweapondeliveryandcoordinatedight.Theadvancementsinsensorsandcomputingtechnology,mentionedearlier,hasfacilitatedtheminiaturizationoftheseUAV,whicharereferredtoasMicroAirVehicles(MAV).Thescaleofthesesmallvehiclesrangesfromafewfeetinwingspandowntoafewinches.DARPAhasalsofundedtherstsuccessfulMAVprojectthroughAeroVironment,asshowninFigure 1-2 ,wherebasicautonomywasrstdemonstratedatthisscale[ 2 ].Thesesmallscalesallowhighlyagilevehiclesthatcanmaneuverinandaroundobstaclessuchasbuildingsandtrees.ThiscapabilityenablesUAVtooperateinurbanenvironments,belowrooftoplevels,toprovide 22


TheUAVeet thenecessaryinformationwhichcannotbeobtainedathigheraltitudes.ResearchersarecurrentlypursuingMAVtechnologytoaccomplishtheverysamemissionsstatedearlierfortheuniqueapplicationofoperatingatlowaltitudesinclutteredenvironments.Assensorandcontroltechnologiesevolve,theseMAVcanbeequippedwiththelatesthardwaretoperformadvancedsurveillanceoperationswherethedetection,tracking,andclassicationofthreatsaremonitoredautonomouslyonline.Althoughasinglemircoairvehiclecanprovidedistinctinformation,targetsmaybedifculttomonitorduetobothightpathandsensoreldofviewconstraints.ThislimitationhasmotivatedtheideaofacorporativenetworkoraswarmofMAVcommunicatingandworkingtogethertoaccomplishacommontask. Figure1-2. AeroVironment'sMAV:TheBlackWidow 23


3 4 ].Meanwhile,Stanfordhasexaminedmotionplanningstrategiesthatoptimizeighttrajectoriestomaintainsensorintegrityforimprovedstateestimation[ 5 ].TheworkatGeorgiaTechandBYUhasconsideredcorporativecontrolofMAVforautonomousformationying[ 6 ]andconsensusworkfordistributedtaskassignment[ 7 ].Alternatively,visionbasedcontrolhasalsobeenthetopicofinterestatbothGeorgiaTechandUF.ControlschemesusingvisionhavebeendemonstratedonplatformssuchasahelicopteratGeorgiaTech[ 8 ],whileUFimplementedaMAVthatintegratedvisionbasedstabilizationintoanavigationarchitecture[ 9 10 ].TheUniversityofFloridahasalsoconsideredMAVdesignsthatimprovetheperformanceandagilityofthesevehiclesthroughmorphingtechnology[ 11 13 ].FabricationfacilitiesatUFhaveenabledrapidconstructionofdesignprototypesusefulforbothmorphingandcontroltesting.TheeetofMAVproducedbyUFareillustratedinFigure 1-3 wherethewingspanofthesevehiclesrangefrom24indownto4in. Figure1-3. TheUFMAVeet ThereareanumberofcurrentdifcultiesassociatedwithMAVduetotheirsize.Forexample,characterizingtheirdynamicsunderightconditionsatsuchlowReynoldsnumbersisanextremelychallengingtask.Theconsequenceofincreasedagilityatthisscalealsogivesrisetoerraticbehaviorandaseveresensitivitytowindgustandotherdisturbances.Waszaketal.[ 14 ]performedwindtunnelexperimentson6inchMAVandobtainedtherequiredstabilityderivativesforlinearandnonlinearsimulations.AnothercriticalchallengetowardMAV 24


3 .Thisdissertationwillfocusonthemonocularcameracongurationtoaddressthestateestimationproblemregardingmovingtargets.TheadvantageofthesetechniquesbecomesmoreapparenttoUAVwhenappliedtoguidance,navigation,andcontrol.Bymountingacameraonavehicle,stateestimationofthevehicleandobjectsintheenvironmentcanbeachievedinsomeinstancesthroughvisionprocessing.Oncestateestimatesareknown,theycanthenbeusedinfeedback.Controltechniquescanthenbeutilizedforcomplexmissionsthatrequirenavigation,pathplanning,avoidance,tracking,homing,etc.Thisgeneralframeworkofvisionprocessingandcontrolhasbeensuccessfullyappliedtovarioussystemsandvehiclesincludingroboticmanipulators,groundvehicles,underwatervehicles,andaerialvehiclesbuttherestillexistssomecriticallimitations.Theproblematicissueswithusingvisionforstateestimationinvolvescameranonlinearities,cameracalibration,sensitivitytonoise,largecomputationaltime,limitedeldofview,andsolvingthecorrespondenceproblem.Aparticularsetoftheseimageprocessingissueswillbeaddresseddirectlyinthisdissertationtofacilitatethecontrolofautonomoussystemsincomplexsurroundings. 26

1. segmenting moving targets from stationary targets within the scene
2. classifying moving targets into deterministic and stochastic motions
3. coupling the vehicle dynamics into the sensor observations (i.e., images)
4. formulating the homography equations between a moving camera and the viewable targets
5. propagating the effects of uncertainty through the state estimation equations
6. establishing confidence bounds on target state estimation

The design and implementation of a vision-based controller is also presented in this dissertation to verify and validate many of the concepts pertaining to the tracking of moving targets.


15 ].Thedrogueisdesignedinanaerodynamicshapethatpermitstheextensionfromthetankerwithoutinstability.Theprobe-and-droguemethodisconsideredthepreferredmethodforAAR,mainlyduetothehighpilotworkloadincontrollingtheboom[ 16 ].Figure 1-4 illustratestheviewobservedbyreceiveraircraftduringtherefuelingprocesswherefeaturepointshavebeenplacedonthedrogue. Figure1-4. Refuelingapproachusingtheprobe-droguemethod VisioncanbeusedtofacilitatetheAARproblembyaugmentingtraditionalaircraftsensorssuchasglobalpositioningsystem(GPS)andinertialmeasurementunit(IMU).GighprecisionGPS/IMUsensorscanproviderelativeinformationbetweenthetankerandthereceiverthenvisioncanbeusedtoproviderelativeinformationonthedrogue.Theadvantagetovisioninthiscaseisitspassivenaturewhicheliminatessensoremissionsduringrefuelingoverenemyair 28


1-5 illustratesinasimulatedenvironmentthisscenariowhereaUAVobservesthe 29


Figure1-5. TrackingapursuitvehicleusingavisionequippedUAV 1-6 ,wherecommandsaresenttoavehiclebasedonthemotionsobservedintheimages.ThevehicleconsideredinthisdissertationispredominatelyassumedanautonomousUAV,butisgeneralizedforanydynamicalsystemwithpositionandorientationstates.TheblockspertainingtothisdissertationarehighlightedinFigure 1-6 intheimageprocessingblockandconsistsofthemovingobjectdetection,stateestimationofamovingobject,andclassifyingdeterministicversusstochasticmotion.Abriefdiscussionofeachtopicisdescribedinthissection,whilethedetailsarecoveredintheirrespectivechapters.Distinguishingmovingobjectsfromstationaryobjectswithamovingcameraisachallengingtaskinvisionprocessingandistherststepinthestateestimationprocesswhenconsideringadynamicscene.Thisinformationisextremelyimportantforguidance,navigation,andcontrolofautonomoussystemsbecauseitidentiesobjectsthatpotentiallycouldbeinapathforcollision.Forastationarycamera,movingobjectsinthescenecanbeextractedusing 30


Closed-loopblockdiagramwithvisualstateestimation simpleimagedifferencing,wherethestationarybackgroundissegmentedout;however,thisapproachdoesnotapplytomovingcameras.Inthecaseofamovingcamera,thebackgroundisnolongerstationaryanditbeginstochangeovertimeasthevehicleprogressesthroughtheenvironment.Therefore,theimagestakenbyamovingcameracontainthemotionduetothecamera,commonlycalledego-motion,andthemotionoftheobject.Techniquesthatinvolvecameramotioncompensationorimageregistrationhavebeenproposedtoworkwellwhenthereexistsnostationaryobjectsclosetothecamerawhichcausehighparallax.Thisdissertationwillestablishatechniquetoclassifyobjectsintheeldofviewasmovingorstationarywhileaccountingforstationaryobjectswithhighparallax.Therefore,withaseriesofobservationsofaparticularscene,onecandeterminewhichobjectsaremovingintheenvironment.Knowingwhichobjectsaremovingintheimagedictatesthetypeofimageprocessingrequiredtoaccuratelyestimatetheobject'sstates.Infact,theestimationproblembecomesinfeasibleforamonocularsystemwhenboththecameraandtheobjectaremoving.Thisunattainablesolutioniscausebyanumberoffactorsincluding1)inabilitytodecouplethemotionfromthecameraandtargetand2)failuretotriangulatethedepthestimateoftheobject.Forthisconguration,relativeinformationcanbeobtainedandfusedwithadditionalinformationforstateestimation.First,decouplingthemotionrequiresknowninformationregardingmotionofthecameraorthemotionoftheobject,whichcouldbeobtainedthroughothersensorssuch 31


5 ].Furthermore,theaccuracyofthestateestimatesbecomespoorforsmallbaselinecongurations,whichoccursforMAVusingstereovision.Theseissuesregardingtargetstateestimationwillbeconsideredinthisdissertationtoshowboththecapabilitiesandlimitationstowardautonomouscontrolandnavigation.Anotherimportanttaskinvolvedwithtargetestimationistodetermineapattern(ifany)intheobject'smotionbasedonthetimehistory.Theobjectscanthenbeclassiedintodeterministicandstochasticmotionsaccordingtopastbehavior.Withthisinformation,predictionmodelscanbemadebasedonpreviousimagestoestimatethepositionofanobjectatalatertimewithsomelevelofcondence.Thepredictedestimatescanthenbeusedinfeedbackfortrackingordockingpurposes.Forstochasticlyclassiedobjects,furtherconcernsregardingdockingorAARareimposedonthecontrolproblem.Theprimarytaskofstateestimation,forboththevehicleandobjectsintheenvironment,reliesonaccurateknowledgeoftheimagemeasurementsandtheassociatedcamera.Suchknowledgeisdifculttoobtainduetouncertaintiesinthesemeasurementsandtheinternalcomponentsofthecameraitself.Forinstance,theimagemeasurementscontainuncertaintiesassociatedwiththedetectionofobjectsintheimage,inadditiontonoisecorruption.Thesedrawbackshavepromptedmanyrobustalgorithmstoincreasetheaccuracyoffeaturedetectionwhilehandlingnoiseduringtheestimationprocess.Alternatively,manytechniqueshavebeenusedtoaccuratelyestimatetheinternalparametersofthecamerathroughcalibration.Theparametersthatdescribetheinternalcomponentsofthecameraarereferredtoasintrinsicparametersandtypicallyconsistoffocallength,radialdistortion,skewfactor,pixelsize,andopticalcenter.Thiscalibrationprocesscanbecomecumbersomeforalargenumberofcameras 32


1-6 illustratesthecomponentsofinterestdescribedinthisdissertationforstateestimationandtrackingcontrolwithrespecttoamovingobjectwhichinvolvesobjectmotiondetection,objectstateestimation,andobjectmotionmodelingandprediction.Theliteraturereviewofthesetopicsisgiveninthissection. 17 18 ]hasservedasafoundationformanyalgorithms.Thistechniquereliesonasmoothnessconstraintimposedontheopticowthatmaintainsaconstantintensityacrosssmallbase-linemotionofthecamera.Manytechniqueshavebuiltuponthisalgorithmtoincreaserobustnesstonoiseandoutliers.Oncefeaturetrackinghasbeenobtained,thenextprocessinvolvessegmentingtheimageformovingobjects.Theneedforsuchaclassicationisduethefactthatstandardimage 36


19 21 ].TechniquesformorerealisticapplicationsinvolveKalmanltering[ 22 ]toaccountforlightingconditionsandbackgroundmodelingtechniquesusingstatisticalapproaches,suchasexpectationmaximizationandmixtureofGaussian,toaccountforothervariationsinreal-timeapplications[ 23 28 ].Althoughthesetechniquesworkwellforstationarycameras,theyareinsufcientforthecaseofmovingcamerasduetothemotionofthestationarybackground.Motiondetectionusingamovingcamera,asinthecaseofacameramountedtoavehicle,becomessignicantlymoredifcultbecausethemotionviewedintheimagecouldresultfromanumberofsources.Forinstance,acameramovingthroughascenewillviewmotionsintheimagecausedbycamerainducedmotion,referredtoasegomotion,changesincameraintrinsicparameterssuchaszoom,andindependentlymovingobjects.Therearetwoclassesofproblemsconsideredinliteratureforaddressingthistopic.Therstconsidersthescenariowherethe3Dcameramotionisknownapriorithencompensationcanbemadetoaccountforthismotiontodeterminestationaryobjectsthroughanappropriatetransformation[ 29 30 ].Thesecondclassofproblemsdoesnotrequireknowledgeofthecameramotionandconsistsofatwostageapproachtothemotiondetection.Therststageinvolvescameramotioncompensationwhilethelaststageemploysimagedifferencingontheregisteredimage[ 31 ]toretrievenon-staticobjects.ThetransformationusedtoaccountforcameramotioniscommonlysolvedbyassumingthemajorityofimageconsistsofadominantbackgroundthatisstationaryinEuclideanspace[ 32 33 ].Thissolutionisobtainedthroughaleast-squaresminimizationprocess[ 32 ]orwiththeuseofmorphologicallters[ 34 ].Thetransformationsobtainedfromthesetechniquestypically 37


35 ]proposedauniedmethodtodetectmovingobjects.Thisproposedmethodhandlesvariouslevelsofparallaxintheimagethroughasegmentationprocessthatisperformedinlayers.Therstlayerextractsthebackgroundobjectswhicharefarawayfromthecameraandhavelowparallaxthroughageneraltransformationinvolvingcamerarotation,translation,andzoomthroughimagedifferencing.Thenextlayercontainstheobjectwithhighparallaxconsistingofbothobjectsclosetothecameraandobjectsthataremovingindependentlyofthecamera.Theparallaxisthencomputedfortheremainingpixelsandcomparedtoonepixel.Thisprocessseparatestheobjectswithintheimagebasedontheircomputedparallax.Theselectionmayinvolvechoosingapointonaknownstationaryobjectthatcontainshighparallaxsoanyobjectnotobeyingthisparallaxisclassiedasamovingobjectinthescene.Opticowtechniquesarealsousedtoestimatemovingtargetlocationsonceego-motionhasbeenestimated.Amethodthatcomputesthenormalimageowhasbeenshowntoobtainmotiondetection[ 36 ].Coordinatetransformationsaresometimesusedtofacilitatethisapproachtodetectingmotion.Forinstance,amethodusingcomplexlogmappingwasshowntotransformtheradialmotionsintohorizontallinesuponwhichverticalmotionindicateindependentmotion[ 37 ].Alternatively,sphericalmappingwasusedgeometricallytoclassifymovingobjectsbysegmentingmotionswhichdonotradiatefromthefocusofexpansion(FOE)[ 29 ]. 38


38 39 ]usedthecoplanarityconstraintalsoknownastheepipolarconstraint.Meanwhile,thesubspaceconstrainthasalsobeenemployedtolocalizecameramotion[ 40 ].Thesetechniqueshavebeenappliedtonumeroustypesofautonomoussystems.Themobileroboticcommunityhasappliedthesetechniquesforthedevelopmentofnavigationinvariousscenarios[ 41 45 ].TheapplicationshavealsoextendedintotheresearchofUAVforaircraftstateestimation.GurlandRotstein[ 46 ]wasthersttoextendthisapplicationintheframeworkofanonlinearaircraftmodel.Thisapproachusedopticalowinconjunctionwiththesubspaceconstrainttoestimatetheangularratesoftheaircraftandwasextendedin[ 47 ].Webbetal.[ 48 49 ]employedtheepipolarconstrainttotheaircraftdynamicstoobtainvehiclestates.ThefoundationforbothoftheseapproachesisaKalmanlterinconjunctionwithageometricconstrainttoestimatethecameramotion.SomeapplicationsforaircraftstateestimationhaveinvolvedmissionsforautonomousUAVsuchasautonomousnightlanding[ 50 ]androadfollowing[ 51 ]. 52 58 ].TheseapproachesemploythesubspaceconstrainttoreconstructfeaturepointpositionthroughanextendedKalmanlter.Severalsurveypapershavebeenpublisheddescribingthecurrentalgorithmswhilecomparingtheperformanceandrobustness[ 59 62 ].RobustandadaptivetechniqueshavebeenproposedthatuseanadaptiveextendedKalmanltertoaccountformodeluncertainties[ 63 ].Inaddition,Qianetal.[ 64 ]designedarecursiveHltertoestimatestructurefrommotioninthepresenceofmeasurementandmodeluncertaintieswhile 39


65 ]investigatedtheoptimalapproachestotargetstateestimationanddescribedtheeffectsoflinearsolutionsonvariousnoisedistributions. 66 ].ThismethodextendedthepreviousworkofBroidaetal.[ 67 ]thatonlyconsideredafeaturepointapproach.Forthecaseofmovingmonocularcameraconguration,theproblembecomesextremelydifcultduetotheadditionalmotionofthecamera.Oneapproachusedinliteraturerelevanttomonocularcamerasystemsisbearings-only-tracking.Inthisapproach,thereareseveralassumptionsmade:(i)thevehiclehasknowledgeofitsposition,(ii)anadditionalrangesensor,suchassonarorlaserrangender,isusedtoprovideabearingmeasurement,and(iii)animagemeasurementistakenforanestimateoflateralposition.Theinitialresearchhasinvolvedtheestimationprocessanddesignwithimprovementstotheperformance[ 68 72 ].ThisapproachwasimplementbyFlew[ 5 ]toestimatethemotionoftargetwithinacomputedcovariance.Guanghuietal.[ 73 ]providedamethodforestimatingthemotionofapointtargetfromknowncameramotion.Theroboticcommunityhasexaminedthetarget-motionestimationproblemfromavisualservocontrolframework.Trackingrelativemotionofamovingtargethasbeenshownusinghomography-basedmethods.Thesemethodshavebeendemonstratedtocontrolanautonomousgroundvehicletoadesiredposedenedbyagoalimage,wherethecamerawasmountedonthegroundvehicle[ 74 ].Chenetal.[ 75 76 ]regulatedagroundvehicletoadesiredposeusingastationaryoverheadcamera.Mehtaetal.[ 77 ]extendedthisconceptforamovingcamera,whereacamerawasmountedtoanUAVandagroundvehiclewascontrolledtoadesiredpose. 40


15 ]appliedavisionnavigationalgorithmcalledVisNAVthatwasdevelopedbyJunkinsetal.[ 78 ]toestimatethecurrentrelativepositionandorientationofthetargetdroguethroughaGaussianleast-squaresdifferentialcorrectionalgorithm.Thisalgorithmhasalsobeenappliedtospacecraftformationying[ 79 ]. 80 ]demonstratedthecontrolrequiredtograbanunknownmovingobjectwithroboticmanipulatorusinganauto-regressive(AR)model.Thismodelpredictsafuturepositionofthetargetbasedonvelocityestimatescomputedfromimagesequences.Foraerialvehicles,detectingotheraircraftintheskyiscriticalforcollisionavoidance.NASAhasconsideredvisioninthisscenariotoaidpilotsindetectingaircraftonacrossingtrajectory.AtechniquecombiningimageandnavigationdataestablishedapredictionmethodthroughaKalmanlterapproachtoestimatethepositionandvelocityofthetargetaircraftaheadintime[ 34 ].Similarly,theAARproblemrequiressomeformofmodelpredictionwhendockingtoamovingdrogue.Kimmettetal.[ 15 ]utilizedadiscretelinearmodelforthepredictionofthedrogue.Thepredictedstatesusedforcontrolwerecomputedusingthediscretemodel,thecurrentstates,andlightturbulenceasinputtothedroguedynamics.Successfuldockingwassimulatedforonlylightturbulenceandwithlowfrequencydynamicsimposedonthedrogue.NASAisextremelyinterestedinAARproblemandcurrentlyhasaprojectonthistopic.FlighttestshavebeenconductedbyNASAinanattempttomodelthedroguedynamics[ 81 ].Inthisstudy,theaerodynamiceffectsfromboththereceiveraircraftandthetankeraircraftwereexaminedonthe 41


82 ].Robustnesswasalsoanalyzedusingaleast-squaresolutiontoobtainanexpressionfortheerrorintermsofthemotionvariables[ 83 ].Theuncertaintyinvision-basedfeedbackisoftenchosenasvariationswithinfeaturepoints;however,uncertaintyinthecameramodelmayactuallybeanunderlyingsourceofthosevariations.Essentially,theuncertaintymaybeassociatedwiththeimageprocessingtoextractfeaturepointsorwiththecameraparametersthatgeneratedtheimage.Thepropercharacterizationofcamerauncertaintymaybecriticaltodeterminearealisticleveloffeaturepointuncertainty.Theanalysisofcamerauncertaintyistypicallyaddressedinaprobabilisticmanner.Alineartechniquewaspresentedthatpropagatesthecovariancematrixofthecameraparametersthroughthemotionequationstoobtainthecovarianceofthedesiredcamerastates[ 84 ].Ananalysiswasalsoconductedfortheepipolarconstraintbasedontheknowncovarianceinthecameraparameterstocomputethemotionuncertainty[ 85 ].AsequentialMonteCarlotechniquedemonstratedbyQianetal.[ 86 ]proposedanewstructurefrommotionalgorithmbasedonrandomsamplingtoestimatetheposteriordistributionsofmotionandstructureestimation.Theexperimentalresultsinthispaperrevealedsignicantchallengestowardsolvingforthestructureinthepresenceoferrorsincalibration,featurepointtracking,featureocclusion,andstructureambiguities. 42


87 ]intrafcsituations,lowaltitudeightofarotorcraft[ 88 ],avoidingobstaclesintheightpathofanaircraft[ 34 ],andnavigatingunderwatervehicles[ 89 ].Opticalowtechniqueshavealsobeenutilizedasatoolforavoidancebysteeringawayfromareaswithhighopticowwhichindicateregionsofcloseobstacles[ 90 ].Targettrackingisanotherdesiredcapabilityforautonomoussystems.Inparticular,themilitaryisinterestedinthistopicforsurveillancemissionsbothintheairandontheground.Thecommonapproachestotargettrackingoccurinbothfeaturepointandopticalowtechniques.Thefeaturepointmethodtypicallyconstrainsthetargetmotionintheimagetoadesiredlocationbycontrollingthecameramotion[ 91 92 ].Meanwhile,Frezzaetal.[ 93 ]imposedanonholonomicconstraintonthecameramotionandusedapredictiveoutput-feedbackcontrolstrategybasedontherecursivetrackingofthetargetwithfeasiblesystemtrajectories.Alternatively,opticalowbasedtechniqueshavebeenpresentedforrobotichand-in-eyecongurationtotracktargetsofunknown2Dvelocitieswherethedepthinformationis 43


94 ].Adaptivesolutionspresentedin[ 91 95 97 ]haveshowncontrolsolutionsfortargettrackingwithuncertaincameraparameterswhileestimatingdepthinformation.Thehomingcontrolproblemhasnumerousapplicationstowardautonomoussystemssuchasautonomousaerialrefueling,spacecraftdocking,missileguidance,andobjectretrievalusingarobtoticmanipulator.Kimmettetal.[ 15 98 ]developedacandidateautonomousprobe-and-drogueaerialrefuelingcontrollerthatusesacommandgeneratortracker(CGT)totracktime-varyingmotionsofanon-stationarydrogue.TheCGTisanexplicitmodelfollowingcontroltechniqueandwasdemonstratedinsimulationforamovingdroguewithknowndynamicssubjecttolightturbulence.Tandaleetal.[ 16 ]extendedtheworkofKimmettandValasekbydevelopingareferenceobserverbasedtrackingcontroller(ROTC)whichdoesnotrequireadroguemodelorpresumedknowledgeofthedrogueposition.Thissystemconsistofareferencetrajectorygenerationmodulethatsendscommandstoanobserverthatestimatesthedesiredstatesandcontrolfortheplant.Theinputtothiscontrolleristherelativepositionbetweenthereceiveraircraftandthedroguemeasuredbythevisionsystem.Asimilarvisionapproachtoaerialrefuelingisalsopresentedin[ 99 ],wheremodelsofthetankeranddrogueareusedinconjunctionwithaninferredcamera.Thedroguemodelusedinthispaperwastakenfrom[ 100 ]thatusesamulti-segmentapproachtoderivingthedynamicsofthehose.Meanwhile,Houshangietal.[ 80 ]consideredgraspingamovingtargetbyadaptivelycontrollingarobotmanipulatorusingvisioninteraction.Theadaptivecontrolschemewasusedtoaccountformodelingerrorsinthemanipulator.Inaddition,thispaperconsideredunknowntargetdynamics.Anauto-regressivemodelapproachwasusedtopredictthetarget'spositionbasedonpassedvisualinformationandanestimatedtargetvelocity.Experimentaltestcasesaredocumentedthatshowtrackingconvergence. 44


3-1 .Thevector,h,representsthevectorbetweenthecameraandafeaturepointintheenvironmentrelativetoadenedcamera-xedcoordinatesystem,asdenedbyI.ThisvectoranditscomponentsarerepresentedinEquation 3 45


Mappingfromenvironmenttoimageplane Amajorconstraintplacedonthissensoristhecamera'seldofview(FOV).HeretheFOVcanbedescribedasthe3Dregionforwhichfeaturepointsarevisibletothecamera;hence,featuresoutsidetheFOVwillnotappearintheimage.Thethreephysicalparametersthatdenethisconstraintaretheeldofdepth,thehorizontalangleandtheverticalangle.AtopviewillustrationoftheFOVcanbeseeninFigure 3-2 ,wherethehorizontalFOVisdenedbythehalfangle,gh,andthedistancetotheimageplaneisoflengthf.Likewise,asimilarplotcanbeshowntoillustratetheverticalangle,whichcanbedenedasgv. Imageplaneeldofview(topview) 46


3 ,whererh;visdenedasthelargestspatialextensioninthehorizontalandverticaldirections. 3 forthehorizontalcomponent. 3 fortheverticalangle. 3.2.1IdealPerspectiveAgeometricrelationshipbetweenthecamerapropertiesandafeaturepointisrequiredtodeterminetheimageplanecoordinates.Thisrelationshipismadebyrstseparatingthecomponentsofhthatareparalleltotheimageplaneintotwodirections.Theimageplanecoordinatesarethencomputedfromatangentrelationshipofsimilartrianglesbetweentheverticalandhorizontaldirectionsandthedepthwithascalefactoroffocallength.Thisrelationshipestablishesthestandard2Dimageplanecoordinatesreferredtoasthepin-holecameramodel[ 101 102 ].Equations 3 and 3 representageneralpin-holeprojectionmodel 47


3 and 3 reducetotheverycommonpin-holecameramodelandisrepresentedbyEquations 3 and 3 3 and 3 canbeexpressedinhomogeneouscoordinatesandisshowninEquation 3 3 .First,theimageplaneisdiscretizedintoasetofpixels,correspondingtotheresolutionofthecamera.Thisdiscretizationisbasedonscalefactorsthatrelatereal-worldlengthmeasuresintopixelunitsforboththehorizontalandverticaldirections.Thesescalingtermsaredenedassandsnwhichhaveunitsofpixelsperlength,wherethelengthcouldbeinfeetormeters.Ingeneral,thesetermsaredifferentbutwhenthepixelsaresquarethens=sn.Second,theoriginoftheimageplaneistranslatedfromthecenterofthe 48


3 ,wherepixelmapping,origintranslation,andskewnessareallconsidered. 3 isrewrittentoEquation 3 3 toobtainageneralequationthatmapsfeaturepointsintheinertialframetocoordinatesintheimageplaneforacalibratedcamera. 49
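As a concrete illustration of this mapping, the short MATLAB sketch below projects a single feature point through the pin-hole model and then applies an intrinsic calibration matrix built from a focal length, pixel scale factors, principal point, and skew. Every variable name and numerical value here is an assumption chosen only for illustration; none are taken from the dissertation's simulations.

```matlab
% Minimal sketch: pin-hole projection followed by the intrinsic mapping.
f     = 0.5;                  % focal length (assumed, in length units)
s_mu  = 1000;  s_nu = 1000;   % pixels per unit length (assumed)
mu0   = 320;   nu0  = 240;    % image center in pixels (assumed)
s_th  = 0;                    % skew factor (assumed zero)

h = [2; -1; 10];              % feature point in the camera frame (z is depth)

% Ideal perspective projection onto the image plane
mu_i = f * h(1) / h(3);
nu_i = f * h(2) / h(3);

% Map metric image coordinates to pixel coordinates with the calibration
% (intrinsic) matrix acting on homogeneous coordinates
K = [s_mu,  s_th, mu0;
     0,     s_nu, nu0;
     0,     0,    1  ];
p = K * [mu_i; nu_i; 1];
fprintf('pixel coordinates: (%.1f, %.1f)\n', p(1), p(2));
```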


3 ,requiresaninniteseriesoftermstoapproximatethevalue. 3 and 3 ,mapsanundistortedimage,(0;n0),whichisnotmeasurableonaphysicalcamera,intoadistortedimage,(0d;n0d),whichisobservable[ 104 ].Thisdistortionmodelonlyconsidersthersttermintheinniteseriestodescriberadialdistortionandexcludestangentialdistortion.Thisapproximationindistortionhasbeenusedtogenerateanaccuratedescriptionofrealcameraswithoutadditionalterms[ 105 ], 3-1 ,attemptstomodelthecurvatureofthelensduringtheimageplanemapping.Thisdistortionintheimageplanevariesinanonlinearfashionbasedonposition.Thiseffectdemonstratesanaxisymmetricmappingthatincreasesradiallyfromtheimagecenter.AnexamplecanbeseeninFigure 3-3B and 3-3C whichillustrateshowradialdistortionchangesfeaturepointlocationsofaxedpatternintheimagebycomparingittoatypicalpin-holemodelshowninFigure 3-3A .Noticethedistortedimagesseemtotakeonaconvexorconcaveshapedependingonthesignofthedistortion. 50
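The first-order distortion model described above can be sketched in a few lines of MATLAB. The coefficient and coordinate values are illustrative assumptions; the model simply scales each ideal image coordinate by a radial factor, which produces the concave or convex patterns of Figure 3-3 depending on the sign of the coefficient.

```matlab
% Minimal sketch of first-order radial distortion (values illustrative).
d  = -5e-4;                       % radial distortion coefficient (assumed)
mu = 1.2;  nu = -0.8;             % undistorted (ideal) image coordinates

r2   = mu^2 + nu^2;               % squared radial distance from the center
mu_d = mu * (1 + d * r2);         % distorted coordinates, first term only
nu_d = nu * (1 + d * r2);
```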


B CFigure3-3. RadialDistortionEffectsforA)f=0:5d=0,B)f=0:5d=0:0005,andC)f=0:5d=+0:0005 3 .Assuch,theseparametersaretermedtheintrinsicparametersandarefoundthroughcalibration.Afeaturepointmustbeanalyzedwithrespecttotheseintrinsicparameterstoensureproperstateestimation.Theradialdistancefromafeaturepointtothecenteroftheimageisdependentonboththerelativepositionsofcameraandfeaturealongwiththefocallength.Thisradialdistanceisalsorelatedviaanonlinearrelationshiptotheradialdistortion.Clearlyanyanalysisofthefeaturepointswillrequireestimationofthecameraparameters.Chapter 4 willdiscussatechniquethatconsidersboundeduncertaintytowardtheintrinsicparametersandestablishesaboundedconditiononthefeaturepointpositions. 51


3 and 3 [ 102 103 ].Theimagecoordinates(;n)intheseexpressionsarecomputedusingeitherEqaution 3 orEquation 3 dependingonthecameramodel. 3 [ 102 103 ].Thepixelvalueswithinthesearchwindowaredenedasx. 3 .IfEquation 3 issatisedthenthisisavalidfeaturepointbasedontheuserscriterion[ 102 103 ].Thisselectionisafunctionofboththewindowsize,W,andthethreshold,t. 52
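The selection test can be sketched as follows: build the gradient matrix G over a window W at each candidate pixel and accept the pixel as a feature when the smallest eigenvalue exceeds a user threshold t. The synthetic image, window size, and threshold below are illustrative assumptions, not values from the dissertation.

```matlab
% Minimal sketch of eigenvalue-based feature (corner) selection.
I = zeros(64);  I(20:44,20:44) = 1;        % synthetic image with one bright block
[Ix, Iy] = gradient(I);                    % image gradients
W = 7;  half = floor(W/2);  t = 0.5;       % window size and threshold (assumed)

corners = [];
for r = 1+half : size(I,1)-half
    for c = 1+half : size(I,2)-half
        wx = Ix(r-half:r+half, c-half:c+half);
        wy = Iy(r-half:r+half, c-half:c+half);
        G  = [sum(wx(:).^2),     sum(wx(:).*wy(:));
              sum(wx(:).*wy(:)), sum(wy(:).^2)];
        if min(eig(G)) > t                 % valid feature by the user's criterion
            corners(end+1,:) = [r, c];     %#ok<AGROW>
        end
    end
end
```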


106 ].ThismethodcanbeextendedtoedgedetectionbyconsideringthestructureofthesingularvaluesofG.AnexampleofthisalgorithmistheCannyedgedetector[ 107 ]. 3 3 3 53

One important limitation of this criterion occurs when the window in both images contains relatively constant intensity values. This results in the aperture problem, where a number of solutions for h are obtained. Therefore, during the feature selection process it is beneficial to choose features that contain unique information within this window.

Two approaches are commonly used to solve the tracking problem for small baseline motion: (1) using the brightness consistency constraint and (2) applying the sum of squared differences (SSD) approach. Each of these techniques employs a translational model to describe the image motion. Therefore, if one assumes a simple translational model, then the general transformation takes the form

\[ x_2 = x_1 + \Delta x + n \]

Substituting this translational model into the brightness consistency constraint, while initially neglecting the noise term, and applying the Taylor series expansion about the point of interest, x, while retaining only the first term in the series results in

\[ \frac{\partial I}{\partial \mu}\frac{d\mu}{dt} + \frac{\partial I}{\partial \nu}\frac{d\nu}{dt} + \frac{\partial I}{\partial t} = 0 \]

Writing this expression in matrix form results in

\[ \nabla I^{T} \begin{bmatrix} d\mu/dt \\ d\nu/dt \end{bmatrix} = -I_t \]

This relationship constitutes one equation with two unknown velocities; therefore, another constraint is needed to solve the problem. A unique solution for the velocities can be determined by enforcing an additional constraint on the problem, which entails restricting regions to a local window that moves at constant velocity. Upon these assumptions one can minimize the error


3 3 3 .Thenalsolutionforthepixelvelocityisfoundthroughaleast-squaresestimategiveninEquation 3 .Theseimagevelocitiesarealsoreferredtoastheopticow.Oncetheopticowiscomputedforafeaturepointthentheimagedisplacementforfeaturetrackingistrivialtond. 3 ,attemptstoestimatetheDxwhilenotrequiringthecomputationofimagegradients.Thisapproachalsoemploysthetranslationalmodeloverawindowedregion.Themethodconsidersthepossiblerangethatwindowcouldmove,danddn,inthetime,dt.Thisconsistencyconstraintthenleadstoaproblemofminimizingtheerroroverthepossiblewindowswithinthedescribedrange.ThiserrorfunctionisdescribedmathematicallyinEquation 3 17 ].Forlargebaselinetrackingsimpletranslationalmodels 55


4 3 and 3 .Thevelocityexpressions,showninEquations 3 and 3 ,describethemovementoffeaturepointsintheimageplaneandiscommonlyreferredtoinliteratureastheopticow. 3 and 3 whileassumingc=0isasfollows 56


3-4 whereh1andh2denotethepositionvectorsofthefeaturepoint,P,inthecamerareferenceframes.Also,thevaluesofx1andx2representthepositionvectorsprojectedontothefocalplanewhileTindicatesthetranslationvectoroftheoriginofthecameraframes.AgeometricrelationshipbetweenthevectorsinFigure 3-4 isexpressedbyintroducingRasarotationmatrix.Thisrotationmatrixincludestheroll,pitchandyawanglesthattransformthecameraframesbetweenmeasurements.TheresultingepipolarconstraintisexpressedinEquation 3 3 ,assumesapin-holecamerawhichiscolinearwithitsprojectionintothefocalplane. 57
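The sketch below builds the essential matrix from an assumed rotation and translation, checks the epipolar constraint on synthetic correspondences, and then recovers the matrix from the stacked linear system used by the eight-point algorithm developed in the next subsection. The point-transfer convention h2 = R*h1 + T, the motion values, and all variable names are assumptions made only for illustration.

```matlab
% Minimal sketch of the epipolar constraint and an eight-point style estimate.
R  = expm([0 -0.05 0.02; 0.05 0 -0.03; -0.02 0.03 0]);   % small rotation (assumed)
T  = [1; 0.2; 0.1];                                       % translation (assumed)
Tx = [0 -T(3) T(2); T(3) 0 -T(1); -T(2) T(1) 0];          % skew-symmetric [T]x
E  = Tx * R;                                              % essential matrix

% Synthetic correspondences in normalized (calibrated) image coordinates
n  = 10;
P1 = [randn(2,n); 8 + 2*rand(1,n)];        % points in front of camera 1
P2 = R*P1 + T;                             % same points in camera frame 2
x1 = P1 ./ P1(3,:);
x2 = P2 ./ P2(3,:);

% Epipolar constraint: x2' * E * x1 = 0 for every correspondence
residuals = sum(x2 .* (E*x1), 1);          % ~0 up to round-off

% Eight-point style estimate: one stacked constraint per correspondence,
% smallest right singular vector gives the entries of E up to scale/sign
C = zeros(n,9);
for i = 1:n
    C(i,:) = kron(x1(:,i), x2(:,i))';
end
[~,~,V] = svd(C);
E_est = reshape(V(:,end), 3, 3);
E_est = E_est / norm(E_est) * norm(E);     % rescale for comparison (sign may flip)
```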


Geometryoftheepipolarconstraint TheexpressionsinEquation 3 andEquation 3 reectthatthescalartripleproductofthreecoplanarvectorsiszero,whichformsaplaneinspace.Theserelationshipscanbeexpandedusinglinearalgebra[ 102 103 ]togenerateastandardformoftheepipolargeometryasinEquation 3 .Thisnewformindicatesarelationshipbetweentherotationandtranslation,writtenastheessentialmatrixdenotedasQ,totheintrinsicparametersofthecameraandassociatedfeaturepoints.Inthiscase,theequationisderivedforasinglefeaturepointthatiscorrelatedbetweentheframes, 58


3 withl1andl2representingtheepipolarlinesinimage1andimage2beingproportionaltotheessentialmatrix,respectfully. 3 and 3 arerewrittenintermsofthefundamentalmatrix,F,andareshowninEquations 3 and 3 3 3 whichsolvesfortheentriesoftheessentialmatrix.ThisalgorithmwasdevelopedbyLonguet-Higgins[ 39 ]andisdescribedinthissection.TheexpressioninEquation 3 canactuallybeexpressedasinEquation 3 usingadditionalargumentsfromlinearalgebra[ 102 103 ].Thevector,q2R9,containsthestackedcolumnsoftheessentialmatrixQ. 3 ,foreachfeaturepointwheretheentriesoftheessemtialmatrixarestackedinthevectorq.Asetofrowvectorsarestackedtoformamatrix,C,ofnmatchedfeaturepointsand 59


3 .ThematricxC,showninEquation 3 ,isan9matrixofstackedfeaturepointsmatchedbetweentwoviews. 3 existsusingalinearleast-squaresapproachonlyifthenumberofmatchedfeaturesineachframeisatleast8suchthatrank(C)=8.Additionally,morefeaturepointswillobviouslygeneratemoreconstraintsand,presumably,increaseaccuracyofthesolutionduetotheresidualsoftheleast-squares.Inpractice,theleast-squaressolutiontoEquation 3 willnotexistduetonoise;therefore,aminimizationisusedtondanestimateoftheessentialmatrix,asshowninEquation 3 3 3 ,wherethetranslationTisfounduptoascalingfactor.Thesefoursolutions,whichconsistofallpossiblecombinationsofRandT,arecheckedtoverifywhichcombinationgeneratesapositivedepth 60


102 103 ].Whenthissituationoccursonemustusetheplanarhomographyapproach,whichisthetopicofthenextsection. 102 103 ].Figure 3-5 depictsthegeometryinvolvedwithplanarhomography.Thefundamentalrelationshipexpressingapointfeaturein3DspaceacrossasetofimagesisgiventhrougharigidbodytransformationshowninEquation 3 61


Geometryoftheplanarhomography Ifanassumptionismadethatthefeaturepointsarecontainedonthesameplane,thenanewconstraintinvolvingthenormalvectorcanbeestablished.DenoteN=[n1;n2;n3]Tasthenormalvectoroftheplanecontainingthefeaturepointsrelativetocameraframe1.ThentheprojectionontotheunitnormalisshowninEquation 3 ,whereDistheprojecteddistancetotheplane. 3 intoEquation 3 resultsinEquation 3 62


3 canbeextendedtoimagecoordinatesthroughEquation 3 3 withtheskewsymmetricmatrixbx2resultsintheplanarhomographyconstraintshowninEquation 3 3 canberewrittentoEquation 3 3 requiresatleastfourfeaturepointcorrespondences.TheseadditionalconstraintscanbestackedtoformanewconstraintmatrixY,asshowninEquation 3 3 intermsofthenewconstraintmatrixresultsinEquation 3 102 103 ],showninEquation 3 fortheunknownscalerl. 63
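The estimation step can be sketched compactly: each correspondence contributes the constraint formed with the skew-symmetric matrix of the second image point, the constraints are stacked into Y, and the stacked entries of the homography are taken from the smallest right singular vector; the scale is then fixed using the property that the middle singular value of a Euclidean homography equals one. The homography, points, and names below are synthetic, illustrative assumptions.

```matlab
% Minimal sketch of the stacked (four-or-more point) homography estimate.
hat = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];    % skew-symmetric matrix

H_true = [1.05 0.02 0.10; -0.03 0.98 0.05; 0.01 0.02 1.00];   % assumed, for testing
n  = 6;
x1 = [randn(2,n); ones(1,n)];              % homogeneous points in image 1
x2 = H_true * x1;   x2 = x2 ./ x2(3,:);    % corresponding points in image 2

Y = zeros(3*n, 9);
for i = 1:n
    Y(3*i-2:3*i, :) = kron(x1(:,i)', hat(x2(:,i)));   % [x2]_x * H * x1 = 0
end
[~,~,V] = svd(Y);
Hl = reshape(V(:,end), 3, 3);              % homography up to scale (and sign)

s = svd(Hl);
H = Hl / s(2);                             % enforce middle singular value = 1
```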


3 3 ,thatarepreservedinthehomographymappingandwillfacilitateinthedecompositionprocess. 3 willestablishahomographysolutionexpressedintermsoftheseknownvariables. ThefoursolutionsareshowninTable 3-1 intermsofthematricesgiveninEquations 3 3 andthecolumnsofthematrixV.Noticethetranslationcomponentisestimateduptoa1 Table3-1. Solutionsforhomographydecomposition Solution1 64


3-4 andassumesthattherotation,R,andtranslation,T,betweenframesisknown.Giventhat,thecoordinatesofh1andh2canbecomputed.Recall,thefundamentalrelationshiprepeatedhereinEquation 3 3 andEquation 3 .Theserelationshipsallowsomecomponentsofhxandhytobewrittenintermsofandnwhichareknownfromtheimages.Thus,theonlyunknownsarethedepthcomponents,h1;zandh2;z,foreachimage.TheresultingsystemcanbecastasEquation 3 andsolvedusingaleast-squaresapproach. 3 usingz=[h2;z;h1;z]asthedesiredvectorofdepths. 3 obtainsthedepthestimatesofafeaturepointrelativetobothcameraframes.Thisinformationalongwiththeimageplanecoordinatescanbeusedtocompute(h1;x;h1;y)and(h2;x;h2;y)bysubstitutingthesevaluesbackintoEquations 3 and 3 .Theresultingcomponentsofh1canthenbeconvertedtothecoordinateframeofthesecondimageanditshouldexactlymatchh2.Thesevalueswillnevermatchperfectlydueto 65


3 isverysensitivetotheseuncertainties.Chapter 4 willdiscussamethodtoobtainuncertaintyboundsontheSFMestimatesbasedonthesourcesdescribed. 66
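The depth recovery itself reduces to a small least-squares problem per feature once the rotation and translation are assumed known. The sketch below uses synthetic values to show that structure; the frame convention h2 = R*h1 + T and all numbers are assumptions for illustration only.

```matlab
% Minimal sketch of structure-from-motion depth recovery for one feature.
R = expm([0 -0.05 0.02; 0.05 0 -0.03; -0.02 0.03 0]);   % known rotation (assumed)
T = [0.5; 0.1; 0];                                      % known translation (assumed)

h1 = [2; -1; 10];              % true feature position in frame 1 (synthetic)
h2 = R*h1 + T;                 % the same point in frame 2
x1 = h1 / h1(3);               % measured normalized image coordinates
x2 = h2 / h2(3);

A = [x2, -R*x1];               % unknowns z = [h2_z; h1_z]
z = A \ T;                     % least-squares depth estimates

h1_est = z(2) * x1;            % reconstructed 3D feature in frame 1
h2_est = z(1) * x2;            % and in frame 2
```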


3 .Oncefeaturepointsarelocatedandtrackedacrossimages,anumberstateestimationalgorithms,suchasopticow,epipolarconstraint,andstructurefrommotion,canbeemployed.Althoughcameracalibrationtechniqueshaveproventoprovideaccurateestimatesoftheintrinsicparameters,theprocesscanbecumbersomeandtimeconsumingwhenusingalargequantityoflowqualitycameras.Thischapterdescribesquantitativelytheeffectsonfeaturepointpositionduetouncertaintiesinthecameraintrinsicparametersandhowthesevariationsarepropagatedthroughthestateestimationalgorithms.Thisdeterministicapproachtouncertaintyisanefcientmethodthatdeterminesalevelofboundedvariationsonstateestimatesandcanbeusedforcameracharacterization.Inotherwords,themaximumallowablestatevariationinthesystemwillthendeterminetheaccuracyrequiredinthecameracalibrationstep. 3-1 .TheresultingvaluesarerepeatedinEquations 4 and 4 asafunctionoffocallength,f,andradialdistortion,d,intermsofthecomponentsofh. 67


4-1 ,isdependentonboththerelativepositionsofcameraandthefeature.Thisradialdistance,asshowninFigure 4-2 ,isalsorelatedviaanonlinearrelationshiptotheradialdistortion.Theanalysisofthefeaturepointswillrequireestimationofthecameraparameters. BFigure4-1. FeaturePointDependenceonFocalLengthforA)f=0:5andB)f=0:25 BFigure4-2. FeaturePointDependenceonRadialDistortionforA)d=0:0001andB)d=0:0005 4 ,showstherangeofvaluesthatmustbeconsideredforanominalestimate, 68


4 presentstherangeofvaluesforradialdistortion. 4 andEquation 4 aresubstitutedintothecameramodelofEquation 4 andEquation 4 .TheresultingexpressionsforfeaturepointsarepresentedinEquations 4 and 4 4 andEquation 4 donotdependonuncertaintysotheseportionsrepresentnominalvalues,oandno,whicharethecorrectlocationsoffeaturepoints.Thesecondtermswhichincludedfandddtermsaretheuncertainty,danddn,ineachfeaturepointwhichareboundedinnormbyDandDn.Assuch,thefeaturepointsmaybewrittenasinEquation 4 andEquation 4 toreecttheuncertainty. 69
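One direct way to visualize these bounds is to sweep the focal length and radial distortion over their assumed uncertainty ranges and record the largest deviation of the projected feature from its nominal image location. The ranges, the feature position, and the grid resolution in the sketch below are illustrative assumptions rather than values from the dissertation.

```matlab
% Minimal sketch of a worst-case (bounded) feature-point variation sweep.
h  = [2; -1; 10];                 % feature point in the camera frame (assumed)
f0 = 0.5;    df = 0.05;           % nominal focal length and its bound (assumed)
d0 = -4e-4;  dd = 1e-4;           % nominal distortion and its bound (assumed)

mu0 = f0*h(1)/h(3);  nu0 = f0*h(2)/h(3);
r2  = mu0^2 + nu0^2;
mu0 = mu0*(1 + d0*r2);  nu0 = nu0*(1 + d0*r2);   % nominal (distorted) location

Dmu = 0;  Dnu = 0;                % worst-case variations over the parameter box
for f = linspace(f0-df, f0+df, 21)
    for d = linspace(d0-dd, d0+dd, 21)
        mu = f*h(1)/h(3);  nu = f*h(2)/h(3);
        r2 = mu^2 + nu^2;
        mu = mu*(1 + d*r2);  nu = nu*(1 + d*r2);
        Dmu = max(Dmu, abs(mu - mu0));
        Dnu = max(Dnu, abs(nu - nu0));
    end
end
```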


4 andEquation 4 3 andEquation 3 .Inpractice,thevelocitiesarecomputedbysubtractinglocationsofafeaturepointacrossapairofimagestakenatdifferenttimes.Suchanapproachassumesthatafeaturepointcanbetrackedandcorrelatedbetweentheseframes.TheopticowisthengivenasJusingEquation 4 forafeaturepointat1andn1inoneframeand2andn2inanotherframe. 70


4 andEquation 4 ,canbesubstitutedintoEquation 4 tointroduceuncertainty.TheresultingexpressioninEquation 4 separatestheknownfromunknownelements. 4 wheretheuncertaintyisboundedbyDJ2R. 4 .Theactualboundsonthefeaturepoints,asnotedinEquation 4 andEquation 4 ,variesdependingonthelocationofeachfeaturepointsoboundsofD1andD2aregivenforeachverticalcomponentandDn1andDn2aregivenforeachhorizontalcomponent.Assuch,theboundonvariationisnotedinEquation 4 asspecictotheh1andh2usedtogatherfeaturepointsineachimage. 3 ,requiresapin-holecamerawhoseintrinsicparametersareexactlyknown.Suchasituationisobviouslynotrealisticsotheeffectofuncertaintycanbedetermined.Anon-idealcamerawilllosethe 71


4 andEquation 4 ,whichareactuallycausedbyuncertaintyinthecameraparametersasnotedinEquation 4 andEquation 4 .TheconstraintmatrixfromEquation 3 canthenbewrittenasanominalcomponent,Co,plussomeuncertainty,dC,asinEquation 4 4 andEquation 4 .TheithrowofthismatrixcanthenbewrittenasEquation 4 3 ,whenincludingtheuncertaintymatrixinEquation 4 ,willexist;however,thatsolutionwilldifferfromthetruesolutionorthenominalsolution.Essentially,thesolutioncanbeexpressedasthenominalsolution,qo,andanuncertainty,dq,asinEquation 4 .Thisperturbedsystemcannowbesolvedusingalinearleast-squaresapproachfortheentriesoftheessentialmatrix. 4 hasvariationwhichwillbenormboundedbyDqasinEquation 4 whichindicatestheworse-casevariationimposedontheentriesofq. 72


4 .ThisboundusestherelationshipbetweenuncertaintiesinEquation 4 throughtheconstraintinEquation 4 .Also,thesizeofthisuncertaintydependsonthelocationofeachfeaturepointsotheboundsisnotedasspecictotheh1andh2obtainedfromFigure 3-4 4 ,canthenbeuseddirectlytocomputethevariationinstateestimates.Theentriesofqarerstarrangedbackintomatrixformtoconstructthenewessentialmatrixthatincludesparametervariations.ThisnewessentialmatrixisthendecomposedusingSVDtechniquesdescribedinSection 3.6.1 3 .SubstitutingEquation 4 andEquation 4 intoEquation 3 resultswithavariationinthesystemmatrixY.Likewise,thenewsystemmatrixwithuncertainintrinsicparamterscanbewrittenasanominalmatrix,Y0plussomevariation,dY,asshowninEquation 4 4 andEquation 4 .correspondingly, 73


4 3 ,whenincludingtheuncertainmatrixinEquation 4 ,willexist;however,thatsolutionwilldifferfromthetruesolution.Essentially,thesolutioncanbeexpressedasthenominalsolution,ho,andanuncertainty,dh,asinEquation 4 4 hasvariationwhichwillbenormboundedbyDhasinEquation 4 4 .ThisboundusestherelationshipbetweenuncertaintiesinEquation 4 throughtheconstraintinEquation 4 .Also,thesizeofthisuncertaintydependsonthelocationofeachfeaturepointsotheboundsisnotedasspecictotheh1andh2obtainedfromFigure 3-4 4 ,canthenbeuseddirectlytocomputethevariationinstateestimates.Theentriesofharerstarrangedbackintomatrixformtoconstructthenewhomographymatrixthatincludesparameter 74


3.6.3 3 forthestructurefrommotionrelationship.Assuch,thematrixshouldbewrittenintermsofanominalvalue,Ao,andanuncertainperturbation,dA,asinEq. 4 4 andEquation 4 intoEquation 3 .TheperturbationisthenwrittenasEquation 4 3 whenconsideringEquation 4 willobviouslyresultinadepthestimatethatdiffersfromthecorrectvalue.Denezoastheactualdepthsthatwouldbecomputedusingtheknownparametersofthenominalcameraanddzasthecorrespondingerrorintheactualsolution.Theleast-squaresproblemcanthenbewrittenasEquation 4 andsolvedusingapseudo-inverseapproach. 4 .Thisrangeofsolutionswillliewithintheboundedrangedeterminedfromtheworst-casebound. 75


4 .Thisboundnotesthattheboundonvariationsinfeaturepoints,andultimatelytheboundonsolutionstostructurefrommotion,dependsonthelocationofthosefeaturepoints. 76


3 toderivethesystemequations. 5-1 alongwiththerespectiveorigins.Thebody-xedcoordinatesystemhastheoriginlocatedatthecenterofgravityoftheaircraft.Theaxesareorientedsuchthatb1alignsoutthenoseandb2alignsouttherightwingwithb3pointedoutthebottom.Themovementoftheaircraft,whichincludesaccelerating,willobviouslyaffectthecoordinatesystem;consequently,thebody-xedcoordinatesystemisnotaninertialreferenceframe. 77


Body-xedcoordinateframe Theorientationanglesoftheaircraftareofparticularinterestformodelingavision-basedsensor.Therollangle,f,describesrotationaboutb1,thepitchangle,q,describesrotationaboutb2andtheyawangle,y,describesrotationaboutb3.ThetransformationfromavectorrepresentedintheEarth-xedcoordinatesystemtothebody-xedcoordinatesystemisrequiredtorelateon-boardmeasurementstoinertialmeasurements.Thistransformation,giveninEquation 5 ,usesREBwhichareEulerrotationsofroll,pitchandyaw[ 29 108 ], 5 .Theorderofthismatrixmultiplicationneedstobemaintainforcorrectcomputation. 78
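A compact way to see the ordering requirement is to build the transformation from the three single-axis rotations explicitly. The sketch below assumes the common 3-2-1 (yaw-pitch-roll) sequence and uses illustrative angle values; the exact matrix in the dissertation may differ in convention.

```matlab
% Minimal sketch of the Earth-to-body transformation from single-axis rotations.
phi = 0.05;  theta = -0.02;  psi = 1.2;     % roll, pitch, yaw (assumed, radians)

R1 = [1 0 0; 0 cos(phi) sin(phi); 0 -sin(phi) cos(phi)];         % roll about b1
R2 = [cos(theta) 0 -sin(theta); 0 1 0; sin(theta) 0 cos(theta)]; % pitch about b2
R3 = [cos(psi) sin(psi) 0; -sin(psi) cos(psi) 0; 0 0 1];         % yaw about b3

R_EB = R1 * R2 * R3;            % Earth-fixed vector -> body-fixed vector
v_E  = [10; 0; 0];              % e.g., a velocity expressed in the Earth frame
v_B  = R_EB * v_E;              % the same vector resolved in body axes
```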


5 5 torepresenttheserates. 5-2 ,usethetraditionalchoiceofi3aligningthroughthecenterofviewofthecamera.Theremainingaxesareusuallychosenwithi2alignedrightoftheviewandi1alignedoutthetopalthoughsomevariationinthesechoicesisallowedaslongastheresultingaxesretaintheright-handedproperties.Thedirectionofthecamerabasisvectorsaredenedthroughthecamera'sorientationrelativetothebody-xedframe.Thisframeworkisnotedasthecamera-xedcoordinatesystembecausetheoriginisalwayslocatedataxedpointonthecameraandmovesinthesamemotionasthecamera.Thecameraisallowedtomovealongtheaircraftthroughadynamicmountingwhichadmitsbothrotationandtranslation.Thisfunctionalityenablesthetrackingoffeatureswhilethevehiclemovesthroughanenvironment.Theoriginofthecamera-xedcoordinatesystemisattachedtothismovingcamera;consequently,thecamera-xedframeisnotaninertialreference.A6degree-of-freedommodelofthecameraisassumedwhichadmitsafullrangeofmotion.Figure 5-2 alsoillustratesthecamera'ssensingconewhichdescribesboththeimageplaneandtheeldofviewconstraint. 79


Camera-xedcoordinateframe Similartothebody-xedcoordinateframe,atransformationcanbedenedforthemappingbetweenthebody-xedframe,Bandthecameraframe,IasseeninEquation 5 5 ,similartothebody-xedrotationmatrix.Theorientationanglesofthecameraarerequiredtodeterminetheimagingusedforvision-basedfeedback.Therollangle,fc,describesrotationabouti3,thepitchangle,qc,describesrotationabouti2andtheyawangle,yc,describesrotationabouti1. 5 willtransformavectorinbody-xedcoordinatestocamera-xedcoordinates.Thistransformationisrequiredtorelatecamerameasurementstoon-boardvehiclemeasurementsfrominertialsensors.Thematrixagaindependsontheangular 80


5 torepresenttheseangles. 5-3 ,thusrelatesthecameraandtheaircrafttothefeaturepointalongwithsomeinertialorigin. Figure5-3. Scenarioforvision-basedfeedback 81


5 andEquation 5 aretypicallyrepresentedintheinertialreferenceframerelativetotheEarth-axisorigin. 5 ,istypicallygivenwithrespecttothebody-axisorigin.Thischoiceofcoordinatesystemsreectsthatthecameraisintrinsicallyaffectedbyanyaircraftmotion. 3 todescribetherelativepositionbetweenthecameraandthefeaturepoint.Recall,thisvectorwasgiveninthecamera-xedcoordinatesystemtonotetheresultingimageisdirectlyrelatedtopropertiesrelativetothecamera.TherepresentationofhisrepeatedhereinEquation 5 forcompleteness. 5 isused.Thisexpressionincorporatesthetranslationsinvolvedwiththeoriginsofeachcoordinateframethroughaseriesofsingle-axisrotationsuntilthecorrectframeisreached. 82
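A compact MATLAB illustration of this chain of translations and rotations is given below: the aircraft position is subtracted, the result is rotated into the body frame, the camera mount offset is removed, and the remainder is rotated into the camera frame before projection. The composition shown, and every numerical value, is an assumption consistent with the frames defined in this chapter rather than the dissertation's exact expression.

```matlab
% Minimal sketch of the relative position h seen by the on-board camera.
Rx = @(a)[1 0 0; 0 cos(a) sin(a); 0 -sin(a) cos(a)];
Ry = @(a)[cos(a) 0 -sin(a); 0 1 0; sin(a) 0 cos(a)];
Rz = @(a)[cos(a) sin(a) 0; -sin(a) cos(a) 0; 0 0 1];

xi    = [120; 40; -5];                  % feature point, Earth frame (assumed)
T_EB  = [100; 30; -50];                 % aircraft position, Earth frame (assumed)
T_BI  = [1.0; 0; 0.2];                  % camera offset from the c.g., body frame
R_EB  = Rx(0.05)*Ry(-0.02)*Rz(0.8);     % Earth -> body (roll, pitch, yaw)
R_BI  = Rx(0)*Ry(0.3)*Rz(0);            % body -> camera (mount angles)

h  = R_BI*( R_EB*(xi - T_EB) - T_BI );  % relative position in the camera frame
f  = 0.5;
mu = f*h(1)/h(3);                       % resulting image-plane coordinates
nu = f*h(2)/h(3);
```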


108 110 ]andarerepeatedinEquation 5 to 5 foroverallcompleteness.Fxmgsinq=m(u+qwrv) 5 .Theaircraftstatesofinterestforthecameramotionsystemconsistofthepositionandvelocityoftheaircraft'scenterofmass,TEBandvb,theangularvelocity,w,andtheorientation 83


5 .AsstatedinEquation 5 ,theaircraft'svelocityisexpressedinthebody-xedcoordinateframe.Eachoftheseparameterswillappearexplicitlyintheaircraft-cameraequations. 84


5-3 forafeaturepointrelativetotheinertialframe.Therefore,thevectorsumcanbeusedtosolvefortherelativepositionbetweenthecameraanda3Dfeaturepoint.AftermakingthepropercoordinatetransformationsbyusingEquations 5 and 5 ,thisrelativepositioncanbeexpressedincameraframe,I,asshowninEquation 5 5 intoEquations 3 and 3 animagecanbeconstructedasafunctionofaircraftstates.Themajorassumptionoftheseequationsispriorknowledgeofthefeaturepointlocationrelativetotheinertialframe,whichmaybeprovidedbyGPSmaps.Furthermore,theimageresultsobtainedcanalsobepassedthroughEquations 3 and 3 toaddtheeffectsofradialdistortion.Thedistortedimagewillprovideamoreaccuratedescriptionofanimageseenbyaphysicalcamera,assumingtheintrinsicparametersofthecameraareknown. 5 withrespecttotheinertialframe,asshowninEquation 5 dt(h)=Ed dt(x)Ed dt(TEB)Ed dt(TBI)(5) 85


5 cannowberewrittentoEquation 5 foranon-stationaryfeaturepoint. dt(h)=xTEBBd dt(TBI)wTBIEwIh(5)ThisequationcanbereducedfurtherifthecamerasareconstrainedtohavenotranslationrelativetotheaircraftsoBd dt(TEI)=0.Alternatively,thistermisretainedinthederivationtoallowthisdegreeoffreedominthecamerasetup.Theangularvelocity,EwI,canbefurtherdecomposedusingtheAdditionTheorem.ThenalstepimplementsEquations 5 and 5 totransformeachtermintothecameraframe.Aftersomemanipulation,theexpressionforthevelocityofafeaturepointrelativetothecameraframeresultsinEquation 5 5 and 5 intoEquations 3 and 3 .Thisresultwillprovideadescriptionoftheopticalowforeachfeaturepointformedbyeitherthecameratravelingthroughtheenvironmentorthemotionofthefeaturepointsthemselves.ToincorporateradialdistortioneffectsintotheopticowcomputationrequirestheadditionalsubstitutionintoEquations 3 and 3 86


5 5 3 3 3 ,and 3 .Arrangingtheparametersforthekthcameraintoasinglevector,asshowninEquation 5 ,resultsthenintheformulationofagenericaircraft-camerasystemwithkcamerasallhavingindependentmotionthattracknfeaturepointsisobtained. 5 .ThisvectorcanbeextendedtoincludeothercamerafeaturessuchasCCDarraymisalignment,skewness,etc.ThefocalplanepositionscanthenbeassembledintoavectorofobservationsasshowninEquation 5 ,wherennumberoffeaturepointsareobtained.Likewise,thestatesoftheaircraftcanbecollectedandrepresentedasastatevectorasshowninEquation 5 .Inaddition,theinitialstatesofthevehiclearedenedasX0. 5 and 5 .TheobservationsusedinthisdissertationconsistofmeasureableimagesshowninEquations 3 and 3 whichcapturenonlinearitiessuchasradialdistortion.Thissystem,whichmeasuresimageplaneposition,isdescribedmathematically 87


5 5 ,thenadifferentsetofequationscanbeobtainedwhichwillbereferredtoastheOpticFlowFormofthegoverningaircraft-cameraequationsofmotion.ThissystemisgiveninEquation 5 ,whichusestheopticowexpressiongiveninEquations 3 and 3 astheobservations.X(t)=F(X(t);U(t);a(t);t) 88


3 andhowthesefeaturesrelatetothestatesofanaircraftinChapter 5 ,theeffectsofindependentlymovingobjectsneedtobehandledinadifferentmanner.Forcasesinvolvingastationarycamera,suchasinsurveillanceapplications,simplelteringandimagedifferencingtechniquesareemployedtodetermineindependentlymovingobjects.Althoughthesetechniquesworkwellforstationarycameras,adirectimplementationtomovingcameraswillnotsufce.Foramovingcamera,theapparentimagemotioniscausedbyanumberofsources,suchascamerainducedmotion(i.e.ego-motion)andthemotionduetoindependentlymovingobjects.Acommonapproachtodetectingmovingobjectconsidersatwostageprocessthatincludes(i)acompensationroutinetoaccountforcameramotionand(ii)aclassicationschemetodetectindependentlymovingobjects. 3.6 .Thesecondapproachusesthesmoothnessconstraintinattempttominimizethesumofsquaredifferences(SSD)overeitheraselectnumberoffeaturesortheentireoweld.Thisapproachassumesthestationary 90


3.6 .Theepipolarconstraintcanbeusedtorelatefeaturepointsacrossimagesequencesthrougharigidbodytransformation.TheepipolarlinesofastaticenvironmentarecomputedusingEquation 3 orEquation 3 dependingiftheessentialmatrixorthefundamentalmatrixisrequired.AnillustrationofthecomputedepipolarlinesisdepictedinFigure 6-1 forastaticenvironmentobservedbyamovingcamera.Noticeforthisstaticcase,thefeaturepointsinthesecondimage(therightimagecontainingtheoverlaidepipolarlines)areshowntoliedirectlyontheepipolarlines. BFigure6-1. EpipolarLinesAcrossTwoImageFrames:A)InitialFeaturePointsandB)FinalFeaturePointswithOverlayedEpipolarLines Oncecameramotionestimationhasbeenfound,theepipolarlinescanbeusedasanindicationofmovingobjectsintheimage.Forinstance,thefeaturepointscorrespondingtothestationarybackgroundwilllieontheepipolarlineswhilethefeaturepointscorrespondingtomovingobjectswillviolatethisconstraint.Similarly,thecomputationofopticalowcanalsobeusedfordetectingindependentlymovingobjects.Incomputingtheopticalow,themotioninducedbythecameraalongwithmovingobjectsisfusedtogetherinthemeasuredimage.Recall,theopticowexpressions 91


3 and 3 orEquations 3 and 3 forradialdistortion.Decomposingtheopticalowintoitscomponentsofcamerarotation(r;nr)andtranslation(t;nt)andindependentlymovingobjects(i;ni)facilitatesthedetectionproblem.Therefore,thecomponentsoftheopticalowcanbewrittenasinEquation 6 5 :thetranslationalvelocity[u;v;w]Tandtheangularvelocity[p;q;r]Tofthecamera.TheresultingexpressionsareshowninEquations 6 and 6 andappliesonlytofeaturesstationaryintheenvironment.ThedetailsdescribingthesubstitutionofthecameramotionstatesaredescribedinChapter 5 hz1 111 ]thattherotationalstates[p;q;r]Tcanbeestimatedaccuratelyforastaticenvironmentthroughanonlinearminimizationprocedurefornfeatureswheren6.Theapproachusedavector-valuedoweldJ(x)andisgiveninEquation 6 92

The vector x is composed of the unknown vehicle states and depth parameters. The minimization then seeks the solution that minimizes the magnitude of the cost function

\[ \min_{x} \; \tfrac{1}{2}\,\left\| J(x) \right\|^{2} \]

The same approach is taken here with caution. Recall that the measured optical flow also contains motion due to independently moving objects in addition to the induced optical flow caused by the camera motion. In general, these variations in the measured optical flow will introduce error into the [p, q, r]^T estimates. If some assumptions are made regarding the relative optical flow between the static environment and the moving objects, then errors in the state estimates can have minimal effect. For instance, if the static portion of the scene is assumed to be the dominant motion in the optical flow, then the estimates will contain minimal errors. Employing this assumption, estimates for the angular velocities [p, q, r]^T of the camera/vehicle are obtained. Substituting these estimates into the flow equations yields estimates for the rotational portion of the optic flow, and subtracting that rotational component from the measured flow leaves the residual optical flow (mu_Res, nu_Res), which contains only the components due to camera translation and independently moving objects. From this expression, constraints can be employed on the camera-induced motion to detect independently moving objects.
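A short MATLAB sketch of this compensation step follows: given estimated angular rates (p, q, r), compute the rotation-induced component of the flow at each feature and subtract it from the measured flow. The flow model below uses a standard pin-hole convention with the optical axis as depth; the dissertation's exact axis and sign conventions may differ, and all numerical values are illustrative assumptions.

```matlab
% Minimal sketch of camera-motion (ego-motion) compensation of optic flow.
f   = 0.5;                    % focal length (assumed)
pqr = [0.02; -0.01; 0.05];    % estimated camera angular rates [rad/s] (assumed)

mu  = [ 0.10; -0.20; 0.05];   % feature locations in the image (assumed)
nu  = [-0.05;  0.15; 0.20];
flow_meas = [0.8 -0.3; 0.5 0.1; -0.2 0.4];   % measured flow [mu_dot, nu_dot]

flow_rot = zeros(size(flow_meas));
for i = 1:numel(mu)
    m = mu(i);  n = nu(i);
    flow_rot(i,1) = (m*n/f)*pqr(1) - (f + m^2/f)*pqr(2) + n*pqr(3);
    flow_rot(i,2) = (f + n^2/f)*pqr(1) - (m*n/f)*pqr(2) - m*pqr(3);
end
flow_res = flow_meas - flow_rot;   % residual: translation + independent motion
```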

PAGE 94
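A minimal MATLAB sketch of this estimation step is shown below. It is an illustration only, not the dissertation's code: it assumes normalized pinhole coordinates with f = 1 and the standard rotation-only flow model, so the fit is linear in [p, q, r]; the full J(x) in the text also involves translation and unknown depths, which makes the minimization nonlinear. The feature coordinates and true rates are hypothetical values used to synthesize the flow.

% Estimate camera rotation rates from optical flow (rotation-only model, f = 1)
mu  = [ 0.10 -0.25  0.30  0.05 -0.15  0.20];   % feature coordinates, n >= 6
nu  = [ 0.05  0.15 -0.20  0.30  0.10 -0.05];
omega_true = [0.02; -0.03; 0.01];              % [p; q; r] used to synthesize the flow

A = []; b = [];
for k = 1:numel(mu)
    x = mu(k); y = nu(k);
    Ak = [ x*y,     -(1+x^2),  y;              % mu_dot due to rotation
           (1+y^2), -x*y,     -x];             % nu_dot due to rotation
    A  = [A; Ak];
    b  = [b; Ak*omega_true];                   % synthetic rotational flow measurements
end

omega_est = A \ b;                             % least-squares estimate of [p; q; r]
res_flow  = b - A*omega_est;                   % residual flow; zero here because only
                                               % rotation was synthesized, but in general it
                                               % holds translation and moving-object motion

With measured flow in place of the synthetic b, the residual vector plays the role of the residual optical flow used for the detection constraints that follow.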

6-2. Consequently, feature points that violate this condition can be classified as independently moving objects. This characteristic observed from static features will be the basis for the classification scheme.

Figure 6-2. FOE constraint on translational optic flow for static feature points

The residual optical flow may contain independently moving objects within the environment that radiate from their own FOE. An example of a simple scenario is illustrated in Figure 6-3 for a single moving object on the left and a simulation with synthetic data of two moving vehicles on the right. Notice the two probable FOEs in the picture on the left, one pertaining to the static environment and the other describing the moving object. In addition, the epipolar lines of the two distinct FOEs intersect at discrete points in the image. These properties of moving objects are also verified in the synthetic data shown in the plot on the right. Thus, a classification scheme must be designed to handle these scenarios to detect independently moving objects. The next

Residual optic flow for dynamic environments

6. An approximation for the potential location of the FOE is found by extending the translational optical-flow vectors to form the epipolar lines, as illustrated in Figure 6-3, and obtaining all possible points of intersection. As mentioned previously, the intersection points obtained will constitute a number of potential FOEs; however, only one will describe the static background while the rest are due to moving objects. The approach considered for this classification, which essentially groups the intersection data together through a distance criterion, is an iterative least-squares solution for the potential FOEs. The iteration procedure tests all intersection points as additional features are introduced to the system of equations, each of which involves 2 unknown image plane coordinates of the FOE (μ_foe,i, ν_foe,i). The process starts by considering 2 feature points and their FOE intersection

6 for the FOE coordinates (μ_foe,1, ν_foe,1) (for the first iteration a least-squares solution is not necessary because two lines intersect at a single point).

    (1/2) || M [μ; ν] - b ||^2    (6)

where 6 for the ith iteration. Mathematically, the classification scheme for the ith iteration is given in Equations 6 and 6,

    (μ_foe,i - μ_foe,i-1)^2 + (ν_foe,i - ν_foe,i-1)^2    (6)
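A minimal MATLAB sketch of the least-squares FOE step is given below, under assumed data: each residual-flow vector at a feature defines an epipolar line through that feature, and static features share one FOE, so stacking the line equations and solving in the least-squares sense recovers it. The feature positions and the synthetic FOE are hypothetical.

% Least-squares FOE from translational (residual) flow directions
foe_true = [0.40; -0.10];                       % synthetic FOE for this example
mu   = [0.1 -0.2  0.3  0.0  0.25];
nu   = [0.0  0.1 -0.3  0.2 -0.15];
dirs = [foe_true(1) - mu; foe_true(2) - nu];    % flow radiates from the FOE

M = [dirs(2,:)', -dirs(1,:)'];                  % line normal for each feature
b = dirs(2,:)'.*mu' - dirs(1,:)'.*nu';          % line offset for each feature

foe_est   = M \ b;                              % least-squares intersection of the lines
residuals = abs(M*foe_est - b);                 % a large residual flags a feature whose line
                                                % misses the common FOE, i.e., a candidate
                                                % independently moving object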


3. Among the techniques that utilize feature points, the approach related to this paper involves epipolar geometry [39, 112]. The purpose of this technique is to estimate relative motion based on a set of pixel locations. This relative motion can describe either motion of the camera between two images or the relative distance of two objects of the same size from a single image. The 3D scene reconstruction of a moving target can be determined from the epipolar geometry through the homography approach described in Chapter 3. For the case described in this chapter, a moving camera attached to a vehicle observes a known moving reference object along with an unknown moving target object. The goal is to employ a homography vision-based approach to estimate the relative pose and translation between the two objects. Therefore, a combination of vision and traditional sensors such as a global positioning system (GPS) and an inertial measurement unit (IMU) is required to facilitate this problem for a single camera configuration. For example, in the AAR case, GPS and IMU measurements are available for both the receiver and tanker aircraft. In general, a single moving camera alone is unable to reconstruct the 3D scene containing moving objects. This restriction is due to the loss of the epipolar constraint, where the plane formed by the position vectors relative to two camera positions in time to a point of interest and the translation vector is no longer valid. Techniques have been formulated to reconstruct moving objects viewed by a moving camera with various constraints [35, 113-116]. For instance, a homography-based method that segments background from moving objects and reconstructs the target's motion has been achieved [117]. Their reconstruction is done by computing a virtual camera which fixes the target's position in the image and decomposes the homography solution into motion of the camera and motion caused by the target. This decomposition is done using a planar translation constraint which restricts the target's motion to a ground plane. Similarly, Han

[115] proposed an algorithm that reconstructs 3D motion of a moving object using a factorization-based algorithm with the assumption that the object moves linearly with constant speeds. A nonlinear filtering method was used to solve the process model which involved both the kinematics and the image sequences of the target [118]. This technique requires knowledge of the height above the target, which was done by assuming the target traveled on the ground plane. This assumption allowed other sensors, such as GPS, to provide this information. The previous work of Mehta et al. [77] showed that a moving monocular camera system could estimate the Euclidean homographies for a moving target in reference to a known stationary object. The contribution of this chapter is to cast the formulation shown in Mehta et al. to a more general problem where both target and reference vehicles have general motion and are not restricted to planar translations. This proposed approach incorporates a known reference motion into the homography estimation through a transformation. Estimates of the relative motion between the target and reference vehicle are computed and related back through known transformations to the UAV. Relating this information with known measurements from GPS and IMU, the reconstruction of the target's motion can be achieved regardless of its dynamics; however, the target must remain in the image at all times. Although the formulation can be generalized for n cameras with independent position, orientation, translations, and rotation, this chapter describes the derivation of a single camera setup. Meanwhile, cues on both the target and reference objects are achieved through LED lights or markers placed in a known geometric pattern of the same size. These markers facilitate the feature detection and tracking process by placing known features that stand out from the surroundings, while the geometry and size of the pattern allows for the computation of the unknown scale factor that is customary to epipolar and homography-based approaches. This chapter builds on the theory developed in Chapters 3 and 5 while relying on the moving object detection algorithm to isolate moving objects within an image. Recall the flow of the overall block diagram shown in Figure 1-6. The process started by computing features in the

6. Once moving objects in the image are detected, the homography estimation algorithm proposed in this chapter is implemented for target state estimation.

7.2.1 System Description

The system described in this paper consists of three independently moving vehicles or objects containing 6-DOF motion. To describe the motion of these vehicles a Euclidean space is defined with five orthonormal coordinate frames. The first frame is an Earth-fixed inertial frame, denoted as E, which represents the global coordinate frame. The remaining four coordinate frames are moving frames attached to the vehicles. The first vehicle contains two coordinate frames, denoted as B and I, to represent the vehicle's body frame and camera frame, as described in Chapter 5 in Figure 5-1. This vehicle is referred to as the chase vehicle and is instrumented with an on-board camera and GPS/IMU sensors for position and orientation. The second vehicle, denoted as F, is considered a reference vehicle that also contains GPS/IMU sensors and provides its states to the chase vehicle through a communication link. Lastly, the third vehicle, denoted as T, is the target vehicle of interest in which unknown state information is to be estimated. In addition, a fictitious coordinate frame will be used to facilitate the estimation process and is defined as the virtual coordinate system, V. The coordinates of this system are related through transformations containing both rotational and translational components. The rotational component is established using a sequence of Euler rotations in terms of the orientation angles to map one frame into another. Let the relative rotation matrices R_EB, R_BI, R_EF, R_EV, R_IV, R_FV, R_TV and R_ET denote the rotation from E to B, B to I, E to F, E to V, I to V, F to V, T to V, and E to T. Secondly, the translations are defined as T_EB, x_F, x_V, x_T, x_F,n, x_T,n, T_BI, x_IV, m_IF, m_IT, h_F,n, h_T,n, m_VF, m_VT, h_VF,n, and h_VT,n, which denote the respective translations from E to B, E to F, E to V, E to T, E to the nth feature point on the reference vehicle and target vehicles all expressed in E, B to I expressed in B, I to V, I to F, I to T, I to the nth feature point on the reference and target vehicles expressed in I, V to F, V

7-1 for a camera onboard a UAV while the vectors relating the feature points to both the real and virtual cameras are depicted in Figure 7-2. The estimated quantities computed from the vision algorithm are defined as R_TB and x_TB, which are the relative rotation and translation from T to B expressed in B.

Figure 7-1. System vector description

The camera is modeled through a transformation that maps 3-dimensional feature points onto a 2-dimensional image plane as described in Chapter 3. This transformation is a geometric relationship between the camera properties and the position of a feature point. The image plane coordinates are computed based on a tangent relationship from the components of h_n. The camera relationship used in this chapter is referred to as the continuous pinhole camera model and is given in Equations 3 and 3 for a zero lens offset, where f is the focal length of the camera and h_x,n, h_y,n, h_z,n are the (x, y, z) components of the nth feature point. This pinhole model is a continuous mapping that can be further extended to characterize properties of a physical camera. Some common additions to this model include skewness, radial

Figure 7-2. Moving target vector description relative to A) camera I and B) virtual camera V

3. Each extension to the model adds another parameter to know for the estimation problem and each can introduce uncertainty and large errors in the estimation result. Therefore, this chapter will only consider the field of view constraint and leave the nonlinear terms and the effects on estimation for future work. Recall the field of view constraints given in Chapter 3. These constraints can be represented as lower and upper bounds in the image plane and are dependent on the half angles (γ_h, γ_v) which are unique to each camera. Mathematically, these bounds are shown in Equations 7 and 7 for the horizontal and vertical directions.
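The projection and field-of-view check can be sketched in a few lines of MATLAB, as below. The axis convention, the specific half-angle values, and the feature position are assumptions made for illustration; the half-angles match the example camera used later in the simulations.

% Pinhole projection with a field-of-view check (f = 1, zero lens offset assumed)
f       = 1.0;
gamma_h = 32*pi/180;   gamma_v = 28*pi/180;    % horizontal/vertical half-angles
h       = [5; -2; 40];                         % feature position [hx; hy; hz] in the camera frame

mu = f*h(1)/h(3);                              % image coordinates from the tangent
nu = f*h(2)/h(3);                              % (pinhole) relationship

in_fov = (h(3) > 0) && abs(mu) <= f*tan(gamma_h) && abs(nu) <= f*tan(gamma_v);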

[102]. The same constraint holds for the image coordinates as well but also introduces an unknown scale factor. Employing this constraint, estimates of relative motion can be acquired for both camera-in-hand and fixed camera configurations. This dissertation deals with the camera-in-hand configuration while assuming a perfect feature point detection and tracking algorithm. This assumption enables the performance of the vision-based state estimation to be tested before introducing measurement errors and noise. The homography constraint requires a few assumptions based on the quantity and the structure of the feature points. The algorithm first requires a minimum of four planar feature points to describe each vehicle. This requirement enables a unique solution to the homography equation based on the number of unknown quantities. The reference vehicle will have a minimum of four pixel values in each image which will be defined as p_F,n = [μ_F,n; ν_F,n] for all n feature points. Likewise, the target vehicle will have four pixel values and will be defined as p_T,n = [μ_T,n; ν_T,n] for all n feature points. This array of feature point positions is computed at 30 Hz, which is typical for standard cameras, and the frame count is denoted by i. The final requirement is a known distance for both the reference and target vehicle. One distance represents the position vector to a feature on the reference vehicle in Euclidean space relative to the local frame F and the second distance represents the position vector to a feature on the target vehicle in Euclidean space relative to the local frame T. In addition, the length of these vectors also must be equal, which allows the unknown scale factor to be determined. The vector describing the reference feature point will be denoted as s_F expressed in F, while the vector describing the target feature point is referred to as s_T expressed in T. These feature point position vectors are also illustrated in Figure 7-2. The feature points are first represented by position vectors relative to the camera frame, I. The expressions for both the reference and target feature points are given in Equations 7 and 7. These vector components are then used to compute the image coordinates given in Equations 7 and 7. The computation in Equation 7 requires information regarding the target, which is done solely to produce image measurements that normally would be obtained from the sensor. Remaining computations, regarding the homography, will only use sensor

7 and 7 are the true rotation matrices from F to B and T to B, respectively, and are shown in Equations 7 and 7 [77]. In this case, both the reference and target vehicles are in motion and are being viewed by a moving camera. Therefore, the next step is to transform the camera to a virtual configuration that observes the reference vehicle motionless in the image over two frames. In other words, this approach computes a Euclidean transformation that maps the camera's states at i-1 to a virtual camera that maintains the relative position and orientation between frames to fix the feature points of the reference vehicle. This transformation is done by making use of the previous image frame and state information at i-1 from both the camera and the reference vehicle. After the virtual camera is established the homography equations can be employed for state estimation. To compute the location and pose of the virtual camera at i, the relative position and orientation from I to F at i-1 is required. This relative motion is computed through known measurements from GPS/IMU and the expressions are shown in Equations 7 and 7 for translation and rotation at i-1, respectively.

7 and 7 for the current frame i. 3 and 3. The expressions for the new vectors h_VF,n and h_VT,n in terms of the virtual camera are given in Equations 7 and 7 for the reference and target vehicles. Equations 7 and 7 are one way to compute image coordinates for the virtual camera, but there are unknown terms in Equation 7 that aren't measurable or computable in this case. Therefore, an alternative method must be used to compute image values of the target in the virtual camera. Using the position and orientation of the virtual camera, as given in Equations 7 and 7, the relative motion is computed from camera I to camera V while using epipolar geometry to compute the new pixel locations. This relative camera motion is given in Equations 7 and 7 where the translation is expressed in I.

7 and 7, which relies on the relative motion remaining constant to maintain the reference stationary in the image. 7. Likewise, the time-varying position of a feature point on the target vehicle expressed in V is given in Equation 7. 7 and 7 and are relative to the virtual camera frame.

7, 7 and 7, which describe the relative motion between the reference and target objects. 7 and the reference vehicle location is known through GPS along with the feature point locations; therefore, a projected distance can be computed that scales the depth of the scene. To compute this distance the normal vector, n, that defines the plane on which the reference feature points lie is required and can be computed from known information. Ultimately, the projective distance can be obtained and is defined in Equation 7 through the use of the reference position. Substituting Equation 7 into Equation 7 results in an intermediate expression for the Euclidean homography and is shown in Equation 7,

    h_VT,n = ( R + (x_h / D) n^T ) h_VF,n    (7)

To facilitate the subsequent development, the normalized Euclidean coordinates are used and defined in Equations 7 and 7.

7, 7, and 7, the normalized Euclidean homography is established which relates the translation and rotation between coordinate frames F and T. This homography expression is shown in Equation 7 in terms of the normalized Euclidean coordinates,

    m_VT,n = a H m_VF,n    (7)

In Equation 7,

    H = R + (x_h / D) n^T    (7)

The Euclidean homography can now be expressed in terms of image coordinates or pixel values through the ideal pin-hole camera model given in Equations 3 and 3. This is done by first rewriting the camera model into matrix form, which is referred to as the camera calibration matrix, K. Substituting the camera mapping into Equation 7 and using the camera calibration matrix, K, the homography in terms of pixel coordinates is obtained and given in Equation 7. This final expression relates the rotation and translation of the two vehicles F and T in terms of their image coordinates. Therefore, to obtain a solution from this homography expression both vehicles need to be viewable in the image frame.

    p_VT,n = a G p_VF,n    (7)

The matrix G(t) is denoted as a projective homography in Equation 7, which is a set of equations that can be solved up to a scale factor using a linear least-squares approach. Once the components of the homography matrix are estimated, the matrix needs to be decomposed into translational and rotational components to obtain x_h and R. This decomposition is accomplished using techniques such as singular value decomposition and generates four possible solutions [119, 120]. To determine a unique solution some physical characteristics of the problem

7 and 7 to obtain x. Secondly, x is then divided by a to scale the depth ratio, resulting in the final x expressed in I. This result in conjunction with R is then used in Equation 7 to solve for m_VT. The next step is to compute the relative translation from I to V, which is given in Equation 7. Equations 7 and 7 represent the relative motion between the camera vehicle and the target vehicle. This information is valuable for the control tasks described earlier involving both tracking and homing applications. The next section will implement this algorithm in simulation to verify the state estimator for the noise-free case.
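To make the linear least-squares step concrete, the MATLAB sketch below stacks the standard two equations per correspondence and extracts the projective homography G, up to scale, from the null space of the stacked system. It is an illustration under assumed point values, not the dissertation's implementation.

% Estimate the projective homography G (up to scale) from four correspondences
p1 = [ 0.10 -0.20  0.30  0.05;        % [mu; nu] in the virtual-camera image
       0.05  0.15 -0.25  0.30];
p2 = [ 0.12 -0.18  0.33  0.08;        % corresponding [mu; nu] in the current image
       0.07  0.14 -0.22  0.33];

A = [];
for k = 1:size(p1,2)
    x = [p1(:,k); 1];  u = p2(1,k);  v = p2(2,k);
    A = [A;
         x',         zeros(1,3), -u*x';
         zeros(1,3), x',         -v*x'];
end

[~,~,V] = svd(A);                     % null vector of A gives the entries of G
G = reshape(V(:,end), 3, 3)';

% With a calibration matrix K, the Euclidean homography follows as H = inv(K)*G*K,
% which can then be decomposed (e.g., via SVD) into R and x_h/D as described above.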


[121] that employs a Hidden Markov Model to predict the motion of moving objects. The benefits in using a Hidden Markov Model include a time dependence framework incorporated into the probabilistic model as well as the ability to handle stochastic processes. The underlying concept of a Hidden Markov Model describes the probability of a process to sequentially go from one state to another. This sequential property provides the necessary framework for time dependence modeling, which is an attractive approach for the applications considered, where the time history data is a critical piece of information included in the modeling. 8. Therefore, the velocity and position are updated through Equations 8 and 8. Although this model is limited, it describes a foundation for modeling target motion and covers the basic constant-velocity model.

8 and is characterized by a random vector, w(t), and is scaled by a constant, r. The velocity corresponding to this acceleration is described in Equation 8. This model attempts to capture the stochastic behaviors by utilizing a probabilistic distribution function. Equation 8 can be modified to incorporate some dependence on the previous acceleration value. This dependence is achieved by weighting the previous acceleration in the model and is shown in Equation 8. The benefit to this type of model, as opposed to Equation 8, requires some knowledge of the target; namely, that the target cannot achieve large abrupt changes in acceleration. The resulting velocity expression for this model is given in Equation 8. 8 for the ith target and N image frames. The velocity profile is computed using a backwards difference method and is given in Equation 8. 8, is obtained from the velocity profile given in Equation 8. The same backwards difference method is used to compute this

8 .Thisaccelerationtimehistoryiscomputedimplicitlythroughthepositionestimatesobtainedfromthehomographyalgorithm 8 and 8 providetheinitialmotionstatedescriptionthatpropagatestheMarkovtransitionprobabilityfunction.TheformoftheMarkovtransitionprobabilityfunctionisassumedtobeaGaussiandensityfunctionthatonlyrequirestwoparametersforitsrepresentation.TheparametersneededforthisfunctionincludethemeanandvariancevectorsfortheaccelerationprolegiveninEquation 8 .Note,duringthischapter(x)isthemeanoperatorandnottheverticalcomponentintheimageplane.Likewise,s2(x)isreferredtoasthevarianceoperator. 8 ,wheretheargumentsconsistofthemeanandvariancepertainingtotheestimatedacceleration. 8 and 8 forthetransitionfunction.Thefunctionsfandfsarechosenbasedonthedesiredweightingofthetimehistoryandcansimplybeaweightedlinearcombinationofthearguments.Theseinitialstatisticalparametersareusedinthepredictionstepandupdatedonceanewmeasurementisobtained.

PAGE 115
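A short MATLAB sketch of the backwards-difference computation and the resulting acceleration statistics is given below. The position history is a hypothetical example; in practice it would come from the homography-based position estimates at the 30 Hz frame rate.

% Velocity and acceleration profiles of a target via backwards differences
dt = 1/30;                               % frame period for a 30 Hz camera
t  = 0:dt:2;
x  = [5*t; 2*t.^2; -0.5*t];              % assumed position history [x; y; z]

v = diff(x, 1, 2) / dt;                  % backwards-difference velocity profile
a = diff(v, 1, 2) / dt;                  % backwards-difference acceleration profile

a_mean = mean(a, 2);                     % mean and variance of each acceleration
a_var  = var(a, 0, 2);                   % component, used as the parameters of the
                                         % Gaussian transition probability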

8 as a three-dimensional Gaussian density function and is uniquely determined by the mean and variance,

    P(a_i(t)) ∝ exp( -(a_i(t) - μ(a_i(t)))^2 / (2 s^2(a_i(t))) )    (8)

8 and 8, the predictive probability for object i at time t+k is given as Equation 8. This framework enables the flexibility of computing the predicted estimates at any desired time in the future with the notion that further out in time the probability diminishes. 8 and 8 for the entire time interval. 8 and 8

8 and 8. Lastly, the probability functions for velocity and position are used to compute the predictive probabilities for object i that are given in Equations 8 and 8 for velocity and position, respectively. 8 is the probability that target i is located in position p(x, y, z). So the overall process is an iterative method that uses the motion models, given in Section 8.2.1, to provide guesses for position and velocity in an attempt to maximize the probability functions given in Equations 8 and 8. The position that maximizes Equation 8 is the most likely location of the target at t+k with a known probability.
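The scoring of a candidate motion guess against the Gaussian transition probability can be sketched in MATLAB as below. The statistics and candidate values are assumed, and the per-axis densities are simply multiplied under an independence assumption; the iterative search described above would repeat this evaluation over many candidates and keep the maximizer.

% Evaluate a Gaussian transition probability for a candidate acceleration
a_mean = [0.2; -0.1; 0.0];                       % example acceleration statistics
a_var  = [0.05; 0.04; 0.02];
a_cand = [0.25; -0.05; 0.01];                    % candidate acceleration to score

p_axis = exp(-(a_cand - a_mean).^2 ./ (2*a_var)) ./ sqrt(2*pi*a_var);
p_a    = prod(p_axis);                           % joint probability, axes treated as independent

% The candidate (and the velocity/position it implies through the motion model)
% with the largest probability gives the most likely target location at t + k.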

7. Effectively, these quantities are the error signals used for control to track the moving camera toward a desired location based on the motion of the target. The framework presented here will use aircraft and UAV navigation schemes for the aerial missions described in Chapter 1. Therefore, the control design described in this chapter focuses on the homing mission to facilitate the AAR problem, which involves tracking the position states computed from the homography. Various types of guidance controllers can be implemented for these types of tasks once the relative position and orientation are known. Depending on the control objectives and how fast the dynamics of the moving target are, low pass filtering or a low gain controller may be required to avoid high rate commands to the aircraft. In the AAR problem, the success of the docking controller will directly rely on several components. The first component is the accuracy of the estimated target location, which during AAR needs to be precise. Secondly, the dynamics of the drogue are stochastic. This causes the modeling task to be impractical in replicating real life, so the controller is limited to the models considered in the design. In addition, the drogue's dynamics may not be dynamically feasible for the aircraft to track, which may further reduce performance. Lastly, the controller ideally should make position maneuvers in stages by considering the altitude as one stage, the lateral position as another stage, and the depth position as the final stage. In close proximity, the controller should implement only small maneuvers to help maintain the vehicles in the FOV. [15].

[110]. The standard design approach was used by considering the longitudinal and lateral states separately as in typical waypoint control schemes. This approach separated the control into three segments: 1) Altitude Control, 2) Heading Control and 3) Depth Control.

Figure 9-1. Altitude hold block diagram

9-1. The first portion of this system is described as the inner-loop where pitch and pitch rate are used in feedback to stabilize and track a pitch command. Meanwhile, the second portion is referred to as the outer-loop which generates pitch commands for the inner-loop based on the current altitude error. The inner-loop design enables the tracking of a pitch command through proportional control. This pitch command in turn will affect altitude through the changes in forces on the horizontal tail from the elevator position. The two signals used for this inner-loop are pitch and pitch rate. The pitch rate feedback helps with short period damping and allows for rate variations in the transient response. A lead compensator was designed in Stevens et al. [110] to raise the

9-1. This structure will provide good disturbance rejection during turbulent conditions. In addition, bounds were placed on the pitch command to alleviate any aggressive maneuvers during the refueling process. 9-2.

Figure 9-2. Heading hold block diagram

The inner-loop component of Figure 9-2 deals with roll tracking. The feedback signals include both roll and roll rate through proportional control to command a change in aileron position. The inner-loop

[110]. Consequently, the turn is smoother and contains fewer oscillations. Tracking heading is not sufficient to track the lateral position with the level of accuracy needed for the refueling task. The final loop was added to account for any lateral deviation accumulated over time due to the delay in heading from position. This delay is mainly due to the time delay associated with sending a roll command and producing a heading change. Therefore, this loop was added to generate more roll for compensation. The loop commanded a change in aileron based on the error in lateral position. This deviation, referred to as Δy, was computed based on two successive target locations provided by the estimator. The current and previous (x, y) positions of the target were used to compute a line in space to provide a reference of its motion. The perpendicular distance from the vehicle's position to this line was considered the magnitude of the lateral command. In addition, the sign of the command was needed to assign the correct direction. This direction was determined from the relative y position, expressed in the body-fixed frame, that was found during estimation. Once the lateral deviation was determined, that signal was passed through a PI structure, as shown in Figure 9-2. The terms corresponding to the proportional gain, k_yp, and integrator gain, k_yi, were then summed and added to compute the final roll command. The complete expression for the roll command is shown in Equation 9.
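A minimal MATLAB sketch of this lateral-deviation PI term is shown below. The target and aircraft positions and the sign convention are assumed for illustration; the gains are the values quoted later for the F-16 example.

% Lateral deviation from the target's motion line and the PI roll-command term
p_prev = [1000;  50];   p_curr = [1100;  55];    % successive target (x, y) positions
p_ac   = [1050;  70];                            % aircraft (x, y) position

d    = (p_curr - p_prev) / norm(p_curr - p_prev);% unit vector along the target's path
r    = p_ac - p_prev;
perp = r - (r'*d)*d;                             % perpendicular offset vector
dy   = -sign(perp(2)) * norm(perp);              % signed deviation (sign convention assumed)

k_yp = 0.5;  k_yi = 0.025;  dt = 1/30;           % gains quoted in the later example
dy_int = 0;                                      % running integral, kept across updates in practice
dy_int = dy_int + dy*dt;

dphi_cmd = k_yp*dy + k_yi*dy_int;                % added to the heading-based roll command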


8 provides a method to estimate targets in Euclidean space when features do exit the image. This method works well for short periods of time after the target has left; however, the trust in the predicted value degrades tremendously as time increases. Consequently, when a feature leaves the image the controller can rely on the predicted estimates to steer the aircraft initially but may resort to alternative approaches beyond a specified time. As a last resort, the controller can command the aircraft to slow down and regain a broader perspective of the scene to recapture the target.

[110]. A baseline controller is implemented that allows the vehicle to follow waypoints based entirely on feedback from inertial sensors. Images are obtained from a set of cameras mounted on the aircraft. These cameras include a stationary camera mounted at the nose and pointing along the nose, a translating camera under the centerline that moves from the right wing to the left wing, and a pitching camera mounted under the center of gravity. The parameters for these cameras are given in Table 10-1 in values relative to the aircraft frame and functions of time given as t in seconds.

Table 10-1. States of the cameras
camera | position (ft) | orientation (deg)
1 | 24, 0, 0 | 0, 90, 0
2 | -10, 15-3t, 0 | 0, 45, 0
3 | 0, 0, 3 | 0, 45-9t, 0

The camera parameters are chosen as similar to an existing camera that has been flight tested [111]. The focal length is normalized so f = 1. Also, the field of view for this model correlates to angles of γ_h = 32 deg and γ_v = 28 deg. The resulting limits on image coordinates are given in Table 10-2.

Table 10-2. Limits on image coordinates
coordinate | minimum | maximum
μ | -0.62 | 0.62
ν | -0.53 | 0.53

A virtual environment is established with some characteristics similar to an urban environment. This environment includes several buildings along with a moving car and a

10-3, is associated with each feature for direct identification in the camera images.

Table 10-3. States of the feature points
feature point | north (ft) | east (ft) | altitude (ft)
1 | 3500 | 200 | -1500
2 | 1000+200t | 500 | -500
3 | 6000 | 200cos(2p | 

The flight path through this environment is shown in Figure 10-1 along with the features. The aircraft initially flies straight and level toward the North but then turns somewhat towards the East and begins to descend from a dive maneuver.

Figure 10-1. Virtual Environment for Example 1: A) 3D View and B) Top View

Images are taken at several points throughout the flight as indicated in Figure 10-1 by markers along the trajectory. The states of the aircraft at these instances are given in Table 10-4. The image plane coordinates (μ, ν) are plotted in Figure 10-2 for the three cameras at t = 2 sec. This computation is accomplished by using Equation 5 in conjunction with Equations 3 and 3 while applying the field of view constraint shown in Equations 3 and 3. All three cameras contain some portion of the environment along with distinct views of the feature points of interest. For example, camera 1 contains a forward-looking view of a stationary

point on the corner of a building as well as the moving helicopter. Meanwhile, cameras 2 and 3 observe a top view of the moving ground vehicle traveling forward down a road. These image measurements provide a significant amount of data and allow for more advanced algorithms for state estimation and reconstruction.

Table 10-4. Aircraft states
Time (s) | North (ft) | East (ft) | Down (ft) | u (ft/s) | v (ft/s) | w (ft/s)
2 | 1196.9 | 0.44 | -2174.8 | 573.52 | 56.79 | -126.46
4 | 2112.7 | 143.04 | -1645.4 | 527.37 | -54.94 | 17.77
6 | 2989.8 | 353.63 | -1100.7 | 528.30 | 4.26 | 45.57

Time (s) | φ (deg) | θ (deg) | ψ (deg) | p (deg/s) | q (deg/s) | r (deg/s)
2 | -13.92 | -22.43 | -1.81 | -13.56 | -36.82 | 1.38
4 | -39.21 | -37.90 | 22.79 | 32.31 | 28.41 | 0.04
6 | 6.98 | -14.85 | 11.93 | 7.63 | -9.34 | -1.15

Figure 10-2. Feature point measurements at t = 2 sec for A) camera 1, B) camera 2, and C) camera 3

Figure 10-3 depicts the optic flow computed for the same data set shown in Figure 10-2. This image measurement gives a sense of relative motion in magnitude and direction caused by camera and feature point motion. The expressions required to compute optic flow consisted of Equations 5, 5, 3, 3, 3, 3, 3 and 3. In this example, the optic flow has many components contributing to the final value. For instance, the aircraft's velocity and angular rates contribute a large portion of the optic flow because of their large magnitudes. In addition, the smaller components in this example are caused from vehicle and camera motion which are smaller in magnitude but have a significant effect on direction. Comparing cameras 1 and 2, there

Figure 10-3. Optic flow measurements at t = 2 sec for A) camera 1, B) camera 2, and C) camera 3

A summary of the resulting image plane quantities, position and velocity, is given in Table 10-5 for the feature points of interest as listed in Table 10-3. The table is organized by the time at which the image was taken, which camera took the image, and which feature point is observed. This type of data enables autonomous vehicles to gain awareness of their surroundings for more advanced applications involving guidance, navigation and control.

Table 10-5. Image coordinates of feature points
Time (s) | Camera | Feature Point | μ | ν | μ̇ | ν̇
2 | 1 | 1 | 0.157 | 0.162 | 0.610 | 0.044
2 | 1 | 3 | 0.051 | 0.267 | 0.563 | -0.012
2 | 2 | 2 | -0.308 | 0.075 | 0.464 | -0.254
2 | 3 | 2 | 0.011 | 0.077 | 0.583 | -0.235
4 | 2 | 2 | -0.279 | -0.243 | -0.823 | 0.479
4 | 3 | 2 | 0.365 | -0.248 | -0.701 | 0.603
6 | 1 | 3 | 0.265 | -0.084 | 0.267 | -0.015

10.2.1 Scenario

Feature point uncertainty is demonstrated in this section by extending the previous example. This simulation will examine the uncertainty effects on vision processing algorithms using simulated feature points and perturbed camera intrinsic parameters.

10-4 along with a pair of points indicating the locations at which images will be captured. The aircraft is initially straight and level then translates forward while rolling 4.0 deg and yawing 1.5 deg at the final location.

Figure 10-4. Virtual environment of obstacles (solid circles) and imaging locations (open circles): A) 3D view and B) top view

A single camera is simulated at the center of gravity of the aircraft with line of sight aligned to the nose of the aircraft. The intrinsic parameters are chosen such that f_o = 1.0 and d_o = 0.0 for the nominal values. The images for the nominal camera associated with the scenario in Figure 10-4 are presented in Figure 10-5 to show the variation between frames. The vision-based feedback is computed for a set of perturbed cameras. These perturbations range as δf ∈ [-0.2, 0.2] and δd ∈ [-0.02, 0.02]. Obviously the feature points in Figure 10-5 will vary as the camera parameters are perturbed. The amount of variation will depend on the feature

Figure 10-5. Feature points for A) initial and B) final images

point, as noted in Equations 4 and 4, but the effect can be normalized. The variation in feature point given nominal values of μ_o = ν_o = 1 is shown in Figure 10-6 for variation in both focal length and radial distortion. This surface can be scaled accordingly to consider the variation at other feature points. The perturbed surface shown in Figure 10-6 is propagated through three main image processing techniques for analysis.

Figure 10-6. Uncertainty in feature point

10-6 to focal length and radial distortion. A representative comparison of optic flow for the nominal camera and a set of perturbed cameras is shown in Figure 10-7.

Figure 10-7. Optical flow for nominal (black) and perturbed (red) cameras for A) f = 1.1 and d = 0, B) f = 1.0 and d = 0.01, and C) f = 1.1 and d = 0.01

10-7 indicate several effects of camera perturbations noted in Equations 4 and 4. The perturbations to focal length scale the feature points so the magnitude of optic flow is uniformly scaled. The perturbations to radial distortion have a larger effect as the feature point moves away from the center of the image so the optic flow vectors are altered in direction. The combination of perturbations clearly changes the optic flow in both magnitude and direction and demonstrates the feedback variations that can result from camera variations. The optic flow is computed for images captured by each of the perturbed cameras. The change in optic flow for the perturbed cameras as compared to the nominal camera is represented as δJ and is bounded in magnitude, as derived in Equation 4, by ΔJ. The greatest value of δJ presented by these camera perturbations is compared to the upper bound in Table 10-6. These numbers indicate the variations in optic flow are indeed bounded by the theoretical bound derived in Chapter 4 and indicate the level of flow variations induced from the variations in camera parameters.

Table 10-6. Effects of camera perturbations on optic flow
Perturbation Set | Analyze only with δf
||ΔJ|| | ||δJ|| | ||ΔJ|| | ||δJ|| | ||ΔJ||
0.0476 | 0.0040 | 0.0040 | 0.0496 | 0.0543
0.0476 | 0.0020 | 0.0040 | 0.0252 | 0.0543
0.0476 | 0.0020 | 0.0040 | 0.0264 | 0.0543
0.0476 | 0.0040 | 0.0040 | 0.0543 | 0.0543

10-8 shows the quality of the estimation. Essentially, the epipolar geometry requires a feature point in one image to lie along the epipolar line. This epipolar line is constructed by the intersection between the plane formed by the epipolar constraint and the image plane at the last measurement. The data in Figure 10-8 show the features in the second image do indeed lie exactly on the epipolar lines.

Figure 10-8. Epipolar lines between two image frames: A) initial frame and B) final frame with overlayed epipolar lines for nominal camera

The introduction of uncertainty into the epipolar constraint will cause variations in the essential matrix which will also propagate through the computation of the epipolar line. These variations in the epipolar line are visual clues of the quality of the estimate in the essential matrix. These variations can occur as changes in the slope and the location of the epipolar line. Figure 10-9 illustrates the epipolar variations due to perturbations of δf = 0.1 and δd = 0.01 to the camera parameters. The feature points with uncertainty and the corresponding epipolar lines were plotted along with the nominal case to illustrate the variations. The key point in this figure is the small variations in the slope of the epipolar lines and the significant variations in feature

Figure 10-9. Uncertainty results for epipolar geometry: A) initial frame and B) final frame with overlayed epipolar lines for cameras with f = 1.0 and d = 0.0 (black) and f = 1.1 and d = 0.01 (red)

The essential matrix is computed for the images taken using a set of camera models. Each model is perturbed from the nominal condition using the variations in Figure 10-6. The change in estimated states between nominal and perturbed cameras is given by δq over the uncertainty range and is bounded, as derived in Equation 4, by Δq. The value of δq for a specific perturbation is shown in comparison to the upper bound in Table 10-7, which also indicates the variation in entries of the essential matrix which propagate to the camera states.

Table 10-7. Effects of camera perturbations on epipolar geometry
Perturbation Set | Analyze only with δf
||Δq|| | ||δq|| | ||Δq|| | ||δq|| | ||Δq||
293.14 | 4.45 | 4.45 | 288.75 | 297.34
293.14 | 2.19 | 2.19 | 288.75 | 297.34
293.14 | 2.11 | 2.19 | 288.75 | 297.34
293.14 | 4.15 | 4.45 | 288.75 | 297.34
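A minimal MATLAB sketch of how a calibration perturbation propagates into the epipolar-line computation is given below. The relative motion, calibration values, and feature location are assumed for illustration; the point is that perturbing the focal length changes the fundamental matrix and therefore the slope and offset of the epipolar lines, which is the behavior observed in Figure 10-9.

% Propagate a focal-length perturbation through the epipolar-line computation
skew = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
R = eye(3);  t = [1; 0; 0.2];                   % assumed relative camera motion
E = skew(t)*R;                                  % essential matrix (calibrated geometry)

K_nom  = diag([1.0 1.0 1]);                     % nominal calibration, f = 1.0
K_pert = diag([1.1 1.1 1]);                     % perturbed calibration, f = 1.1

F_nom  = inv(K_nom)'  * E * inv(K_nom);         % fundamental matrices map pixels to lines
F_pert = inv(K_pert)' * E * inv(K_pert);

p1 = [0.2; 0.1; 1];                             % a feature in the first image
l_nom  = F_nom  * p1;                           % epipolar lines [a; b; c] in the second image
l_pert = F_pert * p1;

slope_change = atan2(-l_pert(1), l_pert(2)) - atan2(-l_nom(1), l_nom(2));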

10-10 to indicate that all errors were less than 10^-6.

Figure 10-10. Nominal estimation using structure from motion

The depths are also estimated using structure from motion to analyze images from the perturbed cameras. A representative set of these estimates is shown in Figure 10-11 as having clear errors. An interesting feature of the results is the dependence on sign of the perturbation to focal length. Essentially, the solution tends to estimate a depth larger than actual when using a positive perturbation and a depth smaller than actual when using a negative perturbation. Such a relationship is a direct result of the scaling effect that focal length has on the feature points. Estimates are computed for each of the perturbed cameras and compared to the nominal estimate. The worst-case errors in estimation are compared to the theoretical bound, given in Equation 4, to these errors. These numbers shown in Table 10-8 indicate the variation in structure from motion depends on the sign of the perturbation. The approach is actually seen to be less sensitive to positive perturbations, which cause a larger estimate in depth, than to negative perturbations. Also, the theoretical bound was greater than, or equal to, the error caused by each camera perturbation.

Figure 10-11. Estimation using structure from motion for nominal (black) and perturbed (red) cameras with A) f = 1.1 and d = 0, B) f = 1.0 and d = 0.01, and C) f = 1.1 and d = 0.01

Table 10-8. Effects of camera perturbations on structure from motion
Perturbation Set | Analyze only with δf
||Δz|| | ||δz|| | ||Δz|| | ||δz|| | ||Δz||
4679.8 | 75.02 | 75.02 | 4903.5 | 4903.5
4679.8 | 36.90 | 75.02 | 1076.6 | 4903.5
4679.8 | 35.73 | 75.02 | 498.76 | 4903.5
4679.8 | 70.34 | 75.02 | 1092.5 | 4903.5

1 involving a police pursuit is demonstrated through this simulation. The setup consisted of three vehicles: a UAV flying above with a mounted camera, electronics and communication; a reference ground vehicle which is considered the police pursuit car; and a target vehicle describing the suspect's vehicle. The goal of this mission is for the UAV to track both vehicles in the image, while receiving position updates from the reference vehicle, and estimate the target's location using the proposed estimation algorithm. The camera setup considered in this problem consists of a single downward-pointing camera attached to the UAV with fixed position and orientation. While in flight the camera measures and tracks feature points on both the target vehicle and the reference vehicle for use in the estimation algorithm. This simulation assumes perfect camera calibration, feature point extraction, and

7 the geometry of the feature points is predescribed and a known distance is provided for each vehicle. A further description of this assumption is given in Section 7.2.2. Future work will examine more realistic aspects of the camera system to reproduce a more practical scenario and try to alleviate the limitations imposed on the feature points.

10-12, for illustration. The initial frame for this simulation is located at the aircraft's position when the simulation starts. The velocities of the ground vehicles were scaled up to the aircraft's velocity which resulted in large distances but also helped to maintain the vehicles in the image.

Figure 10-12. Vehicle trajectories for example 3: A) 3D view and B) top view

The position and orientation states of the three vehicles are plotted in Figures 10-13 through 10-18 and all are represented in the inertial frame, E. The positions indicate that all three vehicle

Figure 10-13. Position states of the UAV with on-board camera: A) North, B) East, and C) Down

Figure 10-14. Attitude states of the UAV with on-board camera: A) Roll, B) Pitch, and C) Yaw

Figure 10-15. Position states of the reference vehicle (pursuit vehicle): A) North, B) East, and C) Down

Figure 10-16. Attitude states of the reference vehicle (pursuit vehicle): A) Roll, B) Pitch, and C) Yaw

Figure 10-17. Position states of the target vehicle (chase vehicle): A) North, B) East, and C) Down

Figure 10-18. Attitude states of the target vehicle (chase vehicle): A) Roll, B) Pitch, and C) Yaw

motion from the UAV to the target of interest. The norm error of this motion is depicted in Figure 10-19. These results indicate that with synthetic images and perfect tracking of the vehicles nearly perfect motion can be extracted. Once noise in the image or tracking is introduced the estimates of the target deteriorate quickly even with minute noise. In addition, image artifacts such as interference and dropouts will also have an adverse effect on homography estimation.

Figure 10-19. Norm error for A) relative translation and B) relative rotation

Figures 10-20 and 10-21 show the relative translation and rotation decomposed into their respective components and expressed in the body frame, B. These components reveal the relative information needed for feedback to track or home in on the target of interest.

Figure 10-20. Relative position states: A) X, B) Y, and C) Z

Figure 10-21. Relative attitude states: A) Roll, B) Pitch, and C) Yaw

10-22 of the camera view depicting the vehicles and the surrounding scene. The red vehicle was designated as the reference whereas the grey vehicle was the target vehicle. The next step in this process is to implement an actual feature tracking algorithm on the synthetic images that follows the vehicles. This modification alone will degrade the homography results immensely due to the troublesome characteristics of a feature point tracker.

Figure 10-22. Virtual environment

1 described the motivation and the benefits of AAR, this section will demonstrate it by combining the control design given in Chapter 9 with the homography result in Chapter 7 to form a closed-loop visual servo control system. The vehicles involved in this simulation include a Receiver UAV instrumented with a single camera, a tanker aircraft also

5 with additional states such as V, α, β, the acceleration terms, Mach number, and dynamic pressure. Although the controller will not use all states, the assumption of full state feedback was made to allow all states accessible by the controller. The controller uses these states of the aircraft along with the estimated results to compute actuator commands around the specified trim condition.

9 is integrated and tuned for the nonlinear F-16 model to accomplish this simulation. It was assumed that full state feedback of the aircraft states was measurable, including position. The units used in this simulation are given in ft and deg, which means the gains determined in the control loops were also found based on these units. First, the pitch tracking for the altitude controller is considered. The inner-loop gains for this controller are given as k_θ = 3 and k_q = 2.5. The Bode diagram for pitch command to pitch angle is depicted in Figure 10-23 for the specified gains. This diagram reveals the damping

Figure 10-23. Inner-loop pitch to pitch command Bode plot

The step response for the pitch controller is given in Figure 10-24 and shows acceptable performance. The outer-loop control will now be designed using this controller to track altitude.

Figure 10-24. Pitch angle step response
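One update of the resulting inner/outer loop structure can be sketched in MATLAB as below. The inner-loop gains are the values quoted above; the outer-loop altitude gain, the command bound, the elevator sign convention, and the state values are assumptions for illustration only.

% One update of the altitude-hold loops (proportional inner pitch loop, outer altitude loop)
k_theta = 3.0;  k_q = 2.5;            % inner-loop gains from the text
k_h     = 0.05;                       % outer-loop altitude gain (assumed value)

h_cmd = 5000;  h = 4950;              % commanded and current altitude (ft)
theta = 2;     q = 0.5;               % current pitch (deg) and pitch rate (deg/s)

theta_cmd = k_h*(h_cmd - h);                    % outer loop: altitude error -> pitch command
theta_cmd = max(min(theta_cmd, 10), -10);       % bound the pitch command (limit assumed)

de_cmd = k_theta*(theta_cmd - theta) - k_q*q;   % inner loop: elevator command (sign convention assumed)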

10 and was designed in Stevens et al. [110]. A step response for this controller is illustrated in Figure 10-25 that shows a steady climb with no overshoot and a steady-state error of 2 ft. This response is realistic for an F-16 but not ideal for an autonomous refueling mission where tolerances are on the cm level. The altitude transition is slow due to the compensator but one may consider more aggressive maneuvers for missions such as target tracking that may require additional agility.

Figure 10-25. Altitude step response

The next stage that was tuned in the control design was the heading controller. The inner-loop gains were chosen to be k_φ = 5.7 and k_p = 1.6 for the roll tracker. The Bode diagram for this controller of roll command to roll angle is shown in Figure 10-26, which shows attenuation in the lower frequency range. This attenuation removes any high frequency response from the aircraft, which is desired during a refueling mission, especially in close proximity. Meanwhile, the coupling between lateral and longitudinal states during a turn was counteracted

Figure 10-26. Inner-loop roll to roll command Bode plot

The step response for this bank controller is illustrated in Figure 10-27. The tracking performance is acceptable based on a rise time of 0.25 sec, an overshoot of 6% and less than a 3% steady-state error. The outer-loop tuning for the heading controller consisted of first tuning the gain on heading error. A gain of k_ψ = 1.5 was chosen for this mission, which demonstrated acceptable performance. Figure 10-28 shows the heading response using this controller for a right turn. The response reveals a steady rise time, no overshoot, and a steady-state error of less than 2 deg. Finally, the loop pertaining to lateral deviation was tuned to k_yp = 0.5 and k_yi = 0.025, which produced reasonable tracking and steady error for lateral position. The final stage of the controller involves the axial position. This stage was designed to increase thrust based on a velocity command once the lateral and altitude states were aligned. A proportional gain was tuned based on velocity error to achieve a slow steady approach speed

Figure 10-27. Roll angle step response

Figure 10-28. Heading response

to the target. A gain of k_x = 3.5 was determined for this loop, which generates the desired approach. Lastly, to help limit the number of times the feature points exit the field of view, a limit was imposed on the pitch angle. This limit was enforced when the approach achieved a specified distance. For this example, the distance was set to within 75 ft in the axial position of the body-fixed frame, which was determined experimentally from the target's size.
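The axial approach stage and the close-proximity pitch limit can be sketched as below. The gain and the 75 ft threshold come from the text; the airspeed values, the pitch limit magnitude, and the thrust mapping are assumed for illustration.

% Axial approach stage: proportional thrust on velocity error, pitch limit near the target
k_x     = 3.5;                     % proportional gain on velocity error from the text
V_cmd   = 520;  V = 500;           % commanded and current airspeed (ft/s), assumed values
dthrust = k_x*(V_cmd - V);         % thrust increment for a slow, steady approach

x_err     = 60;                    % axial distance to the drogue (ft)
theta_cmd = 2;                     % current pitch command (deg), assumed
if x_err < 75                      % within 75 ft, bound pitch to keep features in the FOV
    theta_cmd = max(min(theta_cmd, 5), -5);    % limit magnitude assumed
end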

10-29 for position and Figure 10-30 for orientation and revealed a correct fit. This result demonstrates the functionality of the estimator with an accuracy on the order of 10^-9. This error was plotted in Figure 10-31 for both position and orientation.

Figure 10-29. Open-loop estimation of target's inertial position: A) North, B) East, and C) Altitude

Figure 10-30. Open-loop estimation of target's inertial attitude: A) Roll, B) Pitch, and C) Yaw

Furthermore, the closed-loop results for this simulation were plotted in Figures 10-32 and 10-34 for position and orientation of both the receiver aircraft and the target drogue relative to the earth-fixed frame. The tracking of this controller showed reasonable performance for the desired position and heading signals. The remaining orientation angles were not considered in feedback but estimated for the purpose of making sure the drogue's pitch and roll are within the desired values before docking. As seen in Figure 10-32, the receiver was able to track the gross motion of the drogue while having some difficulty tracking the precise motion.

Figure 10-31. Norm error for target state estimates: A) translation and B) rotation

Figure 10-32. Closed-loop target position tracking: A) North, B) East, and C) Altitude

The components of the position error between the receiver and drogue are shown in Figure 10-33 to illustrate the performance of the tracking controller. These plots depict the initial offset error decaying over time, which indicates the receiver's relative distance is decreasing. The altitude showed a quick climb response whereas the response in axial position was a slow steady approach, which was desired to limit large changes in altitude and angle of attack. The lateral position is stable for the time period but contains oscillations due to the roll to heading lag. The orientation angles shown in Figure 10-34 indicate the Euler angles for the body-fixed transformations corresponding to the body-fixed frame of the receiver and the body-fixed frame of the drogue. Recall, the only signal being tracked in the control design was heading. This selection allowed the aircraft to steer and maintain a flight trajectory similar to the drogue without aligning roll and pitch. The receiver should fly close to a trim condition rather than matching the full orientation of the drogue, as illustrated in Figure 10-34 for pitch angle.

Figure 10-33. Position tracking error: A) North, B) East, and C) Altitude

The error in heading is depicted in Figure 10-35, which shows acceptable tracking performance over the time interval.

Figure 10-34. Target attitude tracking: A) Roll, B) Pitch, and C) Yaw

Figure 10-35. Tracking error in heading angle

The results shown in these plots indicate that the tracking in the lateral position and altitude is nearly sufficient for the refueling task. The simulation reveals bounded errors in these

8 will help to aid the controller, or at least to help determine a region of where the features most likely have traveled.

7. To see what levels of variations exist in these results an uncertainty analysis was performed. Chapter 4 derived a method to compute worst-case bounds on state estimates from the homography approach using visual information. The technique described in Chapter 4 was used for this uncertainty analysis. The target estimates for absolute position and orientation along with upper and lower bounds were computed for this simulation and are shown in Figures 10-36 and 10-37. These

Figure 10-36. Target's inertial position with uncertainty bounds: A) North, B) East, and C) Altitude

Figure 10-37. Target's inertial attitude with uncertainty bounds: A) Roll, B) Pitch, and C) Yaw

The maximum uncertainties in target position relative to the earth-fixed frame are summarized in Table 10-9. Meanwhile, Table 10-10 contains the maximum uncertainties in target orientation. The three levels of uncertainty are included in these tables. This comparison helps to verify that the maximum state variation corresponds to the maximum camera parameter

Table 10-9. Maximum variations in position due to parametric uncertainty
uncertainty parameter | north (ft) | ...
4.10  20.54  10.53  14.40  15.09  30.82

Table 10-10. Maximum variations in attitude due to parametric uncertainty
uncertainty parameter | ...
0  0  4.48  2.29  7.94  3.48


8 into the refueling simulation will help the controller by providing state estimates when the target exits the field of view.

[1] Secretary of Defense, Unmanned Aircraft Systems Roadmap 2005-2030, website: http://uav.navair.navy.mil/roadmap05/roadmap.htm

[2] Grasmeyer, J.M., and Keennon, M.T., Development of the Black Widow Micro-Air Vehicle, 39th Aerospace Sciences Meeting and Exhibit, AIAA 2001-0127, Reno, NV, January 2001.

[3] Beard, R., Kingston, D., Quigley, M., Snyder, D., Christiansen, R., Johnson, W., Mclain, T., and Goodrich, M., Autonomous Vehicle Technologies for Small Fixed Wing UAVs, AIAA Journal of Aerospace Computing, Information, and Communication, Vol. 2, No. 1, January 2005, p. 92-108.

[4] Kingston, D., Beard, R., McLain, T., Larsen, M., and Ren, W., Autonomous Vehicle Technologies for Small Fixed Wing UAVs, AIAA 2nd Unmanned Unlimited Systems, Technologies, and Operations Aerospace, Land, and Sea Conference and Workshop and Exhibit, AIAA-2003-6559, San Diego, CA, September 2003.

[5] Frew, E., Observer Trajectory Generation for Target-Motion Estimation Using Monocular Vision, PhD Dissertation, Stanford University, August 2003.

[6] Sattigeri, R., Calise, A.J., Soo Kim, B., Volyanskyy, K., and Nakwan, K., 6-DOF Nonlinear Simulation of Vision-based Formation Flight, AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-2005-6002, San Francisco, CA, August 2005.

[7] Beard, R., Mclain, T., Nelson, D., and Kingston, D., Decentralized Cooperative Aerial Surveillance using Fixed-Wing Miniature UAVs, IEEE Proceedings: Special Issue on Multi-Robot Systems, Vol. 94, Issue 7, July 2006, pp. 1306-1324.

[8] Wu, A.D., Johnson, E.N., and Proctor, A.A., Vision-Aided Inertial Navigation for Flight Control, AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA 2005-5998, San Francisco, CA, August 2005.

[9] Ettinger, S.M., Nechyba, M.C., Ifju, P.G., and Waszak, M., Vision-Guided Flight Stability and Control for Micro Air Vehicles, IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, September/October 2002, pp. 2134-2140.

[10] Kehoe, J., Causey, R., Abdulrahim, M., and Lind, R., Waypoint Navigation for a Micro Air Vehicle using Vision-Based Attitude Estimation, AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA-2005-6400, San Francisco, CA, August 2005.

[11] Abdulrahim, M., and Lind, R., Control and Simulation of a Multi-Role Morphing Micro Air Vehicle, AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-2005-6481, San Francisco, CA, August 2005.

[12] Abdulrahim, M., Garcia, G., Ivey, G.F., and Lind, R., Flight Testing of a Micro Air Vehicle Using Morphing for Aeroservoelastic Control, AIAA Structures, Structural Dynamics, and Materials Conference, AIAA-2004-1674, Palm Springs, CA, April 2004.

[13] Garcia, H., Abdulrahim, M., and Lind, R., Roll Control for a Micro Air Vehicle Using Active Wing Morphing, AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-2003-5347, Austin, TX, August 2003.

[14] Waszak, M.R., Jenkins, L.N., and Ifju, P.G., Stability and Control Properties of an Aeroelastic Fixed Wing Micro Air Vehicle, AIAA Atmospheric Flight Mechanics Conference and Exhibit, AIAA 2001-4005, Montreal, Canada, August 2001.

[15] Kimmett, J., Valasek, J., and Junkins, J.K., Vision Based Controller for Autonomous Aerial Refueling, IEEE International Conference on Control Applications, Glasgow, Scotland, U.K., September 2002, pp. 1138-1143.

[16] Tandale, M.D., Bowers, R., and Valasek, J., Robust Trajectory Tracking Controller for Vision Based Probe and Drogue Autonomous Aerial Refueling, AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA 2005-5868, San Francisco, CA, August 2005.

[17] Lucas, B., and Kanade, T., An Iterative Image Registration Technique with an Application to Stereo Vision, Proceedings of the DARPA Image Understanding Workshop, 1981, pp. 121-130.

[18] Tomasi, C., and Kanade, T., Detection and Tracking of Point Features, Tech. Report CMU-CS-91-132, Carnegie Mellon University, April 1991.

[19] Kanade, T., Collins, R., Lipton, A., Burt, P., and Wixson, L., Advances in Cooperative Multi-Sensor Video Surveillance, Proceedings of DARPA Image Understanding Workshop, Vol. 1, November 1998, pp. 3-24.

[20] Piccardi, M., Background Subtraction Techniques: A Review, IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, October 2004.

[21] Schunck, B.G., Motion Segmentation by Constraint Line Clustering, IEEE Workshop on Computer Vision: Representation and Control, 1984, pp. 58-62.

[22] Ridder, C., Munkelt, O., and Kirchner, H., Adaptive Background Estimation and Foreground Detection using Kalman-Filtering, International Conference on Recent Advances in Mechatronics, Istanbul, Turkey, June 1995, pp. 193-199.

[23] Bailo, G., Bariani, M., Ijas, P., and Raggio, M., Background Estimation with Gaussian Distribution for Image Segmentation, a Fast Approach, IEEE International Workshop on Measurement Systems for Homeland Security, Contraband Detection and Personal Safety, Orlando, FL, March 2005.

[24] Friedman, N., and Russel, S., Image Segmentation in Video Sequences: A Probabilistic Approach, International Proceedings of the Thirteenth Conference of Uncertainty in Artificial Intelligence (UAI), Providence, RI, August 1997.

[25] Sheikh, Y., and Shah, M., Bayesian Object Detection in Dynamic Scenes, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, June 2005.

[26] Stauffer, C., and Grimson, W.E.L., Adaptive Background Mixture Models for Real-Time Tracking, IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, June 1999, pp. 246-252.

[27] Toyama, K., Krumm, J., Brumitt, B., and Meyers, B., Wallflower: Principles and Practice of Background Maintenance, International Conference on Computer Vision, Corfu, Greece, September 1999.

[28] Zhou, D., and Zhang, H., Modified GMM Background Modeling and Optical Flow for Detection of Moving Objects, IEEE International Conference on Systems, Man, and Cybernetics, Big Island, Hawaii, October 2005.

[29] Nelson, R.C., Qualitative Detection of Motion by a Moving Observer, International Journal of Computer Vision, Vol. 7, No. 1, 1991, pp. 33-46.

[30] Thompson, W.B., and Pong, T.G., Detecting Moving Objects, International Journal of Computer Vision, Vol. 4, 1990, pp. 39-57.

[31] Odobez, J.M., and Bouthemy, P., Detection of Multiple Moving Objects Using Multiscale MRF With Camera Motion Compensation, IEEE International Conference on Image Processing, Austin, TX, November 1994, pp. 245-249.

[32] Irani, M., Rousso, B., and Peleg, S., Detecting and Tracking Multiple Moving Objects Using Temporal Integration, European Conference on Computer Vision, Santa Margherita Ligure, Italy, May 1992, pp. 282-287.

[33] Torr, P.H.S., and Murray, D.W., Statistical Detection of Independent Movement from a Moving Camera, Image and Computing, Vol. 11, No. 4, May 1993, pp. 180-187.

[34] Gandhi, T., Yang, M.T., Kasturi, R., Camps, O., Coraor, L., and McCandless, J., Detection of Obstacles in the Flight Path of an Aircraft, IEEE Transactions on Aerospace and Electronic Systems, Vol. 39, No. 1, January 2003, pp. 176-191.

[35] Irani, M., and Anandan, P., A Unified Approach to Moving Object Detection in 2D and 3D Scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 6, June 1998.

[36] Sharma, R., and Aloimonos, Y., Early Detection of Independent Motion from Active Control of Normal Flow Patterns, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 26, No. 1, February 1996.

[37] Frazier, J., and Nevatia, R., Detecting Moving Objects from a Moving Platform, IEEE International Conference on Robotics and Automation, Nice, France, May 1992, pp. 1627-1633.

[38] Liu, Y., Huang, T.S., and Faugeras, O.D., Determination of Camera Location from 2D to 3D Line and Point Correspondence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 1, January 1990, pp. 28-37.

[39] Longuet-Higgins, H.C., A Computer Algorithm for Reconstructing a Scene from Two Projections, Nature, Vol. 293, September 1981, pp. 133-135.

[40] Heeger, D.J., and Jepson, A.D., Subspace Method for Recovering Rigid Motion 1: Algorithm and Implementation, International Journal of Computer Vision, Vol. 7, No. 2, January 1992.

[41] Gutmann, J.S., and Fox, D., An Experimental Comparison of Localization Methods Continued, IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.

[42] Martin, M.C., and Moravec, H., Robot Evidence Grids, Technical Report CMU-RI-TR-96-06, Robotics Institute, Carnegie Mellon University, March 1996.

[43] Olson, C.F., Selecting Landmarks for Localization in Natural Terrains, Autonomous Robots, Vol. 12, 2002, pp. 201-210.

[44] Olson, C.F., and Matthies, L.H., Maximum Likelihood Rover Localization by Matching Range Maps, IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 272-277.

[45] Volpe, R., Estlin, T., Laubach, S., Olson, C., and Balaram, J., Enhanced Mars Rover Navigation Techniques, IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000, pp. 926-931.

[46] Gurfil, P., and Rotstein, H., Partial Aircraft State Estimation from Visual Motion Using the Subspace Constraints Approach, Journal of Guidance, Control, and Dynamics, Vol. 24, No. 5, September-October 2001, pp. 1016-1028.

[47] Markel, M.D., Lopez, J., Gebert, G., and Evers, J., Vision-Augmented GNC: Passive Ranging from Image Flow, AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA 2002-5026, Monterey, CA, August 2002.

[48] Webb, T.P., Prazenica, R.J., Kurdila, A.J., and Lind, R., Vision-Based State Estimation for Autonomous Micro Air Vehicles, AIAA Guidance, Navigation, and Control Conference and Exhibit, Providence, RI, August 2004.

[49] Webb, T.P., Prazenica, R.J., Kurdila, A.J., and Lind, R., Vision-Based State Estimation for Uninhabited Aerial Vehicles, AIAA Guidance, Navigation, and Control Conference and Exhibit, AIAA 2005-5869, San Francisco, CA, August 2005.

[50] Chatterji, G.B., Vision-Based Position and Attitude Determination for Aircraft Night Landing, AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-96-3821, July 1996.

Silveira,G.F.,Carvalho,J.R.H.,Madirid,M.K.,Rives,P.,andBueno,S.S.,AFastVision-BasedRoadFollowingStrategyAppliedtotheControlofAerialRobots,IEEEProceedingsofXIVBrazilianSymposiumonComputerGraphicsandImageProcessing,1530-1834/01,Florianopolis,Brazil,October2001,pp.226-231. [52] Soatto,S.,andPerona,P.,DynamicVisualMotionEstimationfromSubspaceConstraints,IEEE,0-8186-6950-0/94,1994,pp.333-337. [53] Soatto,S.,Frezza,R.,andPeronaP.,MotionEstimationviaDynamicVision,IEEETransactionsonAutomaticControl,Vol.41,No.3,March1996,pp.393-413. [54] Soatto,S.,andPerona,P.,Recursive3-DMotionEstimationUsingSubspaceConstraints,InternationalJournalofComputerVision,Vol.22,No.3,1997,pp.235-259. [55] Soatto,S.,andPerona,P.,Reducing'StructurefromMotion',IEEE,1063-6919/96,1996,pp.825-832. [56] Soatto,S.,andPerona,P.,VisualMotionEstimationfromPointFeatures:UniedView,IEEEInternationalConferenceonImageProcessing,Vol.3,October1995,pp.21-24. [57] Soatto,S.,andPerona,P.,Reducing'StructurefromMotion':AGeneralFrameworkforDynamicVisionPart1:Modeling,IEEETransactionsonPatternAnalysisandMachineIntelligence,Vol.20,No.9,September1998,pp.933-942. [58] Soatto,S.,andPerona,P.,Reducing'StructurefromMotion':AGeneralFrameworkforDynamicVisionPart2:ImplementationandExperimentalAssessment,IEEETransactionsonPatternAnalysisandMachineIntelligence,Vol.20,No.9,September1998,pp.943-960. [59] Erol,A.,Bebis,G.,Nicolescu,M.,Boyle,R.D.,andTwombly,X.,AReviewonVision-BasedFullDOFHandMotionEstimation,IEEEComputerSocietyInternationalConferenceonComputerVisionandPatternRecognition,SanDiego,CA,June2005. [60] Huang,T.S.,andNetravali,A.N.,MotionandStructurefromFeatureCorrespondences:AReview,ProceedingsoftheIEEE,Vol.82,No.2,February1994,pp.252-268. [61] Stewart,C.V.,RobustParameterEstimationinComputerVision,SocietyofIndustrialandAppliedMathematics,Vol.41,No.3,1999,pp.513-537. [62] Weng,J.,Huang,T.S.,andAhuja,N.,MotionandStructurefromTwoPerspectiveViews:Algorithms,ErrorAnalysis,andErrorEstimation,IEEETransactionsonPatternAnalysisandMachineIntelligence,Vol.11,No.5,May1989,pp.451-476. [63] Jianchao,Y.,ANewMethodforPassiveLocationEstimationfromImageSequenceUsingAdaptiveExtendedKalmanFilter,InternationalConferenceonSignalProcessing,Beijing,China,October1998,pp.1002-1005. [64] Qian,G.,Kale,G.,andChellappa,R.,RobustEstimationofMotionandStructureusingaDiscreteH-innityFilter,IEEE0-7803-6297-7/00,2000,pp.616-619. 158


[65] Weng, J., Ahuja, N., and Huang, T. S., Optimal Motion and Structure Estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, September 1993, pp. 864-884.
[66] Blostein, S. D., Zhao, L., and Chann, R. M., Three-Dimensional Trajectory Estimation from Image Position and Velocity, IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, No. 4, October 2000, pp. 1075-1089.
[67] Broida, T. J., Chandrashekhar, S., and Chellappa, R., Recursive 3-D Motion Estimation from a Monocular Image Sequence, IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, No. 4, 1990, pp. 639-656.
[68] Aidala, V. J., Kalman Filter Behavior in Bearings-Only Tracking Applications, IEEE Transactions on Aerospace and Electronic Systems, Vol. 15, No. 1, 1979, pp. 29-39.
[69] Bolger, P. L., Tracking a Maneuvering Target Using Input Estimation, IEEE Transactions on Aerospace and Electronic Systems, Vol. 23, No. 3, 1987, pp. 298-310.
[70] Gavish, M., and Fogel, E., Effect of Bias on Bearings-Only Target Location, IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, No. 1, January 1990, pp. 22-25.
[71] Peach, N., Bearings-Only Tracking Using a Set of Range-Parameterized Extended Kalman Filters, IEE Proceedings - Control Theory and Applications, Vol. 142, No. 1, January 1995, pp. 73-80.
[72] Taff, L. G., Target Localization from Bearings-Only Observations, IEEE Transactions on Aerospace and Electronic Systems, Vol. 33, No. 1, January 1997, pp. 2-9.
[73] Guanghui, O., Jixiang, S., Hong, L., and Wenhui, W., Estimating 3D Motion and Position of a Point Target, Proceedings of SPIE, Vol. 3173, 1997, pp. 386-394.
[74] Fang, Y., Dawson, D. M., Dixon, W. E., and de Queiroz, M. S., 2.5D Visual Servoing of Wheeled Mobile Robots, IEEE Conference on Decision and Control, Las Vegas, NV, December 2002, pp. 2866-2871.
[75] Chen, J., Dixon, W. E., Dawson, D. M., and Chitrakaran, V. K., Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera, IEEE Conference on Control Applications, Taipei, Taiwan, September 2004, pp. 1061-1066.
[76] Chen, J., Dawson, D. M., Dixon, W. E., and Behal, A., Adaptive Homography-Based Visual Servo Tracking for Fixed and Camera-in-Hand Configurations, IEEE Transactions on Control Systems Technology, Vol. 13, No. 5, September 2005, pp. 814-825.
[77] Mehta, S. S., Dixon, W. E., MacArthur, D., and Crane, C. D., Visual Servo Control of an Unmanned Ground Vehicle via a Moving Airborne Monocular Camera, IEEE American Control Conference, Minneapolis, MN, June 2006.


[78] Junkins, J. L., and Hughes, D., Vision-Based Navigation for Rendezvous, Docking and Proximity Operations, AAS Guidance and Controls Conference, Breckenridge, CO, February 1999.
[79] Alonso, R., Crassidis, J. L., and Junkins, J. L., Vision-Based Relative Navigation for Formation Flying of Spacecraft, AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-2000-4439, Denver, CO, August 2000.
[80] Houshangi, N., Control of a Robotic Manipulator to Grasp a Moving Target Using Vision, IEEE International Conference on Robotics and Automation, CH2876-1/90/0000/0604, Cincinnati, OH, 1990, pp. 604-609.
[81] Hansen, J. L., Murray, J. E., and Campos, N. V., The NASA Dryden AAR Project: A Flight Test Approach to an Aerial Refueling System, AIAA Atmospheric Flight Mechanics Conference and Exhibit, Providence, RI, August 2004.
[82] Chang, P., and Hebert, P., Robust Tracking and Structure from Motion through Sampling Based Uncertainty Representation, International Conference on Robotics and Automation, Washington, D.C., May 2002.
[83] Oliensis, J., Exact Two-Image Structure from Motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 12, December 2002, pp. 1618-1633.
[84] Svoboda, T., and Sturm, P., Badly Calibrated Camera in Ego-Motion Estimation - Propagation of Uncertainty, International Conference on Computer Analysis of Images and Patterns, Kiel, Germany, September 1997, pp. 183-190.
[85] Zhang, Z., Determining the Epipolar Geometry and its Uncertainty: A Review, International Journal of Computer Vision, Vol. 27, No. 2, 1998, pp. 161-195.
[86] Qian, G., and Chellappa, R., Structure from Motion Using Sequential Monte Carlo Methods, International Conference on Computer Vision, Vancouver, Canada, July 2001, pp. 614-621.
[87] Franke, U., and Heinrich, S., Fast Obstacle Detection for Urban Traffic Situations, IEEE Transactions on Intelligent Transportation Systems, Vol. 3, No. 3, September 2002, pp. 173-181.
[88] Bhanu, B., Das, S., Roberts, B., and Duncan, D., A System for Obstacle Detection During Rotorcraft Low Altitude Flight, IEEE Transactions on Aerospace and Electronic Systems, Vol. 32, No. 3, July 1996, pp. 875-897.
[89] Huster, A., Fleischer, S. D., and Rock, S. M., Demonstration of a Vision-Based Dead-Reckoning System for Navigation of an Underwater Vehicle, OCEANS '98 Conference Proceedings, 0-7803-5045-6/98, Vol. 1, September 1998, pp. 326-330.
[90] Roderick, A., Kehoe, J., and Lind, R., Vision-Based Navigation Using Multi-Rate Feedback from Optic Flow and Scene Reconstruction, AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, CA, August 2005.


[91] Papanikolopoulos, N. P., Nelson, B. J., and Khosla, P. K., Six Degree-of-Freedom Hand/Eye Visual Tracking with Uncertain Parameters, IEEE Transactions on Robotics and Automation, Vol. 11, No. 5, October 1995, pp. 725-732.
[92] Sznaier, M., and Camps, O. I., Control Issues in Active Vision: Open Problems and Some Answers, IEEE Conference on Decision and Control, Tampa, FL, December 1998, pp. 3238-3244.
[93] Frezza, R., Picci, G., and Soatto, S., Non-holonomic Model-based Predictive Output Tracking of an Unknown Three-dimensional Trajectory, IEEE Conference on Decision and Control, Tampa, FL, December 1998, pp. 3731-3735.
[94] Papanikolopoulos, N. P., Khosla, P. K., and Kanade, T., Visual Tracking of a Moving Target by a Camera Mounted on a Robot: A Combination of Control and Vision, IEEE Transactions on Robotics and Automation, Vol. 9, No. 1, February 1993, pp. 14-35.
[95] Papanikolopoulos, N. P., and Khosla, P. K., Adaptive Robotic Visual Tracking: Theory and Experiments, IEEE Transactions on Automatic Control, Vol. 38, No. 3, March 1993, pp. 429-445.
[96] Zanne, P., Morel, G., and Plestan, F., Robust Vision Based 3D Trajectory Tracking Using Sliding Mode Control, IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000, pp. 2088-2093.
[97] Zergeroglu, E., Dawson, D. M., de Queiroz, M. S., and Behal, A., Vision-Based Nonlinear Tracking Controllers with Uncertain Robot-Camera Parameters, IEEE/ASME International Conference on Advanced Mechanics, Atlanta, GA, September 1999, pp. 854-859.
[98] Valasek, J., Kimmett, J., Hughes, D., Gunnam, K., and Junkins, J. L., Vision Based Sensor and Navigation System for Autonomous Aerial Refueling, AIAA's 1st Technical Conference and Workshop on Unmanned Aerospace Vehicles, Portsmouth, VA, May 2002.
[99] Pollini, L., Campa, G., Giulietti, F., and Innocenti, M., Virtual Simulation Set-Up for UAVs Aerial Refueling, AIAA Modeling and Simulation Technologies Conference, Austin, TX, August 2003.
[100] No, T. S., and Cochran, J. E., Dynamics and Control of a Tethered Flight Vehicle, Journal of Guidance, Control, and Dynamics, Vol. 18, No. 1, January 1995, pp. 66-72.
[101] Forsyth, D. A., and Ponce, J., Computer Vision: A Modern Approach, Prentice-Hall Publishers, Upper Saddle River, NJ, 2003.
[102] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S. S., An Invitation to 3-D Vision: From Images to Geometric Models, Springer-Verlag Publishing, New York, NY, 2004.
[103] Faugeras, O., Three-Dimensional Computer Vision, The MIT Press, Cambridge, MA, 2001.


[104] Castro, G. J., Nieto, J., Gallego, L. M., Pastor, L., and Cabello, E., An Effective Camera Calibration Method, IEEE, 0-7803-4484-7/98, 1998, pp. 171-174.
[105] Tsai, R. Y., A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pp. 323-344.
[106] Harris, C., and Stephens, M., A Combined Corner and Edge Detector, Proceedings of the Alvey Vision Conference, 1988, pp. 147-151.
[107] Canny, J. F., A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, November 1986, pp. 679-698.
[108] Etkin, B., and Reid, L. D., Dynamics of Flight: Stability and Control, John Wiley & Sons, New York, 1996.
[109] Nelson, R. C., Flight Stability and Automatic Control, McGraw-Hill, New York, 1989.
[110] Stevens, B. L., and Lewis, F. L., Aircraft Control and Simulation, John Wiley & Sons, Inc., New York, 1992.
[111] Kehoe, J. J., Causey, R. S., Arvai, A., and Lind, R., Partial Aircraft State Estimation from Optical Flow Using Non-Model-Based Optimization, IEEE American Control Conference, Minneapolis, MN, June 2006.
[112] Kaminski, J. Y., and Teicher, M., A General Framework for Trajectory Triangulation, Journal of Mathematical Imaging and Vision, Vol. 21, 2004, pp. 27-41.
[113] Avidan, S., and Shashua, A., Trajectory Triangulation: 3D Reconstruction of Moving Points from a Monocular Image Sequence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 4, 2000, pp. 348-357.
[114] Fitzgibbon, A. W., and Zisserman, A., Multibody Structure and Motion: 3D Reconstruction of Independently Moving Objects, European Conference on Computer Vision, Dublin, Ireland, July 2000, Vol. 1, pp. 891-906.
[115] Han, M., and Kanade, T., Reconstruction of a Scene with Multiple Linearly Moving Objects, International Journal of Computer Vision, Vol. 59, No. 3, 2004, pp. 285-300.
[116] Ozden, K. E., Cornelis, K., Van Eycken, L., and Van Gool, L., Reconstructing 3D Trajectories of Independently Moving Objects Using Generic Constraints, Journal of Computer Vision and Image Understanding, Vol. 96, No. 3, 2004, pp. 453-471.
[117] Yuan, C., and Medioni, G., 3D Reconstruction of Background and Objects Moving on Ground Plane Viewed from a Moving Camera, IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, June 2006.


[118] Dobrokhodov, V. N., Kaminer, I. I., Jones, K. D., and Ghabcheloo, R., Vision-Based Tracking and Motion Estimation for Moving Targets Using Small UAVs, American Control Conference, Minneapolis, MN, June 2006.
[119] Faugeras, O., and Lustman, F., Motion and Structure from Motion in a Piecewise Planar Environment, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, 1988, pp. 485-508.
[120] Zhang, Z., and Hanson, A. R., Scaled Euclidean 3D Reconstruction Based on Externally Uncalibrated Cameras, IEEE Symposium on Computer Vision, 1995, pp. 37-42.
[121] Zhu, Q., A Stochastic Algorithm for Obstacle Motion Prediction in Visual Guidance of Robot Motion, IEEE International Conference on Systems Engineering, Pittsburgh, PA, August 1990.


Ryan Scott Causey was born in Miami, Florida, on May 10, 1978. He grew up in a stable family with one brother in a typical suburban home. During his teenage years and into early adulthood, Ryan built and maintained a small business providing lawn care to the local neighborhood. The skills acquired from this work carried over into his college career. After graduating from Miami Killian Senior High School in 1996, Ryan attended Miami Dade Community College for three years and received an Associate in Arts degree. As a transfer student to the University of Florida, Ryan was prepared to tackle the stresses of a university despite the poor statistics on transfer students. A few years later, he received a Bachelor of Science in Aerospace Engineering with honors in 2002 and was ranked in the top three of his class. Soon after, Ryan chose to attend graduate school at the University of Florida under Dr. Rick Lind in the Dynamics and Controls Laboratory. During the summers, Ryan interned twice at Honeywell Space Systems in Clearwater, FL, as a Systems Engineer and once at the Air Force Research Laboratory in Dayton, OH. Vision-based control of autonomous air vehicles became his research interest, and he is now pursuing a doctoral degree on this topic. Ryan was awarded a NASA Graduate Student Research Program (GSRP) fellowship in 2004 for his proposed investigation of this research.


74835 F20101211_AAACXU causey_r_Page_106.jpg
6a6901a064b7f62e5c67bb82a52bc7a6
06da1fbbf03eebc13447f5c155cbf3fda4e19111
6230 F20101211_AAADED causey_r_Page_118thm.jpg
441abf08727dcebceac1f161adb99282
dfa40ed20707ba5a4a62aa58de3bb5bcd46a1e1b
F20101211_AAADDO causey_r_Page_141.tif
07776b0e407a445c8d8c42ac2d01a348
68eb61f50a60463bd20cd951493b85ba8c43834e
21697 F20101211_AAACYI causey_r_Page_059.QC.jpg
a2b37b6b85e71dd4ff2d22ceb138d66b
b6fb301847f4ed5d4e4f882107582c0bcf24c64f
133117 F20101211_AAAEHG causey_r_Page_034.jp2
e6a2359ac76501b4917f33e043203044
8bd808e90143cd203124a438592a82d70636b6d4
1051957 F20101211_AAAEGR causey_r_Page_043.jp2
71697c1190e870eb615603be33b9e306
5ed7177418f5e629ebe0bd143e232059ec376058
12803 F20101211_AAACXV causey_r_Page_141.QC.jpg
6b6e157d32ae5eb6ab4100a02f988046
5462bea8b873b30c9cdf800e8b8019c6be44d5f6
F20101211_AAADEE causey_r_Page_032thm.jpg
1be258af7e7be8523b1101af3bcdb5b3
424ddacf3f0f41f1490f5b6b2d2b4504e3b53c49
F20101211_AAADDP causey_r_Page_019.tif
b3305f6980a58f95460a563157681bbd
46369ca85eda9d7d58f27ddd36311350702f136e
1299 F20101211_AAACYJ causey_r_Page_046.txt
12d888d7ffc3c3216ed50032fb7a6563
01d7a3003a0ec9bc9eb110d736372d7242d66b3d
6116 F20101211_AAAEHH causey_r_Page_056thm.jpg
7d565345ad3a4a5cc8fc4dfa3fc64faf
fee06db49e997c034bcbcca909ac91ca3990e62e
983866 F20101211_AAAEGS causey_r_Page_114.jp2
b1280c0434d27b7629abe9102b8490a6
82f9cc4279eaba491842e5b8df732bc5992ce02c
2470 F20101211_AAACXW causey_r_Page_151.txt
cbf1828b1ee660a6475f138151f76911
eb665271bc4c7d642caff68589015726bf29d323
1028788 F20101211_AAADEF causey_r_Page_064.jp2
96114feb5743bd501bc0429ae696b08d
8ff1c57f35a8c8875d260cbd9ea2185893b1362c
51334 F20101211_AAADDQ causey_r_Page_009.pro
db3184b304a4ed4dd6f7cc0e323f165d
8dcf7449eaa11abedd3b6542384954e6fd8a559b
F20101211_AAACYK causey_r_Page_022.tif
67faf8099b84a9c9daddff79b4fc5db7
093fdcb16c5c4b1a97515d55e59e1e99cbead919
948915 F20101211_AAAEGT causey_r_Page_088.jp2
b26920ecd3812129dfd530debc88e0a8
4d970d93007516159c4d305d3d69d4d5259cbb31
24377 F20101211_AAACXX causey_r_Page_127.QC.jpg
bc17970fa568d6c114f6d6cb5492e46a
eba00b5390955d4ce52fa5afb67019b5569ce1cc
27524 F20101211_AAADDR causey_r_Page_156.QC.jpg
b224245d62558f6d997697623da9833f
cd87a76b30510d62a5073487e710d4963e443e8c
F20101211_AAACYL causey_r_Page_023.tif
11fa5181afe4f8e56940454b159c3e52
372b8c9fe5f6a2575aad6e1b322ac8ba4bab81d2
2794 F20101211_AAAEHI causey_r_Page_155.txt
4a49c0caa3c8658970cdce3c29bc1fa0
d74f48d84c32dddfbd11c0ad4ce2191a4c1b0ee7
47053 F20101211_AAAEGU causey_r_Page_060.pro
4018f237e1fdc648baf19fd773bb6a14
66bffb90d5280f40c6b057d67bde0cb7c0866d9c
1051952 F20101211_AAACXY causey_r_Page_040.jp2
56e50dbffcc883dd29f67e018b017a7f
a85e495ba12eb4647677b6946273fb0f2978b3d8
6951 F20101211_AAADEG causey_r_Page_087thm.jpg
1c0e856ef897fc00b4b0761bf43fba8c
11c33474774520ae347d7814440f049f9de7e547
F20101211_AAACZA causey_r_Page_024.tif
3f0d30f595292c37bd63584b9fdf7589
0b3530058335708fae92d92b9f42946b0f53007c
91361 F20101211_AAADDS causey_r_Page_117.jpg
c99868ae507e86b09a2dfa27b4f41577
b125fbb4c3aa6062d4e72cb8d39781bb29b9c98f
1247 F20101211_AAACYM causey_r_Page_058.txt
a86381b24c211dfddc3f6188383e8d1f
2d2d99957a2e11b799f4e8596693930d4bac22b9
5625 F20101211_AAAEHJ causey_r_Page_023thm.jpg
fa9a143f89599888e78ca9b0a88520e1
2fcd8fd27ad2eda0eb282b86f291e787cf46cadc
F20101211_AAAEGV causey_r_Page_021.tif
8ac9c409496980f2969a82fc30a4138c
511284a09733ea8db40f895700be5de306cf6330
7560 F20101211_AAACXZ causey_r_Page_101thm.jpg
36769925e3bbb5d4fc5c8c6f59af74df
4917ab13c5c155e7a0cd92aebb03f7834f1623ea
89346 F20101211_AAADEH causey_r_Page_021.jpg
03ba93a6a0d27ac1d2df8cd969a6b1fd
d7709a338e01029fd0ee24876877379ef64c4447
29873 F20101211_AAACZB causey_r_Page_140.QC.jpg
0175a24f314f334cd1cce45a4d142651
6fef31f9fd9e880176a6369b88ddc37ee707568b
F20101211_AAADDT causey_r_Page_006.tif
55f8fd76ba048019ce0cf12d824373cb
1310e8aa41e0f5880b81e01beff044f515d28665
F20101211_AAACYN causey_r_Page_055.tif
26c38f0fd0513515a1853799bf122ddd
709209315b110b6d193aace44f5c273bd85078b7
84169 F20101211_AAAEHK causey_r_Page_129.jpg
688cf9b1bb7d88891706c56292015f12
7509f2baa3a46a1520bd0a74c5a70bfdba18c019
14497 F20101211_AAAEGW causey_r_Page_016.QC.jpg
b8181552318d23626abd4ad980f93be2
40d079300486a1c5dd593e8842d36650d4d3d293
F20101211_AAADEI causey_r_Page_142.txt
ff982df4dfa6f23aae018ce8ca796e07
9f1515db5a2ebf9142eff03f1483d690814e73af
119832 F20101211_AAACZC causey_r_Page_027.jp2
b60f0d5b242c5eb3994824388084f39a
08f94121257e656db42dde8565ae7cd6bc3f26bf
12374 F20101211_AAADDU causey_r_Page_150.QC.jpg
2ee825a86b30e4080cdd327c07e92097
f660ef75b5161b999907331b2ea6bbaaf15a81e1
67235 F20101211_AAACYO causey_r_Page_156.pro
591d97500a5148636c60757b8e03503f
f273aa6d27f9bc4503f2b3c40c9fbea0a83cb990
97219 F20101211_AAAEIA causey_r_Page_029.jpg
ed0d41630739350d9fdcf3e23b097f81
27c05a9a175ac09ce14d298d3aeb478ae6c16ed8
F20101211_AAAEHL causey_r_Page_045.jp2
e53623069d91a5b156a2555d529b0919
e0435c8de6d718778b739b11f39e43368861352a
6637 F20101211_AAAEGX causey_r_Page_138thm.jpg
aa71bdb6bc9bfe7b9fca5dc028c9edbd
47356789e9aaa20a102032eca07d452b66d3a017
5514 F20101211_AAADEJ causey_r_Page_073thm.jpg
3231fbd5688f705e694bd39cddea65e7
c5a5dd6fdb929eac4c8fd91815a7d4e54c3101df
1655 F20101211_AAACZD causey_r_Page_030.txt
1a9c277865a7d76100562978108a8891
85788580ccabeaf4dd76ef4cf1ea0182bb5cb0f0
1950 F20101211_AAADDV causey_r_Page_047.txt
91de71d25cab039eaa21fb8f615db4ee
6c48de33634b4a958edaacf6f51441c88d02cbe3
366 F20101211_AAACYP causey_r_Page_076.txt
ed235dc96ddfb94b082bdbccce4f4e91
68bbd5c010e20613007bd93078597038ae7ba5ee
72294 F20101211_AAAEIB causey_r_Page_030.jpg
a9793ed75115b0b217f718ed70f99a1a
c5bc31c53c4ad301e3b5d37a8686ced67bd43000
22988 F20101211_AAAEHM causey_r_Page_066.pro
a423fbac947bd6da5c8c088f162b905f
8874583528d65f473cf538a93d1b3576e25e0708
61513 F20101211_AAAEGY causey_r_Page_090.pro
c274c3b336f4741e0de005b143ceb449
64060bbffbccefd26173c7352d33bff50c0abd41
F20101211_AAADEK causey_r_Page_032.txt
75c90d138e38592ac5b4a1d9e00324f9
0fdd7e46d6fb965f3b1ef77bbbcb7b4104b44740
F20101211_AAACZE causey_r_Page_066.tif
60194a4d81c0f758dbb0c62d2a6f7de1
44c861e4216048982489dca6c1786f7c96f6daa7
66959 F20101211_AAADDW causey_r_Page_119.jpg
39a1fc82e0b94e24dd825e8417d47b53
8159667198677cebaed57a6179f352f0b2cae5d1
67118 F20101211_AAACYQ causey_r_Page_126.jpg
c077ad55daf45bb71f3fb8f0885d0b40
09089e3e6fa152207ebcb400851a82379dddc9a5
96565 F20101211_AAAEIC causey_r_Page_037.jpg
4082f5a48041bce8edc1570e71909f84
754e12fdcf49a1ab7b876d4339b41a86612cc004
101911 F20101211_AAAEHN causey_r_Page_154.jpg
158bbbefdae340837720ff5dab641572
970033ccedb845a3bc0c045410c01758352f95cf
60039 F20101211_AAAEGZ causey_r_Page_036.pro
1a76989f52cc65ccfa5efc7a7b8bda4e
4dad77e7c1dce21757f6c58527a8a28c7798db99
18354 F20101211_AAADEL causey_r_Page_164.QC.jpg
df69e06e2e8523d578ff90335437fee2
fa01a5abeab106c395d9f97d887168f7d7d18ea5
51420 F20101211_AAACZF causey_r_Page_010.pro
877643a3e131873045cbd1265e94e907
a79a57210ad04cba1dfd2a0b9aed1ae0a96afd58
5032 F20101211_AAADDX causey_r_Page_135thm.jpg
6361170cc1a666e6156621e3065d404b
42e9aa9c28b3a0a5d4a2c8b2640d215d16eae1e3
36531 F20101211_AAACYR causey_r_Page_103.pro
81813716bb928af34ca2db8be30d29f0
f23f134725f11333a26fd70262d846ec9e11bfc1
7588 F20101211_AAADFA causey_r_Page_037thm.jpg
a7612405072ab18fd69de1a0885a3809
e91578e46b6e83c62837bd1453dffc64f78f2670
74923 F20101211_AAAEID causey_r_Page_054.jpg
18e44d327a2a98df9e3adf89e03155b3
9993c37fc51e31b746515a57c0380993aae84ba5
39515 F20101211_AAAEHO causey_r_Page_066.jpg
aee1fc55388130cea676377126827509
f85957f9cbf77a691b1d0f25acc0f06193ef4cad
F20101211_AAADEM causey_r_Page_106.tif
83efe2c2ee7639788ed4692afc5ae6bd
ff3fef52912e235c5c379efb709cb91e8b23afc6
23800 F20101211_AAACZG causey_r_Page_109.QC.jpg
d73d38e59cab64602812fd1e1e01ac98
58ffd7af358818fc57d37734b544148888984964
77763 F20101211_AAADDY causey_r_Page_053.jpg
38f8d3af41ea5710d41834ba12f4d4ff
e6df1499e932c23b43e8186cadc53c6dec1c1b94
18849 F20101211_AAACYS causey_r_Page_023.QC.jpg
1dff95e16176c14df03fe443d424558f
5a134bbd18a937086de2ac00205c8a5f949c67d7
7840 F20101211_AAADFB causey_r_Page_104thm.jpg
7ab83c3cc348b730e5c2138392794ad7
93aca1259fd7a70ea090a8811a674efdd0ff682b
84012 F20101211_AAAEIE causey_r_Page_057.jpg
45faaa22e685a93ae4ad97914a332a76
039cee912f0e160ea0ef597283cea563117a9d42
60663 F20101211_AAAEHP causey_r_Page_143.jpg
c242af491c56b69c1a5cc59a260eff46
1a3f5abd68d24d7093111277223be295a8d3466a
F20101211_AAADEN causey_r_Page_008.tif
a421a834f97451092adcbad1cc71efa9
d043cdd33271bfe54c1416c6ec7cd174ec2a9da8
5603 F20101211_AAACZH causey_r_Page_009thm.jpg
b2d820180b6ea4f106038b8d99ebced3
660c15c5628c9adb3fd54793fc31b9f8af466755
F20101211_AAADDZ causey_r_Page_088.tif
98cc83954ea64c1e2b2d4ed775c5fca6
81d9070657f9768e91d20d1366c1c3e3be85db50
1051976 F20101211_AAACYT causey_r_Page_084.jp2
07e157ae16f254f1f2127a03146544eb
9191c957477afb90396951e73e4a550c078f9276
90800 F20101211_AAADFC causey_r_Page_120.jpg
ef4a2b0269d3c67231deaf164d4e8572
7d7835198ea6db57de12c3c274dd1dae4e23e74a
74531 F20101211_AAAEIF causey_r_Page_065.jpg
5d35e949a3bd7e07874fb5566fd851dd
e553c02c23577237a49b156bef6856b72d58d96f
F20101211_AAAEHQ causey_r_Page_154.tif
e6a963cf5a991563aaae43da5251b5b7
b3d12c0d17e42a8792691e2129b4a651fa824785
158348 F20101211_AAADEO causey_r_Page_154.jp2
90daccc131c99cb027744fb74b2372a9
b0d5e36cadbb82ed552ac61a40d07bed5a2d9ea2
20719 F20101211_AAACZI causey_r_Page_103.QC.jpg
3516ef3e1a98333a644f0db0cea681bb
e1e896da48be67f8f9f7c41c9217be9ab8525697
F20101211_AAACYU causey_r_Page_101.jp2
a44d3a5c8504bb923622fe54d023e5cc
6a2d3399b50af402703c3ff2d1328a27025dde0e
72380 F20101211_AAADFD causey_r_Page_060.jpg
ff02e04cf5d25020cafc8ed7d697bfec
4cfbe009c18f1bab6dec67cace2f7a185ef64218
63854 F20101211_AAAEIG causey_r_Page_075.jpg
8449be0b591d313090fb09730880a10c
6f2ab6675e982689df91758d1da1ca3e2947ed75
F20101211_AAAEHR causey_r_Page_102.tif
a2d05c8275a706c6fdf449761ba70e4b
c51f8be165d1a691d87dfae625223818fdd81c38
36191 F20101211_AAADEP causey_r_Page_094.pro
77e0a960a2d2a220d498bb3ddddc4be7
e809ad777fc831eb80a980b492b271390814f6ad
1433 F20101211_AAACZJ causey_r_Page_141.txt
238d81728c50b18a6a38972bcc074740
54ba5b3ce25646ae8ea0220c0e2afbc19075c72e
29204 F20101211_AAACYV causey_r_Page_090.QC.jpg
9e75a7ee9aa55dc5dcc23d30e4ad8021
7cc684a1688eaed2b12ccf35312e96ca87bd9827
73020 F20101211_AAADFE causey_r_Page_047.jpg
64c9d82db3d7f6526c62563d5cd45126
7cf4750c0da709158c1980f8159cb3a90c0ad038
78569 F20101211_AAAEIH causey_r_Page_091.jpg
0d87a3b9e8a4bbcf29fc5d13ec8c7e57
5d2765cf0ff19267b00e32a3e454725e67b604ce
2799 F20101211_AAAEHS causey_r_Page_157.txt
98ece6759b7c9a851f70c422c5fa615c
66c55a685e9277a0fcaa4a6e5b5be2fb64d9adb5
7450 F20101211_AAADEQ causey_r_Page_148thm.jpg
4b2532e43a240236e054c47415343416
f3a2f26bc1419d082e41d42117194c9b465f816e
22044 F20101211_AAACZK causey_r_Page_009.QC.jpg
3b52f57823c8f5cf87381f8aa9115af0
ff416a542bcd5037d11c1f9ea7d1b82218bc9587
5433 F20101211_AAACYW causey_r_Page_058thm.jpg
212245fc30cebe027a2d166a3af7eb3e
1ab7bb195411399fa65870209dfa4ec7326ad353
1051921 F20101211_AAADFF causey_r_Page_112.jp2
f63830a9e9659f248f9b7e316c198f1d
c317bfa0783060dc6ec825d60f3ca526088984f7
65660 F20101211_AAAEII causey_r_Page_103.jpg
ea680967d3409633a36a33220a1d10bc
16921d444ce8992bc38446dd1d0fd208389c6e84
189197 F20101211_AAAEHT UFE0021231_00001.mets FULL
4a91dc10482e90cbecf6bd3f2db08ee2
38ab1ce92df3d9671357841c47527659def51a95
1051973 F20101211_AAADER causey_r_Page_148.jp2
d2409a8863acdf17aef912080947a285
d7453364e9ca29fe355fa0ec0b2150457b8e8b1a
150598 F20101211_AAACZL causey_r_Page_155.jp2
98a15c511bbd60cf2e4b5cbff0ed3005
8bcf6926c88cfcf90d012ba69116446c03f9cb1b
F20101211_AAACYX causey_r_Page_046.tif
5c39ab3387a17f02d03174bfde2a1f11
63bbe952098efb77a45bb740a95aac984ac4a283
28996 F20101211_AAADFG causey_r_Page_148.QC.jpg
231d6330d7364941ba7e233ea974b120
54a9c90e169be2c8873a503b7c49faa846d90794
82995 F20101211_AAADES causey_r_Page_045.jpg
04a07fdae2077a12a79d9464d7033724
c7f7da9b31927691a21f4383719a9731f796dfe4
F20101211_AAACZM causey_r_Page_037.tif
c457393d1fdfcc67cb97bcf3c9b0ee6e
c8a154fc833c85b049429876dee7b6f85872276a
25469 F20101211_AAACYY causey_r_Page_133.QC.jpg
689d46824c672f4956133fb5bad9b604
2ef35e202fcd209c999c441334038e9334fd388f
70011 F20101211_AAAEIJ causey_r_Page_124.jpg
a30720014e9233b215fb14b1332341e7
1071361d15ce02d843dfcc96f81eb639a98502e9
F20101211_AAADET causey_r_Page_003.tif
4a5f9ff9cdd7cadda4b723967e6c4ffd
fe65650d94683274e71d9433f3793c5ac92553c5
42086 F20101211_AAACZN causey_r_Page_124.pro
809105629d6f5b1e60e66515679cb133
ddd5d202d3ce41a51808e955ef12c736552e6020
1175 F20101211_AAACYZ causey_r_Page_014.txt
5f433f08e06ca560cc27e292e29ca953
e5c43aceecd451e0127b63dad2228fc19b73ac52
30516 F20101211_AAADFH causey_r_Page_099.QC.jpg
3d79dd2c5b1ed8d67149a06094ff4fb6
891419fafc9abe4125738813d428adc75c63edd1
61951 F20101211_AAAEIK causey_r_Page_145.jpg
4ad23e06ce00a80f1fce49c5c9512313
7f8b78b74bf3ae495e097ac3442515e8f29a0a4a
75806 F20101211_AAAEHW causey_r_Page_005.jpg
fafcf085825868ab23036c037cdf3bd5
d9338e5847d9b7ec393c5c4b81ae103211a3b259
46258 F20101211_AAADEU causey_r_Page_144.jpg
dd336bf59295d266e9bc8c4a5b26f78a
31b4f05ae84c8a353e26496c49774f22d569de6e
2090 F20101211_AAACZO causey_r_Page_054.txt
47b3b8651b9ea016d28a0279cb6b203b
bf68f0c414b170971ee9db4eaea7df9603cde72c
35029 F20101211_AAADFI causey_r_Page_138.pro
e464766b90ddd2210e6daaba84384873
49fd3ab45657d031e2e50446f19e40f2d189ec32
844323 F20101211_AAAEJA causey_r_Page_074.jp2
5cab46b55d5b3d41051f3610b7e9257f
3ad28ccdd9b2bb2c716e49728539903a103a2499
91516 F20101211_AAAEIL causey_r_Page_148.jpg
6fb3749ab98cd01388b8f520538b976f
0d44de065e44547f149d5430ad9c07995c5c7c13
77090 F20101211_AAAEHX causey_r_Page_019.jpg
70f335cbc9154af8a0683eabc9663215
4cb80eac696a5465c8f425412cb5465effff7e27
62998 F20101211_AAADEV causey_r_Page_056.jpg
522f922109a2eda7c1d5e8612de1998f
94c31a1c76b0c59927032d06e08d2dff5d96d483
69216 F20101211_AAACZP causey_r_Page_155.pro
a5f1134dc55b67e34f5c66550218c1b4
1c7dc2e49cd7c976566a1de7581801a2dc0a2b6b
7428 F20101211_AAADFJ causey_r_Page_029thm.jpg
c56fc3717ec138f57f61183b852c39a7
c43655f2dcfc92f3f7a33c3b004b845c4c71f3be
697219 F20101211_AAAEJB causey_r_Page_078.jp2
a3e852afc207c85df96194713b9e08cb
2eb0d5a5fe552b44e803ba0c9e3c75be8a4d7f19
94640 F20101211_AAAEIM causey_r_Page_159.jpg
7b8c86c5b55cc62c667754826e645248
bedcce1f4879372a41a7c4a2e4105ecaece02bd4
58218 F20101211_AAAEHY causey_r_Page_023.jpg
fe58fb4d789d3a555f932b3c5dc1d344
b777e60d850830a7112adb95285d734ca0ad4a17
36609 F20101211_AAADEW causey_r_Page_020.jp2
6303bae8e3318525008cbd9033d8907c
1447e296d1a0945ffd4a3e34edc0db23dd780e3d
23503 F20101211_AAACZQ causey_r_Page_006.QC.jpg
8919bcee3b8bf2ad7aae40db9b166c37
a4b7853cdf29f9d8ded36913c4e120bf616bae0e
2505 F20101211_AAADFK causey_r_Page_101.txt
48116e529ca7e5b548de0a150a51a559
e4138ca78fd061e79f30343d7b3fb12253698222
F20101211_AAAEJC causey_r_Page_079.jp2
3f7c3b309c0537839a5f9e6b15f7e741
f7bbe3142b668b910130fc88948a0f16269b2b61
23829 F20101211_AAAEIN causey_r_Page_001.jp2
74da11584d1a8935875bdba009e09527
34d532bd6989b74c9f358815efeb2226c9717491
79590 F20101211_AAAEHZ causey_r_Page_028.jpg
0e6fb49550e675384db1d0ac8c607e72
a70141e24d9d617d70e3b1cfad54a826dfbb8021
7263 F20101211_AAADEX causey_r_Page_140thm.jpg
3057623d12088cc08e62d14f9c22b820
b91cd6cc9a28c7262f5cb60f205c65b2828a3878
2969 F20101211_AAACZR causey_r_Page_154.txt
790838fd3e9e62caa70fdd44a57dd651
80112de751e82c80ac7fee22f1f78727eccce1e9
2435 F20101211_AAADGA causey_r_Page_098.txt
9171028a57f4150a70d2b2ec96f453ad
c15639aac7fd7f7e172be7994ff84c4e2c4ea675
724 F20101211_AAADFL causey_r_Page_011.txt
1c205ba7735e16b42ac9a0c44a9019fc
c7a762127645940149f83f22f0f22a4833b1352e
870924 F20101211_AAAEJD causey_r_Page_080.jp2
7e9964d8ed3a18e05d8c014826e3661e
7b6a46559962004947ad1d574f9a8d2cbdc842da
F20101211_AAAEIO causey_r_Page_005.jp2
ac47d26df37e3c97d343e0de40652167
1953e11e3b743520cdeb5efa7c5739b294a62f4c
2978 F20101211_AAADEY causey_r_Page_122thm.jpg
ddfec97224b2761bed4373ab0149bc81
5894b6300ba41fd6dee2b0885f0899a772e4cd73
F20101211_AAACZS causey_r_Page_059.tif
1596b26bbd85145c025e37646dc09795
fbe4c76c988f5cf028dedbb6e920b72ac6fabee2
1195 F20101211_AAADGB causey_r_Page_128.txt
3dfe3ec328a17d0d12703a3ca74eaea0
689cd3eb706d764144267122ffe74c8f11bb984e
992302 F20101211_AAADFM causey_r_Page_093.jp2
751bd8c26a16a0e7ca55cc2492451ae9
c061525ac4af899c569bd7057d6a09579f385aab
865814 F20101211_AAAEJE causey_r_Page_094.jp2
8c33d5bb5c386dcae98250cddbd69076
dabd6fb647ffd64cf3beb0c6b34f1f9cb8514744
1022394 F20101211_AAAEIP causey_r_Page_007.jp2
bca5f11dd23512c0a17d65b8e3372c4b
8548e48cdafd8ee705bb10ec256166b26195d38a
F20101211_AAADEZ causey_r_Page_043.tif
d3137f1c8dcdb6f25b940ac995a5203f
d2dfc0829b3bddedfad243e6995ab2dc6c79433c
4345 F20101211_AAACZT causey_r_Page_016thm.jpg
7e0a6f22855792a7c827406f18ab29b2
6d9d7c23de82cd89cf1b26cc43b038134a138c8a
87493 F20101211_AAADGC causey_r_Page_006.jpg
feb83242c7a6b8c7a27f25d2699ce501
83002e14a2554dbde320e2998a5f5de3d2d4868c
37341 F20101211_AAADFN causey_r_Page_164.pro
9cb348a151111c2b8057b0f808961fb0
69e9ef49562c516e26368b2e2ba7894f942f1367
123749 F20101211_AAAEJF causey_r_Page_111.jp2
96bf703917939619f38ffb3e79aba6b1
19bac8a27a48745a50097223cd1688fe19d8b09e
63184 F20101211_AAAEIQ causey_r_Page_018.jp2
73790b03bff67aa1c7c462dde265cddc
418bedd6b6999f2676c159e806246a0e440ebdda
4823 F20101211_AAACZU causey_r_Page_005thm.jpg
e57d6369ae4ed3820bceef3fdc4a9caa
e50731bc3f3b03dae24a478edcb13ee709f05729
F20101211_AAADGD causey_r_Page_130.tif
ad858b39a1197937da72b4f1e3a1d6cc
53395940879315db3057faf142985d39bdfc1f24
F20101211_AAADFO causey_r_Page_104.tif
4890f25ecf77c1a1946f4f574874c91c
651677c3c6027d772991474619ad5c9ef9a08ffb
657421 F20101211_AAAEJG causey_r_Page_116.jp2
bffb634823f61476f0231af9fc9003b6
abce1353bed62c23084294a95f6c7120c35f6755
118169 F20101211_AAAEIR causey_r_Page_019.jp2
470de08cc4887923a6d3b2dd0143340f
d8b59a176e021084f38718b034cd53214a3d2ef7
F20101211_AAACZV causey_r_Page_017.tif
28f64fba4193df5fcd811ec8fc216bd4
fed9f388289d486ecfb9dd3c81262878b808f585
18070 F20101211_AAADGE causey_r_Page_017.QC.jpg
01129c39cc52d87747541d18d5879c61
a08e476d63fd212bb3b38ec56aa0df472fe5f70d
16688 F20101211_AAADFP causey_r_Page_135.QC.jpg
2dbb083382c7662ce172857af5745f83
71bacda8b7ef7801d1860a8d10bb21d449109a36
1051915 F20101211_AAAEJH causey_r_Page_117.jp2
9e2625f6d7c3ecee631a16870d3df01f
8e1edf0a188ceb6d1c6506787d0247f038a94e4e
936745 F20101211_AAAEIS causey_r_Page_023.jp2
c43beec64cbeb9c7e4d887d29bcc786c
272a6e213844450f0022214ea08e7b38a663c092
8833 F20101211_AAACZW causey_r_Page_076.pro
b15c4f68da2acd5c7ec654dfb96e2727
570abcc6c1384cb788b92970af49d8ebb33fcbdf
83164 F20101211_AAADGF causey_r_Page_033.jpg
bf5a39176973ceb3596f64bb272f7e06
66f83ee449ab03d7076a60fae9dd9f25614cd019
5481 F20101211_AAADFQ causey_r_Page_142thm.jpg
30a0b568804ecdf009c92ecedf0fb469
a15276ec7cdf871d4947d6fe6d6ed9c70b2d0bc6
1022509 F20101211_AAAEJI causey_r_Page_118.jp2
685ce47fa8e940b8fa1abb8be421641e
3b9d316df24b6cef49a7feb366e788fa98b98b37
1051903 F20101211_AAAEIT causey_r_Page_030.jp2
33ad10de6d19cb82472324b7d24b6296
7801dade8ddfc6bad518a8d7470b330c44c13c68
81896 F20101211_AAACZX causey_r_Page_098.jpg
0aac3073b84cdd64539af510b863849a
024c70d5c032aa00d2118b124a615d6193db33a0
F20101211_AAADGG causey_r_Page_047.tif
c97bb3363f5737e43a6ebe1c4fe55aee
a63b96d4bf1289ff20b71429c80eff7c956a0719
F20101211_AAADFR causey_r_Page_139.jp2
784c3859cb097c138a4dae5f6544ce4d
c3fb94febc392418ef0a091fa622e1308b2a70d3
F20101211_AAAEJJ causey_r_Page_133.jp2
5cc1443e1ad66e78be5876776bfd0ef4
d68e0f50b20b3dd04a2cfab3913b4e741c8631d7
F20101211_AAAEIU causey_r_Page_042.jp2
f44371fbb0ba7f792b83bfeb66479068
d1be535a8733f8f204d732e9dc89dae916a2bbc0
7335 F20101211_AAACZY causey_r_Page_084thm.jpg
abe3542c186779e1c0eb3e91b28f1be6
1edb504b7f1269cd10ea1e7310c44e6bfd6c0fe2
F20101211_AAADGH causey_r_Page_077.jp2
44e180a290f75fd2c4e91c72151b62a8
5de620799f151d05f82d5de2edd92aa146b255e2
44951 F20101211_AAADFS causey_r_Page_118.pro
086a4700f6b97bb5f889363512de8696
012fac9804d84efbcccfbfa3af02c545d9888860
1051942 F20101211_AAAEIV causey_r_Page_050.jp2
b7fc2e57606b0f105008615d8bb7d5d2
1118404340b0f5c293f52518c2b8ca902c129845
F20101211_AAACZZ causey_r_Page_013.tif
2d50c5f6ffa49f84d519a0c614d96d0d
970740d09a67c6821459eb52edb0a5184bfc4437
57826 F20101211_AAADFT causey_r_Page_086.pro
9d4bc42c6cc5f27c94f30c8857ab1698
c5766f07e1bf6894bb22cf95b989d83c7640fdb5
647643 F20101211_AAAEJK causey_r_Page_147.jp2
4f1c4d163e35d518f0f0bf11a41b1307
150b80cfcfca968fa67efad96cdf9cdb1eb276ea
1051931 F20101211_AAAEIW causey_r_Page_051.jp2
5dea972424ec5c9f370803fa0b86cbba
3579826f21f5b041952fc45827c0c71a5a17aecb
F20101211_AAADGI causey_r_Page_052.tif
dbb9b29c0801b2e885f86bc4c1e83ca5
7dc6138f10dc3ebe474041cc672411d2ae217368
1592 F20101211_AAADFU causey_r_Page_102.txt
e127d6155f9e1a056ee4e5e26426bee1
91d78316603bdce226fe0dca13c616a2c80209c9
F20101211_AAAEKA causey_r_Page_140.tif
19511813eedd0d9e881abaa84bd813c3
075b7060002b9956d269fbab934155dc6653ca0f
151263 F20101211_AAAEJL causey_r_Page_153.jp2
d53adccc885460980ba861a565275aaa
bdd19501e612a2c8291e3b33740afe4d8ccfbea2
934558 F20101211_AAAEIX causey_r_Page_059.jp2
fb82f60b7c2d86a47962b26760254e04
440d3aaa3eb25d50df81e8291fcb990baa2cd8b4
27644 F20101211_AAADGJ causey_r_Page_086.QC.jpg
0ec184b7c20e2c85bd63bd002b4c5e04
cd4245dff8ec9862a9de5730d8bbf2f257d2ea11
F20101211_AAADFV causey_r_Page_124.jp2
66e3cee838dfe24802012968cf3fd27c
e29eb0869f99673d0723488e887ea58062aba928
F20101211_AAAEKB causey_r_Page_145.tif
b2464b5b4a97bd3d14d6f3cb307b7ebd
27729e125a35142e1f61c45a5c6de0d4403b549c
80749 F20101211_AAAEJM causey_r_Page_164.jp2
06a7fcae7a2dd7e6e9b67dce6e8ccfb9
5f29156de3268cc18d6c9002f6a971f4704edf69
1037100 F20101211_AAAEIY causey_r_Page_065.jp2
de44a1fd76267f41b0b25d9f580a2953
f3ec362a56cd4bfc911ae240ad3b89ad437e9cf2
43303 F20101211_AAADGK causey_r_Page_149.pro
e664e9c10efb3eac8f5c0184301326a2
e9eefe8ca0ff741f6f6b10863ecd5051286ba3a9
F20101211_AAADFW causey_r_Page_127.jp2
973fd7beb03c90835e9d3ae668e475ab
3a0d6efb925435c86573ef88962f123a2fc04209
F20101211_AAAEKC causey_r_Page_146.tif
544663a7040cf378b0e700e5c9fee2be
da000730d74036ea6e25877199471f45d98d07a2
F20101211_AAAEJN causey_r_Page_014.tif
37ac79dee5b5eb4201633d014f2ba4a0
eca6acb113ec7570db4fc21f4c486a9a0385d1b8
503890 F20101211_AAAEIZ causey_r_Page_066.jp2
2f0a86bb00d97f22d4145349922e6ad7
5b065cf91ff2e705e5f626a0353f2e56d42c9b23
16281 F20101211_AAADHA causey_r_Page_011.pro
5d7f1656497fb7f5528c57f57fffad6e
8fa197aedf6f1dd6403445f8581487fa63aba11a
23553 F20101211_AAADGL causey_r_Page_097.pro
fa73091c35f1b4be4168228b7cac9182
4c6f86d3ffae8af2d3bc7fff7149846ad667c1e0
1741 F20101211_AAADFX causey_r_Page_073.txt
1e18f81013d9cf40d24b4697837e4cbc
c3ea7d6a56f566dda88d0d0c441f27e3ab914534
F20101211_AAAEKD causey_r_Page_151.tif
54327e889b19898e2e55788d3bd0cfca
d4341e300e0d8efea16681b3c0dcf8e5e137c356
F20101211_AAAEJO causey_r_Page_034.tif
0620ae81d077a488b45db3fb51f6d782
c5bbff07c50b921cf03db2b9c04c27002bdd6140
1804 F20101211_AAADGM causey_r_Page_071.txt
49f30fb620bfab968266f69a04f40881
f0401654403fc7ae479781f973c06b53a6fe9f66
27740 F20101211_AAADFY causey_r_Page_157.QC.jpg
0a9110f73fd6d874e7f0688d656fd553
21f56968784465f7e253908298e7a8cd62db8287
5801 F20101211_AAADHB causey_r_Page_063thm.jpg
107c73c7df8493cf09df339eb23c6178
415371224c6cd7d01db2cf6d1da84b1d1f3cf9d3
F20101211_AAAEKE causey_r_Page_152.tif
ce0880ea7efddd98e9bfc7a87585a618
b051d0c0e6abae0fbbc979e155a9acc88d59442d
F20101211_AAAEJP causey_r_Page_048.tif
27f8b68f94235c3e605a2809e886ec72
898023e5a17533285d1d5a6d27227ff9e5a8200a
85828 F20101211_AAADGN causey_r_Page_051.jpg
f1f0fc54767371761c582ef52186f57b
d96d5227961d6b238260946f312c70aff8dadc58
7265 F20101211_AAADFZ causey_r_Page_086thm.jpg
6b6f5847c8031ae9151670942da077a1
03693722facce85db2ac8ee60c24352a545c1ff8
35014 F20101211_AAADHC causey_r_Page_035.pro
750097b6a824e67ff44d3409fb253bd5
caf775180ca7fe2ad64bac1fa099fa6ca35e312d
F20101211_AAAEKF causey_r_Page_159.tif
21ee664877bbbdd65922f7546c2c8fa6
a44ebd93f5140cc6e9bdf2856639edc74e5b59a7
F20101211_AAAEJQ causey_r_Page_053.tif
d7e97dc7d3b3820eff0ff1a85e7a5add
3e8a386ca079acb7fdeb8cb1ae88a95d3caa7105
6760 F20101211_AAADGO causey_r_Page_112thm.jpg
3bdadf4c05df2c17eae7d5965c5cebee
c57789b02c28debbb8629ad8a817a89af23ca8f4
26559 F20101211_AAADHD causey_r_Page_111.QC.jpg
0dd34c99017cb909a68e66bf80cc8cea
bd374eab309fc6bb1f0fd8f83eb7292fb1864969
908 F20101211_AAAEKG causey_r_Page_002.pro
b3e8785dcaabc5974ec82140a15c787e
7846641fe8e99e8d326daebd3e7726bdf1941abd
F20101211_AAAEJR causey_r_Page_060.tif
60aea4befb8e16cf9ed098faeb49cf48
2164e3b7d6755b5ab300082deb9d15a1010f1d2f
16701 F20101211_AAADGP causey_r_Page_122.pro
9a6c1556ab04edf5eeb9efdb5062bcaa
9d8256b660f3a0fadf29533a7e37e5c452e791e2
38652 F20101211_AAADHE causey_r_Page_115.pro
b95434a194a940a05071aadff6c52400
27e7e21e697f0e3144bbb658ada4144de39afdd6
35293 F20101211_AAAEKH causey_r_Page_007.pro
6d1568aff993379d5b32e74a89b82497
16dc83c66ba5b06f89b60ce90d4414ad1f2bfe6b
F20101211_AAAEJS causey_r_Page_062.tif
12a8ef625be590f89de783abb770d06b
a2c268d1ea5b2a65b867f7d82679a1414e82203a
5540 F20101211_AAADGQ causey_r_Page_010thm.jpg
3abb49bd1ca711538ceb546ee63b71ac
c0a365d37e7745190d458abcbb8ae38f8fe18818
39792 F20101211_AAADHF causey_r_Page_082.pro
6a9d76c7b2369b79c87d2ad974c5c4db
2162a2bc1ba4ebae9f6ecbc7eb5cd2c8e109e802
64450 F20101211_AAAEKI causey_r_Page_032.pro
0c160c3edc46b5961f2629219fed5778
699229833fd8a8a510e915055a266520524f690b
F20101211_AAAEJT causey_r_Page_065.tif
60e9217143c5acc5be14fddf0f15d9c2
07c19a2b825e755eea4a4f331814baf6a1e98c3d
6742 F20101211_AAADGR causey_r_Page_033thm.jpg
ec6eba96f426ed735ca9d4de5a0931b5
1b7094d7320cdb7b95532cbe37699e966a362a66
32459 F20101211_AAADHG causey_r_Page_046.pro
c569972d09f8e0aa0e2fcaa15720db3c
15eeea060641035acc61ac03582ebfb7b6128f62
64305 F20101211_AAAEKJ causey_r_Page_037.pro
79b9f45a529f9ab4683d96c43ea74def
6414e3b4f65a31cb33e92c94f848f12e06ca3a38
F20101211_AAAEJU causey_r_Page_071.tif
1ad7283f3cb67b9e169dad821b34a5af
1393d4459b9a77281aa8095aeff9e1bc2852ea51
9244 F20101211_AAADGS causey_r_Page_020.QC.jpg
9faf2f53e47e241230a9e3848141a6a2
2c13a84d2e00fbddb7226145c4c8dbe4f8103d54
9321 F20101211_AAADHH causey_r_Page_011.QC.jpg
7000cb98ac263ac8acf46871b3ed27eb
1bbed04fdc0e3e2ace5bf71e2fd25bb1a1d65837
60843 F20101211_AAAEKK causey_r_Page_041.pro
bf3572b1b37f92ca6a1598f9610198b7
1794ad64ef75314b089356961ed0f445377d2750
F20101211_AAAEJV causey_r_Page_075.tif
bdefa6042a5fb4d9afe25439200698f0
1b6867e43450c8ab7f57bf6fa39dbc79d09bc76f
867412 F20101211_AAADGT causey_r_Page_092.jp2
fe9b4c5d413be6211d74668017675267
845b00fc5c39e9dac4110dcbd4d297640198cca5
7372 F20101211_AAADHI causey_r_Page_100thm.jpg
5ca1a1ac8ca2720e1145ba2ec43c083a
d823bd164db657f1f775ed33fb5bc2a73148342f
F20101211_AAAEJW causey_r_Page_089.tif
42ef12369223947d84ec5f7f2e31dbc8
4e88ce1e8b0f9e19e4b7d9689687f2251d65a621
60917 F20101211_AAADGU causey_r_Page_151.pro
96c1ab7f9d1b2d325f1c802b4ca3039f
da5e49fe16607ec87c67b823f96790a3d8b2c126
24765 F20101211_AAAELA causey_r_Page_144.pro
f3219234c281cb04a5c6aa6f41910b42
e7317c64d473b486cb54d7e18329bb0af399e0c1
35271 F20101211_AAAEKL causey_r_Page_048.pro
7e16633d96642cb8f849aa7df20d7149
c9fbdb76ff4385507be3bcd1ad1155b8705a949c
F20101211_AAAEJX causey_r_Page_109.tif
2d66fe69edeea55540c5dcd544d80018
83b26365b02a603d7036bc55f3f142f33d2048ac
1041772 F20101211_AAADGV causey_r_Page_128.jp2
b2bd7f86efc44cbf6bb710f0626aa416
b8f791d12047d554be23df76c778e1f4017a8f09
2457 F20101211_AAADHJ causey_r_Page_100.txt
b424128f9e8a040190f283efdfe312d6
370d9424ce8c5cb5e04d54f33776542a048af7da
20203 F20101211_AAAELB causey_r_Page_147.pro
3ba4ab8333e9efd7f35b05d4e31c702f
91dd5870fe14ed18e0b7fee028cda17602aa9f0e
39444 F20101211_AAAEKM causey_r_Page_063.pro
1b8df92f9e7f124e82b5acf9c4eafa7e
666f18c7943a2429f31943a46da8df4e39068056
F20101211_AAAEJY causey_r_Page_112.tif
9bdd7a729bcf8f8044a11b1efdd15a9c
b107e4a74f48385a7b580ff8b9ae8244fd2a89d7
F20101211_AAADGW causey_r_Page_022thm.jpg
bd93733c2addf1af432d200503f0f6f9
1f966e93a3e8eaaa4c66c160b9062f372e24a8b6
6062 F20101211_AAADHK causey_r_Page_123thm.jpg
ee7dbad7882a414bc434b98615788cc9
1ae69751313d4bedb5a529f3ff159fd964a5204a
66275 F20101211_AAAELC causey_r_Page_162.pro
f10b2a27de29116753d6f95add478cf6
8d5755e308f6a4bfe1aa65bf1a9522c503d47ec9
26264 F20101211_AAAEKN causey_r_Page_069.pro
4fe8ca40dcfdddf9b6c38e7b02cb016c
3858fdc9da7e5db10432f891fb9c442e553ffdf3
F20101211_AAAEJZ causey_r_Page_135.tif
9610aea7ec67f70f2e15838fcef4fd62
11ccad57432c19996f2d045c7dfd99dc14afc918
931 F20101211_AAADGX causey_r_Page_062.txt
4b7447ec2b00e4f05a7c6fa0d00f12ec
09fcaf738c851ca327e2040ecde24cbbe244fea0
61062 F20101211_AAADIA causey_r_Page_038.pro
508509c8ef345d9632e70d0495daf774
5268b36f42f52b3dbcdf900142a62d9c59a73180
F20101211_AAADHL causey_r_Page_098.tif
1592f9d283d437e8b25ebac26230c314
667e5011217c27a5cdd5ab2acc3da93bdea2dca1
2322 F20101211_AAAELD causey_r_Page_005.txt
e5a4b1e15164f3998bb8470cd0713e6d
ad81f3f0488c8d9eba0eaac3d4f710368bcb7336
25637 F20101211_AAAEKO causey_r_Page_078.pro
77b1463d35f72e2d5744e4f7963b91b7
d129d0fb11516ee95b5395a18798be28831c3c1d
2159 F20101211_AAADGY causey_r_Page_009.txt
a76fc63473a4bfe06f56bb6ec40c0b8c
8cf1b144f2febd913ecc111638689885fd23d4c3
87228 F20101211_AAADIB causey_r_Page_034.jpg
5172e78f362540f0be33b7b50edaadee
70bb57af50cf2819e834586de4f62305d4b79b9e
2414 F20101211_AAADHM causey_r_Page_117.txt
0e9e142677a4d0f743accab322cad40b
aed1d7e20c64c5919e4238d20ea8f71941040a1e
994 F20101211_AAAELE causey_r_Page_008.txt
68ff199b795aabebd353e7c864cf6075
99223638988cb325e4f28f7cf2c7559694897467
53814 F20101211_AAAEKP causey_r_Page_085.pro
9c88d3548b70c89196a0a64a614036a8
6da6b201ee4bf868b57c0972e2d06fd46c2cd13a
94833 F20101211_AAADGZ causey_r_Page_026.jpg
66b5306fd1723a335a5eed3cc4549f1c
3850186a716baf0e96e087b62409b72e2f025e7b
1044082 F20101211_AAADIC causey_r_Page_095.jp2
0973047a2bd2d7e4e61e731e715b6c27
5e6d49b72c6aec4c583bf48399db865a42e6ad5c
F20101211_AAADHN causey_r_Page_056.tif
6eaee37669e62eb7a92d4ed7fd603238
7946e2534da178cb3428ab45a6c86616fa532682
1571 F20101211_AAAELF causey_r_Page_017.txt
cd75502d0d982680ae722491a6dea9c0
32d66ebbe03c247e09e0e42f5257663a78cdc663
43874 F20101211_AAAEKQ causey_r_Page_093.pro
eb2732a17daabbb38b59dc1d683c5fd6
016856dcf4621750a7416a78d72765235361300f
F20101211_AAADID causey_r_Page_084.tif
743b951262b765e08967ecf0b06f6122
d53d94e116ab29efe400918e260d3ae46347c2d9
70783 F20101211_AAADHO causey_r_Page_070.jpg
f30343abd7b5456bbca9157d4e97e011
7a05e310e736815dc2183f7312aeab899c65093e
643 F20101211_AAAELG causey_r_Page_020.txt
44327ba1453a4859d3cdc16873d06d26
4d5c2450708c4850c5b792a345435ce40ebf5356
36821 F20101211_AAAEKR causey_r_Page_095.pro
b550cf35150dab18ccfeaa002b1c1aa8
101818c368d2b1408a2bcc972e1f6f3bebc39743
F20101211_AAADIE causey_r_Page_131.tif
330bb16a6aea0d2011dab8a4f3f02a47
069b660d6800731519584d79b27a638b7ccff0d5
6464 F20101211_AAADHP causey_r_Page_065thm.jpg
49e83987fc75f83eb3a1f84901a55ac0
68daa74264fb71b81a9b1b2e443647ba7ddefadd
2489 F20101211_AAAELH causey_r_Page_033.txt
072b5f87bfe517f46c749a594df671fb
7d2c924c003b88ac52f67872697e6787f2e29263
63056 F20101211_AAAEKS causey_r_Page_101.pro
c35f1af77df09a17c818857f271116d0
3da61aff9cf08cd686d4d200d06c906d37202cdc
1797 F20101211_AAADIF causey_r_Page_059.txt
a02fb07d06ae86ceb5cf5fcca2be5d35
d73b68b0c7249bda573fd3aff967e49b22949490
75050 F20101211_AAADHQ causey_r_Page_158.pro
977c3a887c6d34ab390d736e90a6bad6
91e7d3f68d4de0da4f2887fe763b64c2c32f4248
2131 F20101211_AAAELI causey_r_Page_044.txt
b1fcd7062390985b2848fb2bc6a9dafc
2a31773d6238ceb96eb77dc2d52af4268e5dc161
36655 F20101211_AAAEKT causey_r_Page_102.pro
1d255bc91dab41e388f001d1daa77728
afa30578c11ba295ecc23ba23d12f088ef762ba0
F20101211_AAADIG causey_r_Page_083.tif
7cf3a22aeb0708d5590604aaf3c53840
9c745f88f5194571df7bb2837b814df2ba9ad9ed
82485 F20101211_AAADHR causey_r_Page_017.jp2
0f278a517bf6cc11fe4de2453ba5fecc
7aeb3a9aa3bd4c2830179b6cbadaca60c5af01de
2184 F20101211_AAAELJ causey_r_Page_045.txt
11c4a00f565ff9f0d7a2e3cb243ef44e
a2e8e59cc32b098c7a6c58c7fb226f78440199c8
45910 F20101211_AAAEKU causey_r_Page_105.pro
c57bc2cac6560abf05d11de50664edd3
1cd842b4e18d494c8ce9ec68b97952f040411233
61549 F20101211_AAADIH causey_r_Page_055.jpg
d7cc5b41cb35e8265163d233dd5cce2b
be45c1876bed1de928435336e15a0306686424e4
F20101211_AAADHS causey_r_Page_039.txt
f588c5ac78c87293bf0ffd9afd8615d0
6e9e541df995959aa723358f08a309ef31186ea1
F20101211_AAAELK causey_r_Page_051.txt
065bd988d8ec7be655ac51c5a3e6d2fc
eaae85a704264d0f63257250d192a33cb8cab3e8
47594 F20101211_AAAEKV causey_r_Page_106.pro
7b1367de09d0df986ef9b92a7478e871
00afafa034d27d41cab84d73eb06371d5e8b3f41
1890 F20101211_AAADII causey_r_Page_146.txt
8bd14f358bd35a0408b600bd94979a2f
46444227675a32b826cd64dd9a2f8cf7504bfd15
52851 F20101211_AAADHT causey_r_Page_067.pro
c68105bb5cc41654daf1395ab326187c
9f736667df966e555ee6643833b847fe37ac31f0
1866 F20101211_AAAELL causey_r_Page_061.txt
deed14fc8a1f8fecae716a6c9deb3aab
e2a9e86cba7e35b67fa40294e75a639f4f477c56
59193 F20101211_AAAEKW causey_r_Page_120.pro
1fa2cc6ac78cd74c00b6a58a2296bf71
fbe81d21bdd65d378c5139c5b152fc803814b9b2
1051932 F20101211_AAADIJ causey_r_Page_132.jp2
f2016cf3a226cb1eeb55da6ebba7b3af
792225fe91c31b6e5ee254d2411f57f2b2836548
F20101211_AAADHU causey_r_Page_038.jp2
182f6e8d2ddf4b9cfe65491c20e2f12a
68bd45bc9053d30e40ca3a5b7206522e191233be
1082 F20101211_AAAEMA causey_r_Page_150.txt
a8d5cf1678252afa41b560d0f8731f90
782b7b01f02475b7d6e23776a788febad6c62575
43428 F20101211_AAAEKX causey_r_Page_125.pro
c1da47c1f485e792f46c32c96a2ba6c4
682abee45fefd310ff076188bf16a3e8e17f8671
F20101211_AAADHV causey_r_Page_007.tif
5eb0a188d5f30d43b8afad6b3d3041ee
092b4a6b754ba95858a1acdaf9fb59cf3f2dcf7f
2660 F20101211_AAAEMB causey_r_Page_152.txt
9b7a14106ef88ee6537792501f6c3e5f
026b1799d1ff9807ad99d2581f99650d280de3dc
1842 F20101211_AAAELM causey_r_Page_063.txt
23805ef5829062baaece258d5942afd3
9b47bd51d434a9b4b06a9fcc7158c8c296679e37
47227 F20101211_AAAEKY causey_r_Page_133.pro
db4fc57c1483eef2a889da81c3db137e
c5eaa3aa8fadec247c6906cd02a22baddd317575
64788 F20101211_AAADIK causey_r_Page_029.pro
cad46485be9a4feb311f4fa5f4631ab5
d97f5e1e3704a5ef51604a4c84a726248d49f1f9
1313 F20101211_AAADHW causey_r_Page_070.txt
c9030d03091af16517d54352c9f74586
1c7ea0f4e8aa17eb26e16fb6f85e53a488d703e9
261 F20101211_AAAEMC causey_r_Page_153.txt
8be7235dd15457868a85c61a0388e45d
0a1b78162c2a6c8ee20bd9601074a3700a948692
2228 F20101211_AAAELN causey_r_Page_067.txt
10e07e8722b53ed041cd5fdaa10e8711
106fb462c8504a5ea7ff96cddcc2affe46143c44
27580 F20101211_AAAEKZ causey_r_Page_136.pro
34fa3efeb9bb0231caaaac5b370b15ab
11a9775c187a537891def315b78062c79f2d8c3b
6309 F20101211_AAADJA causey_r_Page_060thm.jpg
138ff5aa7c7b64169780549eaf8d8ca7
d956cd274667b394fb76bdf5eacf394e29967e69
2016 F20101211_AAADIL causey_r_Page_153thm.jpg
99288338b5f01b47534265aabcee3763
2865b08c54866751152f4caa4be06797cbb28a46
31521 F20101211_AAADHX causey_r_Page_015.pro
afa1b4d36231d5cbf0967054d157894e
974fabce4cbd77322f442174ac749c403affa8f4
6163 F20101211_AAAEMD causey_r_Page_095thm.jpg
0b693b3d073a1e18a3c8c3b8d65837af
eb984cc7288e488822f43040c940b956413bf0ef
2383 F20101211_AAAELO causey_r_Page_079.txt
31778536c903abf0ea23f1087ff4adce
bda1b2513f67d058d9e52c3e6c58afd1a60966fc
93338 F20101211_AAADJB causey_r_Page_090.jpg
645aeecc061fa36ab71573dd060c83f0
17f839630a88b789bc5c8bf7aef77484ebcb0170
22548 F20101211_AAADIM causey_r_Page_105.QC.jpg
2cc3984fe5b9c3bf5ebe59a8bb9f54f9
711214fd2bbfd5f95d0faf0413804955a3d5a2af
7235 F20101211_AAADHY causey_r_Page_036thm.jpg
f0e987597ce27ff414b249b20ed8a4c9
1cd645028984f8b226efab4d17dd48a3f4bbdd47
1808 F20101211_AAAEME causey_r_Page_003thm.jpg
5dfb7fb7c074aaa192e0265fa569e3fc
597406e0ee6c0c4709c6c3279a066abe4023e4ce
1596 F20101211_AAAELP causey_r_Page_080.txt
8d16a7383295247c08a19b05ba3ac4e7
0bc05410ce91fd6c670b42163bd55d3c97ec3aae
F20101211_AAADJC causey_r_Page_086.jp2
06e8c4d3882c2d275bcceaa5a59f21f0
ebd9186ee05e2b3bdcf4649a5def642d507c6a88
62166 F20101211_AAADIN causey_r_Page_033.pro
4136f9acb6582fa2c64a5bf99db4434d
5bb1efaf6d4ecdda68209cff016869c31d7edca2
F20101211_AAADHZ causey_r_Page_070.tif
92725ffad92b00dd2512c3af704ed85f
847663ed41301d8b90a46dcd5411f3eb649cf886
29652 F20101211_AAAEMF causey_r_Page_043.QC.jpg
510998b75297889e5f305b411a541b42
7f06be57ab7baa2856c312486761eba159a9f508
2335 F20101211_AAAELQ causey_r_Page_084.txt
ea5ad58b2ab83d5677a824675fc16905
350d73591fd6c702a00b9bbd4092710a07da2ca5
24251 F20101211_AAADJD causey_r_Page_118.QC.jpg
6746088554ff67fbb2c0fdb5a7fc8a76
10b718daa4c9b1e83434c4d514fe908a8f741ff4
140428 F20101211_AAADIO causey_r_Page_162.jp2
6b1239367a47403886646ec7d64ad3ca
9ef4759a4ab3adb367271d73dc53cabf780e2e2a
19428 F20101211_AAAEMG causey_r_Page_142.QC.jpg
5b87feb1750129850633b17b67781c6d
30cdea295f6f032e2e6d75aa3eb41fc31ecba3ef
2378 F20101211_AAAELR causey_r_Page_086.txt
3fbf54207ad1f4ecf94ca145ad602a5d
97ea2198f7e16ced4a97560314875ef6fa11afe5
13773 F20101211_AAADJE causey_r_Page_018.QC.jpg
dd4a8940f3007432e76e8206cd4c8731
8138c15eee494b7a8f94644ce0ae285f9a09fc84
2792 F20101211_AAADIP causey_r_Page_159.txt
753f9eb7a80fe3d588970ab4e296ec04
a6553eaee6ac786b24ae97d591286391bcc6c29d
29595 F20101211_AAAEMH causey_r_Page_158.QC.jpg
d3cfe860c0bbbea5b3699acb97174d9a
c14688d53ce34806f0a8baf5db588150a8b37727
2214 F20101211_AAAELS causey_r_Page_087.txt
8f4678110729d711b11359c056f83008
8b05ec12d2c58cc40bc8498bc087fe19f1eaa0cc
2680 F20101211_AAADJF causey_r_Page_034.txt
154c2276510afe09f91ee14d99635cb2
b588907b5eb8587699c8bff8cb4a74b80155c431
6892 F20101211_AAADIQ causey_r_Page_079thm.jpg
1d6fb8975407a65d6286d19bf17d5227
6905a8c4ae0894c62899e9162078a1666c0a7c91
4351 F20101211_AAAEMI causey_r_Page_147thm.jpg
5a7d7f4408237fb0be2b61125ffb66d6
926f6aeddbac457b3f690fe2383a93f773e774ca
1669 F20101211_AAAELT causey_r_Page_095.txt
22d29f4322f4a5c03be44c4dac65b64e
ab93d9272e22dc5ad27cdee2f57a9f7433590a21
9988 F20101211_AAADJG causey_r_Page_002.jpg
035a241f7d0d2751ed5e9bdc06e9cc46
cf25b57cc13d7375264fe8591f9d39aa1a9c72cb
65490 F20101211_AAADIR causey_r_Page_088.jpg
5197af61adee4e34f3b792b673505045
8a7bab7a134809f19b22feb6d789ce1497b55a64
23217 F20101211_AAAEMJ causey_r_Page_065.QC.jpg
380ba044a3c832eaa94992754f90926b
9ef29925b884834f6487f55ab200f8b311b896af
2536 F20101211_AAAELU causey_r_Page_099.txt
b0a088800b87b2e60fda3d0643930a6e
85fa517e0669842b5e1631ae664574e5037cee9e
74466 F20101211_AAADJH causey_r_Page_049.jpg
ee71d00be1a4137c2baa6c9f9f02aea9
4f4890a92e33cf81bcd0cb78e4f8a37d5492161d
991804 F20101211_AAADIS causey_r_Page_070.jp2
3a9aedd770891223605acaf816e9d294
d2d76a1c78e94e830ac290d6f737e42543870946
22562 F20101211_AAAEMK causey_r_Page_095.QC.jpg
60ae7064b7c1854fb50b55df5d9282fc
17cbef835f606e94b3670417272cbe1ef1dfb41e
2417 F20101211_AAAELV causey_r_Page_120.txt
8087459cb1988c9b8a0d08ed56e09ad3
6fe541e310c281877fb98c5c607ac86e22139d5b
6066 F20101211_AAADJI causey_r_Page_070thm.jpg
b95581cd83dfc814bf875c605f09d3d3
6595934cb02e3ed89923ab4eff1d6500a4cccdcf
54786 F20101211_AAADIT causey_r_Page_077.pro
82c9d83623a1ad25810d6dc48106fd61
7d7920ec415b1e635d5368175918b4934adbf33e
25188 F20101211_AAAEML causey_r_Page_085.QC.jpg
3df63779201fb5a3b78414fb4e5aef86
46a10266bdb1888944dd51227c2cf53303f8ef1e
670 F20101211_AAAELW causey_r_Page_122.txt
86c04be07372886997740e687f45e7af
50fc4c71f2c6cdbc3fbc8a195f23ea8c17147121
899611 F20101211_AAADJJ causey_r_Page_056.jp2
1598a2212cecf595be826e48189e9e47
71a11d1ef27f3986838d46a2a9fe720cb341f3f7
10592 F20101211_AAADIU causey_r_Page_163.QC.jpg
09f04a854be7b3c5a7991b6fda6f0643
a9adc574afe43bc083500e1d89e512c2579eafac
28187 F20101211_AAAENA causey_r_Page_161.QC.jpg
e6c915ba626bee7557fdf4cd1af17898
ae5b5435e25241df9dd7c9da6848054e922ab56b
6729 F20101211_AAAEMM causey_r_Page_049thm.jpg
41f8d8319a6df2633b37e193ddb05f70
9e949355681ec5e410dc96412a04dd239c6e9056
2253 F20101211_AAAELX causey_r_Page_132.txt
77ea38f98cd970ca555bdcfcc4aeffa1
8ea165e9d2814dfec05a28660a82d5c58e87bc78
19606 F20101211_AAADJK causey_r_Page_074.QC.jpg
c0d8219c374c4dd494d87da181ba1dce
fd1437ddba81c5c38bd61ca3a34d8e2af2bce98a
1742 F20101211_AAADIV causey_r_Page_074.txt
8a40a390a649451f403e0a6f66d9bd0e
183e8e62bfefa41a96bdfdcdcabdb428d39c8da6
21806 F20101211_AAAENB causey_r_Page_114.QC.jpg
b73d32da1919be891324633e02a814fc
ce507bca610261127bd8a78d040d76364fd84022
1428 F20101211_AAAELY causey_r_Page_138.txt
2ba673f9e5c894718d5d071792f68536
31dacea5363f3a32b36ad9635fb8fdd3c47af4a5
30525 F20101211_AAADIW causey_r_Page_016.pro
107561639a95e883fce7e59a41b56348
664750618bdd4c801570e7c8b2167f0e8b15a36d
17753 F20101211_AAAENC causey_r_Page_089.QC.jpg
b664ef0833443efb61ce3ba7e0d9c931
735ee7135c4563957c1f24ec4b76418196e4c2ca
19853 F20101211_AAAEMN causey_r_Page_145.QC.jpg
739100c3921a63f3f3e90bec14d2af74
0bae275be18c2afc488c4715b33b39e6583a934f
1125 F20101211_AAAELZ causey_r_Page_144.txt
10f171a30b5453b22e57f850d65478a1
3cfc6f775b6aa41ebf8b66b4d382e29271273fbf
2480 F20101211_AAADJL causey_r_Page_043.txt
b0abe0c4e3bfd54a6cc5c7f52f7e3935
fe00acbff7e824c6c7ed60928a0677d43782239b
5511 F20101211_AAADIX causey_r_Page_136thm.jpg
d501fdab27d36f3480fe8a349e377fc4
52b839e02b6c0f100626deb1a42119b6d09cf09e
24817 F20101211_AAADKA causey_r_Page_053.QC.jpg
ffc68eedd6654858119b9d29feca60d3
0fc45c252659816ae78963159b67ef261aeb9759
6183 F20101211_AAAEND causey_r_Page_030thm.jpg
a4c124a9e1a6803c24fb34ebed6bd474
0539cd86b5372cd8eb3623e7dad88085f19126c9
6097 F20101211_AAAEMO causey_r_Page_126thm.jpg
66928b5ad63a1a541d6eeb338a32da29
13d973803a0fb3e7fea76c594e8e6ec58963748d
35569 F20101211_AAADJM causey_r_Page_092.pro
e3dbef761f25387ab35daf14f6598923
d18d03ca3c94331b6b1e413c9b05f8cd7e3e7248
100308 F20101211_AAADIY causey_r_Page_160.jpg
44e64b8fbd64c87a0c63164cd0ddf07d
4ccc0afe239df85691e29f14b4311a5951e7b752
136080 F20101211_AAADKB causey_r_Page_152.jp2
defff14f6e7441bfc3dc34ee24716f30
31b1b6d41cc2029e7d7adbe3d87ec2f722f4ddbb
4980 F20101211_AAAENE causey_r_Page_017thm.jpg
87aa0ecae66fca36bb63ca47579d3c1e
32bf7e75b985f27cb975d9697106023bab7ca8b5
5822 F20101211_AAAEMP causey_r_Page_103thm.jpg
5a4be48e5f63725780e74968551da0fe
f7e57bbe46140e3d47ace9e1a568054a7baa153e
F20101211_AAADJN causey_r_Page_107.tif
89513bce72a52cd63689382ccbad2d85
2e622e0eb01f5b4f476a993759380c3d68729589
47949 F20101211_AAADIZ causey_r_Page_109.pro
88fdd9ff5dd004261566f43cdac62c34
7f278db450f4aafdc1bf3ca6c5f73cbc55237fa5
F20101211_AAADKC causey_r_Page_087.jp2
ac88befacc138b5efd7b72e7626e3135
b0e8f92908a2f36a7714ef85be3ebd1b2986c673
6743 F20101211_AAAENF causey_r_Page_077thm.jpg
221f52157abab8fd2a92c087b1e5876f
58882cbbabc107b6b5bc4d754708696b215a4a41
26336 F20101211_AAAEMQ causey_r_Page_044.QC.jpg
a752a6ab9d8dd114433e242a0317f40d
598ce780a6d96594589e5bca8d024c7b71528726
1561 F20101211_AAADJO causey_r_Page_135.txt
f3e1c9893e2e25ddfc7ef15eff5c3ada
3d23693161b779819c8e34a6066d1238496dcd0d
60513 F20101211_AAADKD causey_r_Page_115.jpg
a58bf99783fce6acdff74e58dc27bfc4
e1cf96acc866d1ad73b26491f4b291be6e8f01d3
7229 F20101211_AAAENG causey_r_Page_152thm.jpg
5885e88b35e7393bfd981b300daf2b77
da2d8ebc11b20dabc103d792f4dbaca600299cab
F20101211_AAAEMR causey_r_Page_139thm.jpg
833eeb8e2a992dcc3fed121b9a6fd6f2
6f2290651242bca61c8ee3f090cac4d8d07e01c2
F20101211_AAADJP causey_r_Page_042.tif
a8eb42a0bc462f0c135522418e362de3
b207c06db01c4b2c372ad54b2c8fa5d38f80230b
28705 F20101211_AAADKE causey_r_Page_058.pro
88bbace59d1ef0266701ca814b172c06
79cf2df62935339b1038060361b2b2ed07eb5f7f
20261 F20101211_AAAENH causey_r_Page_108.QC.jpg
dc3cbc9097ed473f2d46b1a3c0dbc8d0
f2c2c407a174a8a29e7fe9904aef729ecd35846b
5930 F20101211_AAAEMS causey_r_Page_055thm.jpg
38d7a3ac287d79511b91f18f7fbd7c35
208e42db2a99f136b48e8cdb998331b353547519
28124 F20101211_AAADJQ causey_r_Page_121.QC.jpg
7173d61d7da45fabb269e327c7bb034e
3fd5c49a52c155e5d47c2987d1026b41ddad7a7d
60215 F20101211_AAADKF causey_r_Page_068.jpg
8b7ed7a5b05c50c5a5970365049bb053
988dabfb2f2ba2a474b57ce1b2103a5321b85d63
22254 F20101211_AAAENI causey_r_Page_124.QC.jpg
4cc9337cfe6313698182ee32c34100e9
34fb8c2ec3911f94ab92930ceb130919173c5004
2861 F20101211_AAAEMT causey_r_Page_004thm.jpg
d88eeb70e1039dc9397e01ff3ec14554
ca9089ed27c02d75d31f6362800f49e414ff5eb7
2486 F20101211_AAADJR causey_r_Page_090.txt
f8e365eda24e33601bbe6bc16618c15f
72a8a3025d91b219fa9cf18fb31bc5461a1613d6
4938 F20101211_AAADKG causey_r_Page_003.QC.jpg
37584d08bc01f65035ea13c191da23cd
a2735b89b5b7e1ad8098418da09998ceb23b2861
23103 F20101211_AAAENJ causey_r_Page_050.QC.jpg
623ea424637fe92f264b66ea2d8922e8
5f9926dfd7f510c64e3b079d3ecb6f40c32def87
5558 F20101211_AAAEMU causey_r_Page_074thm.jpg
6c0d6fe0bf597e301db096f826bee981
66cdba7b9f57349e247b8f033b6db904e4b25890
85911 F20101211_AAADJS causey_r_Page_067.jpg
5e916ca1028cc46b1cfdc9f562006662
7efe4fbd5b394c4f620a39d3d23c9b76ca931346
2442 F20101211_AAADKH causey_r_Page_148.txt
82ec61825fdfe8d34d0860151a68a009
4af9984489051f432e6c45464fd71b8eb8de08c7
244979 F20101211_AAAENK UFE0021231_00001.xml
1e6ed3517c222ff3cad0e60308b197b2
e1316b2874f1150db2e1982228e0a07f312aa3ab
7741 F20101211_AAAEMV causey_r_Page_099thm.jpg
ec3508cdb670b847cc5cf011e3a0fbae
1788e4d97e98f64300dfc9ac5409f1a6d22e11cb
3403 F20101211_AAADJT causey_r_Page_097thm.jpg
4a020ffcc5cfb224ff94e79e26bad3b7
23345a2475179389a8d8eea5695d88416129661c
21428 F20101211_AAADKI causey_r_Page_008.pro
c7934bb7c22e7d065c4e688c387cb112
97fe931ecf1d0fa0230e6bfb834662073992e3ad
6888 F20101211_AAAENL causey_r_Page_025thm.jpg
6f38c812aa655d6340fc2d77977fdc72
b10c9b0f474bfa17f30abc20e19917b9d681fac3
5639 F20101211_AAAEMW causey_r_Page_068thm.jpg
635e5cbf8b4f14f35e636413358565c7
62cf361a71ccfc37ab27dedcc6c710eee5675527
F20101211_AAADJU causey_r_Page_142.tif
5e1d6af750fd7ba5660b44ff8ff467af
b7670c6d929a51a94a2dae7a307c1349b9d5f359
61998 F20101211_AAADKJ causey_r_Page_063.jpg
451f5857a49d471c1dc28cf395c5c24b
188245482c66ef1de4eda4bf9c84653a571cb81f
27848 F20101211_AAAENM causey_r_Page_036.QC.jpg
b2cf134bd310b49ce26e6fa1171978b6
871da96b6fc8d703147992196fe55bfcd1e66126
23034 F20101211_AAAEMX causey_r_Page_132.QC.jpg
9a9440c9558d54eb0825ffd4ab08c6b3
106edd4bc5cec7df8c0efa7d59fbbe650093d9bd
79572 F20101211_AAADJV causey_r_Page_131.jpg
b70482ec8099ce65f3880641f86c401e
490c8bf6641bb497a8601e9b5ef5fe90c462d6e7
1025441 F20101211_AAADKK causey_r_Page_096.jp2
a0c015b12e0b600e27f6b396f303713d
5b69da9680a394ba969e0d1dc9ab4deeb896f447
6150 F20101211_AAAENN causey_r_Page_047thm.jpg
b2061e436c6558b050efadcc4052766f
0670bfc7a4121e4e7ccaf11f0d3bf9168d9d9714
27440 F20101211_AAAEMY causey_r_Page_162.QC.jpg
cbdee94090590abc81c010e9d2529fdd
ca957b265404a2663a70d5b155e72e6e34c50253
F20101211_AAADJW causey_r_Page_031.tif
1c3126aa821f399abcec07f663b7a5ea
8d8895c40f476498f0425ad5a59961153b36eaf0
11968 F20101211_AAADKL causey_r_Page_097.QC.jpg
7625538fbb6e868d0933a2168dbabfb4
f1aa04cb6f25f55fce6673273f1e09afe5b3b04f
28915 F20101211_AAAEMZ causey_r_Page_160.QC.jpg
797464b526df85f5cd3f258972e56ebe
c8cb3766942dee46f73d67936dc11871abfe1e7e
28934 F20101211_AAADJX causey_r_Page_020.jpg
6154cf1a487c988592dd3d4467cee644
39a421390cf39b19df68906d7d188b1db947871c
6338 F20101211_AAADLA causey_r_Page_093thm.jpg
b7d96d8b223610b194a22f23d0b8840b
b1cb584b4d0f774e8afec4b431a4b82cc21740b4
6849 F20101211_AAAENO causey_r_Page_051thm.jpg
42c4b4babe16fa974444c666a2cbcd52
5707e35e3441f31f11f8960502b7d13ed23ed962
1014949 F20101211_AAADJY causey_r_Page_049.jp2
beccf992d0cc4f3feb26200cd104bfb9
75fc5661f672b56c26086dcb0e13ae4ac9e9ee47
F20101211_AAADLB causey_r_Page_044.tif
d9511cb2777ad04e9a91e8e02a9fe6b0
21229776304b84aec0a0646aab6f8f247288da0e
7167 F20101211_AAADKM causey_r_Page_039thm.jpg
e199bc2f1441c52c44d78bbb13828eed
848b79bf17b5405ff411d1c17df1a1bb70a71334
6082 F20101211_AAAENP causey_r_Page_059thm.jpg
3f9e99234d2136d9f5ec10f016db7f67
67f12b3ba921545c6d861307aa16fc10e1d38b02
2349 F20101211_AAADLC causey_r_Page_027.txt
74ebaade59c5874ab40b2d55ba8c4be0
3098550c07d4bfe552f0d924b7af9fd045bf9129
F20101211_AAADKN causey_r_Page_072.tif
1e983ef5736e71bbcc80755a9f5a64a3
5ca1cc00f8e9d9d3d945279a100376e0f72d2fbf
5880 F20101211_AAADJZ causey_r_Page_061thm.jpg
b831ac4ef2d49bdd08bb51fc85fb2999
c0311232b2b9f0347ff6505ad157e73085782ecd
16213 F20101211_AAAENQ causey_r_Page_062.QC.jpg
87745b07d350446d7296c90a7ae6092e
e059a01e090d4c64a0a7e8b81d7587a1002b649c
2801 F20101211_AAADLD causey_r_Page_020thm.jpg
24ab279d106341c5f08ab42ad10df5ce
e55212e0aec325937294b53401b005e8bcae3eb6
1928 F20101211_AAADKO causey_r_Page_114.txt
3d45a7ea2eb2eec1edea1b49ca31b13f
5aab8baa0306b7e29106b69de9b0e9e08a0399f5
3691 F20101211_AAAENR causey_r_Page_066thm.jpg
42cc9cb6a9fd0d689b3351c9b06666db
897dd664f6e951ef8784ac937310fcf808024da4
F20101211_AAADLE causey_r_Page_079.tif
ea22d175b3bc6d4938a54167f1a6ceed
56f01cf543f2380a45d444d4ef912d942ff86c1b
2051 F20101211_AAADKP causey_r_Page_123.txt
efa14fb49d56802e632bb1cf30ce3e0d
136f2357fce0fd4fe4e82102b6fb608ef01c234e
4868 F20101211_AAAENS causey_r_Page_078thm.jpg
bef381acfd1779a6e946ecaf14299ee3
db6deac6605ebb02410e6b4f4bd702037b037780
1051970 F20101211_AAADLF causey_r_Page_099.jp2
7c8ecf5ff669d17ae946e962123ccccf
00688e4a7d2511be42d76089a39e147801e67450
6244 F20101211_AAADKQ causey_r_Page_130thm.jpg
16e2ca76cf58d4c6f6046bbaeccb8154
660da265a137003d8e4437842c8e425bdf8be74d
6316 F20101211_AAAENT causey_r_Page_105thm.jpg
c9394511f3a977b71b852c280875d968
e2042797c327ada3482fd260de91cd7611d59d5f
58584 F20101211_AAADLG causey_r_Page_117.pro
e4e9a3413ce5749b22eadbb27cb7d93c
ac349fa7662774216b05e549245659497d637b89
27674 F20101211_AAADKR causey_r_Page_155.QC.jpg
b4bc0d0857fd28f7a5e8851843ff43d8
6108983dd118821921c3f1fa44902e7b294a1986
28335 F20101211_AAAENU causey_r_Page_120.QC.jpg
5e229cdade48d49f1bed2bef28f67f40
4fc510026e326088ac6f1e3dd38c80e4b5248790
F20101211_AAADLH causey_r_Page_009.tif
55594729209be770b2bc439115666dfe
cb94f5d287d6434f402eb90b6b1c5b2e464a6050
1855 F20101211_AAADKS causey_r_Page_072.txt
7629a487f4026ad1a56ee6face803876