Tracking and state estimation of an unmanned ground vehicle system using an unmanned aerial vehicle system

Material Information

Title:
Tracking and state estimation of an unmanned ground vehicle system using an unmanned aerial vehicle system
Creator:
MacArthur, Donald K. ( Dissertant )
Crane, Carl D. ( Thesis advisor )
Place of Publication:
Gainesville, Fla.
Publisher:
University of Florida
Publication Date:
Copyright Date:
2007
Language:
English

Subjects

Subjects / Keywords:
Aircraft ( jstor )
Coordinate systems ( jstor )
Error rates ( jstor )
Global positioning systems ( jstor )
Helicopters ( jstor )
Image processing ( jstor )
Payloads ( jstor )
Pixels ( jstor )
Remotely piloted vehicles ( jstor )
Sensors ( jstor )
Dissertations, Academic -- UF -- Mechanical and Aerospace Engineering
Mechanical Engineering thesis, Ph. D.
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Abstract:
Unmanned Air Vehicles (UAVs) have several advantages and disadvantages compared with Unmanned Ground Vehicles (UGVs). The two systems have different mobility and perception abilities: UAVs have extended perception, tracking, and mobility capabilities, while UGVs have more intimate mobility and manipulation capabilities. This research investigates the collaboration of UAV and UGV systems and applies the derived theory to a heterogeneous unmanned multiple-vehicle system. It also demonstrates the use of UAV perception and tracking abilities to extend the capabilities of a multiple ground vehicle system. This research is unique in that it presents a comprehensive system description and analysis from the sensor and hardware level to the system dynamics, and it couples the dynamics and kinematics of two agents to form a robust state estimate using completely passive sensor technology. A general sensitivity analysis of the geo-positioning algorithm was performed; this analysis derives the sensitivity equations for determining the passive positioning error of the target UGV. The work provides a framework for analyzing passive target positioning and the error contribution of each parameter used in the positioning algorithms, which benefits the research and industrial community by providing a method of quantifying positioning error due to sensor noise. Finally, this research presents a framework by which a given UAV payload configuration can be evaluated using an empirically derived sensor noise model. Using these data, the interaction between sensor noise and positioning error can be compared, allowing the researcher to focus attention on the sensors that have the greatest effect on position error and to quantify the expected positioning error.
Subject:
geopositioning, helicopter, sensitivity, UAV, UGV, unmanned
General Note:
Title from title page of source document.
General Note:
Document formatted into pages; contains 110 pages.
General Note:
Includes vita.
Thesis:
Thesis (Ph. D.)--University of Florida, 2007.
Bibliography:
Includes bibliographical references.
General Note:
Text (Electronic thesis) in PDF format.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright MacArthur, Donald K. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
7/12/2007

TRACKING AND STATE ESTIMATION OF AN UNMANNED GROUND VEHICLE
SYSTEM USING AN UNMANNED AIR VEHICLE SYSTEM

By

DONALD KAWIKA MACARTHUR


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

© 2007 Donald K. MacArthur

I proudly dedicate my life and this work to my wonderful wife Erica. We have both endured
many trials through this process.

ACKNOWLEDGMENTS

I would like to thank my father Donald Sr., my mother Janey, and my brother Matthew for

their support through my many years of schooling.

TABLE OF CONTENTS


ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

2 BACKGROUND

    Position and Orientation Measurement Sensors
        Global Positioning Systems
        Inertial Measurement Units
        Magnetometers
        Accelerometer
        Rate Gyro
    Unmanned Rotorcraft Modeling
    Unmanned Rotorcraft Control

3 EXPERIMENTAL TESTING PLATFORMS

    Electronics and Sensor Payloads
        First Helicopter Electronics and Sensor Payload
        Second Helicopter Electronics and Sensor Payload
        Third Helicopter Electronics and Sensor Payload
        Micro Air Vehicle Embedded State Estimator and Control Payload
    Testing Aircraft
        UF Micro Air Vehicles
        ECO 8
        Miniature Aircraft Gas Xcell
        Bergen Industrial Twin
        Yamaha RMAX

4 GEO-POSITIONING OF STATIC OBJECTS USING MONOCULAR CAMERA TECHNIQUES

    Simplified Camera Model and Transformation
        Simple Camera Model
        Coordinate Transformation
    Improved Techniques for Geo-Positioning of Static Objects
        Camera Calibration
    Geo-Positioning Sensitivity Analysis

5 UNMANNED ROTORCRAFT MODELING

6 STATE ESTIMATION USING ONBOARD SENSORS

    Attitude Estimation Using Accelerometer Measurements
    Heading Estimation Using Magnetometer Measurements
    UGV State Estimation

7 RESULTS

    Geo-Positioning Sensitivity Analysis
    Comparison of Empirical Versus Simulated Geo-Positioning Errors
    Applied Work
        Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV
            Experimentation VTOL aircraft
            Sensor payload
            Maximum likelihood UXO detection algorithm
            Spatial statistics UXO detection algorithm
        Collaborative UAV/UGV Control
            Waypoint surveying
            Local map
        Citrus Yield Estimation
            Materials and methods
            Results
            Discussion

8 CONCLUSIONS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES


7-1 Parameter standard deviations for the horizontal and vertical position

7-2 Parameter standard deviations for the roll, pitch, and yaw angles

7-3 Normalized pixel coordinate standard deviations used during sensitivity analysis

7-4 Parameter standard deviations used during sensitivity analysis

7-5 Comparison of Monte Carlo Method results

7-6 Production of Oranges (1000's metric tons) (based on NASS, 2006)

7-7 Production of Grapefruit (1000's metric tons) (based on NASS, 2006)

7-8 Irrigation Treatments

7-9 Results from Image Processing and Individual Tree Harvesting

LIST OF FIGURES


2-1 Commercially available GPS units

2-2 Commercially available GPS antennas

2-3 Commercially available IMU systems

2-4 MicroMag3 magnetometer sensor from PNI Corp.

2-5 HMC1053 tri-axial analog magnetometer from Honeywell

2-6 ADXL330 tri-axial SMT accelerometer from Analog Devices Inc.

2-7 ADXRS150 rate gyro from Analog Devices Inc.

4-1 Image coordinates to projection angle calculation

4-2 Diagram of coordinate transformation

4-3 Normalized focal and projective planes

4-4 Relation between a point in the camera and global reference frames

4-5 Calibration checkerboard pattern

4-6 Calibration images

4-7 Calibration images

5-1 Top view of the body-fixed coordinate system

5-2 Side view of the body-fixed coordinate system

5-3 Main rotor blade angle

5-4 Main rotor thrust vector

6-1 Fast Fourier Transform of raw accelerometer data

6-2 Fast Fourier Transform of raw accelerometer data after low-pass filter

6-3 Roll and Pitch measurement prior to applying low-pass filter

6-4 Roll and Pitch measurement after applying low-pass filter

6-5 Magnetic heading estimate

7-1 Roll and Pitch measurements used for defining error distribution

7-2 Heading measurements used for defining error distribution

7-3 Image of triangular placard used for geo-positioning experiments

7-4 Results of x and y pixel error calculations

7-5 Error Variance Histograms for the respective parameter errors

7-6 Experimental and simulation geo-position results

7-7 BLU-97 Submunition

7-8 Miniature Aircraft Gas Xcell Helicopter

7-9 Yamaha RMAX Unmanned Helicopter

7-10 Sensor Payload System Schematic

7-11 Segmentation software

7-12 Pattern Recognition Process

7-13 Raw RGB and Saturation Images of UXO

7-14 Segmented Image

7-15 Raw Image with Highlighted UXO

7-16 TailGator and HeliGator Platforms

7-17 Aerial photograph of all simulated UXO

7-18 Local map generated with Novatel differential GPS

7-19 A comparison of the UGV's path to the differential waypoints

7-20 UAV waypoints vs. UGV path

7-21 Individual Tree Yields as Affected by Irrigation Depletion Treatments

7-22 Individual Tree Yield as a Function of Orange Pixels in Image

7-23 Individual Tree Yield as a Function of Orange Pixels with Nonirrigated Removed

7-24 Image of Tree 2C Before and After Image Processing

7-25 Image of Tree 2F Before and After Image Processing

7-26 Image of Tree 6D Before and After Image Processing

7-27 Ground Images of Tree 6D and Tree 2E

8-1 Simulated error calculation versus elevation

8-2 Geo-Position error versus elevation

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

TRACKING AND STATE ESTIMATION OF AN UNMANNED GROUND VEHICLE
SYSTEM USING AN UNMANNED AERIAL VEHICLE SYSTEM

By

Donald Kawika MacArthur

May 2007

Chair: Carl Crane
Major: Mechanical Engineering

Unmanned Air Vehicles (UAVs) have several advantages and disadvantages compared with
Unmanned Ground Vehicles (UGVs). The two systems have different mobility and perception
abilities: UAVs have extended perception, tracking, and mobility capabilities, while UGVs have
more intimate mobility and manipulation capabilities. This research investigates the
collaboration of UAV and UGV systems and applies the derived theory to a heterogeneous
unmanned multiple-vehicle system. It also demonstrates the use of UAV perception and tracking
abilities to extend the capabilities of a multiple ground vehicle system. This research is unique
in that it presents a comprehensive system description and analysis from the sensor and hardware
level to the system dynamics, and it couples the dynamics and kinematics of two agents to form
a robust state estimate using completely passive sensor technology. A general sensitivity
analysis of the geo-positioning algorithm was performed; this analysis derives the sensitivity
equations for determining the passive positioning error of the target UGV. The work provides a
framework for analyzing passive target positioning and the error contribution of each parameter
used in the positioning algorithms, which benefits the research and industrial community by
providing a method of quantifying positioning error due to sensor noise. Finally, this research
presents a framework by which a given UAV payload configuration can be evaluated using an
empirically derived sensor noise model. Using these data, the interaction between sensor noise
and positioning error can be compared, allowing the researcher to focus attention on the sensors
that have the greatest effect on position error and to quantify the expected positioning error.

CHAPTER 1
INTRODUCTION

The Center for Intelligent Machines and Robotics at the University of Florida has been

performing autonomous ground vehicle research for over 10 years. In that time, research has

been conducted in the areas of sensor fusion, precision navigation, precision positioning systems,

and obstacle avoidance. Researchers have used small unmanned helicopters for remote sensing

purposes for various applications. Recently, experimentation with unmanned aerial vehicles has

been in collaboration with the Tyndall Air Force Research Laboratory at Tyndall AFB, Florida.

Recently, unmanned aerial vehicles (UAVs) have been used more extensively for military

and commercial operations. The improved perception abilities of UAVs compared with

unmanned ground vehicles (UGVs) make them more attractive for surveying and reconnaissance

applications. A combined UAV/UGV multiple vehicle system can provide aerial imagery,

perception, and target tracking along with ground target manipulation and inspection capabilities.

This research investigates collaborative UAV/UGV systems and also demonstrates the

application of a UAV/UGV system for various task-based operations.

The Air Force Research Laboratory at Tyndall Air Force Base has worked toward

improving EOD and range clearance operations by using unmanned ground vehicle systems.

This research incorporates the abilities of UAV/UGV systems to support these operations. The

research vision for the range clearance operations is to develop an autonomous multi-vehicle

system that can perform surveying, ordnance detection/geo-positioning, and disposal operations

with minimal user supervision and effort.

CHAPTER 2
BACKGROUND

Researchers have used small unmanned helicopters for remote sensing purposes for

various applications [1,2,3]. These applications range from agricultural crop yield estimation,

pesticide and fertilizer application, explosive reconnaissance and detection, and aerial

photography and mapping.

This research effort will strive to estimate the states of a UGV system using monocular

camera techniques and the extrinsic parameters of the camera sensor. The extrinsic parameters

can be reduced to the transformation from the camera coordinate system to the global coordinate

system.

Position and Orientation Measurement Sensors

Global Positioning Systems

Global Positioning Systems (GPS) are fast becoming the positioning system of choice for

autonomous vehicle navigation. This technology allows for an agent to determine its location

using broadcasted signals from satellites overhead. The Navigation Signal Timing and Ranging

Global Positioning System (NAVSTAR GPS) was established in 1978 and is maintained by the

United States Department of Defense to provide a positioning service for the U.S. military; it is
also freely available to the public. Since its creation, the service has been used for

commercial purposes such as nautical, aeronautical, and ground based navigation, and land

surveying. The current U.S. based GPS satellite constellation system consists of over 24

satellites. The number of satellites in operation for this system can vary due to satellites being

taken in and out of service. Other countries are leading efforts to develop alternative satellite

systems for their own GPS systems. A similar GPS system is the GLONASS constructed by

Russia. The GALILEO GPS system is being developed by a European consortium. This system
is to be maintained by the European consortium and will provide capabilities similar to those of the NAVSTAR

and GLONASS systems.

Each satellite maintains its own specific orbit and circumnavigates the Earth once every 12
hours. The orbit of each satellite is timed and coordinated so that five to eight satellites are above
the horizon of any location on the surface of the Earth at any time. A GPS receiver calculates

position by first receiving the microwave RF signals broadcast by each visible satellite. The

signals broadcasted by the satellites are complex high frequency signals with encoded binary

information. The encoded binary data contains a large amount of information but mainly

contains information about the time that the data was sent and location of the satellite in orbit.

The GPS receiver processes this information to solve for its position and current time.
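The position-and-time solution described above can be treated as a nonlinear least-squares problem: each pseudorange constrains the receiver to a sphere around the broadcasting satellite, offset by a common receiver clock bias. The Python sketch below is illustrative only (the function names and satellite geometry are hypothetical, and real receivers also apply atmospheric and orbit corrections); it solves for the three position coordinates and the clock bias with Gauss-Newton iteration.

```python
import math

def solve_normal_equations(J, r):
    """Solve the 4x4 normal equations (J^T J) d = J^T r by Gaussian
    elimination with partial pivoting."""
    n = 4
    A = [[sum(row[i] * row[j] for row in J) for j in range(n)] for i in range(n)]
    v = [sum(row[i] * ri for row, ri in zip(J, r)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))  # pivot row
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            v[k] -= f * v[i]
    d = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        d[i] = (v[i] - sum(A[i][j] * d[j] for j in range(i + 1, n))) / A[i][i]
    return d

def solve_gps(sats, pseudoranges, iters=10):
    """Estimate receiver position (x, y, z) and clock-bias distance b,
    all in meters, where each pseudorange satisfies
    rho_i = ||sat_i - p|| + b."""
    x = y = z = b = 0.0  # start at the Earth's center with zero bias
    for _ in range(iters):
        J, r = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            dist = math.sqrt(dx * dx + dy * dy + dz * dz)
            J.append([dx / dist, dy / dist, dz / dist, 1.0])  # d(rho)/d(x,y,z,b)
            r.append(rho - (dist + b))  # measurement residual
        d = solve_normal_equations(J, r)
        x += d[0]; y += d[1]; z += d[2]; b += d[3]
    return x, y, z, b
```

Four satellites is the minimum for the four unknowns; additional satellites simply add rows to the least-squares system.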

GPS receivers typically provide position solutions at 1 Hz, but receivers can be
purchased that output position solutions at up to 20 Hz. The accuracy of a commercial GPS system

without any augmentation is approximately 15 meters. Several types of commercially available

GPS units are shown in Figure 2-1. Some units are equipped with or without antennas. The

Garmin GPS unit in Figure 2-1 contains the antenna and receiver whereas the other two units are

simply receivers. Several types of antennas are shown in Figure 2-2.


Figure 2-1. Commercially available GPS units

Figure 2-2. Commercially available GPS antennas

Differential GPS is an alternative method by which GPS signals from multiple receivers

can be used to obtain higher accuracy position solutions. Differential GPS operates by placing a

specialized GPS receiver in a known location and measuring the errors in the position solution

and the associated satellite data. The information is then broadcast in the form of correction data

so that other GPS receivers in the area can calculate a more accurate position solution. This

system is based on the fact that there are inherent delays as the satellite signals are transmitted

through the atmosphere. Localized atmospheric conditions cause the satellite signals within that

area to have the same delays. By calculating and broadcasting the correction values for each

visible satellite, the differential GPS system can attain accuracy from 1 mm to 1 cm [4].
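The differential scheme described above reduces to an exchange of per-satellite corrections. The sketch below is a deliberate simplification (the function names are hypothetical, and real systems broadcast corrections in standardized message formats and must match correction age and satellite set): a base station at a surveyed position differences its measured pseudoranges against the true geometric ranges, and a nearby rover applies those corrections before computing its fix.

```python
import math

def dgps_corrections(base_pos, sat_positions, base_pseudoranges):
    """At a surveyed base station, compute a correction for each visible
    satellite: geometric range minus measured pseudorange. Nearby
    receivers see nearly the same atmospheric delay, so adding the
    correction cancels most of it."""
    return [math.dist(base_pos, sat) - rho
            for sat, rho in zip(sat_positions, base_pseudoranges)]

def apply_corrections(pseudoranges, corrections):
    """Correct a rover's raw pseudoranges with base-station corrections."""
    return [rho + c for rho, c in zip(pseudoranges, corrections)]
```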

In 2002, a new type of GPS correction system was introduced that removes the need for a
land-based correction signal to improve position solutions. Satellite-based augmentation

systems (SBAS) transmit localized correction signals from orbiting satellites [5]. A SBAS

system implemented for North America is the Wide Area Augmentation System (WAAS). This

system has been used in this research and position solutions with errors of less than three meters

have been observed.

In 2005, the first in a series of new satellites was introduced into the NAVSTAR GPS

system. This system provides a new GPS signal referred to as L2C. This enhancement is

intended to improve the accuracy and reliability of the NAVSTAR GPS system for military and

public use.

Inertial Measurement Units

Inertial Measurement Unit (IMU) systems are used extensively in vehicles where accurate

orientation measurements are required. Typical IMU systems contain accelerometers and

angular gyroscopes. These sensors allow for the rigid body motion of the IMU to be measured

and state estimations to be made. These systems can vary greatly in cost and performance.

When coupled with a GPS system, the positioning and orientation of the system can be

accurately estimated. The coupled IMU/GPS combines the position and velocity measurements

based on satellite RF signals with inertial motion measurements. These systems complement

each other whereby the GPS is characterized by low frequency global position measurements

and the IMU provides higher frequency relative positioning/orientation measurements. Some of

the commercially available IMU systems are shown in Figure 2-3.

Figure 2-3. Commercially available IMU systems

Other sensors, such as fluidic tilt sensors, imaging sensors, light sensors, and thermal
sensors, also allow for orientation measurement. Each of these sensors has different advantages and

disadvantages for implementation. Fluidic tilt sensors provide high frequency noise rejection

and decent attitude estimation for low dynamic vehicles. In high G turns and extreme dynamics

these sensors fail to provide usable data. Imaging sensors have the advantage of not being

affected by vehicle dynamics. However, advanced image processing algorithms can require

significant computational overhead and these sensors are highly affected by lighting conditions.

Thermopile attitude sensors have been used for attitude estimation and are not affected by
vehicle dynamics. These sensors provide excellent attitude estimations but are affected by

reflective surfaces and changes in environment temperature.

Magnetometers

A magnetometer is a device that allows for the measurement of a local or distant magnetic

field. This device can be used to measure the strength and direction of a magnetic field. The

heading of an unmanned vehicle may be determined by detecting the magnetic field created by

the Earth's magnetic poles. The "magnetic north" direction can aid in navigation and geo-spatial

mapping. For applications where the vehicle orientation is not restricted to planar motion, the

magnetometer is typically coupled with a tilt sensor to provide a horizontal north vector

independent of the vehicle orientation.
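The tilt-compensated north vector just described can be computed directly. The sketch below is a generic illustration, not this work's implementation: it assumes a body frame with x forward, y right, z down, roll and pitch in radians, raw magnetometer readings in any consistent units, and it omits hard- and soft-iron calibration; the function name is hypothetical.

```python
import math

def tilt_compensated_heading(mx, my, mz, roll, pitch):
    """Magnetic heading in radians, measured clockwise from magnetic
    north, for an x-forward, y-right, z-down body frame. The measured
    field is de-rotated by roll and pitch into the horizontal plane
    before the heading angle is taken."""
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    return math.atan2(-yh, xh) % (2.0 * math.pi)
```

Magnetic declination must still be added to convert the magnetic heading to a true heading.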

There are several commercially available magnetometer sensors. The MicroMag3 from

PNI Corp. provides magnetic field measurements in three axes with a digital serial peripheral

interface (SPI), and is shown in Figure 2-4.

Figure 2-4. MicroMag3 magnetometer sensor from PNI Corp.

The Honeywell Corporation also manufactures a line of magnetic field detection sensors.

These products vary from analog linear/vector sensors to integrated digital compass devices.

The HMC1053 from Honeywell is a three-axis magneto-resistive sensor for multi-axial magnetic

field detection and is shown in Figure 2-5.

Figure 2-5. HMC1053 tri-axial analog magnetometer from Honeywell

Accelerometer

An accelerometer measures the acceleration of the device in single or multiple

measurement axes. MEMS based accelerometers provide accurate and inexpensive devices for

measurement of acceleration. The ADXL330 from Analog Devices Inc. provides analog three

axis acceleration measurements in a small surface mount package shown in Figure 2-6.

Figure 2-6. ADXL330 tri-axial SMT accelerometer from Analog Devices Inc.

A two- or three-axis accelerometer can be used as a tilt sensor. The off-horizontal angles

can be determined by measuring the projection of the gravity vector onto the sensor axes. These

measurements relate to the roll and pitch angle of the device and when properly compensated to

account for effects from vehicle dynamics, can provide accurate orientation information.
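The gravity-projection idea above can be sketched as follows. This is a minimal illustration with hypothetical names, assuming a static sensor whose z axis reads +1 g when level; vibration is smoothed here with a simple one-pole low-pass filter, not the compensation scheme developed later in this work.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Roll and pitch (radians) from a static 3-axis accelerometer
    reading, taken as the gravity vector expressed in body axes."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def low_pass(prev, new, alpha=0.05):
    """One-pole low-pass filter to reject high-frequency vibration
    before the tilt angles are computed."""
    return prev + alpha * (new - prev)
```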

Rate Gyro

A rate gyro is a device that measures the angular time rate of change about a single axis or

multiple axes. The ADXRS150 is a single axis MEMS rate gyro manufactured by Analog
Devices Inc. which provides analog measurements of the angular rate of the device and is shown

in Figure 2-7.

Figure 2-7. ADXRS150 rate gyro from Analog Devices Inc.

This device uses the Coriolis effect to measure the angular rate of the device. An

internally resonating frame in the device is coupled with capacitive pickoff elements. The

response of the pickoff elements changes with the angular rate. This signal is then conditioned

and amplified. When coupled with an accelerometer, these devices allow for enhanced

orientation solutions.
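One common way to combine the two sensors is a complementary filter: the gyro rate is integrated for smooth short-term tracking, while the accelerometer-derived angle corrects long-term drift. The sketch below is a generic single-axis illustration, not the estimator used in this work; the function name and blend factor are hypothetical.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis attitude update: integrate the gyro rate for the
    high-frequency part and blend in the accelerometer angle with
    weight (1 - alpha) to bound long-term drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

With alpha near 1 the gyro dominates at high frequency; the small accelerometer weight slowly pulls the estimate back toward the absolute tilt angle, bounding the drift caused by gyro bias.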

Unmanned Rotorcraft Modeling

For this research, the aircraft operating region will mostly be in hover mode. The flight

characteristics of the full flight envelope are very complex and involve extensive dynamic,

aerodynamic, and fluid mechanics analysis. Previously, researchers have performed extensive

instrumentation of a Yamaha R-50 remote piloted helicopter [6]. These researchers outfitted a

Yamaha R-50 helicopter with a sensor suite for in-flight measurements of rotor blade motion and

loading. The system was equipped with sensors along the length of the main rotor blades,

measuring strain, acceleration, tilt, and position. This research was unique due to the detail of

instrumentation not to mention the difficulties of instrumenting rotating components. This work

provided structural, modal, and load characteristics for this airframe and demonstrates the

extensive lengths required for obtaining in-flight aircraft properties. In addition, extensive work

has been conducted in system identification for the Yamaha R-50 and the Xcell 60 helicopters
[7,8,9]. These researchers performed extensive frequency response based system identification

and flight testing and compared modeling results with that of the scaled dynamics of the UH-1H

helicopter. These researchers have conducted an extensive analysis of small unmanned

helicopter dynamic equations and system identification. This work has resulted in complete

dynamic modeling of a model scale helicopter. These results showed great promise in that they

demonstrated a close relation between the UH-1H helicopter dynamics and the tested aircraft.

This research also showed that the aircraft modeling technique used was valid and that the

system identification techniques used for larger rotorcraft were extensible to smaller rotorcraft.

Other researchers present a more systems level approach to the aircraft automation

discussion [10]. They present the instrumentation equipment and architecture, and present the

modeling and simulation derivations. They go on to present their work involving hardware-in-

the-loop simulation and image processing.

Unmanned Rotorcraft Control

Many researchers have become actively involved in the control and automation of

unmanned rotorcraft. The research has involved numerous controls topics including robust

controller design, fuzzy control, and full flight envelope control.

Robust H∞ controllers have been developed using Loop Shaping and Gain-Scheduling to

provide rapid and reliable high bandwidth controllers for the Yamaha R-50 UAV [11,12]. In this

research, the authors sought to incorporate the use of high-fidelity simulation modeling into the

control design to improve performance. Coupled with the use of multivariable control design

techniques, they also sought to develop a controller that would provide fast and robust controller

performance that could better utilize the full flight envelope of small unmanned helicopters.

Anyone who has watched experienced competition-level Radio Controlled (RC) helicopter pilots
during flight has observed the awesome capabilities of small RC helicopters during normal and
inverted flight. It is these capabilities that draw researchers

towards using helicopters for their research. But with increased capabilities comes increased

complexities in aircraft mechanics and dynamics. These researchers have attempted to

incorporate the synergic use of a high-fidelity aircraft model with robust multivariable control

strategies and have validated their findings by implementing and flight testing their control

algorithms on their testing aircraft. Also, H∞ controller design has been applied to highly

flexible aircraft [13]. As will be shown later, helicopter airframes are significantly prone to

failures caused by vibration modes. Disastrous consequences can occur if these vibration modes

are not considered and compensated. In this research, a highly flexible aircraft model is used for

control design and validation. The controller is specifically designed to compensate for the high

flexibility of the airframe. The authors present the aircraft model and uncertainties and discuss

the control law synthesis algorithm. These results demonstrate the meshing of the aircraft

structure modeling/analysis and the control design/stability. This concept is important not only

from a system performance perspective but also from a safety perspective. As UAVs become

more prevalent in domestic airspace, the public can benefit from the improved system safety

provided by more sophisticated modeling and analysis techniques.

Previous researchers have also conducted research on control optimization for small

unmanned helicopters [14]. In this research, the authors focus on the problem of attitude control

optimization for a small-scale unmanned helicopter. By using an identified model of the

helicopter system that incorporates the coupled rotor/stabilizer/fuselage dynamic effects, they

improve the overall model accuracy. This research is unique in that it incorporates the stabilizer

bar dynamic effects commonly not included in previous work. The system model is validated by










performing flight tests using a Yamaha RMax helicopter test-bed system. They go on to

compensate for the performance reduction induced by the stabilizer bar and optimize the

Proportional Derivative (PD) attitude controller using an established control design methodology

with a frequency response envelope specification.










CHAPTER 3
EXPERIMENTAL TESTING PLATFORMS

Electronics and Sensor Payloads

In order to perform testing and evaluation of the theory and concepts involved in this

research, several electronics and sensor payloads were developed. The purpose of these

payloads was to provide perception and aircraft state measurements and onboard processing

capabilities. These systems were developed to operate modularly and enable transfer of the

payload to different aircraft. The payloads were developed with varying capabilities and sizes.

The host aircraft for these payloads ranged from a 6" fixed wing micro-air vehicle to a 3.1 meter

rotor diameter agricultural mini-helicopter.

First Helicopter Electronics and Sensor Payload

The first helicopter electronics and sensor payload was constructed to provide an initial

testing platform to ensure proper operation of the electronics and aircraft during flight. The

system schematic is shown in Figure 3-1.

(Schematic blocks: Imaging: digital stereovision cameras; Communication: wireless Ethernet; CPU; Data storage: laptop hard drive; Power: DC/DC converters, LiPo battery)


Figure 3-1. First helicopter payload system schematic










The system consisted of five subsystems:

1. Main processor
2. Imaging
3. Communication
4. Data storage
5. Power
The main processor provides the link between all of the sensors, the data storage device,

and the communication equipment. The imaging subsystem consists of a Videre stereovision

system linked via two FireWire connections. The data storage subsystem consists of a 40 GB

laptop hard drive that hosts a Linux operating system and was used for sensor data storage.

The power subsystem consists of a 12V to 5V DC to DC converter, 12V power regulator and a

3Ah LiPo battery pack. The power regulators condition and supply power to all electronics. The

LiPo battery pack served as the main power source and was selected based on the low weight

and high power density of the LiPo battery chemistry. The first helicopter payload attached to

the aircraft is shown in Figure 3-2.























Figure 3-2. First payload mounted on helicopter





























The first prototype system was equipped on the aircraft and was tested during flight.


Although image data could be gathered during flight, it was found that the laptop hard

drive could not withstand the vibration of the aircraft. Figure 3-3 shows in-flight testing of the

first prototype payload.


Figure 3-3. Helicopter testing with first payload

Second Helicopter Electronics and Sensor Payload

The payload design was refined in order to provide a more robust testing platform for this

research. In order to improve the designs, vibration isolation of the payload from the aircraft was


required as well as a data storage method that could withstand the harsh environment onboard

the aircraft. The system schematic for the second prototype payload is shown in Figure 3-4.


(Schematic blocks: Imaging: digital stereovision cameras; Data storage: two compact flash drives; Communication: wireless Ethernet; industrial CPU; Pose sensors: OEM Garmin GPS, digital compass; Power: DC/DC converters, LiPo battery)

Figure 3-4. Second helicopter payload system schematic










The system consisted of six subsystems:

1. Main processor
2. Imaging
3. Pose sensors
4. Communication
5. Data Storage
6. Power
The second prototype payload contained components similar to those of the first prototype, but

instead of a laptop hard drive it utilized two compact flash drives for storage, and in addition two

pose sensors were added. The OEM Garmin GPS provided global position, velocity, and altitude

data at 5Hz. The digital compass provided heading, roll, and pitch angles at 30Hz. The second

prototype payload is shown in Figure 3-5.


Figure 3-5. Second payload mounted to helicopter












Flight tests showed that the second payload could reliably collect image and pose data


during flight and maintain wireless communication at all times. Figure 3-6 shows the second


prototype payload equipped on the aircraft during flight testing.


















Figure 3-6. Flight testing with second helicopter payload

Third Helicopter Electronics and Sensor Payload


The helicopter electronics and sensor payload was redesigned slightly to include a high


accuracy differential GPS (Figure 3-7). This system has a vendor stated positioning accuracy of


2 cm in differential mode and allows precise helicopter positioning. This system further


improves the overall system performance and allows for comparison of the normal versus RT2


differential GPS systems.


(Schematic blocks: Imaging: digital stereovision cameras; Communication: wireless Ethernet; industrial CPU; Data storage: two compact flash drives; Pose sensors: Novatel RT2 differential GPS, digital compass; Power: DC/DC converters, LiPo battery)

Figure 3-7. Third helicopter payload system schematic












Micro Air Vehicle Embedded State Estimator and Control Payload


An embedded state estimator and control payload was developed to support the Micro Air


Vehicle research being performed at the University of Florida. This system provides control


stability and video data. The system schematic is shown in Figure 3-8.


(Schematic blocks: Imaging: CMOS camera, RF video transmitter; Communication: Aerocomm 900 MHz RF modem; Atmel Mega128 processor; Pose sensors: two-axis accelerometer, altitude and airspeed pressure sensors; Power: DC/DC converters, LiPo battery)

Figure 3-8. Micro-Air Vehicle embedded state estimator and control system schematic

Testing Aircraft

UF Micro Air Vehicles


Several MAVs have been developed for reconnaissance and control applications. This


platform provides a payload capability of < 30 grams with a wingspan of 6" (Figure 3-9). This


system is a fixed wing aircraft with 2-3 control surface actuators and an electric motor. System


development for this platform requires small size and weight, and low power consumption.


Figure 3-9. Six inch micro air vehicle









ECO 8

This aircraft was the first helicopter built in the UF laboratory. The aircraft is powered by

a brushed electric motor with an eight cell nickel cadmium battery pack. The aircraft is capable

of flying for approximately 10 minutes under normal flight conditions. This system has a

payload capacity of less than 60 grams and uses CCPM swashplate mixing; the aircraft is shown in Figure 3-10.

















Figure 3-10. Eco 8 helicopter

Miniature Aircraft Gas Xcell

A Miniature Aircraft Gas Xcell was the first gas powered helicopter purchased for testing

and experimentation (Figure 3-11). This aircraft is equipped with a two stroke gasoline engine,

740 mm main rotor blades, and has an optimal rotor head speed of 1800 rpm. The payload

capacity is approximately 15 lbs with a runtime of 20 minutes.













Figure 3-11. Miniature Aircraft Gas Xcell










Bergen Industrial Twin

A Bergen Industrial Twin was purchased for testing with heavier payloads (Figure 3-12).

This aircraft is equipped with a dual cylinder two stroke gasoline engine, 810 mm main rotor

blades, and has an optimal rotor head speed of 1500 rpm. The payload capacity is approximately

25 lbs with a runtime of 30 minutes.




















Figure 3-12. Bergen industrial twin helicopter

Yamaha RMAX

Several agricultural Yamaha RMAX helicopters were purchased by the AFRL robotics

research laboratory at Tyndall Air Force base in Panama City, Florida. The aircraft is shown in

Figure 3-13. This system has a two-stroke engine, internal power generation, and a control

stabilization system. It has a 60 lb payload capability. The system is typically used

for small area pesticide and fertilizer spraying.

These aircraft were used to conduct various experiments involving remote sensing, sensor

noise analysis, system identification, and various applied rotorcraft tasks. These experiments

and their results will be discussed in the subsequent chapters. Each aircraft has varying costs,










payload capabilities, and runtimes. As with the various sensors available for UAV research, the

aircraft should be selected to suit the needs of the particular project or task.


Figure 3-13. Yamaha RMAX helicopter









CHAPTER 4
GEO-POSITIONING OF STATIC OBJECTS USING MONOCULAR CAMERA
TECHNIQUES

Two derivations were performed which allowed for the global coordinates of an object in

an image to be found. Both derivations perform the transformation from a 2D coordinate system

referred to as the image coordinate system to the 3D global coordinate system. The first

derivation utilizes a simplified camera model and calculates the position of the static object using

the concept of the intersection of a line and a plane. The second derivation utilizes intrinsic and

extrinsic camera parameters and uses projective geometry and coordinate transformations.

Simplified Camera Model and Transformation

Simple Camera Model

The cameras were modeled by linearly scaling the horizontal and vertical projection angle

with the x and y position of the pixel respectively as illustrated in Figure 4-1. This allowed for

the relative angle of the static object to be calculated with respect to a coordinate system fixed in

the aircraft.









Figure 4-1. Image coordinates to projection angle calculation










Coordinate Transformation

A coordinate transformation is performed on the static object location from image

coordinates to global coordinates as shown in Figure 4-2. The image data provides the relative

angle of the static object with respect to the aircraft reference frame. In order to find the position

of the static object, a solution of the intersection of a line and a plane was used.

(Diagram labels: helicopter reference frame, global reference frame)


Figure 4-2. Diagram of coordinate transformation


The equation of a plane that is used for this problem is

Ax + By + Cz + D = 0    (4-1)

where x, y, and z are the coordinates of a point in the plane.

The equation of a line used in this problem is

p = p1 + u (p2 - p1)    (4-2)

where p1 and p2 are points on the line.

Substituting (4-2) into (4-1) results in the solution

u = (A x1 + B y1 + C z1 + D) / (A (x1 - x2) + B (y1 - y2) + C (z1 - z2))    (4-3)

where x1, y1, and z1 are the coordinates of point p1, and x2, y2, and z2 are the coordinates of point

p2.

For this problem the ground plane is defined in the global reference frame by A=0, B=0,

C=1, and D = -(ground elevation). The point p1 is the focal point of the camera and it is determined

in the global reference frame based on the sensed GPS data. The point p2 is calculated in the

global reference frame as equal to the coordinates of p1 plus a unit distance along the static

object projection ray. This is known from the static object image angle and the camera's

orientation as measured by attitude and heading sensors. In other words the direction of the

static object projection ray in the global reference frame was found by transforming the

projection vector from the aircraft to the static object as measured in the aircraft frame to the

global frame. This entailed using a downward vector in the aircraft frame and rotating about the

yaw, pitch, and roll axes by the projection angles and pose angles. The rotation matrices are

1R2 = [cos φ, -sin φ, 0; sin φ, cos φ, 0; 0, 0, 1]    (4-4)

2R3 = [cos θ, 0, sin θ; 0, 1, 0; -sin θ, 0, cos θ]    (4-5)

3R4 = [1, 0, 0; 0, cos γ, -sin γ; 0, sin γ, cos γ]    (4-6)

where

φ = yaw of the aircraft

θ = pitch of the aircraft plus the projection pitch angle

γ = roll of the aircraft plus the projection roll angle.









The downward vector r = (0, 0, -1)^T was transformed using the compound rotation matrix

1R4 = 1R2 2R3 3R4    (4-7)

The new projection vector was found as

r' = 1R4 r    (4-8)

where r is the projection ray measured in the aircraft reference frame and r' is the projection ray

as measured in the global reference frame. Using the solution found for the intersection of a line

and a plane and using the aircraft position as a point on the line, the position of the static object

in the global reference frame was found. Thus, for each object identified in an image, the

coordinates of p1 and p2 are determined in the global reference frame and (4-2) and (4-3) are then

used to calculate the position of the object in the global reference frame.
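The procedure above can be sketched in code. This is a minimal illustration, not the payload software; the function and variable names are my own, and the rotation order follows the yaw (Z), pitch (Y), roll (X) sequence of Equations 4-4 through 4-8:

```python
import numpy as np

def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(gamma):
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def geolocate(p_aircraft, yaw, pitch, roll, ground_z=0.0):
    """Intersect the object projection ray with the ground plane z = ground_z.

    The pitch and roll arguments already include the per-pixel projection
    angles added to the aircraft pose angles, as described in the text.
    """
    # Rotate the downward body-frame ray r = (0, 0, -1)^T into the global frame.
    R = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
    ray = R @ np.array([0.0, 0.0, -1.0])
    p1 = np.asarray(p_aircraft, dtype=float)  # camera focal point (from GPS)
    p2 = p1 + ray                             # unit distance along the ray
    # Ground plane A=0, B=0, C=1, D=-ground_z; Equation 4-3 gives u.
    u = (p1[2] - ground_z) / (p1[2] - p2[2])
    return p1 + u * (p2 - p1)                 # Equation 4-2

target = geolocate([10.0, 5.0, 100.0], yaw=0.0, pitch=0.0, roll=0.0)
print(target)  # point on the ground directly below the aircraft
```

For a level aircraft looking straight down, the solution degenerates to the ground point directly beneath the camera, which makes a convenient sanity check.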

Improved Techniques for Geo-Positioning of Static Objects

A precise camera model and an image to global coordinate transformation were developed.

This involved finding the intrinsic and extrinsic camera parameters of the camera system

attached to the aerial vehicle. A relation between the normalized pixel coordinates and

coordinates in the projective coordinate plane was used:

m = (1/Z_C) M    (4-9)

The normalized pixel coordinate vector m and the projective plane coordinate vector M

are related using Equation 4-9 and form the projection relationship between points in the image

plane and points in the camera reference frame as shown in Figure 4-3, where

m = [u_n; v_n; 1]    (4-10)

M = [X_C; Y_C; Z_C]    (4-11)

Figure 4-3. Normalized focal and projective planes

The transformation from image coordinates to global coordinates was determined using the

normalized pixel coordinates, and the camera position and orientation with respect to the global

coordinate system (Figure 4-4). The transformation of a point M expressed in the camera

reference frame C to a point expressed in the global frame is shown in Equation 4-12.

^G M = ^G T_C ^C M    (4-12)

[X_G; Y_G; Z_G; 1] = [^G R_C, ^G P_Co; 0 0 0, 1] [X_C; Y_C; Z_C; 1]    (4-13)

Dividing both sides of Equation 4-13 by Z_C and substituting Z_G = 0 (the elevation of the

camera is evaluated as the height above ground level, and the target location exists on

the Z_G = 0 global plane) results in Equation 4-14.





Figure 4-4. Relation between a point in the camera and global reference frames

[X_G/Z_C; Y_G/Z_C; 0; 1/Z_C] = [^G R_C, ^G P_Co; 0 0 0, 1] [X_C/Z_C; Y_C/Z_C; 1; 1/Z_C]    (4-14)

Substituting X_C/Z_C = u_n and Y_C/Z_C = v_n:

[X_G/Z_C; Y_G/Z_C; 0; 1/Z_C] = [^G R_C, ^G P_Co; 0 0 0, 1] [u_n; v_n; 1; 1/Z_C]    (4-15)

This leads to three equations and three unknowns X_G, Y_G, Z_C:

X_G/Z_C = R_11 u_n + R_12 v_n + R_13 + ^G P_Cox / Z_C    (4-16)

Y_G/Z_C = R_21 u_n + R_22 v_n + R_23 + ^G P_Coy / Z_C    (4-17)

0 = R_31 u_n + R_32 v_n + R_33 + ^G P_Coz / Z_C    (4-18)

where the scalar R_ij represents the element in the ith row and jth column of the ^G R_C matrix.

Using Equations 4-16, 4-17, and 4-18, Z_C, X_G, and Y_G can be determined explicitly:

Z_C = -^G P_Coz / (R_31 u_n + R_32 v_n + R_33)    (4-19)

X_G = Z_C (R_11 u_n + R_12 v_n + R_13) + ^G P_Cox    (4-20)

Y_G = Z_C (R_21 u_n + R_22 v_n + R_23) + ^G P_Coy    (4-21)

Equations 4-20 and 4-21 provide the global coordinates of the static object.
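The closed-form solution of Equations 4-19 through 4-21 translates directly into code. A small sketch; the nadir-looking rotation matrix below is an illustrative example, not an actual flight configuration:

```python
import numpy as np

def target_position(R, P, u_n, v_n):
    """Global target position on the Z_G = 0 plane (Equations 4-19 to 4-21).

    R is the camera-to-global rotation, P the camera position in the global
    frame, and (u_n, v_n) the normalized pixel coordinates of the target.
    """
    zc = -P[2] / (R[2, 0] * u_n + R[2, 1] * v_n + R[2, 2])      # Eq. 4-19
    xg = zc * (R[0, 0] * u_n + R[0, 1] * v_n + R[0, 2]) + P[0]  # Eq. 4-20
    yg = zc * (R[1, 0] * u_n + R[1, 1] * v_n + R[1, 2]) + P[1]  # Eq. 4-21
    return xg, yg, zc

# Example: camera 50 m above the origin looking straight down, camera z axis
# aligned with -Z_G (a 180-degree rotation about the x axis).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
P = np.array([0.0, 0.0, 50.0])
xg, yg, zc = target_position(R, P, 0.1, 0.0)
print(xg, yg, zc)  # target about 5 m along +X_G, at a range of 50 m
```

Note that Z_C comes out as the positive range to the ground point, which is the scale factor that the normalized pixel coordinates alone cannot supply.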

Camera Calibration

In order to calculate the normalized pixel coordinates using raw imaging sensor data, a

calibration procedure is performed using a camera calibration toolbox for MATLAB [15]. The

calibration procedure determines the extrinsic and intrinsic parameters of the camera system.

During the calibration procedure, several images are used with checkerboard patterns of specific

size that allow for the different parameters to be estimated as shown in Figure 4-5.

The extrinsic parameters define the position and orientation characteristics of the camera

system. These parameters are affected by the mounting and positioning of the camera relative to

the body fixed coordinate system.




























Figure 4-5. Calibration checkerboard pattern

The intrinsic parameters define the optic projection and perspective characteristics of the

camera system. These parameters are affected by the camera lens properties, imaging sensor

properties, and lens/sensor placement properties. The camera lens properties are generally

characterized by the focal length and prescribed imaging sensor size. The focal length is a

measure of how strongly the lens focuses the light energy. This in essence correlates to the zoom

of the lens given a fixed sensor size and distance. The imaging sensor properties are generally

characterized by the physical size, and horizontal/vertical resolution of the imaging sensor.

These properties help to define the dimensions and geometry of the image pixels. The

lens/sensor placement properties are generally characterized by the misalignment of the lens and

image sensor, and the lens to sensor planar distance. For our analysis we are mostly concerned

with determining the intrinsic parameters of the camera system. These parameters are used for

calculating the normalized pixel coordinates given the raw pixel coordinates.

The intrinsic parameters that are used for generating the normalized pixel coordinates are

the focal length, principal point, skew coefficient, and image distortion coefficients. The focal










length, as described earlier, estimates the linear projection of points observed in space to the focal

plane. The focal length has components in the x and y axes and does not assume these values are

equal. The principal point estimates the center pixel position. All normalized pixel coordinates

are referenced to this point. The skew coefficient estimates the angle between the x and y axes

of each pixel. In some instances the pixel geometry is not square or even rectangular. This

coefficient describes how "off-square" the pixel x and y axes are and allows for compensation.

The image distortion coefficients estimate the radial and tangential distortions typically caused

by the camera lens. Radial distortion causes a changing magnification effect at varying radial

distances. These effects are apparent when a straight line appears to be curved through the

camera system. The tangential distortions are caused by poor centering or defects of the lens

optics. These cause the displacement of points perpendicular to the radial imaging field.



Figure 4-6. Calibration images









The camera calibration toolbox allows for all of the intrinsic parameters to be estimated

using several images of the predefined checkerboard pattern. Once the calibration procedure is

completed, the intrinsic parameters are used in the geo-positioning algorithm. Selections of

images were used that captured the checker pattern at different ranges and orientations as shown

in Figure 4-6.

The boundaries of the checker pattern were then selected manually for each image. The

calibration algorithm used the gradient of the pattern to then find all of the vertices of the

checkerboard as shown in Figure 4-7.



















Figure 4-7. Calibration images

Once the boundaries for all of the images were selected, the algorithm calculated the

intrinsic camera parameter estimates using a gradient descent search. Using the selected images

the following parameters were calculated:

Focal length: fc = [1019.52796 1022.12290] ± [20.11515 20.62667]

Principal point: cc = [645.66333 527.72943] ± [13.60462 10.92129]

Skew: alpha_c = [0.00000] ± [0.00000] => angle of pixel axes = 90.00000 ± 0.00000 degrees

Distortion: kc = [-0.17892 0.13875 -0.00128 0.00560 0.00000] ± [0.01419 0.02983 0.00158 0.00203 0.00000]

Pixel error: err = [0.22613 0.14137]
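These calibrated intrinsics are what convert a raw pixel coordinate into the normalized coordinates (u_n, v_n) used by the geo-positioning equations. A minimal sketch using the values above; lens distortion compensation is omitted for brevity, and the full model would first invert the radial and tangential terms described by kc:

```python
# Calibrated intrinsic parameters reported above (distortion omitted).
fc = (1019.52796, 1022.12290)  # focal length, pixels
cc = (645.66333, 527.72943)    # principal point, pixels
alpha_c = 0.0                  # skew coefficient

def normalize_pixel(u, v):
    """Convert a raw pixel coordinate (u, v) to normalized coordinates."""
    v_n = (v - cc[1]) / fc[1]
    u_n = (u - cc[0]) / fc[0] - alpha_c * v_n
    return u_n, v_n

print(normalize_pixel(645.66333, 527.72943))  # -> (0.0, 0.0): the principal point
```

A pixel one focal length to the right of the principal point maps to u_n = 1, which corresponds to a ray 45 degrees off the optical axis.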

Geo-Positioning Sensitivity Analysis

In this section the sensitivity of the position solution is derived based on the measurable

parameters used in the geo-positioning algorithm. This analysis will show the general sensitivity

of the positioning solution, and also the sensitivity at common operating conditions.

Equation 4-15 is used to determine the global position of the target based on the global

position and orientation of the camera, and the normalized target pixel coordinates. Multiplying

Equation 4-15 through by Z_C produces:

[X_G; Y_G; 0; 1] = [^G R_C, ^G P_Co; 0 0 0, 1] [Z_C u_n; Z_C v_n; Z_C; 1]    (4-22)

Since the geo-positioning coordinates are of primary concern for the sensitivity analysis,

Equation 4-22 is reduced to the form:

[X_G; Y_G] = Z_C [R_11, R_12, R_13, ^G P_Cox; R_21, R_22, R_23, ^G P_Coy] [u_n; v_n; 1; 1/Z_C]    (4-23)

Equation 4-23 is rewritten in the form:

[X_G; Y_G] = z_C A b    (4-24)

where

A = [R_11, R_12, R_13, ^G P_Cox; R_21, R_22, R_23, ^G P_Coy]    (4-25)

b = [u_n; v_n; 1; 1/z_C]    (4-26)



The geo-positioning process is modeled by assuming there are some errors in the

parameters used in the calculation. The parameter vector is defined below:

ξ = [^G P_Cox, ^G P_Coy, ^G P_Coz, φ, θ, γ, u_n, v_n]^T    (4-27)

The modeled process is shown below:

[X_G; Y_G]_actual = ẑ_C Â b̂ - e    (4-28)

where the hat denotes a quantity evaluated at the perturbed parameter vector

ξ̂ = [^G P_Cox + δ(^G P_Cox), ^G P_Coy + δ(^G P_Coy), ^G P_Coz + δ(^G P_Coz), φ + δ(φ), θ + δ(θ), γ + δ(γ), u_n + δ(u_n), v_n + δ(v_n)]^T    (4-29)

The positioning error from Equation 4-28 reduces to:

e = ẑ_C Â b̂ - [X_G; Y_G]    (4-30)

In order to establish a metric from measurement of the error for the sensitivity analysis, the

inner product of Equation 4-30 is used.



e^T e = (ẑ_C Â b̂ - z_C A b)^T (ẑ_C Â b̂ - z_C A b)    (4-31)

where Equation 4-24 has been used to substitute for [X_G; Y_G], and the hat denotes evaluation at

the perturbed parameters. Expanding, the generic form of the error variance becomes:

e^T e = ẑ_C² b̂^T Â^T Â b̂ - 2 ẑ_C z_C b̂^T Â^T A b + z_C² b^T A^T A b    (4-32)

The partial derivative of the generic error variance with respect to an arbitrary parameter error δ(ξ) is

∂(e^T e)/∂δ(ξ) = ∂(ẑ_C² b̂^T Â^T Â b̂)/∂δ(ξ) - 2 (∂(ẑ_C b̂^T Â^T)/∂δ(ξ)) z_C A b    (4-33)

In order to reduce the complexity of the analysis and to provide a more concise

representation of the effects of the parameter errors on the error variance, and without loss of

generality, the target position is set as the origin of the global coordinate system. Equation 4-30

reduces to:

e = ẑ_C Â b̂    (4-34)

e^T e = ẑ_C² b̂^T Â^T Â b̂    (4-35)

This quantity equates to the error variance of the positioning solution given the system

configuration and error values. It is desirable to determine the effects of errors in each parameter

used in the geo-positioning solution. Hence the partial derivative of the inner product is

calculated with respect to each parameter error.

The partial derivative of the reduced error variance is shown for an arbitrary parameter

error δ(ξ) (the hats are dropped for compactness):

∂(e^T e)/∂δ(ξ) = 2 z_C (∂z_C/∂δ(ξ)) b^T A^T A b + z_C² (∂b^T/∂δ(ξ)) A^T A b + z_C² b^T (∂A^T/∂δ(ξ)) A b + z_C² b^T A^T (∂A/∂δ(ξ)) b + z_C² b^T A^T A (∂b/∂δ(ξ))    (4-36)








Equation 4-25 is restated below along with the partial derivatives with respect to ^G P_Cox,

^G P_Coy, ^G P_Coz, φ, θ, γ, u_n, and v_n:

A = [R_11, R_12, R_13, ^G P_Cox; R_21, R_22, R_23, ^G P_Coy]    (4-25)

∂A/∂δ(^G P_Cox) = [0, 0, 0, 1; 0, 0, 0, 0]    (4-37)

∂A/∂δ(^G P_Coy) = [0, 0, 0, 0; 0, 0, 0, 1]    (4-38)

∂A/∂δ(^G P_Coz) = [0, 0, 0, 0; 0, 0, 0, 0]    (4-39)

∂A/∂δ(φ) = [∂R_11/∂φ, ∂R_12/∂φ, ∂R_13/∂φ, 0; ∂R_21/∂φ, ∂R_22/∂φ, ∂R_23/∂φ, 0]    (4-40)

∂A/∂δ(θ) = [∂R_11/∂θ, ∂R_12/∂θ, ∂R_13/∂θ, 0; ∂R_21/∂θ, ∂R_22/∂θ, ∂R_23/∂θ, 0]    (4-41)

∂A/∂δ(γ) = [∂R_11/∂γ, ∂R_12/∂γ, ∂R_13/∂γ, 0; ∂R_21/∂γ, ∂R_22/∂γ, ∂R_23/∂γ, 0]    (4-42)

∂A/∂δ(u_n) = [0, 0, 0, 0; 0, 0, 0, 0]    (4-43)

∂A/∂δ(v_n) = [0, 0, 0, 0; 0, 0, 0, 0]    (4-44)

Equation 4-26 is restated below along with the partial derivatives with respect to the same

parameters. Only the fourth element of b, 1/z_C, depends on the camera pose, so each partial

derivative takes the form shown in Equation 4-45:

b = [u_n; v_n; 1; 1/z_C]    (4-26)

∂b/∂δ(ξ) = [∂u_n/∂δ(ξ); ∂v_n/∂δ(ξ); 0; -(1/z_C²) ∂z_C/∂δ(ξ)]    (4-45)

∂b/∂δ(u_n) = [1; 0; 0; -(1/z_C²) ∂z_C/∂δ(u_n)]    (4-46)

∂b/∂δ(v_n) = [0; 1; 0; -(1/z_C²) ∂z_C/∂δ(v_n)]    (4-47)

∂b/∂δ(^G P_Cox) = [0; 0; 0; 0]    (4-48)

∂b/∂δ(^G P_Coy) = [0; 0; 0; 0]    (4-49)

∂b/∂δ(^G P_Coz) = [0; 0; 0; -(1/z_C²) ∂z_C/∂δ(^G P_Coz)]    (4-50)

∂b/∂δ(φ) = [0; 0; 0; -(1/z_C²) ∂z_C/∂δ(φ)]    (4-51)

∂b/∂δ(θ) = [0; 0; 0; -(1/z_C²) ∂z_C/∂δ(θ)]    (4-52)

∂b/∂δ(γ) = [0; 0; 0; -(1/z_C²) ∂z_C/∂δ(γ)]    (4-53)



Equation 4-19 is restated below along with the partial derivatives with respect to ^G P_Cox,

^G P_Coy, ^G P_Coz, φ, θ, γ, u_n, and v_n. Let D = R_31 u_n + R_32 v_n + R_33 denote the

denominator of Equation 4-19:

z_C = -^G P_Coz / D    (4-54)

∂z_C/∂δ(^G P_Cox) = 0    (4-55)

∂z_C/∂δ(^G P_Coy) = 0    (4-56)

∂z_C/∂δ(^G P_Coz) = -1/D    (4-57)

∂z_C/∂δ(u_n) = ^G P_Coz R_31 / D²    (4-58)

∂z_C/∂δ(v_n) = ^G P_Coz R_32 / D²    (4-59)

∂z_C/∂δ(φ) = ^G P_Coz (u_n ∂R_31/∂φ + v_n ∂R_32/∂φ + ∂R_33/∂φ) / D²    (4-60)

∂z_C/∂δ(θ) = ^G P_Coz (u_n ∂R_31/∂θ + v_n ∂R_32/∂θ + ∂R_33/∂θ) / D²    (4-61)

∂z_C/∂δ(γ) = ^G P_Coz (u_n ∂R_31/∂γ + v_n ∂R_32/∂γ + ∂R_33/∂γ) / D²    (4-62)

With the partial derivatives for the components of the error inner product defined, the

sensitivity of the error variance can be derived with respect to each parameter error.

The error sensitivity with respect to ^G P_Cox is shown in Equations 4-63 and 4-64.

∂(e^T e)/∂δ(^G P_Cox) = 2 z_C (∂z_C/∂δ(^G P_Cox)) b^T A^T A b + z_C² (∂b^T/∂δ(^G P_Cox)) A^T A b + z_C² b^T (∂A^T/∂δ(^G P_Cox)) A b + z_C² b^T A^T (∂A/∂δ(^G P_Cox)) b + z_C² b^T A^T A (∂b/∂δ(^G P_Cox))    (4-63)

Since ∂z_C/∂δ(^G P_Cox) and ∂b/∂δ(^G P_Cox) vanish (Equations 4-55 and 4-48), substituting

Equation 4-37 reduces this to:

∂(e^T e)/∂δ(^G P_Cox) = z_C² b^T [(∂A^T/∂δ(^G P_Cox)) A + A^T (∂A/∂δ(^G P_Cox))] b    (4-64)

The error sensitivities with respect to the remaining parameters follow the same pattern of

substitution into Equation 4-36. The sensitivity with respect to ^G P_Coy is given in Equations

4-65 and 4-66 (substituting Equations 4-38, 4-49, and 4-56); with respect to ^G P_Coz in

Equations 4-67 and 4-68 (Equations 4-39, 4-50, and 4-57); with respect to φ in Equations 4-69

and 4-70 (Equations 4-40, 4-51, and 4-60); with respect to θ in Equations 4-71 and 4-72

(Equations 4-41, 4-52, and 4-61); with respect to γ in Equations 4-73 and 4-74 (Equations 4-42,

4-53, and 4-62); with respect to u_n in Equations 4-75 and 4-76 (Equations 4-43, 4-46, and

4-58); and with respect to v_n in Equations 4-77 and 4-78 (Equations 4-44, 4-47, and 4-59).

This derivation provides the general sensitivity equations for target geo-positioning from a

UAV. These equations provide the basis for the sensitivity analysis conducted in the following

chapters. These results will be combined with empirically derived sensor data to determine the

parameter significance relative to the induced geo-positioning error.
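A numerical spot-check of these sensitivities can be made by perturbing one parameter of the geo-positioning solution at a time and measuring the induced position error. The sketch below uses illustrative parameter values and an assumed nadir-referenced camera mounting; it is a finite-difference check, not the closed-form equations above:

```python
import numpy as np

def geo_position(params):
    """Target position from Equations 4-19 to 4-21; params packs the
    parameter vector of Equation 4-27."""
    px, py, pz, phi, theta, gamma, un, vn = params
    cz, sz = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    # Assumed mounting: camera optical axis along -Z of the vehicle frame.
    R = Rz @ Ry @ Rx @ np.diag([1.0, -1.0, -1.0])
    zc = -pz / (R[2, 0] * un + R[2, 1] * vn + R[2, 2])
    m = np.array([un, vn, 1.0])
    return np.array([zc * (R[0] @ m) + px, zc * (R[1] @ m) + py])

nominal = np.array([0.0, 0.0, 50.0, 0.1, 0.05, -0.02, 0.01, 0.03])
base = geo_position(nominal)
for i, name in enumerate(["Px", "Py", "Pz", "phi", "theta", "gamma", "u_n", "v_n"]):
    d = np.zeros(8)
    d[i] = 1e-3                      # perturb one parameter at a time
    err = geo_position(nominal + d) - base
    print(f"{name:>5}: |position error| = {np.linalg.norm(err):.4f}")
```

Comparing the printed magnitudes row by row gives an immediate picture of which sensor errors dominate: at altitude, a milliradian of attitude error is amplified by the range, while a millimeter of position error maps through essentially one to one.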


CHAPTER 5
UNMANNED ROTORCRAFT MODELING

In order to derive the equations of motion of the aircraft and to perform further analysis, an

aircraft model was developed based on previous work [7, 8, 9, 10, 16, 17]. For this research the

scope of the rotorcraft mechanics was limited to Bell-Hiller mixing and a flapping rotor head

design. A simplified aircraft model was developed previously [16, 17] for simulation and

controller development. A similar approach will be used here for the derivations.

Mettler et al. [7, 8, 9] use a more complex analysis when deriving their dynamic equations.

Their analysis includes more complex dynamic factors such as fly-bar paddle mixing, main blade

drag/torque effects, and fuselage/stabilizer aerodynamic effects.

The actuator inputs commonly used for control of RC rotorcraft are composed of:

δ_lon: Longitudinal cyclic control

δ_lat: Lateral cyclic control

δ_col: Collective pitch control

δ_rud: Tail rudder pitch control

δ_thr: Throttle control

A body fixed coordinate system was used in order to relate sensor and motion information

in the inertial and relative reference frames. Figures 5-1 and 5-2 show the body fixed coordinate

system.

A transformation matrix was derived which relates the position and orientation of the body

fixed frame to the inertial frame. The orientation of the body fixed frame is related to the inertial

frame using a 3-1-2 rotation sequence. The inertial frame is initially in the North-East-Down

orientation. The coordinate system undergoes a rotation ψ about the Z axis, then a rotation φ

about the X' axis, and then a rotation θ about the Y'' axis. The compound rotation is equated

below in Equation 5-1 and the subsequent rotations are shown in Equations 5-2, 5-3, and 5-4.





















Figure 5-1. Top view of the body fixed coordinate system










Figure 5-2. Side view of the body fixed coordinate system


1R4 = 1R2 2R3 3R4    (5-1)

1R2 = [cos ψ, -sin ψ, 0; sin ψ, cos ψ, 0; 0, 0, 1]    (5-2)

2R3 = [1, 0, 0; 0, cos φ, -sin φ; 0, sin φ, cos φ]    (5-3)

3R4 = [cos θ, 0, sin θ; 0, 1, 0; -sin θ, 0, cos θ]    (5-4)

The final compound rotation matrix is equated below.

1R4 = [CψCθ - SψSφSθ, -SψCφ, CψSθ + SψSφCθ; SψCθ + CψSφSθ, CψCφ, SψSθ - CψSφCθ; -CφSθ, Sφ, CφCθ]

where the notation C_i and S_i represent the cosine and sine of the angle i respectively.

The transformation matrix which converts a point measured in the body fixed frame to the

point measured in the inertial frame is shown in Equation 5-5.

^inertial P = ^inertial R_body ^body P + ^inertial P_Bo    (5-5)

where ^inertial P_Bo represents the position of the body fixed frame origin measured in the inertial

frame.
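The 3-1-2 rotation sequence and the body-to-inertial transformation of Equation 5-5 can be sketched as follows (function names are my own):

```python
import numpy as np

def body_to_inertial(psi, phi, theta):
    """Compound 3-1-2 rotation of Equation 5-1: yaw psi about Z, then roll
    phi about X', then pitch theta about Y''."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0.0], [s(psi), c(psi), 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c(phi), -s(phi)], [0.0, s(phi), c(phi)]])
    Ry = np.array([[c(theta), 0.0, s(theta)], [0.0, 1.0, 0.0], [-s(theta), 0.0, c(theta)]])
    return Rz @ Rx @ Ry

def transform_point(p_body, R, p_origin):
    """Equation 5-5: point in the body frame -> point in the inertial frame."""
    return R @ np.asarray(p_body) + np.asarray(p_origin)

R = body_to_inertial(np.pi / 2, 0.0, 0.0)        # pure 90-degree yaw
p = transform_point([1.0, 0.0, 0.0], R, [10.0, 20.0, -5.0])
print(np.round(p, 6))  # the body x axis now points along inertial +Y
```

With a pure yaw the body x axis rotates into the inertial y direction, a quick check that the rotation order and signs are consistent.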


The lateral and longitudinal motion of the aircraft is primarily controlled by the lateral and

longitudinal cyclic control inputs. For a flapping rotor head, the motions of the main rotor blades

form a disk whose orientation with respect to the airframe is controlled by these inputs. The

orientation of the main rotor disk is illustrated in Figure 5-3:

In this analysis, a represents the lateral rotation of the main rotor blade disk and b

represents the longitudinal rotation of the main rotor blade disk.

Figure 5-3. Main rotor blade angle (front and port side views)

In a report by Heffley and Mnich [17], motion of the main rotor disc is approximated by a

first order system as shown below:

da/dt = -z_a a + δ_lat,  db/dt = -z_b b + δ_lon    (5-6)

where z_a is the lateral cyclic damping coefficient and z_b is the longitudinal cyclic damping

coefficient.
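A first-order lag of this kind is easy to simulate with Euler integration. The gains and inputs below are illustrative assumptions, not identified values for any of the test aircraft:

```python
# Illustrative first-order disk dynamics: the tilt angles a (lateral) and
# b (longitudinal) chase the cyclic inputs with damping z_a and z_b.
# All numeric values here are assumed for illustration only.
z_a, z_b = 8.0, 8.0                 # damping coefficients, 1/s
delta_lat, delta_lon = 0.05, -0.02  # constant cyclic inputs, rad
dt, steps = 0.001, 1000             # 1 s of simulated time
a = b = 0.0
for _ in range(steps):
    a += dt * z_a * (delta_lat - a)  # Euler step of a first-order lag
    b += dt * z_b * (delta_lon - b)
print(round(a, 4), round(b, 4))  # both angles have settled near the inputs
```

After several time constants the disk tilt tracks the steady cyclic input, which is the behavior the first-order approximation is meant to capture.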


The angular velocity as measured in the body fixed frame B can be translated into angular

velocity in the inertial frame by using Equation 5-4:

^G ω = ^G R_B ^B ω,  where ^G R_B = 1R4 and ^B R_G = (^G R_B)^T













Front View Port Side view


Figure 5-4. Main rotor thrust vector









The main rotor induces a moment and linear force on the body of the aircraft. These

induce lateral and longitudinal motion, and roll and pitch rotations of the aircraft. The main rotor

thrust vector Tm is illustrated in Figure 5-4:

The main rotor thrust vector as measured in the body-fixed frame is:

$$ {}^{B}T_{M} = T_{M} \begin{bmatrix} -\sin(b) \\ \sin(a) \\ -\cos(a)\cos(b) \end{bmatrix} \tag{5-7} $$



The equations of motion of the aircraft were derived in the inertial frame using the

following equations:

$$ {}^{G}F = m\,{}^{G}a, \qquad {}^{G}M = \left({}^{G}R_{B}\, I_{B}\, {}^{G}R_{B}^{T}\right) {}^{G}\dot{\omega} \tag{5-8} $$

This derivation has resulted in a simplified helicopter dynamic model. This model

provides a foundation for simulation of the aircraft in the absence of an experimental platform.

This derivation was performed to provide the reader with basic helicopter dynamic principles

and an introduction to helicopter control mechanics. Now that a background on helicopter

mechanics and dynamics has been presented, the next chapter will discuss the use of onboard

sensors and signal processing for aircraft state estimation.









CHAPTER 6
STATE ESTIMATION USING ONBOARD SENSORS

This research proposes to derive and demonstrate the estimation of UGV states using a

UAV. In order to estimate the UGV states, the estimates of the UAV states are required. In this

research, sensor measurements from the UAV will be used to perform the state estimation of the

UAV and UGV. This research is primarily concerned with developing a remote sensing system.

Where it stands out is in how UAV dynamics and state measurements are utilized in passively

determining the states of the UGV.

Attitude Estimation Using Accelerometer Measurements

As discussed earlier, a two or three axis accelerometer can be used for determining the

attitude of an aircraft. A simple equation for determining the roll and pitch angles of an aircraft

using the acceleration measurements in the x and y body fixed axes is shown in Equation 6-1 and

6-2.


$$ roll = \sin^{-1}\!\left(\frac{a_{y}}{g}\right) \tag{6-1} $$

$$ pitch = \sin^{-1}\!\left(\frac{a_{x}}{g}\right) \tag{6-2} $$

where $a_{x}$ and $a_{y}$ are the measured accelerations along the body-fixed x and y axes and $g$ is the gravitational acceleration.
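A direct transcription of these relations is straightforward; a minimal sketch (the axis-to-angle mapping and sign conventions are assumptions that depend on the IMU mounting, and the result is valid only for a quasi-static vehicle):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def roll_pitch_from_accel(ax, ay):
    """Roll and pitch (radians) from body-fixed x/y accelerations.
    The ratio is clamped to [-1, 1] so vibration spikes cannot push
    the argument outside the domain of asin."""
    roll = math.asin(max(-1.0, min(1.0, ay / G)))
    pitch = math.asin(max(-1.0, min(1.0, ax / G)))
    return roll, pitch
```

For a level, stationary sensor both angles evaluate to zero; tilting the body so that gravity projects onto an axis recovers the tilt angle.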

A major problem with using accelerometers for attitude estimation is the effects from high

frequency vibration inherent to rotary wing aircraft. There are several characteristic frequencies

in the rotorcraft system to consider when analyzing the accelerometer signals. The main

characteristic frequencies are the speed of the main rotor blades, tail rotor blades, and

engine/motor. The highest frequency vibration will come from the engine/motor. The main gear

of the power transmission reduces the frequency to the main rotor head by about a factor of 9.8

for the Gas Xcell helicopter. The frequency is then further reduced by the tail rotor transmission









to the tail rotor blades. Any imbalances in the motor/motor fan, transmission gears, and rotor

heads/blades can cause significant vibration. Also any bent or misaligned shafts can cause

vibration in the system.

Due to the speed and number of moving parts in a helicopter, these aircraft have significant

vibration at the engine, main and tail rotor frequencies and harmonics. Extreme care must be

taken to ensure balance and proper alignment of all elements of the drive train. Time taken

balancing and inspecting components can pay off in the long run in system performance. The

airframe and payload structure must be carefully considered. Due to the energy content at

specific frequencies, any structural element with a natural frequency at or around the engine, or

main/tail rotor frequencies or harmonics could produce disastrous effects. Rigid mounting of

payload is highly discouraged as there would be no element other than the aircraft structure to

dissipate the cyclic loading.

Prospective researchers are forewarned that small and large unmanned aircraft systems

should be treated like any other piece of heavy machinery. In this case the payload was rigidly

attached to the base of the aircraft frame. Upon spool-up of the engine the head speed

transitioned into the natural frequency of the airframe with the most flexible component of the

system being the side frames of the aircraft. In less than a second the aircraft entered a resonant

vibration mode. This resulted in a tail-boom strike by the main blades. The main shaft

shattered, projecting the upper main bearing block, which struck the pilot over

thirty feet away. Airframe resonance is particularly dangerous in all rotary aircraft from small

unmanned systems to large heavy lift commercial and military helicopters.

A Fast Fourier Transform (FFT) of the accelerometer measurements shows very specific

spikes on all axes at specific frequencies as shown in Figure 6-1.
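One way to expose such spikes is to evaluate the discrete Fourier transform at a set of candidate frequencies; a self-contained sketch (the sampling rate and frequencies below are illustrative, not the aircraft's actual values):

```python
import math

def dft_magnitude(x, fs, f):
    """Single-frequency DFT magnitude of samples x taken at rate fs (Hz)."""
    n = len(x)
    re = sum(x[i] * math.cos(2.0 * math.pi * f * i / fs) for i in range(n))
    im = sum(x[i] * math.sin(2.0 * math.pi * f * i / fs) for i in range(n))
    return 2.0 * math.hypot(re, im) / n

def dominant_frequency(x, fs, candidates):
    """Candidate frequency (Hz) with the largest spectral magnitude."""
    return max(candidates, key=lambda f: dft_magnitude(x, fs, f))
```

Applied to a signal containing a small 2 Hz motion component and a large 20 Hz vibration component, the 20 Hz spike dominates, mirroring the behavior seen in Figure 6-1.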
























Figure 6-1. Fast Fourier Transform of raw accelerometer data




Figure 6-2. Fast Fourier Transform of raw accelerometer data after low-pass filter

Strategic filtering at the major vibration frequencies can improve the attitude estimates


while still allowing for the aircraft dynamics to be measured. Also by attenuating only specific










frequency bands, the noise can be reduced yet still produce fast signal response. A discrete low-pass

Butterworth IIR filter was used, with a 5 Hz pass band and a 10 Hz stop band, to remove

the high-frequency noise evident between 15 and 25 Hz. The FFT of the accelerometer data after

the low-pass filter is shown in Figure 6-2.
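The exact Butterworth design is not reproduced here; as a simpler stand-in, a first-order IIR low-pass illustrates the same attenuate-above-cutoff behavior (the cutoff and sample rate are example values, not the flight configuration):

```python
import math

def lowpass(x, fs, fc):
    """First-order IIR low-pass, y[n] = a*y[n-1] + (1 - a)*x[n],
    with the pole placed for a cutoff of roughly fc Hz at rate fs."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for sample in x:
        state = a * state + (1.0 - a) * sample
        y.append(state)
    return y
```

With a 5 Hz cutoff at a 100 Hz sample rate, a 20 Hz vibration tone is strongly attenuated while a 2 Hz motion tone passes nearly unchanged, which is the qualitative effect shown in Figure 6-2.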

The FFT of the filtered accelerometer data shows that the high frequency noise is attenuated

beyond 5 Hz, thereby eliminating the major effects caused by the power train or high frequency

electrical interference. Before the low-pass filter was applied, the roll and pitch measurements

were almost unusable as shown in Figure 6-3.




Figure 6-3. Roll and Pitch measurement prior to applying low-pass filter
























After the low-pass filter was applied, the measurements produced much more viable results


as shown in Figure 6-4.





Figure 6-4. Roll and Pitch measurement after applying low-pass filter


These results indicate the importance of proper vehicle maintenance and assembly. More


rigorous balancing and tuning of the vehicle can produce much better system performance and


reduce the work required to compensate for vibration in sensor data.


Heading Estimation Using Magnetometer Measurements


The heading of unmanned ground and air vehicles is commonly estimated by measuring


the local magnetic field of the earth. The magnetic north or compass bearing has been used for

hundreds of years for navigation and mapping. By measuring the local magnetic field, an











estimate of the northern magnetic field vector can be obtained. The error between true north, as

measured relative to latitude and longitude, and magnetic north varies depending on the

location on the globe. For a given location, the variation between true and

magnetic north is known and can be compensated for. Alternative methods for determining the

heading of unmanned systems exist including using highly accurate rate gyros. By precisely


measuring the angular rate of a static vehicle, the angular rate induced from the rotation of the


earth can be used to estimate heading. This requires extremely high precision rate gyros which


are currently too expensive, large, and sensitive for small unmanned systems.


Normally all three axes of the magnetometer would be used for heading estimation, but

because the aircraft does not perform any radical roll or pitch maneuvers, only the lateral


and longitudinal magnetometer measurements are required as shown in Figure 6-5.
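With the aircraft near level, the heading follows directly from the two horizontal field components; a minimal sketch (the sign convention is an assumption that must match the sensor's axis definitions, and magnetic declination is not applied):

```python
import math

def heading_from_mag(mx, my):
    """Tilt-free heading estimate (radians) from the lateral and
    longitudinal magnetic field components, assuming negligible
    roll and pitch."""
    return math.atan2(-my, mx)
```

Using atan2 rather than a ratio-based arctangent keeps the heading quadrant-correct over the full circle.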


Figure 6-5. Magnetic heading estimate. Panels show the lateral magnetic field measurement, the longitudinal magnetic field measurement, and the resulting heading estimate versus time.









UGV State Estimation

The geo-positioning equations derived in the previous chapters are restated below:

$$ {}^{G}P_{t} = z_{c}\,A\,b \tag{6-3} $$

where

$$ A = \begin{bmatrix} {}^{G}R_{C} & {}^{G}P_{CO} \end{bmatrix} \tag{6-4} $$

$$ b = \begin{bmatrix} u_{n} & v_{n} & 1 & 1/z_{c} \end{bmatrix}^{T} \tag{6-5} $$

$$ z_{c} = \frac{-\,{}^{G}P_{CO_{z}}}{R_{31}u_{n} + R_{32}v_{n} + R_{33}} \tag{6-6} $$

Here ${}^{G}R_{C}$ is the camera-to-ground rotation matrix with elements $R_{ij}$, ${}^{G}P_{CO}$ is the camera position measured in the ground frame, and $u_{n}$ and $v_{n}$ are the normalized pixel coordinates of the target.
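Numerically, the target position is obtained by scaling the pixel ray so that it lands on the ground plane z = 0; a sketch (the identity camera rotation and 10 m altitude are illustrative values, and the sign conventions follow the reconstruction above):

```python
def geo_position(R, p_co, un, vn):
    """Project a normalized pixel (un, vn) onto the ground plane z = 0.
    R: 3x3 camera-to-ground rotation, p_co: camera position in the
    ground frame. Uses the z_c scale factor of Eq. 6-6."""
    denom = R[2][0] * un + R[2][1] * vn + R[2][2]
    zc = -p_co[2] / denom
    ray = [R[i][0] * un + R[i][1] * vn + R[i][2] for i in range(3)]
    return [p_co[i] + zc * ray[i] for i in range(3)]
```

By construction the z component of the result is zero, so every computed target point lies on the ground plane regardless of the camera pose.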

By identifying two unique points fixed to the UGV, the direction vector can be defined:

$$ \begin{bmatrix} \cos(\psi) \\ \sin(\psi) \end{bmatrix} = \frac{1}{\left\lVert {}^{G}P_{2} - {}^{G}P_{1} \right\rVert} \begin{bmatrix} x_{G2} - x_{G1} \\ y_{G2} - y_{G1} \end{bmatrix} \tag{6-7} $$

The heading of the vehicle can be found using:

$$ \psi = \operatorname{atan2}\!\left(\sin(\psi), \cos(\psi)\right) \tag{6-8} $$
The kinematic motion of the vehicle can be described by the linear and angular velocity

terms. In the 2D case, the UGV is constrained to move in the x-y plane with only a z component

in the angular velocity vector. Hence the state equations are shown below:

$$ \dot{x} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} v\cos(\psi) \\ v\sin(\psi) \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} \cos(\psi) & 0 \\ \sin(\psi) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \dot{\psi} \end{bmatrix} \tag{6-9} $$
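This planar model can be integrated directly; a minimal Euler-integration sketch (step size and inputs are arbitrary example values):

```python
import math

def propagate(state, v, omega, dt, steps):
    """Euler-integrate the planar unicycle kinematics of Eq. 6-9.
    state = (x, y, psi); v and omega are held constant over the horizon."""
    x, y, psi = state
    for _ in range(steps):
        x += v * math.cos(psi) * dt
        y += v * math.sin(psi) * dt
        psi += omega * dt
    return x, y, psi
```

With zero angular rate the vehicle tracks a straight line, which gives a simple check on the integration.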








In [27] the researchers define the kinematic equations for an Ackermann style UGV.

Using these equations the kinematic equations are restated using our notation:

$$ \begin{bmatrix} \dot{x}_{1} \\ \dot{y}_{1} \end{bmatrix} = \begin{bmatrix} v\cos(\psi) - \frac{L}{2}\dot{\psi}\sin(\psi) \\ v\sin(\psi) + \frac{L}{2}\dot{\psi}\cos(\psi) \end{bmatrix} = \begin{bmatrix} \cos(\psi) & -\frac{L}{2}\sin(\psi) \\ \sin(\psi) & \frac{L}{2}\cos(\psi) \end{bmatrix} \begin{bmatrix} v \\ \dot{\psi} \end{bmatrix} \tag{6-10} $$

Equation 6-10 follows the structure outlined in [28] and is rewritten in the form:

$$ z = \begin{bmatrix} \cos(\psi) & -\frac{L}{2}\sin(\psi) \\ \sin(\psi) & \frac{L}{2}\cos(\psi) \end{bmatrix} x + \vartheta = Hx + \vartheta \tag{6-11} $$

where $z$ is the measurement vector, $x$ is the state vector, and $\vartheta$ is the additive measurement

error. The measurement error can be isolated and the squared error can be written in the form:

$$ \vartheta^{T}\vartheta = \left(z - Hx\right)^{T}\left(z - Hx\right) \tag{6-12} $$

The measurement estimate is written in the form:

$$ \hat{z} = H\hat{x} \tag{6-13} $$

Hence the sum of the squares of the measurement variations $z - \hat{z}$ is represented by:

$$ J = \left(z - H\hat{x}\right)^{T}\left(z - H\hat{x}\right) \tag{6-14} $$

The sum of the squares of the measurement variations is minimized with respect to the state
estimate as shown:

$$ \begin{aligned} \frac{\partial J}{\partial \hat{x}} &= \frac{\partial}{\partial \hat{x}}\left(z - H\hat{x}\right)^{T}\left(z - H\hat{x}\right) \\ 0 &= -2H^{T}\left(z - H\hat{x}\right) \\ 0 &= -2H^{T}z + 2H^{T}H\hat{x} \\ \hat{x} &= \left(H^{T}H\right)^{-1}H^{T}z \end{aligned} \tag{6-15} $$
Therefore the state estimate can be expressed as:

$$ \hat{x} = \left(H^{T}H\right)^{-1}H^{T}z = \begin{bmatrix} 1 & 0 \\ 0 & \frac{4}{L^{2}} \end{bmatrix} \begin{bmatrix} \cos(\psi) & \sin(\psi) \\ -\frac{L}{2}\sin(\psi) & \frac{L}{2}\cos(\psi) \end{bmatrix} z \tag{6-16} $$

Equation 6-16 can be rewritten in the form:

$$ \begin{bmatrix} \hat{v} \\ \hat{\dot{\psi}} \end{bmatrix} = \begin{bmatrix} \cos(\psi) & \sin(\psi) \\ -\frac{2}{L}\sin(\psi) & \frac{2}{L}\cos(\psi) \end{bmatrix} z \tag{6-17} $$
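The closed-form estimator can be verified against the forward measurement model; a sketch with arbitrary example values for the states, heading, and wheelbase:

```python
import math

def estimate_states(z1, z2, psi, L):
    """Least-squares estimate (v, psi_dot) from the two-point
    measurement z = (z1, z2), i.e. the 2x2 inverse applied in
    closed form."""
    c, s = math.cos(psi), math.sin(psi)
    return c * z1 + s * z2, (2.0 / L) * (-s * z1 + c * z2)
```

Running the forward model and then the estimator recovers the original linear and angular velocity exactly, confirming that the 2x2 matrix here is the inverse of H.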
This chapter has discussed the use of onboard sensors for aircraft and ground vehicle state

estimation. These techniques will be used in the following chapter to determine the sensor noise

models for the sensitivity analysis. They will also be used for the validation of the geo-

positioning algorithm and comparison with simulation results.









CHAPTER 7
RESULTS

This chapter presents the results derived from the experiments performed using the

experimental aircraft and payload systems. It also presents the results of

applying this research to several engineering problems.

Geo-Positioning Sensitivity Analysis

The sensitivity of the error variance to the errors of each parameter is highly coupled with

the other parameters in the positioning solution. The mapping of the error variance sensitivity to

the parameters is highly nonlinear. In order to achieve a concise qualitative representation of the

effects of each parameter's error in the positioning solution, a localized sensitivity analysis is

performed. This entails using common operating parameters and experimentally observed values

for the parameter errors. When substituted into the error variance partial differential equations,

the respective sensitivity of each parameter is observed for common testing conditions.

The error models for the various geo-positioning parameters were obtained using

manufacturer specifications and empirically derived noise measurements. The main aircraft used

for empirical noise analysis was the Miniature Aircraft Gas Xcell platform. This aircraft was

equipped with the testing payload discussed previously, and sensor measurements were

recorded with the aircraft on the ground, engine on, and head speed just below

takeoff speed.

The sensor used for measuring the global position of the camera was a WAAS enabled

Garmin 16A model GPS. This GPS provides a 5 Hz positioning solution. The manufacturer

specifications for horizontal and vertical positioning accuracy are less than 3 meters. For the

sensitivity analysis the lateral and longitudinal error distribution was defined using a uniform











radial error distribution bounded by a three meter range. The error distribution parameters for


the horizontal and vertical positioning measurements are stated in Table 7-1.


Parameter                  Value
σ(GP_COx), σ(GP_COy)       3 m
σ(GP_COz)                  1.5 m

Table 7-1. Parameter standard deviations for the horizontal and vertical position



The sensor used for measuring the orientation of the camera was a Microstrain 3DMG


orientation sensor. This sensor provides three-axis measurements of linear acceleration, angular


rate, and magnetic field. The sensor measurements used for determining the roll and pitch angle

of the camera were the lateral and longitudinal linear accelerations. The roll and pitch angles


were calculated using these measurements as described previously. The roll and pitch


measurements used for defining the error distribution are shown in Figure 7-1.



























































Parameter      Value
σ(φ)           4.4°
σ(θ)           6.8°
σ(ψ)           0.9°

Table 7-2. Parameter standard deviations for the roll, pitch, and yaw angles


Figure 7-1. Roll and Pitch measurements used for defining error distribution

The Microstrain 3DMG contains a three axis magnetometer for estimating vehicle heading.

The measurements made to estimate the heading error distribution for the sensitivity analysis are

shown in Figure 7-2.




Figure 7-2. Heading measurements used for defining error distribution

Using this data set the standard deviations for the roll, pitch, and yaw were calculated and

shown in Table 7-2.



























































The error distributions for the normalized pixel coordinates were calculated using a series


of images of a triangular placard taken from various elevations as shown in Figure 7-3.


Figure 7-3. Image of triangular placard used for geo-positioning experiments




Figure 7-4. Results of x and y pixel error calculations












It was difficult to quantify the expected error distribution for the normalized pixel

coordinates. The error distribution for the x and y components of the normalized pixel

coordinates were estimated by comparing the detected vertex points of the placard with the

calculated centroid of the volume. The resulting variation is shown in Figure 7-4. The pixel

errors were then converted to normalized pixel errors and are shown in Table 7-3.

Parameter      Value
σ(u_n)         0.0021
σ(v_n)         0.0070

Table 7-3. Normalized pixel coordinate standard deviations used during sensitivity analysis



A summary of the parameters for the sensor error distributions that are used in the

following sensitivity analysis is shown in Table 7-4.

Parameter                  Value
σ(GP_COx), σ(GP_COy)       3 m
σ(GP_COz)                  1.5 m
σ(φ)                       4.4°
σ(θ)                       6.8°
σ(ψ)                       0.9°
σ(u_n)                     0.0021
σ(v_n)                     0.0070

Table 7-4. Parameter standard deviations used during sensitivity analysis



The Monte Carlo method was used to evaluate each sensitivity equation. In order to

demonstrate the significance of each parameter in the Monte Carlo analysis, each parameter is

perturbed by a uniform error distribution based on experimentally derived measurements. This










analysis seeks to show the difference between the positioning errors based off of each varying

parameter. The key element to this analysis is that the error sensitivity for each parameter is

calculated including errors from other parameters. This allows for the nonlinear and coupled

relationship between the parameters to propagate through the sensitivity analysis. The results of

this analysis determine the rank of the dominance of each parameter in causing positioning error.

The error sensitivity is used in the subsequent analysis and is restated in Equation 7-1.

$$ S_{\bar{p}} = \frac{\partial \sigma_{e}^{2}}{\partial \bar{p}} \tag{7-1} $$

The error sensitivity is evaluated using the common parameter values perturbed by a

uniform error distribution. The range of the error distribution is defined using experimentally

derived data. A uniform distribution was chosen instead of a normal distribution for the Monte

Carlo simulation because the normal distribution took too long to converge during

testing. The normal distribution's larger search space, combined with the nonlinear

coupling between the parameters, made the processing times unmanageable. The

uniform distribution provides solid limits to the error distribution and quickly traversed through

the search space. This provided a quick yet fruitful analysis. In order to quantify the errors in

position attributable to each parameter, Equation 7-1 was modified as shown in Equation 7-2.

$$ S_{\bar{p}} = e\!\left(p\right) - e\!\left(\tilde{p}\right) \tag{7-2} $$

where p : parameter vector with all elements perturbed by the associated uniform error distribution

p̃ : parameter vector with all elements except p̄ perturbed by the associated uniform error

distribution

e : positioning error.








This formulation allows for the Monte Carlo simulation to calculate the error variance

distribution associated with each parameter using all parameter error distributions. This allows

not only for the coupling between the different parameters to affect the positioning error but also

the various parameter error distributions to affect the results. As with many complex systems,

not only does the inherent relationship between the various parameters effect the observations

but also the measurement errors of the various parameters.
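The attribution idea of Equation 7-2 can be sketched with a toy positioning-error function standing in for the full geo-positioning chain (the error model, nominal values, and noise magnitudes below are illustrative assumptions, not the dissertation's model; heading does not enter this simplified error function):

```python
import math
import random

def position_error(params):
    """Toy stand-in for the geo-positioning error: horizontal distance
    of the computed ground point from the true target at the origin.
    params = (px, py, pz, phi, theta, psi, un, vn); camera nominally
    10 m overhead, looking straight down."""
    px, py, pz, phi, theta, psi, un, vn = params
    # small-angle ray deflection from attitude error plus pixel error
    gx = px + pz * (un + theta)
    gy = py + pz * (vn + phi)
    return math.hypot(gx, gy)

def attribution(sigma, trials=2000, seed=1):
    """Mean |e(p) - e(p_tilde)| per parameter: p has every element
    perturbed; p_tilde re-nominalizes one element (Eq. 7-2 style)."""
    rng = random.Random(seed)
    nominal = (0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0)
    out = [0.0] * len(sigma)
    for _ in range(trials):
        p = [n + rng.uniform(-s, s) for n, s in zip(nominal, sigma)]
        for k in range(len(sigma)):
            p_tilde = list(p)
            p_tilde[k] = nominal[k]  # remove parameter k's perturbation
            out[k] += abs(position_error(p) - position_error(p_tilde)) / trials
    return out
```

Because every other parameter remains perturbed when one is re-nominalized, the nonlinear coupling between parameters propagates into each parameter's attributed error, which is the point of the formulation.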

The simulation uses the empirically derived error distribution and the target system

configuration. For this analysis, the target system is a hovering unmanned rotorcraft, operating

at a 10 meter elevation. The parameter values used for this analysis are shown in Equation 7-3.


GP_COx = 0.0 (m)
GP_COy = 0.0 (m)
GP_COz = 10 (m)
φ = 0.0°
θ = 0.0°                  (7-3)
ψ = 0.0°
u_n = 0.0
v_n = 0.0

The error distribution is defined as a uniform distribution with bounds at the standard deviations

defined previously for the geo-positioning algorithm parameters. The histograms of the error

variance relative to each parameter are shown in Figure 7-5.











Figure 7-5. Error Variance Histograms for the respective parameter errors. Panels show histograms of the error variance (m²) attributable to σ(Px), σ(Py), σ(Pz), σ(φ), σ(θ), σ(ψ), σ(u_n), and σ(v_n).

The bounds of the results with respect to the parameter standard deviations are shown in

Table 7-5. For the given parameter error distributions and system configuration, the results show

that the order of significance is as follows: σ(GP_COz), σ(GP_COx), σ(GP_COy), σ(φ), σ(θ), σ(ψ),

σ(v_n), and σ(u_n). The most significant term, σ(GP_COz), demonstrates the importance of the

altitude data in the geo-positioning calculations. This simulation has shown the process by

which the geo-positioning parameter rank was calculated using empirically derived sensor noise

distributions and a specified system configuration. By simply adapting the sensor noise

distributions and system configuration values, this process can be applied to any given system to

provide insight into geo-positioning error source dominance.









Parameter               Value
max S(σ(GP_COx))        18.00 m²
max S(σ(GP_COy))        18.00 m²
max S(σ(GP_COz))        20.53 m²
max S(σ(φ))             1.594 m²
max S(σ(θ))             0.3130 m²
max S(σ(u_n))           0.0001524 m²
max S(σ(v_n))           0.01316 m²

Table 7-5. Comparison of Monte Carlo Method results


Comparison of Empirical Versus Simulated Geo-Positioning Errors

The experimental results obtained using the Gas Xcell Aircraft equipped with a downward facing

camera and the experimental payload discussed earlier were compared with simulation results

using the estimated error distributions used in the Monte Carlo analysis. The testing conditions

used for the simulation analysis are shown in Equation 7-4. The results show that the geo-

positioning errors from simulation closely match the geo-positioning results obtained using the

experimental vehicle/payload setup. The geo-positioning results are shown in Figure 7-6.











GP_COx = 0.0 (m)
GP_COy = 0.0 (m)
GP_COz = 10 (m)
φ = 0.0°
θ = 0.0°                  (7-4)
ψ = 0.0°
u_n = 0.0
v_n = 0.0


Figure 7-6. Experimental and simulation geo-position results

The use of a uniform error distribution for the simulation produces different results

compared with a normal distribution. While the simulation results vary slightly from the










experimental results, the uniform distribution provides more of an absolute bound for the error

distribution.

Applied Work

Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV

This research investigated the automatic detection and geo-positioning of unexploded

ordnance using VTOL UAVs. Personnel at the University of Florida in conjunction with those at

the Air Force Research Laboratory at Tyndall Air Force Base, Florida, have developed a sensor

payload capable of gathering image, attitude, and position information during flight. A software

suite has also been developed that processes the image data in order to identify unexploded

ordnance (UXO). These images are then geo-referenced so that the absolute positions of the

UXO can be determined in terms of the ground reference frame. This sensor payload was

outfitted on a Yamaha RMAX aircraft and several experiments were conducted in simulated and

live bomb testing ranges. This section discusses the object recognition and classification

techniques used to extract the UXO from the images, and presents the results from the simulated

and live bombing range experiments.















Figure 7-7. BLU97 Submunition

Researchers have used aerial imagery obtained from small unmanned VTOL aircraft for

control, remote sensing and mapping experiments [1,2,3]. In these experiments, it was necessary









to detect a particular type of ordnance. The primary UXO of interest in these experiments was

the BLU97. After deployment, this ordnance has a yellow main body with a circular decelerator.

The BLU97 is shown in Figure 7-7.

Experimentation VTOL Aircraft

The UXO experiments were conducted using several aircraft in order to demonstrate the

modularity of the sensor payload and to determine the capabilities of each aircraft. The first

aircraft that was used for testing was a Miniature Aircraft Gas Xcell RC helicopter. The aircraft

was configured for heavy lift applications and has a payload capacity of 10-15 lbs. The typical

flight time for this aircraft is 15 minutes, and it provides a smaller VTOL aircraft for experiments at

UF and the Air Force Research Laboratory. The Xcell helicopter is shown in Figure 7-8.










Figure 7-8. Miniature Aircraft Gas Xcell Helicopter

The second aircraft used for testing was a Yamaha RMAX unmanned helicopter. With a

payload capacity of 60 lbs and a runtime of 20 minutes, this platform provided a more robust and

capable testing platform for range clearance operations. The RMAX is shown in Figure 7-9.










Figure 7-9. Yamaha RMAX Unmanned Helicopter











Sensor Payload

Several sensor payloads were developed for various UAV experiments. Each payload was

constructed modularly so as to enable attachment to various aircraft. The system schematic for

the sensor payload is shown in Figure 7-10.


[Schematic: imaging (digital stereovision cameras, CompactFlash storage), industrial Ethernet, pose sensors (Novatel RT-2 differential GPS, digital compass), and power (DC/DC converters, LiPo battery)]

Figure 7-10. Sensor Payload System Schematic

The detection sensors used for these experiments were dual digital cameras operating in the

visible spectrum. These cameras provided high resolution imagery in a low weight package.

These experiments sought to also explore and quantify the effectiveness of this sensor for UXO

detection.


Maximum Likelihood UXO Detection Algorithm

A statistical color model was used to differentiate pixels in the image that compose the

UXO. The maximum likelihood (ML) UXO detection algorithm used a priori knowledge of the

color distribution of the surface of the BLU97s in order to detect ordnance in an image. The

color model was constructed using the RGB color space. The feature vector was defined as



$$ x = \begin{bmatrix} r \\ g \\ b \end{bmatrix} \tag{7-5} $$


where r, g, and b are the eight bit color values for each pixel.










Using the K-means Segmentation Algorithm [18], an image containing a UXO was

segmented and selected. A segmented image is shown in Figure 7-11. This implementation

used a 5D feature vector for each pixel which allowed for clustering using spatial and color

parameters. Results varied depending on relative scaling of the feature vector components.


Segmented .
Ordnance










Figure 7-11. Segmentation software

The distribution of the UXO pixels was assumed to be Gaussian [18]; therefore

the maximum likelihood method was used to approximate the UXO color model. The region

containing the UXO pixels was selected and the color model was calculated. The mean color

vector is calculated as


$$ \mu = \frac{1}{n}\sum_{i=1}^{n} x_{i} \tag{7-6} $$


where n is the number of pixels in the selected region.

The covariance matrix was then calculated as


$$ \Sigma = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{i} - \mu\right)\left(x_{i} - \mu\right)^{T} \tag{7-7} $$


The mean and covariance of the UXO pixels were then used to develop a classification

model. This classification model described the location and the distribution of the training data

within the RGB color space. The equation used for the classification metric was











$$ m\!\left(x\right) = e^{-\frac{1}{2}\left(x - \mu\right)^{T}\Sigma^{-1}\left(x - \mu\right)} \tag{7-8} $$


The classification metric is similar to the likelihood probability except that it lacks the pre-

scaling coefficient required by Gaussian pdfs. The pre-scaling coefficient was removed in order

to optimize the performance of the classification algorithm. This allows for the classification

metric value to range from 0 to 1. The analysis was performed by selecting a threshold for the

classification metric in order to classify UXO pixels from the image. This allowed for images to

be screened for UXO detection and for the pixel coordinate location of the UXO to be identified

in the image.
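A compact version of the training and classification steps can be sketched as follows (a diagonal covariance is a simplifying assumption made here to keep the example dependency-free; the dissertation uses the full covariance of Equation 7-7):

```python
import math

def fit_color_model(pixels):
    """Mean and per-channel variance of RGB training pixels."""
    n = len(pixels)
    mu = [sum(p[c] for p in pixels) / n for c in range(3)]
    var = [sum((p[c] - mu[c]) ** 2 for p in pixels) / (n - 1)
           for c in range(3)]
    return mu, var

def classification_metric(x, mu, var):
    """Unscaled Gaussian likelihood in the style of Eq. 7-8;
    ranges from 0 to 1, with 1 at the training mean."""
    d2 = sum((x[c] - mu[c]) ** 2 / var[c] for c in range(3))
    return math.exp(-0.5 * d2)
```

Thresholding the metric then separates candidate UXO pixels (near the trained yellow distribution) from background pixels, which fall many standard deviations away and score near zero.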

Initial experimentation with simulated UXO showed that the ML UXO detection algorithm

successfully classified UXO in images obtained from both aircraft. As

expected the performance of the algorithm deteriorated when there were variations in the color of

the surface of the UXO or the contrast between the UXO and the background. Relating to the

data, the ML UXO detection algorithm failed when either the actual UXO color distribution fell

far from the modeled distribution in RGB space or the background distribution closely

encompassed the actual UXO color distribution. In these cases, the variations caused both false

positives and negatives when using the classification algorithm. The use of an expanded training

data set and multiple Gaussian distributions for modeling was investigated; it slightly

improved UXO detection rates but greatly increased false positive readings from background

pixels. The algorithm's performance was also extremely sensitive to the likelihood threshold,

thereby introducing another tunable parameter to the algorithm.

Spatial Statistics UXO Detection Algorithm

Previous experimental results showed that when the background of the image closely

resembled the UXO color, the ML UXO performance degraded. In order to perform more robust









UXO detection, an algorithm was developed whose parameters were based solely on the

dimensions of the UXO and not a trained color model. A more sophisticated pattern recognition

approach was used as shown in Figure 7-12.


Capture Image → Pre-filtering → Segmentation → Classification

Figure 7-12. Pattern Recognition Process

The spatial statistics UXO detection algorithm was designed to segment like

colored/shaded objects and classify them based on their dimensions. This would allow for robust

performance in varying lighting, color, and background conditions. The assumptions made for

this algorithm were that the UXO was of continuous color/shading, and the UXO region would

have the scaled spatial properties of an actual UXO. Based on the measured above-ground level of

the aircraft and the projective properties of the imaging device, the algorithm parameters would

be auto-tuned to accommodate the scaling from the imaging process.

In order to reduce the dimensionality of the data set, the color space was first converted

from RGB to HSV. By inspection, it was found that the saturation channel provided the greatest

contrast between the background and the UXO. The raw RGB image and the saturation channel

images are shown in Figure 7-13.










Figure 7-13. Raw RGB and Saturation Images of UXO

The pre-filtering process consisted of histogram equalization of the saturation image. This

increased the contrast between the UXO pixels and the background and improved

segmentation.

The segmentation process was conducted by segmenting the pre-filtered image using the k-

means algorithm as shown in Figure 7-14.


Figure 7-14. Segmented Image
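The segmentation step can be illustrated with a tiny two-cluster k-means on scalar saturation values (a real image would cluster thousands of pixels; the values and initialization here are illustrative):

```python
def kmeans_1d(values, iters=20):
    """Two-cluster k-means on scalar (e.g. saturation) values,
    initialized at the extremes of the data range."""
    c = [min(values), max(values)]  # initial cluster centers
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to the nearest center
        labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1
                  for v in values]
        # recompute centers as cluster means
        for k in (0, 1):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                c[k] = sum(members) / len(members)
    return labels, c
```

On a pre-filtered saturation channel, the high-saturation cluster corresponds to the candidate UXO regions that are then screened by their spatial statistics.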

Each region was analyzed and classified using the scaled spatial statistics of the UXO.

Properties such as the major/minor axis length for the region were used to classify the regions.

Regions whose spatial properties closely matched those of the UXO were classified as UXO and

highlighted in the final image as shown in Figure 7-15.


Figure 7-15. Raw Image with Highlighted UXO









Collaborative UAV/UGV Control

Recently unmanned aerial vehicles (UAVs) have been used more extensively in military

operations. The improved perception abilities of UAVs compared with unmanned ground

vehicles (UGVs) make them more attractive for surveying and reconnaissance applications. A

combined UAV/UGV multiple vehicle system can provide aerial imagery, perception, and target

tracking along with ground target manipulation and inspection capabilities. This experiment was

conducted to demonstrate the application of a UAV/UGV system for simulated mine disposal

operations.

The experiment was conducted by surveying the target area with the UAV and creating a

map of the area. The aerial map was transmitted to the base station and post-processed to extract

the locations of the targets and develop waypoints for the ground vehicle to navigate. The

ground vehicle then proceeded to each of the targets, simulating the validation and

disposal of the ordnance. Results include the aerial map, processed images of the extracted

ordnances, and the ground vehicle's ability to navigate to the target points.

The platforms used for the collaborative control experiments are shown in Figure 7-16.

















Figure 7-16. TailGator and HeliGator Platforms









Waypoint Surveying

In order to evaluate the performance of the UAV/UGV system, the waypoints were

surveyed using a Novatel RT-2 differential GPS. This system provided two centimeter accuracy

or better when provided with a base station correction signal. Accurate surveying of the visited

waypoints provided a baseline for comparison of the results obtained from the helicopter and the

corresponding path the ground vehicle traversed.

The UXOs were simulated to resemble BLU-97 ordnance. Aerial photographs of the

ordnance, as shown in Figure 7-17, were collected along with the camera position and

orientation. Using the transformation described previously, the global coordinates of the UXOs

were calculated. The calculated UXO positions were compared with the precision survey data.


Figure 7-17. Aerial photograph of all simulated UXO
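The image-to-global transformation can be sketched as a pinhole back-projection intersected with a flat ground plane. This is a simplified model: the intrinsic matrix K, the camera pose, and the flat-ground assumption below are illustrative stand-ins for the calibrated values described in the earlier chapters:

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Camera-to-global rotation built from yaw, pitch, roll (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pixel_to_ground(u, v, K, R, cam_pos, ground_z=0.0):
    """Back-project pixel (u, v) and intersect the ray with z = ground_z."""
    ray = R @ np.linalg.solve(K, np.array([u, v, 1.0]))
    t = (ground_z - cam_pos[2]) / ray[2]   # scale factor to reach the ground
    return cam_pos + t * ray

# Nadir-pointing camera 50 m above the ground at easting/northing (10, 20).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
cam = np.array([10.0, 20.0, 50.0])
target = pixel_to_ground(320.0, 240.0, K, R, cam)   # principal point
```

With this nadir pose the principal point maps to the ground directly below the camera, and off-center pixels land proportionally farther away, scaled by the altitude over the focal length.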



























Local Map

A local map of the operating region was generated using the precision survey data. This

local map, shown in Figure 7-18, provided a baseline for all of the position comparisons

throughout this task.




Figure 7-18. Local map generated with Novatel differential GPS

The data collected compares the positioning ability of the UGV and the ability of the UAV

sensor system to accurately calculate the UXO positions. While both the UGV and UAV use

WAAS-enabled GPS, there is some inherent error due to vehicle motion and environmental

effects. The UGV's control feedback was based on waypoint-to-waypoint control rather than a

path-following control algorithm.









Once a set of waypoints was provided by the UAV, the UGV was programmed to visit

every waypoint as if to simulate the automated recovery/disposal process of the UXOs. The

recovery/disposal process was optimized by ordering the waypoints in a manner that would

minimize the total distance traveled by the UGV. This problem was similar to the traveling

salesman optimization problem in which a set of cities must all be visited once while minimizing

the total distance traveled. An A* search algorithm was implemented in order to solve this

problem.

The A* search algorithm operates by creating a decision graph and traversing the graph

from node to node until the goal is reached. For the problem of waypoint order optimization, the

current path distance g, the estimated remaining distance h, and the estimated total

distance f were evaluated for each node by

g = length of the straight-line segments connecting all predecessor waypoints

h = (minimum distance between any two waypoints among the successors and the current waypoint) x (number of

successors)

f = g + h. (7-10)

The requirement of the A* algorithm that the heuristic h be admissible is fulfilled because

there exists no path from the current node n to a goal node with a distance less

than h. Therefore the heuristic provides the minimum bound required by the A* algorithm

and guarantees optimality should a path exist.
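This waypoint-ordering search can be written compactly. The sketch below uses hypothetical helper names; each state is the pair (set of visited waypoints, current waypoint), and g, h, and f follow Equation 7-10:

```python
import heapq
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def h_estimate(pos, remaining, wps):
    """Admissible heuristic: (min pairwise distance among the current
    position and unvisited waypoints) * (number of unvisited waypoints)."""
    if not remaining:
        return 0.0
    pts = [pos] + [wps[i] for i in remaining]
    dmin = min(dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    return dmin * len(remaining)

def order_waypoints(start, wps):
    """A* over (visited-set, current) states; returns (visit order, length)."""
    n = len(wps)
    goal = (1 << n) - 1
    # Heap entries: (f, g, visited mask, current index (-1 = start), order).
    heap = [(h_estimate(start, list(range(n)), wps), 0.0, 0, -1, ())]
    best = {}
    while heap:
        f, g, mask, cur, order = heapq.heappop(heap)
        if mask == goal:
            return list(order), g          # first goal popped is optimal
        if best.get((mask, cur), math.inf) <= g:
            continue
        best[(mask, cur)] = g
        pos = start if cur < 0 else wps[cur]
        for j in range(n):
            if mask & (1 << j):
                continue
            g2 = g + dist(pos, wps[j])
            mask2 = mask | (1 << j)
            rem = [k for k in range(n) if not mask2 & (1 << k)]
            heapq.heappush(heap, (g2 + h_estimate(wps[j], rem, wps),
                                  g2, mask2, j, order + (j,)))
    return [], math.inf
```

Because the heuristic never overestimates the remaining path length, the first completed ordering popped from the heap is guaranteed to be the shortest one.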

The UGV was commanded to come within a specified threshold of a waypoint before

switching to the next waypoint as shown in Figure 7-19. The UGV consistently traveled within

three meters or less of each of the desired waypoints which is within the error envelope of typical

WAAS GPS accuracy.













Figure 7-19. A comparison of the UGV's path to the differential waypoints

The UAV calculates the waypoints based on its sensors, and these points are compared with

the surveyed waypoints. There is an offset in the UAV's data due to the GPS being used and due

to error in the transformation from image coordinates to global coordinates, as shown in Figure

7-20.

The UGV is able to navigate within several meters of the waypoints; however, it is limited

by the vehicle kinematics. Further work involves a waypoint sorting algorithm that accounts

for the turning radius of the vehicle.














Figure 7-20. UAV waypoints vs. UGV path

Citrus Yield Estimation


Within the USA, Florida is the dominant state for citrus production, producing over two-

thirds of the USA's tonnage, even in the hurricane-damaged 2004-2005 crop year. The citrus

crops of most importance to Florida are oranges and grapefruit, with tangerines and other citrus

being of less importance.

With contemporary globalization, citrus production and marketing are highly

internationalized, especially for frozen juice concentrates, so there is great competition among

countries. Tables 7-6 and 7-7 show the five most important countries for production of

oranges and grapefruit in two crop years. Production can vary significantly from year to year

due to weather, especially hurricanes. Note the dominance of Brazil in oranges and the

rise of China in both crops.










Country 2000-2001 Crop Year 2004-2005 Crop Year
Brazil 14,729 16,606
USA 11,139 8,293
China 2,635 4,200
Mexico 3,885 4,120
Spain 2,688 2,700
Other Countries 9,512 9,515
World Total 44,588 45,434
Table 7-6. Production of Oranges (1000's metric tons) (based on NASS, 2006)

Country 2000-2001 Crop Year 2004-2005 Crop Year
China 0 1,724
USA 2,233 914
Mexico 320 310
South Africa 288 270
Israel 286 247
Other Countries 680* 330
World Total 3,807 3,795
*Cuba produced a very significant 310 (1000 metric tons) in 2000-2001
Table 7-7. Production of Grapefruit (1000's metric tons) (based on NASS, 2006)


The costs of labor, land, and environmental compliance are generally less in most of these

countries than in the USA. Labor is the largest cost for citrus production in the USA, even

though many workers, especially harvesters, are migrants. In order for producers from the USA

to be competitive, they must have advantages in productivity, efficiency, or quality to counteract

the higher costs.

This need for productivity, efficiency, and quality translates into a need for better

management. One management advantage that USA producers can use to remain competitive is

to utilize advanced technologies. Precision agriculture is one such set of technologies which can

be used to improve profitability and sustainability. Precision agriculture technologies were

researched and applied later to citrus than some other crops, but there has been successful

precision agriculture research [19,20,21]. There has been some commercial adoption [22].









Yield maps have been a very important part of precision agriculture for over twenty years

[23]. They allow management to make appropriate decisions to maximize crop value

(production quantity and quality) while minimizing costs and environmental impacts [24].

However, citrus yield maps, like most yield maps, can currently only be generated after the fruit

is harvested because the production data is obtained during the harvesting process. It would be

advantageous if the yield map was available before harvest because this would allow better

management, including better harvest scheduling and crop marketing.

There has been a history of using machine vision to locate fruit on trees for robotic

harvesting [25]. More recent work at the University of Florida has attempted to use machine

vision techniques to do on-tree yield mapping. Machine vision has been used to count the

number of fruit on trees [26]. Other researchers not only counted the fruit, but used machine

vision and ultrasonic sensors to determine fruit size [27]. This research has been extended to

allow for counting earlier in the season when the fruit is still quite green [28].

However, these methods all require vehicles to travel down the alleys between the rows of

trees to take the machine vision images. Researchers have demonstrated that a small remotely

piloted mini-helicopter with machine vision hardware and software could be built and operated

in citrus groves [29]. They also discuss some of the recent research on using mini-helicopters in

agriculture, primarily conducted at Hokkaido University and the University of Illinois.

The objective of this research was to determine if images taken from a mini-helicopter

would have the potential to be used to generate yield maps. If so, there might be a possibility of

rapidly and flexibly producing citrus yield maps before harvest.

Materials and Methods

The orange trees used to test this concept were located at Water Conserv II, jointly owned

by the City of Orlando and Orange County. The facility, located about 20 miles west of









Orlando, is the largest water reclamation project (over 100 million liters per day) of its type in

the world, one that combines agricultural irrigation and rapid infiltration basins (RIBs). A block

of 'Hamlin' orange trees, an early maturing variety (as opposed to the later maturing 'Valencia'

variety), was chosen for study.

The spatial variability of citrus tree health and production can range from very small to

extremely great depending upon local conditions. This block had some natural variability,

probably due to its variable blight infestation and topography. Additional variability was

introduced by the trees being subjected to irrigation depletion experiments. However, mainly

due to substantial natural rainfall in the 2005-2006 growing season, the variation in the yield is

within the bounds of what might be expected in contemporary commercial orange production,

even with the depletion experiments.

The irrigation depletion treatment (percent of normal irrigation water NOT applied) was

indicated by the treatment number. Irrigation depletion amounts were sometimes different for

the Spring and the Fall/Winter parts of the growing season, as seen in Table 7-8 below. The

replication was indicated by a letter suffix. Only 15 of the 42 trees (six treatments with seven

replications each) were used for this mini-helicopter imaging effort. Treatment 6 had no

irrigation except periodic fertigation, and the trees lived on rainfall alone.


Treatment Spring Depletion (%) Fall/Winter Depletion (%)
1 25 25
2 25 50
3 25 75
4 50 50
5 50 75
6 100 100
Table 7-8. Irrigation Treatments









The mini-helicopter used for this work was a Gas Xcell model modified for increased

payload by its manufacturer [30]. It was purchased in 2004 for about US$ 2000 and can fly up to

32 kph and carry a 6.8 kg payload. Its rotor is rated to 1800 rpm and has a diameter of less than

1.6 m. The instrumentation platform is described in MacArthur et al. (2005) and includes GPS

with WAAS, two compact flash drives, a digital compass, and wireless Ethernet. The machine

vision system uses a Videre model STH-MDCS-VAR-C stereovision sensor.

The mini-helicopter was flown at the Water Conserv II site on 10 January 2006, a mostly

sunny day, shortly before noon. The helicopter generally hovered over each tree for a short

period of time as it moved down the row taking images with the Videre camera. The images

were stored on the helicopter and some were simultaneously transferred to a laptop computer

over the wireless Ethernet. In addition, a Canon Power Shot S2 IS five-megapixel digital camera

was used to take photos of the trees (in north-south rows) from the east and west sides.

The fruit on the individual trees were hand harvested by professional pickers on 13

February 2006. The fruit from each tree was weighed and converted to the industry-standard

measurement unit of "field boxes". A field box is defined as 40.8 kg (90 lb).

The images were later processed manually. A "best" image of each tree was selected,

generally on the basis of lighting and complete coverage of the tree. Each overhead image was

cropped into a square that enclosed the entire tree and scaled to 960 by 960 pixels. The pixel

data from several oranges were collected from several representative images in the data set. The

data was assumed to be normally distributed, thus the probability density function was calculated

for each orange pixel dataset. Using a "mixture of Gaussians" to represent the orange class

model, the images were analyzed and a threshold established based on our color model. The

number of "orange" pixels was then calculated in each image and used in our further analysis.
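The thresholding and counting steps can be sketched as follows. For brevity a single Gaussian color model stands in for the mixture of Gaussians actually used, and the sample pixel values below are synthetic, not from the flight data:

```python
import numpy as np

def fit_gaussian(samples):
    """Mean and covariance of labelled 'orange' RGB samples (N x 3)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-6 * np.eye(3)   # regularize for inversion
    return mu, cov

def mahalanobis_sq(pixels, mu, cov):
    """Squared Mahalanobis distance of each pixel from the color model."""
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

def count_orange(image, mu, cov, thresh=9.0):
    """Count pixels within `thresh` squared Mahalanobis distance (~3 sigma)."""
    flat = image.reshape(-1, 3).astype(np.float64)
    return int((mahalanobis_sq(flat, mu, cov) < thresh).sum())

# Synthetic hand-labelled orange samples and a toy canopy image.
rng = np.random.default_rng(1)
orange_samples = rng.normal([230.0, 140.0, 30.0], 8.0, size=(200, 3))
mu, cov = fit_gaussian(orange_samples)

canopy = np.zeros((20, 20, 3))
canopy[:] = [40.0, 120.0, 40.0]           # green background
canopy[5:8, 5:8] = [230.0, 140.0, 30.0]   # 3 x 3 patch of fruit-colored pixels
n_orange = count_orange(canopy, mu, cov)
```

On the toy image only the nine fruit-colored pixels fall inside the threshold; the per-image count plays the role of the "orange pixels" column in Table 7-9.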










Results

The results of the image processing and the individual tree harvesting of the 15 trees

studied in this work are presented in Table 7-9. As Figure 7-21 illustrates, only irrigation

depletion treatment 6 had a great effect on the individual tree yields. Treatment 6 was 100%

depletion, or no irrigation. The natural rainfall was such in this production year that the other

treatments produced yields of at least four boxes per tree.



Treatment Replication Orange Pixels Boxes of Fruit
1 B 13990 7
1 G 6391 6
2 B 11065 8
2 C 2202 4
2 E 5884 5
2 F 17522 7.5
3 B 2778 6
4 A 4433 6.2
4 B 5516 4.8
4 E 5002 4
4 F 11559 4.3
5 B 9069 7
5 C 17088 6.8
6 B 5376 2.5
6 D 6296 1
Table 7-9. Results from Image Processing and Individual Tree Harvesting




Figure 7-21. Individual Tree Yields as Affected by Irrigation Depletion Treatments





















The images were treated by the process discussed above. The number of "orange" pixels

varied from 2202 to 17,522. More pixels should indicate more fruit. However, as Figure 7-22

shows, there was substantial scatter in the data (trend line: y = 0.0002x + 3.5919, R² = 0.2835).

The fit can be improved somewhat by the removal of the nonirrigated treatment 6, as shown in

Figure 7-23.


Figure 7-22. Individual Tree Yield as a Function of Orange Pixels in Image
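The trend line in Figure 7-22 is an ordinary least-squares fit of boxes of fruit against orange-pixel count; it can be reproduced directly from the Table 7-9 data:

```python
import numpy as np

# (orange pixels, boxes of fruit) for the 15 trees in Table 7-9.
pixels = np.array([13990, 6391, 11065, 2202, 5884, 17522, 2778, 4433,
                   5516, 5002, 11559, 9069, 17088, 5376, 6296], float)
boxes = np.array([7, 6, 8, 4, 5, 7.5, 6, 6.2, 4.8, 4, 4.3, 7, 6.8, 2.5, 1])

# Ordinary least squares: slope = Sxy / Sxx, intercept = ybar - slope * xbar.
xbar, ybar = pixels.mean(), boxes.mean()
sxx = ((pixels - xbar) ** 2).sum()
sxy = ((pixels - xbar) * (boxes - ybar)).sum()
slope = sxy / sxx
intercept = ybar - slope * xbar
r2 = sxy ** 2 / (sxx * ((boxes - ybar) ** 2).sum())
# Recovers y = 0.0002x + 3.5919 with R^2 = 0.2835, matching Figure 7-22.
```

Dropping the two treatment 6 trees from the arrays and repeating the fit yields the Figure 7-23 line in the same way.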


Figure 7-23. Individual Tree Yield as a Function of Orange Pixels with Nonirrigated Removed (trend line: y = 0.0002x + 4.5087, R² = 0.373)


Discussion

This work showed that good overhead images of citrus trees could be taken by a mini-

helicopter and processed to have some correlation with the individual tree yield. A tree with

fewer oranges should have fewer pixels in the image of the "orange" color. For example, tree









2C had only 2202 pixels and 4 boxes of fruit while tree 2F had 17,522 pixels and 7.5 boxes of

fruit. These are shown as Figures 7-24 and 7-25 below in which the "oranges" in the "After"

photo are enhanced to indicate their detection by the image processing algorithm.


Figure 7-24. Image of Tree 2C Before and After Image Processing


Figure 7-25. Image of Tree 2F Before and After Image Processing


The image processing used in this initial research was very simple. More sophisticated

techniques would likely improve the ability to better separate oranges from other elements in the

images. The strong sunlight likely contributed to some of the errors. Again, the use of more










sophisticated techniques from other previous research, especially the techniques developed for

yield mapping of citrus from the ground, would likely improve the performance in overhead

yield mapping.

A major assumption in this work is that the number of orange pixels visible is proportional

to the tree yield. However, the tree canopy (leaves, branches, other fruit, etc.) does hide some of

the fruit. Differing percentages of the fruit may be visible on differing trees. This is quite

apparent with the treatment 6 trees. Figure 7-26 shows the images for tree 6D. This tree,

obviously greatly affected by the lack of irrigation and a blight disease, has 6296 "orange" pixels

but only yielded one box of fruit. The poor health of the tree meant that there were not many

leaves to hide the interior oranges. Hence, a falsely high estimate of the yield was given.

Figure 7-27 shows the images taken from the ground of Trees 6D and 2E. Even though they had

similar numbers of "orange" pixels on the images taken from the helicopter, Tree 2E had five

times the number of fruit. The more vigorous vegetation, especially the leaves, meant that the

visible oranges on Tree 2E represented a smaller percentage of the total tree yield.


Figure 7-26. Image of Tree 6D Before and After Image Processing

























Figure 7-27. Ground Images of Tree 6D and Tree 2E


Mini-helicopters are smaller and less expensive than piloted aircraft. Accordingly, the

financial investment in them may be justifiable to growers and small industry firms. The mini-

helicopters would give their owners the flexibility of being able to take images on their own

schedule. The mini-helicopters also do not cause a big disturbance in the fruit grove. The noise

and wind are moderate. They can operate in a rather inconspicuous manner, as shown by Figure

7-28.




















Figure 7-28. Mini-Helicopter Operating at Water Conserv II




Full Text

PAGE 1

1 TRACKING AND STATE ESTIMATI ON OF AN UNMANNED GROUND VEHICLE SYSTEM USING AN UNMANNED AIR VEHICLE SYSTEM By DONALD KAWIKA MACARTHUR A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2007

PAGE 2

2 2007 Donald K. MacArthur

PAGE 3

3 I proudly dedicate my life and this work to my w onderful wife Erica. Ma ny trials, we both have suffered through this process.

PAGE 4

4 ACKNOWLEDGMENTS I would like to thank my father Donald Sr., my mother Janey, and my brother Matthew for their support through my many years of schooling.

PAGE 5

5 TABLE OF CONTENTS page ACKNOWLEDGMENTS...............................................................................................................4 LIST OF TABLES................................................................................................................. ..........7 LIST OF FIGURES................................................................................................................ .........8 ABSTRACT....................................................................................................................... ............11 CHAPTER 1 INTRODUCTION..................................................................................................................13 2 BACKGROUND....................................................................................................................14 Position and Orientation Measurement Sensors.....................................................................14 Global Positioning Systems.............................................................................................14 Inertial Measurement Units.............................................................................................17 Magnetometers................................................................................................................18 Accelerometer..................................................................................................................19 Rate Gyro...................................................................................................................... 
...19 Unmanned Rotorcraft Modeling.............................................................................................20 Unmanned Rotorcraft Control................................................................................................21 3 EXPERIMENTAL TESTING PLATFORMS.......................................................................24 Electronics and Se nsor Payloads............................................................................................24 First Helicopter Electroni cs and Sensor Payload............................................................24 Second Helicopter Electronics and Sensor Payload........................................................26 Third Helicopter Electroni cs and Sensor Payload...........................................................28 Micro Air Vehicle Embedded State Estimator and Control Payload..............................29 Testing Aircraft............................................................................................................... ........29 UF Micro Air Vehicles....................................................................................................29 ECO 8.......................................................................................................................... ....30 Miniature Aircraft Gas Xcell...........................................................................................30 Bergen Industrial Twin....................................................................................................31 Yamaha RMAX...............................................................................................................31 4 GEO-POSITIONING OF STATIC OBJE CTS USING MONOCULAR CAMERA TECHNIQUES..................................................................................................................... 
..33 Simplified Camera Model and Transformation......................................................................33 Simple Camera Model.....................................................................................................33 Coordinate Transformation.............................................................................................34 Improved Techniques for Geo-Posi tioning of Static Objects.................................................36

PAGE 6

6 Camera Calibration............................................................................................................. ....39 Geo-Positioning Sensitivity Analysis.....................................................................................43 5 UNMANNED ROTORCRAFT MODELING.......................................................................55 6 STATE ESTIMATION USI NG ONBOARD SENSORS......................................................60 Attitude Estimation Using A ccelerometer M easurements.....................................................60 Heading Estimation Using Magnetometer Measurements.....................................................64 UGV State Estimation........................................................................................................... .66 7 RESULTS........................................................................................................................ .......69 Geo-Positioning Sensitivity Analysis.....................................................................................69 Comparison of Empirical Versus S imulated Geo-Positioning Errors....................................77 Applied Work................................................................................................................... 
......79 Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV...............79 Experimentation VTOL aircraft...............................................................................80 Sensor payload.........................................................................................................81 Maximum likelihood UXO detection algorithm......................................................81 Spatial statistics UXO de tection algorithm..............................................................83 Collaborative UAV/UGV Control...................................................................................86 Waypoint surveying.................................................................................................87 Local map.................................................................................................................88 Citrus Yield Estimation...................................................................................................91 Materials and methods.............................................................................................93 Results......................................................................................................................96 Discussion................................................................................................................97 8 CONCLUSIONS..................................................................................................................102 LIST OF REFERENCES.............................................................................................................106 BIOGRAPHICAL SKETCH.......................................................................................................110

PAGE 7

7 LIST OF TABLES Table page 7-1 Parameter standard deviations fo r the horizontal and vertical position.............................70 7-2 Parameter standard deviations for the roll, pitch, and yaw angles.....................................71 7-3 Normalized pixel coordinate standard de viations used during sensitivity analysis...........73 7-4 Parameter standard deviations used during sensitivity analysis........................................73 7-5 Comparison of Monte Carlo Method results.....................................................................77 7-6 Production of Oranges (1000s me tric tons) (based on NASS, 2006)...............................92 7-7 Production of Grapefruit (1000s me tric tons) (based on NASS, 2006)............................92 7-8 Irrigation Treatments...................................................................................................... ...94 7-9 Results from Image Processi ng and Individual Tree Harvesting.......................................96

PAGE 8

8 LIST OF FIGURES Figure page 2-1 Commercially available GPS units....................................................................................15 2-2 Commercially available GPS antennas..............................................................................16 2-3 Commercially available IMU systems...............................................................................17 2-4 MicroMag3 magnetometer sensor from PNI Corp............................................................18 2-5 HMC1053 tri-axial analog ma gnetometer from Honeywell..............................................19 2-6 ADXL 330 tri-axial SMT magnetometer from Analog Devices Inc.................................19 2-7 ADXRS150 rate gyro from Analog Devices Inc...............................................................20 4-1 Image coordinates to projection angle calculation.............................................................33 4-2 Diagram of coordinate transformation...............................................................................34 4-3 Normalized focal and projective planes.............................................................................37 4-4 Relation between a point in the ca mera and global reference frames................................38 4-5 Calibration checkerboard pattern.......................................................................................40 4-6 Calibration images......................................................................................................... ....41 4-7 Calibration images......................................................................................................... 
....42 5-1 Top view of the body fixed coordinate system..................................................................56 5-2 Side view of the body fixed coordinate system.................................................................56 5-3 Main rotor blade angle..................................................................................................... ..58 5-4 Main rotor thrust vector................................................................................................... ..58 6-1 Fast Fourier Transform of raw accelerometer data............................................................62 6-2 Fast Fourier Transform of raw accel erometer data after low-pass filter...........................62 6-3 Roll and Pitch measurement pr ior to applying low-pass filter..........................................63 6-4 Roll and Pitch measurement after applying low-pass filter...............................................64 6-5 Magnetic heading estimate................................................................................................65

PAGE 9

9 7-1 Roll and Pitch measurements used for defining error distribution....................................71 7-2 Heading measurements used for defining error distribution..............................................71 7-3 Image of triangular placard used for geo-positioning experiments...................................72 7-4 Results of x and y pixel error calculations.........................................................................72 7-5 Error Variance Histograms for the respective parameter errors........................................76 7-6 Experimental and simu lation geo-position results.............................................................78 7-7 BLU97 Submunition..........................................................................................................79 7-8 Miniature Aircraft Gas Xcell Helicopter...........................................................................80 7-9 Yamaha RMAX Unmanned Helicopter.............................................................................80 7-10 Sensor Payload System Schematic....................................................................................81 7-11 Segmentation software..................................................................................................... ..82 7-12 Pattern Recognition Process..............................................................................................84 7-13 Raw RGB and Saturation Images of UXO........................................................................85 7-14 Segmented Image........................................................................................................... 
....85 7-15 Raw Image with Highlighted UXO...................................................................................85 7-16 TailGator and HeliGator Platforms....................................................................................86 7-17 Aerial photograph of all simulated UXO...........................................................................87 7-18 Local map generated with Novatel differential GPS.........................................................88 7-19 A comparison of the UGVs path to the differential waypoints........................................90 7-20 UAV waypoints vs. UGV path..........................................................................................91 7-21 Individual Tree Yields as Affect ed by Irrigation Depletion Treatments...........................96 7-22 Individual Tree Yield as a Func tion of Orange Pixels in Image........................................97 7-23 Individual Tree Yield as a Function of Orange Pixels with Nonirrigated Removed.........97 7-24 Image of Tree 2C Before and After Image Processing......................................................98 7-25 Image of Tree 2F Before and After Image Processing......................................................98

PAGE 10

10 7-26 Image of Tree 6D Before and After Image Processing......................................................99 7-27 Ground Images of Tree 6D and Tree 2E..........................................................................100 8-1 Simulated error calculation versus elevation...................................................................103 8-2 Geo-Position error versus elevation.................................................................................104


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

TRACKING AND STATE ESTIMATION OF AN UNMANNED GROUND VEHICLE SYSTEM USING AN UNMANNED AERIAL VEHICLE SYSTEM

By Donald Kawika MacArthur

May 2007

Chair: Carl Crane
Major: Mechanical Engineering

Unmanned Air Vehicles (UAVs) have several advantages and disadvantages compared with Unmanned Ground Vehicles (UGVs). Both systems have different mobility and perception abilities. UAV systems have extended perception, tracking, and mobility capabilities compared with UGVs. Comparatively, UGVs have more intimate mobility and manipulation capabilities. This research investigates the collaboration of UAV and UGV systems and applies the theory derived to a heterogeneous unmanned multiple vehicle system. This research will also demonstrate the use of UAV perception and tracking abilities to extend the capabilities of a multiple ground vehicle system. This research is unique in that it presents a comprehensive system description and analysis from the sensor and hardware level to the system dynamics. This work also couples the dynamics and kinematics of two agents to form a robust state estimation using completely passive sensor technology. A general sensitivity analysis of the geo-positioning algorithm was performed. This analysis derives the sensitivity equations for determining the passive positioning error of the target UGV. This research provides a framework for analysis of passive target positioning and the error contributions of each parameter used in the positioning algorithms. This framework benefits the research and industrial community by providing a method of quantifying positioning error due to errors from sensor


noise. This research presents a framework by which a given UAV payload configuration can be evaluated using an empirically derived sensor noise model. Using these data, the interaction between sensor noise and positioning error can be compared. This allows the researcher to selectively focus attention on the sensors that have a greater effect on position error and to quantify the expected positioning error.


CHAPTER 1
INTRODUCTION

The Center for Intelligent Machines and Robotics at the University of Florida has been performing autonomous ground vehicle research for over 10 years. In that time, research has been conducted in the areas of sensor fusion, precision navigation, precision positioning systems, and obstacle avoidance. Researchers have used small unmanned helicopters for remote sensing purposes in various applications. Recently, experimentation with unmanned aerial vehicles has been in collaboration with the Tyndall Air Force Research Laboratory at Tyndall AFB, Florida.

Recently, unmanned aerial vehicles (UAVs) have been used more extensively for military and commercial operations. The improved perception abilities of UAVs compared with unmanned ground vehicles (UGVs) make them more attractive for surveying and reconnaissance applications. A combined UAV/UGV multiple vehicle system can provide aerial imagery, perception, and target tracking along with ground target manipulation and inspection capabilities. This research investigates collaborative UAV/UGV systems and also demonstrates the application of a UAV/UGV system for various task-based operations.

The Air Force Research Laboratory at Tyndall Air Force Base has worked toward improving explosive ordnance disposal (EOD) and range clearance operations by using unmanned ground vehicle systems. This research incorporates the abilities of UAV/UGV systems to support these operations. The research vision for the range clearance operations is to develop an autonomous multi-vehicle system that can perform surveying, ordnance detection/geo-positioning, and disposal operations with minimal user supervision and effort.


CHAPTER 2
BACKGROUND

Researchers have used small unmanned helicopters for remote sensing purposes in various applications [1,2,3]. These applications include agricultural crop yield estimation, pesticide and fertilizer application, explosive reconnaissance and detection, and aerial photography and mapping. This research effort will strive to estimate the states of a UGV system using monocular camera techniques and the extrinsic parameters of the camera sensor. The extrinsic parameters can be reduced to the transformation from the camera coordinate system to the global coordinate system.

Position and Orientation Measurement Sensors

Global Positioning Systems

Global Positioning Systems (GPS) are widely becoming the positioning system of choice for autonomous vehicle navigation. This technology allows an agent to determine its location using signals broadcast from satellites overhead. The Navigation Signal Timing and Ranging Global Positioning System (NAVSTAR GPS) was established in 1978 and is maintained by the United States Department of Defense to provide a positioning service for the U.S. military; it is also utilized by the public as a public good. Since its creation, the service has been used for commercial purposes such as nautical, aeronautical, and ground-based navigation, and land surveying. The current U.S. GPS satellite constellation consists of over 24 satellites. The number of satellites in operation can vary due to satellites being taken in and out of service. Other countries are leading efforts to develop alternative satellite positioning systems of their own. A similar system is GLONASS, constructed by Russia. The GALILEO system is being developed by a European consortium. This system


is to be maintained by Europeans and will provide capabilities similar to those of the NAVSTAR and GLONASS systems.

Each satellite maintains its own specific orbit and circles the Earth once every 12 hours. The orbit of each satellite is timed and coordinated so that five to eight satellites are above the horizon at any location on the Earth's surface at any time. A GPS receiver calculates position by first receiving the microwave RF signals broadcast by each visible satellite. The signals broadcast by the satellites are complex high-frequency signals with encoded binary information. The encoded binary data contains a large amount of information but mainly conveys the time that the data was sent and the location of the satellite in orbit. The GPS receiver processes this information to solve for its position and the current time.

GPS receivers typically provide position solutions at 1 Hz, but receivers can be purchased that output position solutions at up to 20 Hz. The accuracy of a commercial GPS system without any augmentation is approximately 15 meters. Several types of commercially available GPS units are shown in Figure 2-1. Units are available with or without integrated antennas. The Garmin GPS unit in Figure 2-1 contains the antenna and receiver, whereas the other two units are simply receivers. Several types of antennas are shown in Figure 2-2.

Figure 2-1. Commercially available GPS units


Figure 2-2. Commercially available GPS antennas

Differential GPS is an alternative method by which GPS signals from multiple receivers can be used to obtain higher accuracy position solutions. Differential GPS operates by placing a specialized GPS receiver in a known location and measuring the errors in the position solution and the associated satellite data. The information is then broadcast in the form of correction data so that other GPS receivers in the area can calculate a more accurate position solution. This system is based on the fact that there are inherent delays as the satellite signals are transmitted through the atmosphere. Localized atmospheric conditions cause the satellite signals within that area to have the same delays. By calculating and broadcasting the correction values for each visible satellite, the differential GPS system can attain accuracies from 1 mm to 1 cm [4].

In 2002, a new type of GPS correction system was introduced so that a land-based correction signal is not required to improve position solutions. Satellite-based augmentation systems (SBAS) transmit localized correction signals from orbiting satellites [5]. An SBAS implemented for North America is the Wide Area Augmentation System (WAAS). This system has been used in this research, and position solutions with errors of less than three meters have been observed.

In 2005, the first in a series of new satellites was introduced into the NAVSTAR GPS system. These satellites provide a new GPS signal referred to as L2C. This enhancement is intended to improve the accuracy and reliability of the NAVSTAR GPS system for military and public use.
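GPS receivers report fixes as geodetic latitude and longitude; for local mapping of the kind used later in this work, fixes are typically projected into a flat local frame in meters. A minimal flat-Earth sketch of that projection (the equirectangular approximation and the WGS-84 radius are assumptions here, not the dissertation's method):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def geodetic_to_local(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Project a GPS fix to flat-Earth east/north meters about a local
    origin (lat0, lon0). The equirectangular approximation is adequate
    for the sub-kilometer work areas typical of small-UAV experiments."""
    dlat = math.radians(lat_deg - lat0_deg)
    dlon = math.radians(lon_deg - lon0_deg)
    east = EARTH_RADIUS_M * dlon * math.cos(math.radians(lat0_deg))
    north = EARTH_RADIUS_M * dlat
    return east, north
```

At mid-latitudes the cosine factor shrinks the east axis; omitting it is a common source of east-west scale error in quick local maps.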


Inertial Measurement Units

Inertial Measurement Unit (IMU) systems are used extensively in vehicles where accurate orientation measurements are required. Typical IMU systems contain accelerometers and angular rate gyroscopes. These sensors allow the rigid body motion of the IMU to be measured and state estimates to be made. These systems can vary greatly in cost and performance. When coupled with a GPS system, the position and orientation of the system can be accurately estimated. The coupled IMU/GPS combines the position and velocity measurements based on satellite RF signals with inertial motion measurements. These systems complement each other: the GPS is characterized by low-frequency global position measurements, while the IMU provides higher-frequency relative position/orientation measurements. Some of the commercially available IMU systems are shown in Figure 2-3.

Figure 2-3. Commercially available IMU systems

Other sensors, such as fluidic tilt sensors, imaging sensors, light sensors, and thermal sensors, also allow for orientation measurements. Each of these sensors has different advantages and disadvantages for implementation. Fluidic tilt sensors provide high-frequency noise rejection and decent attitude estimation for low-dynamic vehicles; in high-g turns and extreme dynamics these sensors fail to provide usable data. Imaging sensors have the advantage of not being affected by vehicle dynamics. However, advanced image processing algorithms can require significant computational overhead, and these sensors are highly affected by lighting conditions. Thermopile attitude sensors have been used for attitude estimation and are not affected by


vehicle dynamics. These sensors provide excellent attitude estimates but are affected by reflective surfaces and changes in environment temperature.

Magnetometers

A magnetometer is a device that allows for the measurement of a local or distant magnetic field. This device can be used to measure the strength and direction of a magnetic field. The heading of an unmanned vehicle may be determined by detecting the magnetic field created by the Earth's magnetic poles. The magnetic north direction can aid in navigation and geo-spatial mapping. For applications where the vehicle orientation is not restricted to planar motion, the magnetometer is typically coupled with a tilt sensor to provide a horizontal north vector independent of the vehicle orientation.

There are several commercially available magnetometer sensors. The MicroMag3 from PNI Corp. provides magnetic field measurements in three axes with a digital serial peripheral interface (SPI), as shown in Figure 2-4.

Figure 2-4. MicroMag3 magnetometer sensor from PNI Corp.

The Honeywell Corporation also manufactures a line of magnetic field detection sensors. These products vary from analog linear/vector sensors to integrated digital compass devices. The HMC1053 from Honeywell is a three-axis magneto-resistive sensor for multi-axial magnetic field detection and is shown in Figure 2-5.
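The tilt-compensated north vector described above can be sketched as follows. The body-axis and sign conventions (x forward, y right, z down, heading measured clockwise from north) are assumptions for illustration, not necessarily those of the hardware described here:

```python
import math

def tilt_compensated_heading(mx, my, mz, roll, pitch):
    """De-rotate a body-frame magnetometer vector (mx, my, mz) by the
    roll and pitch supplied by a tilt sensor so the field lies in the
    local horizontal plane, then take the heading angle in radians.
    Assumed frame: x forward, y right, z down; heading in [0, 2*pi)."""
    # horizontal (level-frame) components of the magnetic field
    xh = (math.cos(pitch) * mx
          + math.sin(pitch) * (math.sin(roll) * my + math.cos(roll) * mz))
    yh = math.cos(roll) * my - math.sin(roll) * mz
    return math.atan2(-yh, xh) % (2.0 * math.pi)
```

With roll and pitch set to zero this degenerates to the planar compass formula; the compensation terms matter precisely in the non-planar case the paragraph describes.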


Figure 2-5. HMC1053 tri-axial analog magnetometer from Honeywell

Accelerometer

An accelerometer measures the acceleration of the device in single or multiple measurement axes. MEMS-based accelerometers provide accurate and inexpensive devices for measurement of acceleration. The ADXL330 from Analog Devices Inc. provides analog three-axis acceleration measurements in a small surface-mount package, shown in Figure 2-6.

Figure 2-6. ADXL330 tri-axial SMT accelerometer from Analog Devices Inc.

A two- or three-axis accelerometer can be used as a tilt sensor. The off-horizontal angles can be determined by measuring the projection of the gravity vector onto the sensor axes. These measurements relate to the roll and pitch angles of the device and, when properly compensated to account for effects from vehicle dynamics, can provide accurate orientation information.

Rate Gyro

A rate gyro is a device that measures the angular time rate of change about a single axis or multiple axes. The ADXRS150 is a single-axis MEMS rate gyro manufactured by Analog


Devices Inc. which provides analog measurements of the angular rate of the device and is shown in Figure 2-7.

Figure 2-7. ADXRS150 rate gyro from Analog Devices Inc.

This device uses the Coriolis effect to measure the angular rate of the device. An internally resonating frame in the device is coupled with capacitive pickoff elements. The response of the pickoff elements changes with the angular rate. This signal is then conditioned and amplified. When coupled with an accelerometer, these devices allow for enhanced orientation solutions.

Unmanned Rotorcraft Modeling

For this research, the aircraft operating region will mostly be in hover mode. The flight characteristics of the full flight envelope are very complex and involve extensive dynamic, aerodynamic, and fluid mechanics analysis. Previously, researchers have performed extensive instrumentation of a Yamaha R-50 remote-piloted helicopter [6]. These researchers outfitted a Yamaha R-50 helicopter with a sensor suite for in-flight measurements of rotor blade motion and loading. The system was equipped with sensors along the length of the main rotor blades, measuring strain, acceleration, tilt, and position. This research was unique due to the detail of instrumentation, not to mention the difficulties of instrumenting rotating components. This work provided structural, modal, and load characteristics for this airframe and demonstrates the extensive lengths required for obtaining in-flight aircraft properties. In addition, extensive work has been conducted in system identification for the Yamaha R-50 and the Xcell 60 helicopters


[7,8,9]. These researchers performed extensive frequency-response-based system identification and flight testing, and compared modeling results with the scaled dynamics of the UH-1H helicopter. These researchers have conducted an extensive analysis of small unmanned helicopter dynamic equations and system identification. This work has resulted in complete dynamic modeling of a model-scale helicopter. These results showed great promise in that they demonstrated a close relation between the UH-1H helicopter dynamics and the tested aircraft. This research also showed that the aircraft modeling technique used was valid and that the system identification techniques used for larger rotorcraft were extensible to smaller rotorcraft.

Other researchers present a more systems-level approach to the aircraft automation discussion [10]. They present the instrumentation equipment and architecture, along with the modeling and simulation derivations. They go on to present their work involving hardware-in-the-loop simulation and image processing.

Unmanned Rotorcraft Control

Many researchers have become actively involved in the control and automation of unmanned rotorcraft. The research has involved numerous controls topics including robust controller design, fuzzy control, and full flight envelope control.

Robust H-infinity controllers have been developed using loop shaping and gain scheduling to provide rapid and reliable high-bandwidth controllers for the Yamaha R-50 UAV [11,12]. In this research, the authors sought to incorporate the use of high-fidelity simulation modeling into the control design to improve performance. Coupled with the use of multivariable control design techniques, they also sought to develop a controller that would provide fast and robust performance that could better utilize the full flight envelope of small unmanned helicopters. Anyone who has observed experienced competition-level Radio Controlled (RC) helicopter


pilots and their escapades during flight has observed the awesome capabilities of small RC helicopters during normal and inverted flight. It is these capabilities that draw researchers toward using helicopters for their research. But with increased capabilities come increased complexities in aircraft mechanics and dynamics. These researchers have attempted to incorporate the synergic use of a high-fidelity aircraft model with robust multivariable control strategies, and have validated their findings by implementing and flight testing their control algorithms on their testing aircraft. H-infinity controller design has also been applied to highly flexible aircraft [13]. As will be shown later, helicopter airframes are significantly prone to failures caused by vibration modes. Disastrous consequences can occur if these vibration modes are not considered and compensated. In this research, a highly flexible aircraft model is used for control design and validation. The controller is specifically designed to compensate for the high flexibility of the airframe. The authors present the aircraft model and uncertainties and discuss the control law synthesis algorithm. These results demonstrate the meshing of aircraft structure modeling/analysis with control design/stability. This concept is important not only from a system performance perspective but also from a safety perspective. As UAVs become more prevalent in domestic airspace, the public can benefit from the improved system safety provided by more sophisticated modeling and analysis techniques.

Previous researchers have also conducted research on control optimization for small unmanned helicopters [14]. In this research, the authors focus on the problem of attitude control optimization for a small-scale unmanned helicopter. By using an identified model of the helicopter system that incorporates the coupled rotor/stabilizer/fuselage dynamic effects, they improve the overall model accuracy.
This research is unique in that it incorporates the stabilizer bar dynamic effects commonly not included in previous work. The system model is validated by


performing flight tests using a Yamaha RMAX helicopter test-bed system. They go on to compensate for the performance reduction induced by the stabilizer bar and optimize the Proportional Derivative (PD) attitude controller using an established control design methodology with a frequency response envelope specification.
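The PD attitude-control structure mentioned above can be sketched in a few lines. This is an illustrative toy loop on a unit-inertia rigid body, not the optimized controller of [14]; the gains, time step, and plant are placeholders:

```python
def pd_attitude_command(angle_ref, angle, rate, kp=4.0, kd=1.2):
    """One PD attitude-control step: torque command proportional to the
    attitude error, minus a damping term on the measured body rate.
    Gains kp and kd are illustrative, not tuned values."""
    return kp * (angle_ref - angle) - kd * rate

# closed-loop sanity check on a toy unit-inertia double integrator
angle, rate, dt = 0.0, 0.0, 0.01
for _ in range(2000):
    torque = pd_attitude_command(0.3, angle, rate)
    rate += torque * dt     # integrate angular acceleration
    angle += rate * dt      # integrate angular rate
```

After 20 simulated seconds the toy loop has settled at the 0.3 rad reference; the frequency-response shaping discussed in the text is about choosing such gains systematically rather than by hand.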


CHAPTER 3
EXPERIMENTAL TESTING PLATFORMS

Electronics and Sensor Payloads

In order to perform testing and evaluation of the theory and concepts involved in this research, several electronics and sensor payloads were developed. The purpose of these payloads was to provide perception and aircraft state measurements and onboard processing capabilities. These systems were developed to operate modularly and enable transfer of the payload to different aircraft. The payloads were developed with varying capabilities and sizes. The host aircraft for these payloads ranged from a 6 in. fixed-wing micro air vehicle to a 3.1 meter rotor diameter agricultural mini-helicopter.

First Helicopter Electronics and Sensor Payload

The first helicopter electronics and sensor payload was constructed to provide an initial testing platform to ensure proper operation of the electronics and aircraft during flight. The system schematic is shown in Figure 3-1.

Figure 3-1. First helicopter payload system schematic (blocks: industrial CPU, digital stereovision cameras, laptop hard drive, wireless Ethernet, LiPo battery, DC/DC converters)


The system consisted of five subsystems:

1. Main processor
2. Imaging
3. Communication
4. Data storage
5. Power

The main processor provides the link between all of the sensors, the data storage device, and the communication equipment. The imaging subsystem consists of a Videre stereovision system linked via two FireWire connections. The data storage subsystem consists of a 40 GB laptop hard drive which runs a Linux operating system and was used for sensor data storage. The power subsystem consists of a 12 V to 5 V DC-to-DC converter, a 12 V power regulator, and a 3 Ah LiPo battery pack. The power regulators condition and supply power to all electronics. The LiPo battery pack served as the main power source and was selected based on the low weight and high power density of the LiPo battery chemistry. The first helicopter payload attached to the aircraft is shown in Figure 3-2.

Figure 3-2. First payload mounted on helicopter


The first prototype system was equipped on the aircraft and was tested during flight. Although image data could be gathered during flight, it was found that the laptop hard drive could not withstand the vibration of the aircraft. Figure 3-3 shows in-flight testing of the first prototype payload.

Figure 3-3. Helicopter testing with first payload

Second Helicopter Electronics and Sensor Payload

The payload design was refined in order to provide a more robust testing platform for this research. In order to improve the design, vibration isolation of the payload from the aircraft was required, as well as a data storage method that could withstand the harsh environment onboard the aircraft. The system schematic for the second prototype payload is shown in Figure 3-4.

Figure 3-4. Second helicopter payload system schematic (blocks: industrial CPU, digital stereovision cameras, compact flash storage, OEM Garmin GPS, digital compass, wireless Ethernet, LiPo battery, DC/DC converters)


The system consisted of six subsystems:

1. Main processor
2. Imaging
3. Pose sensors
4. Communication
5. Data storage
6. Power

The second prototype payload contained similar components as the first prototype, but instead of a laptop hard drive it utilized two compact flash drives for storage; in addition, two pose sensors were added. The OEM Garmin GPS provided global position, velocity, and altitude data at 5 Hz. The digital compass provided heading, roll, and pitch angles at 30 Hz. The second prototype payload is shown in Figure 3-5.

Figure 3-5. Second payload mounted to helicopter


Flight tests showed that the second payload could reliably collect image and pose data during flight and maintain wireless communication at all times. Figure 3-6 shows the second prototype payload equipped on the aircraft during flight testing.

Figure 3-6. Flight testing with second helicopter payload

Third Helicopter Electronics and Sensor Payload

The helicopter electronics and sensor payload was redesigned slightly to include a high-accuracy differential GPS (Figure 3-7). This system has a vendor-stated positioning accuracy of 2 cm in differential mode and allows precise helicopter positioning. This system further improves the overall system performance and allows for comparison of the normal versus RT2 differential GPS systems.

Figure 3-7. Third helicopter payload system schematic (blocks: industrial CPU, digital stereovision cameras, compact flash storage, Novatel RT2 differential GPS, digital compass, wireless Ethernet, LiPo battery, DC/DC converters)


Micro Air Vehicle Embedded State Estimator and Control Payload

An embedded state estimator and control payload was developed to support the micro air vehicle research being performed at the University of Florida. This system provides control stability and video data. The system schematic is shown in Figure 3-8.

Figure 3-8. Micro air vehicle embedded state estimator and control system schematic (blocks: Atmel Mega128 CPU, CMOS camera, RF video transmitter, Aerocomm 900 MHz RF modem, 2-axis accelerometer, altitude and airspeed pressure sensors, LiPo battery, DC/DC converters)

Testing Aircraft

UF Micro Air Vehicles

Several MAVs have been developed for reconnaissance and control applications. This platform provides a payload capability of less than 30 grams with a wingspan of 6 in. (Figure 3-9). This system is a fixed-wing aircraft with two to three control surface actuators and an electric motor. System development for this platform requires small size and weight, and low power consumption.

Figure 3-9. Six inch micro air vehicle


ECO 8

This aircraft was the first helicopter built in the UF laboratory. The aircraft is powered by a brushed electric motor with an eight-cell nickel cadmium battery pack. The aircraft is capable of flying for approximately 10 minutes under normal flight conditions. This system has a payload capacity of less than 60 grams with CCPM swashplate mixing, as shown in Figure 3-10.

Figure 3-10. Eco 8 helicopter

Miniature Aircraft Gas Xcell

A Miniature Aircraft Gas Xcell was the first gas-powered helicopter purchased for testing and experimentation (Figure 3-11). This aircraft is equipped with a two-stroke gasoline engine and 740 mm main rotor blades, and has an optimal rotor head speed of 1800 rpm. The payload capacity is approximately 15 lbs with a runtime of 20 minutes.

Figure 3-11. Miniature Aircraft Gas Xcell


Bergen Industrial Twin

A Bergen Industrial Twin was purchased for testing with heavier payloads (Figure 3-12). This aircraft is equipped with a dual-cylinder two-stroke gasoline engine and 810 mm main rotor blades, and has an optimal rotor head speed of 1500 rpm. The payload capacity is approximately 25 lbs with a runtime of 30 minutes.

Figure 3-12. Bergen Industrial Twin helicopter

Yamaha RMAX

Several agricultural Yamaha RMAX helicopters were purchased by the AFRL robotics research laboratory at Tyndall Air Force Base in Panama City, Florida. The aircraft is shown in Figure 3-13. This system has a two-stroke engine with internal power generation and a control stabilization system. This system has a 60 lb payload capability. The system is typically used for small-area pesticide and fertilizer spraying.

These aircraft were used to conduct various experiments involving remote sensing, sensor noise analysis, system identification, and various applied rotorcraft tasks. These experiments and their results will be discussed in the subsequent chapters. Each aircraft has varying costs,


payload capabilities, and runtimes. As with the various sensors available for UAV research, the aircraft should be selected to suit the needs of the particular project or task.

Figure 3-13. Yamaha RMAX helicopter


CHAPTER 4
GEO-POSITIONING OF STATIC OBJECTS USING MONOCULAR CAMERA TECHNIQUES

Two derivations were performed which allow the global coordinates of an object in an image to be found. Both derivations perform the transformation from a 2D coordinate system, referred to as the image coordinate system, to the 3D global coordinate system. The first derivation utilizes a simplified camera model and calculates the position of the static object using the concept of the intersection of a line and a plane. The second derivation utilizes intrinsic and extrinsic camera parameters and uses projective geometry and coordinate transformations.

Simplified Camera Model and Transformation

Simple Camera Model

The cameras were modeled by linearly scaling the horizontal and vertical projection angles with the x and y positions of the pixel, respectively, as illustrated in Figure 4-1. This allowed the relative angle of the static object to be calculated with respect to a coordinate system fixed in the aircraft.

Figure 4-1. Image coordinates to projection angle calculation
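The linear pixel-to-angle scaling described above can be sketched directly. The center-referenced convention and the field-of-view inputs are assumptions for illustration, not calibrated values from this work:

```python
def pixel_to_angles(px, py, width, height, hfov_deg, vfov_deg):
    """Simplified camera model: the horizontal and vertical projection
    angles scale linearly with the x and y pixel position. The image
    center maps to a zero angular offset; hfov_deg/vfov_deg are the
    assumed horizontal/vertical fields of view in degrees."""
    dx = (px - (width - 1) / 2.0) / (width - 1)    # -0.5 .. +0.5 across the image
    dy = (py - (height - 1) / 2.0) / (height - 1)
    return dx * hfov_deg, dy * vfov_deg
```

For a 640x480 image with a 90 degree horizontal field of view, the rightmost pixel column maps to a 45 degree horizontal projection angle.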


Coordinate Transformation

A coordinate transformation is performed on the static object location from image coordinates to global coordinates, as shown in Figure 4-2. The image data provides the relative angle of the static object with respect to the aircraft reference frame. In order to find the position of the static object, a solution for the intersection of a line and a plane was used.

Figure 4-2. Diagram of coordinate transformation

The equation of the plane used for this problem is

$Ax + By + Cz + D = 0$  (4-1)

where x, y, and z are the coordinates of a point in the plane. The equation of the line used in this problem is

$\tilde{p} = \tilde{p}_1 + u(\tilde{p}_2 - \tilde{p}_1)$  (4-2)

where $p_1$ and $p_2$ are points on the line. Substituting (4-2) into (4-1) results in the solution

$u = \dfrac{Ax_1 + By_1 + Cz_1 + D}{A(x_1 - x_2) + B(y_1 - y_2) + C(z_1 - z_2)}$  (4-3)


where $x_1$, $y_1$, and $z_1$ are the coordinates of point $p_1$, and $x_2$, $y_2$, and $z_2$ are the coordinates of point $p_2$. For this problem the ground plane is defined in the global reference frame by A = 0, B = 0, C = 1, and D = -(ground elevation). The point $p_1$ is the focal point of the camera, and it is determined in the global reference frame based on the sensed GPS data. The point $p_2$ is calculated in the global reference frame as equal to the coordinates of $p_1$ plus a unit distance along the static object projection ray. This is known from the static object image angle and the camera's orientation as measured by attitude and heading sensors. In other words, the direction of the static object projection ray in the global reference frame was found by transforming the projection vector from the aircraft to the static object, as measured in the aircraft frame, to the global frame. This entailed using a downward vector in the aircraft frame and rotating it about the yaw, pitch, and roll axes by the projection angles and pose angles. The rotation matrices are

${}^1_2R = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$  (4-4)

${}^2_3R = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$  (4-5)

${}^3_4R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}$  (4-6)

where $\psi$ = yaw of the aircraft, $\theta$ = pitch of the aircraft plus the projection pitch angle, and $\phi$ = roll of the aircraft plus the projection roll angle.


The downward vector $r = (0\;\;0\;\;{-1})^T$ was transformed using the compound rotation matrix

${}^1_4R = {}^1_2R\;{}^2_3R\;{}^3_4R$  (4-7)

The new projection vector was found as

$\tilde{r} = {}^1_4R\,r$  (4-8)

where $r$ is the projection ray measured in the aircraft reference frame and $\tilde{r}$ is the projection ray as measured in the global reference frame. Using the solution found for the intersection of a line and a plane, and using the aircraft position as a point on the line, the position of the static object in the global reference frame was found. Thus, for each object identified in an image, the coordinates of $p_1$ and $p_2$ are determined in the global reference frame, and (4-2) and (4-3) are then used to calculate the position of the object in the global reference frame.

Improved Techniques for Geo-Positioning of Static Objects

A precise camera model and an image-to-global coordinate transformation were developed. This involved finding the intrinsic and extrinsic camera parameters of the camera system attached to the aerial vehicle. A relation between the normalized pixel coordinates and coordinates in the projective coordinate plane was used:

$\begin{bmatrix} u_n \\ v_n \end{bmatrix} = \begin{bmatrix} X_c/Z_c \\ Y_c/Z_c \end{bmatrix}$  (4-9)

The normalized pixel coordinate vector $\tilde{m}$ and the projective plane coordinate vector $\tilde{M}$ are related using Equation 4-9 and form the projection relationship between points in the image plane and points in the camera reference frame, as shown in Figure 4-3, where

$\tilde{m} = \begin{bmatrix} u_n & v_n & 1 \end{bmatrix}^T$  (4-10)
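The simplified-model pipeline of Equations 4-1 to 4-8 can be sketched end to end as follows. The rotation sign conventions and axis choices here are one common option, not necessarily those of the original implementation:

```python
import math

def geo_position(p1, yaw, pitch, roll, ground_z=0.0):
    """Line/plane geo-positioning sketch (Eqs. 4-1 to 4-8): rotate the
    downward vector r = (0, 0, -1) by yaw, pitch, and roll (aircraft
    pose angles plus projection angles), then intersect the resulting
    ray from the camera focal point p1 = (x1, y1, z1) with the ground
    plane z = ground_z. Returns the target's global (x, y)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # compound rotation R = Rz(yaw) Ry(pitch) Rx(roll) applied to (0, 0, -1)
    rx = -(cy * sp * cr + sy * sr)
    ry = -(sy * sp * cr - cy * sr)
    rz = -(cp * cr)
    x1, y1, z1 = p1
    # Eq. 4-3 with A = B = 0, C = 1, D = -ground_z reduces to:
    u = (z1 - ground_z) / (-rz)
    return x1 + u * rx, y1 + u * ry
```

With zero pose and projection angles the ray points straight down and the target lands directly beneath the aircraft, which is a useful sanity check on any implementation of this method.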


$\tilde{M} = \begin{bmatrix} X_c & Y_c & Z_c & 1 \end{bmatrix}^T$  (4-11)

Figure 4-3. Normalized focal and projective planes

The transformation from image coordinates to global coordinates was determined using the normalized pixel coordinates and the camera position and orientation with respect to the global coordinate system (Figure 4-4). The transformation of a point M expressed in the camera reference system C to a point expressed in the global system is shown in Equation 4-12.

${}^G P_M = {}^G_C T\;{}^C P_M$  (4-12)

${}^G P_M = \begin{bmatrix} X_G \\ Y_G \\ Z_G \\ 1 \end{bmatrix} = \begin{bmatrix} {}^G_C R & {}^G P_{Co} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$  (4-13)

Dividing both sides of Equation 4-13 by $Z_C$ and substituting $Z_G = 0$ (assuming the elevation of the camera is evaluated as the above-ground level and the target location lies in the $Z_G = 0$ global plane) results in Equation 4-14.


Figure 4-4. Relation between a point in the camera and global reference frames

$$ \begin{pmatrix} X_G / Z_C \\ Y_G / Z_C \\ 0 \\ 1 / Z_C \end{pmatrix} = \begin{pmatrix} {}^{G}R_{C} & {}^{G}P_{Co} \\ 0_{1\times 3} & 1 \end{pmatrix} \begin{pmatrix} X_C / Z_C \\ Y_C / Z_C \\ 1 \\ 1 / Z_C \end{pmatrix} $$   (4-14)

Substituting $X_C / Z_C = u_n$ and $Y_C / Z_C = v_n$:

$$ \begin{pmatrix} X_G / Z_C \\ Y_G / Z_C \\ 0 \\ 1 / Z_C \end{pmatrix} = \begin{pmatrix} {}^{G}R_{C} & {}^{G}P_{Co} \\ 0_{1\times 3} & 1 \end{pmatrix} \begin{pmatrix} u_n \\ v_n \\ 1 \\ 1 / Z_C \end{pmatrix} $$   (4-15)

This leads to three equations in the three unknowns $X_G$, $Y_G$, and $Z_C$:

$$ \frac{X_G}{Z_C} = R_{11} u_n + R_{12} v_n + R_{13} + \frac{{}^{G}P_{Cox}}{Z_C} $$   (4-16)


$$ \frac{Y_G}{Z_C} = R_{21} u_n + R_{22} v_n + R_{23} + \frac{{}^{G}P_{Coy}}{Z_C} $$   (4-17)

$$ 0 = R_{31} u_n + R_{32} v_n + R_{33} + \frac{{}^{G}P_{Coz}}{Z_C} $$   (4-18)

where the scalar $R_{ij}$ represents the element in the $i$th row and $j$th column of the ${}^{G}R_{C}$ matrix. Using Equations 4-16, 4-17, and 4-18, the unknowns $Z_C$, $X_G$, and $Y_G$ can be determined explicitly:

$$ Z_C = \frac{-{}^{G}P_{Coz}}{R_{31} u_n + R_{32} v_n + R_{33}} $$   (4-19)

$$ X_G = \frac{-{}^{G}P_{Coz}}{R_{31} u_n + R_{32} v_n + R_{33}}\bigl(R_{11} u_n + R_{12} v_n + R_{13}\bigr) + {}^{G}P_{Cox} $$   (4-20)

$$ Y_G = \frac{-{}^{G}P_{Coz}}{R_{31} u_n + R_{32} v_n + R_{33}}\bigl(R_{21} u_n + R_{22} v_n + R_{23}\bigr) + {}^{G}P_{Coy} $$   (4-21)

Equations 4-20 and 4-21 provide the global coordinates of the static object.

Camera Calibration

In order to calculate the normalized pixel coordinates from raw imaging sensor data, a calibration procedure is performed using a camera calibration toolbox for MATLAB [15]. The calibration procedure determines the extrinsic and intrinsic parameters of the camera system. During the calibration procedure, several images of checkerboard patterns of known size are used, which allow the different parameters to be estimated as shown in Figure 4-5. The extrinsic parameters define the position and orientation characteristics of the camera system. These parameters are affected by the mounting and positioning of the camera relative to the body fixed coordinate system.
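As a concrete illustration, Equations 4-19 through 4-21 reduce to a few lines of code. The following is a sketch (using NumPy); the function name and the nadir-pointing rotation matrix in the example are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

def geo_position(R_gc, P_co, u_n, v_n):
    """Solve Equations 4-19 to 4-21 for a target on the Z_G = 0 plane.

    R_gc : 3x3 camera-to-global rotation matrix (the R_ij above)
    P_co : camera position (P_Cox, P_Coy, P_Coz) in the global frame
    u_n, v_n : normalized pixel coordinates of the target
    """
    denom = R_gc[2, 0] * u_n + R_gc[2, 1] * v_n + R_gc[2, 2]
    z_c = -P_co[2] / denom                                                     # Eq 4-19
    x_g = z_c * (R_gc[0, 0] * u_n + R_gc[0, 1] * v_n + R_gc[0, 2]) + P_co[0]  # Eq 4-20
    y_g = z_c * (R_gc[1, 0] * u_n + R_gc[1, 1] * v_n + R_gc[1, 2]) + P_co[1]  # Eq 4-21
    return x_g, y_g, z_c

# Illustrative nadir-pointing camera 10 m above the origin: a target at
# normalized coordinates (0.1, 0) then sits 1 m from the camera ground point.
R_nadir = np.array([[1.0,  0.0,  0.0],
                    [0.0, -1.0,  0.0],
                    [0.0,  0.0, -1.0]])
x_g, y_g, z_c = geo_position(R_nadir, np.array([0.0, 0.0, 10.0]), 0.1, 0.0)
print(x_g, y_g, z_c)  # 1.0 0.0 10.0
```

Note that the solution degenerates as the denominator of Equation 4-19 approaches zero, i.e., when the projection ray becomes parallel to the ground plane.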


Figure 4-5. Calibration checkerboard pattern

The intrinsic parameters define the optical projection and perspective characteristics of the camera system. These parameters are affected by the camera lens properties, imaging sensor properties, and lens/sensor placement properties. The camera lens properties are generally characterized by the focal length and the prescribed imaging sensor size. The focal length is a measure of how strongly the lens focuses the light energy; in essence, this correlates to the zoom of the lens given a fixed sensor size and distance. The imaging sensor properties are generally characterized by the physical size and the horizontal/vertical resolution of the imaging sensor. These properties help to define the dimensions and geometry of the image pixels. The lens/sensor placement properties are generally characterized by the misalignment of the lens and image sensor, and by the lens-to-sensor planar distance. For our analysis we are mostly concerned with determining the intrinsic parameters of the camera system. These parameters are used for calculating the normalized pixel coordinates given the raw pixel coordinates.

The intrinsic parameters that are used for generating the normalized pixel coordinates are the focal length, principal point, skew coefficient, and image distortion coefficients. The focal


length, as described earlier, estimates the linear projection of points observed in space onto the focal plane. The focal length has components in the x and y axes, and these values are not assumed to be equal. The principal point estimates the center pixel position; all normalized pixel coordinates are referenced to this point. The skew coefficient estimates the angle between the x and y axes of each pixel. In some instances the pixel geometry is not square or even rectangular; this coefficient describes how off-square the pixel x and y axes are and allows for compensation. The image distortion coefficients estimate the radial and tangential distortions typically caused by the camera lens. Radial distortion causes a changing magnification effect at varying radial distances. These effects are apparent when a straight line appears curved through the camera system. The tangential distortions are caused by ill centering or defects of the lens optics. These cause the displacement of points perpendicular to the radial imaging field.

Figure 4-6. Calibration images
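The role of these intrinsic parameters can be illustrated with a minimal normalization routine. The sketch below ignores the distortion coefficients (the calibration toolbox additionally inverts the radial/tangential distortion model), and the function name is illustrative:

```python
def normalize_pixel(u, v, fc, cc, alpha_c=0.0):
    """Map raw pixel coordinates (u, v) to normalized pixel coordinates.

    fc = (fx, fy): focal length in pixels, cc = (cx, cy): principal point,
    alpha_c: skew coefficient.  Lens distortion compensation is omitted
    here for brevity.
    """
    v_n = (v - cc[1]) / fc[1]                    # reference to principal point
    u_n = (u - cc[0]) / fc[0] - alpha_c * v_n    # skew correction on the x axis
    return u_n, v_n

# With a focal length of ~1020 pixels, a pixel 102 columns to the right of
# the principal point maps to a normalized coordinate of about 0.1.
u_n, v_n = normalize_pixel(747.66333, 527.72943,
                           (1019.52796, 1022.12290), (645.66333, 527.72943))
print(round(u_n, 4), round(v_n, 4))  # 0.1 0.0
```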


The camera calibration toolbox allows all of the intrinsic parameters to be estimated using several images of the predefined checkerboard pattern. Once the calibration procedure is completed, the intrinsic parameters are used in the geo-positioning algorithm. A selection of images was used that captured the checker pattern at different ranges and orientations, as shown in Figure 4-6. The boundaries of the checker pattern were then selected manually for each image. The calibration algorithm used the gradient of the pattern to find all of the vertices of the checkerboard, as shown in Figure 4-7.

Figure 4-7. Calibration images

Once the boundaries for all of the images were selected, the algorithm calculated the intrinsic camera parameter estimates using a gradient descent search. Using the selected images the following parameters were calculated:

Focal length: fc = [ 1019.52796 1022.12290 ] ± [ 20.11515 20.62667 ]
Principal point: cc = [ 645.66333 527.72943 ] ± [ 13.60462 10.92129 ]
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees


Distortion: kc = [ -0.17892 0.13875 -0.00128 0.00560 0.00000 ] ± [ 0.01419 0.02983 0.00158 0.00203 0.00000 ]
Pixel error: err = [ 0.22613 0.14137 ]

Geo-Positioning Sensitivity Analysis

In this section the sensitivity of the position solution is derived based on the measurable parameters used in the geo-positioning algorithm. This analysis will show the general sensitivity of the positioning solution, as well as the sensitivity at common operating conditions.

Equation 4-15 is used to determine the global position of the target based on the global position and orientation of the camera and the normalized target pixel coordinates. Multiplying Equation 4-15 through by $z_C$ produces:

$$ \begin{pmatrix} x_G \\ y_G \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{G}R_{C} & {}^{G}P_{Co} \\ 0_{1\times 3} & 1 \end{pmatrix} \begin{pmatrix} u_n z_C \\ v_n z_C \\ z_C \\ 1 \end{pmatrix} $$   (4-22)

Since the geo-positioning coordinates are of primary concern for the sensitivity analysis, Equation 4-22 is reduced to the form:

$$ \begin{pmatrix} x_G \\ y_G \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & R_{13} & {}^{G}P_{Cox} \\ R_{21} & R_{22} & R_{23} & {}^{G}P_{Coy} \end{pmatrix} \begin{pmatrix} u_n z_C \\ v_n z_C \\ z_C \\ 1 \end{pmatrix} $$   (4-23)

Equation 4-23 is rewritten in the form:

$$ \begin{pmatrix} x_G \\ y_G \end{pmatrix} = z_C\, A\, b $$   (4-24)

where


$$ A = \begin{pmatrix} R_{11} & R_{12} & R_{13} & {}^{G}P_{Cox} \\ R_{21} & R_{22} & R_{23} & {}^{G}P_{Coy} \end{pmatrix} $$   (4-25)

$$ b = \begin{pmatrix} u_n & v_n & 1 & 1/z_C \end{pmatrix}^T $$   (4-26)

where the $R_{ij}$ are the elements of ${}^{G}R_{C}$, which are functions of the camera attitude angles $\phi$, $\theta$, and $\psi$.

The geo-positioning process is modeled by assuming there are some errors in the parameters used in the calculation. The parameter vector is defined below:

$$ p = \begin{pmatrix} {}^{G}P_{Cox} & {}^{G}P_{Coy} & {}^{G}P_{Coz} & \phi & \theta & \psi & u_n & v_n \end{pmatrix}^T $$   (4-27)

The modeled process is shown below:

$$ \begin{pmatrix} x_G \\ y_G \end{pmatrix}_{actual} + \begin{pmatrix} x_G \\ y_G \end{pmatrix}_{error} = \bigl(z_C\, A\, b\bigr)\Big|_{p_\varepsilon} $$   (4-28)

where


$$ p_\varepsilon = \begin{pmatrix} {}^{G}P_{Cox} + \varepsilon_{P_{Cox}} \\ {}^{G}P_{Coy} + \varepsilon_{P_{Coy}} \\ {}^{G}P_{Coz} + \varepsilon_{P_{Coz}} \\ \phi + \varepsilon_{\phi} \\ \theta + \varepsilon_{\theta} \\ \psi + \varepsilon_{\psi} \\ u_n + \varepsilon_{u_n} \\ v_n + \varepsilon_{v_n} \end{pmatrix} $$   (4-29)

The positioning error from Equation 4-28 reduces to:

$$ e = \begin{pmatrix} e_x \\ e_y \end{pmatrix} = \bigl(z_C\, A\, b\bigr)\Big|_{p_\varepsilon} - z_C\, A\, b $$   (4-30)

In order to establish a metric from measurement of the error for the sensitivity analysis, the inner product of Equation 4-30 is used:

$$ e^T e = \Bigl( \bigl(z_C A b\bigr)\big|_{p_\varepsilon} - z_C A b \Bigr)^T \Bigl( \bigl(z_C A b\bigr)\big|_{p_\varepsilon} - z_C A b \Bigr) = \bigl(z_C A b\bigr)\big|_{p_\varepsilon}^T \bigl(z_C A b\bigr)\big|_{p_\varepsilon} - 2\,\bigl(z_C A b\bigr)\big|_{p_\varepsilon}^T \bigl(z_C A b\bigr) + \bigl(z_C A b\bigr)^T \bigl(z_C A b\bigr) $$   (4-31)

Upon using Equation 4-24 to substitute for $z_C A b$, the generic form for the error variance becomes:

$$ e^T e = z_C^2\, b^T A^T A\, b\,\Big|_{p_\varepsilon} - 2\,\bigl(z_C A b\bigr)\big|_{p_\varepsilon}^T \begin{pmatrix} x_G \\ y_G \end{pmatrix} + \begin{pmatrix} x_G \\ y_G \end{pmatrix}^T \begin{pmatrix} x_G \\ y_G \end{pmatrix} $$   (4-32)

The partial derivative of the generic error variance is shown for an arbitrary parameter $\varepsilon$:


$$ \frac{\partial\, e^T e}{\partial \varepsilon} = \left. \left( 2 z_C \frac{\partial z_C}{\partial \varepsilon}\, b^T A^T A\, b + z_C^2 \frac{\partial b^T}{\partial \varepsilon} A^T A\, b + z_C^2\, b^T \frac{\partial A^T}{\partial \varepsilon} A\, b + z_C^2\, b^T A^T \frac{\partial A}{\partial \varepsilon}\, b + z_C^2\, b^T A^T A\, \frac{\partial b}{\partial \varepsilon} \right) \right|_{p_\varepsilon} - 2 \begin{pmatrix} x_G \\ y_G \end{pmatrix}^T \left. \left( \frac{\partial z_C}{\partial \varepsilon}\, A\, b + z_C \frac{\partial A}{\partial \varepsilon}\, b + z_C\, A\, \frac{\partial b}{\partial \varepsilon} \right) \right|_{p_\varepsilon} $$   (4-33)

In order to reduce the complexity of the analysis, and to provide a more concise representation of the effects of the parameter errors on the error variance, the target position is set as the origin of the global coordinate system, without loss of generality. Equation 4-30 reduces to:

$$ e = \bigl(z_C\, A\, b\bigr)\Big|_{p_\varepsilon} $$   (4-34)

$$ e^T e = \bigl(z_C A b\bigr)\big|_{p_\varepsilon}^T \bigl(z_C A b\bigr)\big|_{p_\varepsilon} = z_C^2\, b^T A^T A\, b\,\Big|_{p_\varepsilon} $$   (4-35)

This quantity equates to the error variance of the positioning solution given the system configuration and error values. It is desirable to determine the effects of errors in each parameter used in the geo-positioning solution. Hence the partial derivative of the inner product is calculated with respect to each parameter error. The partial derivative of the reduced error variance is shown for an arbitrary parameter $\varepsilon$:

$$ \frac{\partial\, e^T e}{\partial \varepsilon} = 2 z_C \frac{\partial z_C}{\partial \varepsilon}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial \varepsilon} A^T A\, b + b^T \frac{\partial A^T}{\partial \varepsilon} A\, b + b^T A^T \frac{\partial A}{\partial \varepsilon}\, b + b^T A^T A\, \frac{\partial b}{\partial \varepsilon} \right) $$   (4-36)


Equation 4-25 is restated below along with its partial derivatives with respect to ${}^{G}P_{Cox}$, ${}^{G}P_{Coy}$, ${}^{G}P_{Coz}$, $\phi$, $\theta$, $\psi$, $u_n$, and $v_n$:

$$ A = \begin{pmatrix} R_{11} & R_{12} & R_{13} & {}^{G}P_{Cox} \\ R_{21} & R_{22} & R_{23} & {}^{G}P_{Coy} \end{pmatrix} $$   (4-25)

$$ \frac{\partial A}{\partial {}^{G}P_{Cox}} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$   (4-37)

$$ \frac{\partial A}{\partial {}^{G}P_{Coy}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$   (4-38)

$$ \frac{\partial A}{\partial {}^{G}P_{Coz}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$   (4-39)

$$ \frac{\partial A}{\partial \phi} = \begin{pmatrix} \frac{\partial R_{11}}{\partial \phi} & \frac{\partial R_{12}}{\partial \phi} & \frac{\partial R_{13}}{\partial \phi} & 0 \\ \frac{\partial R_{21}}{\partial \phi} & \frac{\partial R_{22}}{\partial \phi} & \frac{\partial R_{23}}{\partial \phi} & 0 \end{pmatrix} $$   (4-40)

$$ \frac{\partial A}{\partial \theta} = \begin{pmatrix} \frac{\partial R_{11}}{\partial \theta} & \frac{\partial R_{12}}{\partial \theta} & \frac{\partial R_{13}}{\partial \theta} & 0 \\ \frac{\partial R_{21}}{\partial \theta} & \frac{\partial R_{22}}{\partial \theta} & \frac{\partial R_{23}}{\partial \theta} & 0 \end{pmatrix} $$   (4-41)

$$ \frac{\partial A}{\partial \psi} = \begin{pmatrix} \frac{\partial R_{11}}{\partial \psi} & \frac{\partial R_{12}}{\partial \psi} & \frac{\partial R_{13}}{\partial \psi} & 0 \\ \frac{\partial R_{21}}{\partial \psi} & \frac{\partial R_{22}}{\partial \psi} & \frac{\partial R_{23}}{\partial \psi} & 0 \end{pmatrix} $$   (4-42)

$$ \frac{\partial A}{\partial u_n} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$   (4-43)

$$ \frac{\partial A}{\partial v_n} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$   (4-44)

Equation 4-26 is restated below along with its partial derivatives with respect to the same parameters:

$$ b = \begin{pmatrix} u_n & v_n & 1 & 1/z_C \end{pmatrix}^T $$   (4-26)


From Equation 4-19, the fourth element of $b$ is

$$ \frac{1}{z_C} = -\frac{R_{31} u_n + R_{32} v_n + R_{33}}{{}^{G}P_{Coz}} $$   (4-45)

so that

$$ \frac{\partial b}{\partial u_n} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ -\dfrac{R_{31}}{{}^{G}P_{Coz}} \end{pmatrix} $$   (4-46)

$$ \frac{\partial b}{\partial v_n} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ -\dfrac{R_{32}}{{}^{G}P_{Coz}} \end{pmatrix} $$   (4-47)

$$ \frac{\partial b}{\partial {}^{G}P_{Cox}} = \begin{pmatrix} 0 & 0 & 0 & 0 \end{pmatrix}^T $$   (4-48)

$$ \frac{\partial b}{\partial {}^{G}P_{Coy}} = \begin{pmatrix} 0 & 0 & 0 & 0 \end{pmatrix}^T $$   (4-49)

$$ \frac{\partial b}{\partial {}^{G}P_{Coz}} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \dfrac{R_{31} u_n + R_{32} v_n + R_{33}}{\bigl({}^{G}P_{Coz}\bigr)^2} \end{pmatrix} $$   (4-50)

$$ \frac{\partial b}{\partial \phi} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ -\dfrac{1}{{}^{G}P_{Coz}} \Bigl( \dfrac{\partial R_{31}}{\partial \phi} u_n + \dfrac{\partial R_{32}}{\partial \phi} v_n + \dfrac{\partial R_{33}}{\partial \phi} \Bigr) \end{pmatrix} $$   (4-51)


$$ \frac{\partial b}{\partial \theta} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ -\dfrac{1}{{}^{G}P_{Coz}} \Bigl( \dfrac{\partial R_{31}}{\partial \theta} u_n + \dfrac{\partial R_{32}}{\partial \theta} v_n + \dfrac{\partial R_{33}}{\partial \theta} \Bigr) \end{pmatrix} $$   (4-52)

$$ \frac{\partial b}{\partial \psi} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ -\dfrac{1}{{}^{G}P_{Coz}} \Bigl( \dfrac{\partial R_{31}}{\partial \psi} u_n + \dfrac{\partial R_{32}}{\partial \psi} v_n + \dfrac{\partial R_{33}}{\partial \psi} \Bigr) \end{pmatrix} $$   (4-53)

Equation 4-19 is restated below along with its partial derivatives with respect to the same parameters:

$$ z_C = \frac{-{}^{G}P_{Coz}}{R_{31} u_n + R_{32} v_n + R_{33}} $$   (4-54)

$$ \frac{\partial z_C}{\partial {}^{G}P_{Cox}} = 0 $$   (4-55)

$$ \frac{\partial z_C}{\partial {}^{G}P_{Coy}} = 0 $$   (4-56)

$$ \frac{\partial z_C}{\partial {}^{G}P_{Coz}} = \frac{-1}{R_{31} u_n + R_{32} v_n + R_{33}} $$   (4-57)

$$ \frac{\partial z_C}{\partial u_n} = \frac{{}^{G}P_{Coz}\, R_{31}}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2} $$   (4-58)

$$ \frac{\partial z_C}{\partial v_n} = \frac{{}^{G}P_{Coz}\, R_{32}}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2} $$   (4-59)

$$ \frac{\partial z_C}{\partial \phi} = \frac{{}^{G}P_{Coz} \left( \frac{\partial R_{31}}{\partial \phi} u_n + \frac{\partial R_{32}}{\partial \phi} v_n + \frac{\partial R_{33}}{\partial \phi} \right)}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2} $$   (4-60)


$$ \frac{\partial z_C}{\partial \theta} = \frac{{}^{G}P_{Coz} \left( \frac{\partial R_{31}}{\partial \theta} u_n + \frac{\partial R_{32}}{\partial \theta} v_n + \frac{\partial R_{33}}{\partial \theta} \right)}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2} $$   (4-61)

$$ \frac{\partial z_C}{\partial \psi} = \frac{{}^{G}P_{Coz} \left( \frac{\partial R_{31}}{\partial \psi} u_n + \frac{\partial R_{32}}{\partial \psi} v_n + \frac{\partial R_{33}}{\partial \psi} \right)}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2} $$   (4-62)

With the partial derivatives for the components of the error inner product defined, the sensitivity of the error can be quantified for each parameter. Hence the sensitivity of the error variance can be derived with respect to each parameter error. The error sensitivity with respect to ${}^{G}P_{Cox}$ is shown in Equations 4-63 and 4-64.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Cox}}} = 2 z_C \frac{\partial z_C}{\partial {}^{G}P_{Cox}}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial {}^{G}P_{Cox}} A^T A\, b + b^T \frac{\partial A^T}{\partial {}^{G}P_{Cox}} A\, b + b^T A^T \frac{\partial A}{\partial {}^{G}P_{Cox}}\, b + b^T A^T A\, \frac{\partial b}{\partial {}^{G}P_{Cox}} \right) $$   (4-63)

Since $\partial z_C / \partial {}^{G}P_{Cox}$ and $\partial b / \partial {}^{G}P_{Cox}$ vanish, this reduces to:

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Cox}}} = z_C^2 \left( b^T \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}^T A\, b + b^T A^T \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} b \right) $$   (4-64)

The error sensitivity with respect to ${}^{G}P_{Coy}$ is shown in Equations 4-65 and 4-66.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Coy}}} = 2 z_C \frac{\partial z_C}{\partial {}^{G}P_{Coy}}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial {}^{G}P_{Coy}} A^T A\, b + b^T \frac{\partial A^T}{\partial {}^{G}P_{Coy}} A\, b + b^T A^T \frac{\partial A}{\partial {}^{G}P_{Coy}}\, b + b^T A^T A\, \frac{\partial b}{\partial {}^{G}P_{Coy}} \right) $$   (4-65)

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Coy}}} = z_C^2 \left( b^T \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}^T A\, b + b^T A^T \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} b \right) $$   (4-66)

The error sensitivity with respect to ${}^{G}P_{Coz}$ is shown in Equations 4-67 and 4-68.


$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Coz}}} = 2 z_C \frac{\partial z_C}{\partial {}^{G}P_{Coz}}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial {}^{G}P_{Coz}} A^T A\, b + b^T A^T A\, \frac{\partial b}{\partial {}^{G}P_{Coz}} \right) $$   (4-67)

Substituting Equations 4-50 and 4-57:

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{P_{Coz}}} = \frac{-2\, z_C\, b^T A^T A\, b}{R_{31} u_n + R_{32} v_n + R_{33}} + 2\, z_C^2\, b^T A^T A \begin{pmatrix} 0 \\ 0 \\ 0 \\ \dfrac{R_{31} u_n + R_{32} v_n + R_{33}}{\bigl({}^{G}P_{Coz}\bigr)^2} \end{pmatrix} $$   (4-68)

The error sensitivity with respect to $\phi$ is shown in Equations 4-69 and 4-70.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\phi}} = 2 z_C \frac{\partial z_C}{\partial \phi}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial \phi} A^T A\, b + b^T \frac{\partial A^T}{\partial \phi} A\, b + b^T A^T \frac{\partial A}{\partial \phi}\, b + b^T A^T A\, \frac{\partial b}{\partial \phi} \right) $$   (4-69)

Since the transposed terms are scalars, this collapses to:

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\phi}} = 2 z_C \frac{\partial z_C}{\partial \phi}\, b^T A^T A\, b + 2\, z_C^2 \left( b^T A^T \frac{\partial A}{\partial \phi}\, b + b^T A^T A\, \frac{\partial b}{\partial \phi} \right) $$   (4-70)

where the partial derivatives are given by Equations 4-40, 4-51, and 4-60.


The error sensitivity with respect to $\theta$ is shown in Equations 4-71 and 4-72.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\theta}} = 2 z_C \frac{\partial z_C}{\partial \theta}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial \theta} A^T A\, b + b^T \frac{\partial A^T}{\partial \theta} A\, b + b^T A^T \frac{\partial A}{\partial \theta}\, b + b^T A^T A\, \frac{\partial b}{\partial \theta} \right) $$   (4-71)

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\theta}} = 2 z_C \frac{\partial z_C}{\partial \theta}\, b^T A^T A\, b + 2\, z_C^2 \left( b^T A^T \frac{\partial A}{\partial \theta}\, b + b^T A^T A\, \frac{\partial b}{\partial \theta} \right) $$   (4-72)

where the partial derivatives are given by Equations 4-41, 4-52, and 4-61. The error sensitivity with respect to $\psi$ is shown in Equations 4-73 and 4-74.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\psi}} = 2 z_C \frac{\partial z_C}{\partial \psi}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial \psi} A^T A\, b + b^T \frac{\partial A^T}{\partial \psi} A\, b + b^T A^T \frac{\partial A}{\partial \psi}\, b + b^T A^T A\, \frac{\partial b}{\partial \psi} \right) $$   (4-73)


$$ \frac{\partial\, e^T e}{\partial \varepsilon_{\psi}} = 2 z_C \frac{\partial z_C}{\partial \psi}\, b^T A^T A\, b + 2\, z_C^2 \left( b^T A^T \frac{\partial A}{\partial \psi}\, b + b^T A^T A\, \frac{\partial b}{\partial \psi} \right) $$   (4-74)

where the partial derivatives are given by Equations 4-42, 4-53, and 4-62. The error sensitivity with respect to $u_n$ is shown in Equations 4-75 and 4-76.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{u_n}} = 2 z_C \frac{\partial z_C}{\partial u_n}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial u_n} A^T A\, b + b^T \frac{\partial A^T}{\partial u_n} A\, b + b^T A^T \frac{\partial A}{\partial u_n}\, b + b^T A^T A\, \frac{\partial b}{\partial u_n} \right) $$   (4-75)

Since $\partial A / \partial u_n$ vanishes, substituting Equations 4-46 and 4-58 gives:

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{u_n}} = \frac{2\, z_C\, {}^{G}P_{Coz}\, R_{31}}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2}\, b^T A^T A\, b + 2\, z_C^2\, b^T A^T A \begin{pmatrix} 1 \\ 0 \\ 0 \\ -\dfrac{R_{31}}{{}^{G}P_{Coz}} \end{pmatrix} $$   (4-76)

The error sensitivity with respect to $v_n$ is shown in Equations 4-77 and 4-78.

$$ \frac{\partial\, e^T e}{\partial \varepsilon_{v_n}} = 2 z_C \frac{\partial z_C}{\partial v_n}\, b^T A^T A\, b + z_C^2 \left( \frac{\partial b^T}{\partial v_n} A^T A\, b + b^T \frac{\partial A^T}{\partial v_n} A\, b + b^T A^T \frac{\partial A}{\partial v_n}\, b + b^T A^T A\, \frac{\partial b}{\partial v_n} \right) $$   (4-77)


$$ \frac{\partial\, e^T e}{\partial \varepsilon_{v_n}} = \frac{2\, z_C\, {}^{G}P_{Coz}\, R_{32}}{\bigl(R_{31} u_n + R_{32} v_n + R_{33}\bigr)^2}\, b^T A^T A\, b + 2\, z_C^2\, b^T A^T A \begin{pmatrix} 0 \\ 1 \\ 0 \\ -\dfrac{R_{32}}{{}^{G}P_{Coz}} \end{pmatrix} $$   (4-78)

This derivation provides the general sensitivity equations for target geo-positioning from a UAV. These equations provide the basis for the sensitivity analysis conducted in the following chapters. These results will be combined with empirically derived sensor data to determine the significance of each parameter relative to the induced geo-positioning error.
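The analytic sensitivities can be spot-checked numerically by differencing the squared positioning error with respect to a single parameter. The sketch below assumes a nadir-mounted camera rotated by a 3-1-2 attitude sequence — an illustrative stand-in for the actual camera mounting transform, not the dissertation's configuration:

```python
import numpy as np

def rot_312(phi, theta, psi):
    """Compound rotation for a 3-1-2 (Z, then X, then Y) Euler sequence."""
    cps, sps = np.cos(psi), np.sin(psi)
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    Rz = np.array([[cps, -sps, 0], [sps, cps, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cph, -sph], [0, sph, cph]])
    Ry = np.array([[cth, 0, sth], [0, 1, 0], [-sth, 0, cth]])
    return Rz @ Rx @ Ry

def sq_error(p):
    """Squared geo-positioning error e^T e with the target at the origin.

    p = (P_Cox, P_Coy, P_Coz, phi, theta, psi, u_n, v_n); the camera-to-global
    rotation is modeled as the attitude rotation times a nadir mounting flip.
    """
    R = rot_312(p[3], p[4], p[5]) @ np.diag([1.0, -1.0, -1.0])
    u_n, v_n = p[6], p[7]
    z_c = -p[2] / (R[2, 0] * u_n + R[2, 1] * v_n + R[2, 2])       # Eq 4-19
    x_g = z_c * (R[0, 0] * u_n + R[0, 1] * v_n + R[0, 2]) + p[0]  # Eq 4-20
    y_g = z_c * (R[1, 0] * u_n + R[1, 1] * v_n + R[1, 2]) + p[1]  # Eq 4-21
    return x_g ** 2 + y_g ** 2

# Central-difference sensitivity of e^T e to the altitude term P_Coz,
# evaluated at a slightly perturbed hover configuration.
p = np.array([0.0, 0.0, 10.0, 0.02, -0.01, 0.3, 0.01, -0.02])
h = 1e-5
dp = np.zeros(8)
dp[2] = h
sens = (sq_error(p + dp) - sq_error(p - dp)) / (2 * h)
print(sens > 0)  # True: error grows with altitude for nonzero pixel offsets
```

Because the target sits at the origin, the squared error here is exactly quadratic in the altitude, so the central difference agrees with the analytic slope $2\,e^Te/{}^{G}P_{Coz}$ to numerical precision.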


CHAPTER 5
UNMANNED ROTORCRAFT MODELING

In order to derive the equations of motion of the aircraft and to perform further analysis, an aircraft model was developed based on previous work [7,8,9,10,16,17]. For this research the scope of the rotorcraft mechanics was limited to Bell-Hiller mixing and a flapping rotor head design. A simplified aircraft model was developed previously [16,17] for simulation and controller development; a similar approach is used here for the derivations. Mettler et al. [7,8,9] use a more complex analysis when deriving their dynamic equations. Their analysis includes more complex dynamic factors such as fly-bar paddle mixing, main blade drag/torque effects, and fuselage/stabilizer aerodynamic effects.

The actuator inputs commonly used for control of RC rotorcraft are composed of:

$\delta_{lon}$: longitudinal cyclic control
$\delta_{lat}$: lateral cyclic control
$\delta_{col}$: collective pitch control
$\delta_{rud}$: tail rudder pitch control
$\delta_{thr}$: throttle control

A body fixed coordinate system was used in order to relate sensor and motion information in the inertial and relative reference frames. Figures 5-1 and 5-2 show the body fixed coordinate system.

A transformation matrix was derived which relates the position and orientation of the body fixed frame to the inertial frame. The orientation of the body fixed frame is related to the inertial frame using a 3-1-2 rotation sequence. The inertial frame is initially in the North-East-Down orientation. The coordinate system undergoes a rotation about the Z axis, then a rotation


about the X axis, and then a rotation about the Y axis. The compound rotation is equated below in Equation 5-1 and the individual rotations are shown in Equations 5-2, 5-3, and 5-4.

Figure 5-1. Top view of the body fixed coordinate system

Figure 5-2. Side view of the body fixed coordinate system

$$ {}^{1}R_{4} = {}^{1}R_{2}\,{}^{2}R_{3}\,{}^{3}R_{4} $$   (5-1)

$$ {}^{1}R_{2} = \begin{pmatrix} C_\psi & -S_\psi & 0 \\ S_\psi & C_\psi & 0 \\ 0 & 0 & 1 \end{pmatrix} $$   (5-2)


$$ {}^{2}R_{3} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi & C_\phi \end{pmatrix} $$   (5-3)

$$ {}^{3}R_{4} = \begin{pmatrix} C_\theta & 0 & S_\theta \\ 0 & 1 & 0 \\ -S_\theta & 0 & C_\theta \end{pmatrix} $$   (5-4)

The final compound rotation matrix is equated below:

$$ {}^{1}R_{4} = \begin{pmatrix} C_\psi C_\theta - S_\psi S_\phi S_\theta & -S_\psi C_\phi & C_\psi S_\theta + S_\psi S_\phi C_\theta \\ S_\psi C_\theta + C_\psi S_\phi S_\theta & C_\psi C_\phi & S_\psi S_\theta - C_\psi S_\phi C_\theta \\ -C_\phi S_\theta & S_\phi & C_\phi C_\theta \end{pmatrix} $$

where the notation $C_i$ and $S_i$ represents the cosine and sine of the angle $i$, respectively.

The transformation matrix which converts a point measured in the body fixed frame to the point measured in the inertial fixed frame is shown in Equation 5-5.

$$ {}^{Inertial}T_{Body} = \begin{pmatrix} {}^{Inertial}R_{Body} & {}^{Inertial}P_{Bodyo} \\ 0_{1\times 3} & 1 \end{pmatrix} $$   (5-5)

where ${}^{Inertial}P_{Bodyo}$ represents the position of the body fixed frame origin measured in the inertial frame.

The lateral and longitudinal motion of the aircraft is primarily controlled by the lateral and longitudinal cyclic control inputs. For a flapping rotor head, the motions of the main rotor blades form a disk whose orientation with respect to the airframe is controlled by these inputs. The orientation of the main rotor disk is illustrated in Figure 5-3. In this analysis, $a$ represents the lateral rotation of the main rotor blade disk and $b$ represents the longitudinal rotation of the main rotor blade disk. In a report by Heffley and Mnich [17], motion of the main rotor disc is approximated by a first order system as shown below:


Figure 5-3. Main rotor blade angle

$$ \begin{pmatrix} \dot a \\ \dot b \end{pmatrix} = \begin{pmatrix} -\lambda_{lat} & 0 \\ 0 & -\lambda_{lon} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} + \begin{pmatrix} a_{max}\lambda_{lat} & 0 \\ 0 & b_{max}\lambda_{lon} \end{pmatrix} \begin{pmatrix} \delta_{lat} \\ \delta_{lon} \end{pmatrix} $$   (5-6)

where $\lambda_{lat}$ is the lateral cyclic damping coefficient and $\lambda_{lon}$ is the longitudinal cyclic damping coefficient.

The angular velocity as measured in the body fixed frame $B$ can be translated into angular velocity in the inertial frame by using the compound rotation matrix of Equation 5-4:

$$ \hat{\omega}_{G} = {}^{G}\dot{R}_{B}\,{}^{G}R_{B}^{T}, \qquad \hat{\omega}_{B} = {}^{G}R_{B}^{T}\,{}^{G}\dot{R}_{B} $$   (5-7)

Figure 5-4. Main rotor thrust vector


The main rotor induces a moment and a linear force on the body of the aircraft. These induce lateral and longitudinal motion, and roll and pitch rotations, of the aircraft. The main rotor thrust vector $T_{MR}$ is illustrated in Figure 5-4. The main rotor thrust vector as measured in the body fixed frame is:

$$ T_{MR} = T_{MR} \begin{pmatrix} \sin(b) \\ \sin(a) \\ \sqrt{1 - \sin^2(b) - \sin^2(a)} \end{pmatrix} $$   (5-7)

The equations of motion of the aircraft were derived in the inertial frame using the following equations:

$$ {}^{G}R_{B} \sum F_{B} = m\, a_{G}, \qquad {}^{G}R_{B} \sum M_{B} = \bigl({}^{G}R_{B}\, I_{B}\, {}^{G}R_{B}^{T}\bigr)\, \dot{\omega}_{G} $$   (5-8)

This derivation has resulted in a simplified helicopter dynamic model. This model provides a foundation for simulation of the aircraft in the absence of an experimental platform. This derivation was performed to provide the reader with basic helicopter dynamic principles and an introduction to helicopter control mechanics. Now that a background on helicopter mechanics and dynamics has been presented, the next chapter will discuss the use of onboard sensors and signal processing for aircraft state estimation.
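The 3-1-2 rotation sequence used throughout this chapter is easy to verify numerically. The following sketch (NumPy; the sample angles are arbitrary) composes Equations 5-2 through 5-4 and checks the result against the expanded compound matrix and the orthonormality property of rotation matrices:

```python
import numpy as np

psi, phi, theta = 0.3, -0.2, 0.1   # yaw (Z), roll (X), pitch (Y), radians
C, S = np.cos, np.sin

Rz = np.array([[C(psi), -S(psi), 0], [S(psi), C(psi), 0], [0, 0, 1]])          # 1R2
Rx = np.array([[1, 0, 0], [0, C(phi), -S(phi)], [0, S(phi), C(phi)]])          # 2R3
Ry = np.array([[C(theta), 0, S(theta)], [0, 1, 0], [-S(theta), 0, C(theta)]])  # 3R4

R14 = Rz @ Rx @ Ry   # Equation 5-1

# Expanded compound rotation matrix, element by element
R_expanded = np.array([
    [C(psi)*C(theta) - S(psi)*S(phi)*S(theta), -S(psi)*C(phi),
     C(psi)*S(theta) + S(psi)*S(phi)*C(theta)],
    [S(psi)*C(theta) + C(psi)*S(phi)*S(theta),  C(psi)*C(phi),
     S(psi)*S(theta) - C(psi)*S(phi)*C(theta)],
    [-C(phi)*S(theta), S(phi), C(phi)*C(theta)]])

print(np.allclose(R14, R_expanded), np.allclose(R14 @ R14.T, np.eye(3)))  # True True
```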


CHAPTER 6
STATE ESTIMATION USING ONBOARD SENSORS

This research proposes to derive and demonstrate the estimation of UGV states using a UAV. In order to estimate the UGV states, estimates of the UAV states are required. In this research, sensor measurements from the UAV are used to perform the state estimation of both the UAV and the UGV. This research is primarily concerned with developing a remote sensing system; where it stands out is in how UAV dynamics and state measurements are utilized in passively determining the states of the UGV.

Attitude Estimation Using Accelerometer Measurements

As discussed earlier, a two or three axis accelerometer can be used for determining the attitude of an aircraft. Simple equations for determining the roll and pitch angles of an aircraft using the acceleration measurements in the x and y body fixed axes are shown in Equations 6-1 and 6-2:

$$ roll = \sin^{-1}\!\left(\frac{a_y}{g}\right) $$   (6-1)

$$ pitch = \sin^{-1}\!\left(\frac{a_x}{g}\right) $$   (6-2)

where $a_x$ and $a_y$ are the measured accelerations in the body fixed x and y axes.

A major problem with using accelerometers for attitude estimation is the effect of the high frequency vibration inherent to rotary wing aircraft. There are several characteristic frequencies in the rotorcraft system to consider when analyzing the accelerometer signals. The main characteristic frequencies are the speeds of the main rotor blades, tail rotor blades, and engine/motor. The highest frequency vibration will come from the engine/motor. The main gear of the power transmission reduces the frequency to the main rotor head by about a factor of 9.8 for the Gas Xcell helicopter. The frequency is then further reduced by the tail rotor transmission


to the tail rotor blades. Any imbalance in the motor or motor fan, transmission gears, or rotor heads/blades can cause significant vibration, as can any bent or misaligned shafts. Due to the speed and number of moving parts in a helicopter, these aircraft have significant vibration at the engine, main rotor, and tail rotor frequencies and their harmonics.

Extreme care must be taken to ensure balance and proper alignment of all elements of the drive train. Time taken balancing and inspecting components can pay off in the long run in system performance. The airframe and payload structure must also be carefully considered. Due to the energy content at specific frequencies, any structural element with a natural frequency at or near the engine or main/tail rotor frequencies or their harmonics could produce disastrous effects. Rigid mounting of the payload is highly discouraged, as there would be no element other than the aircraft structure to dissipate the cyclic loading.

Prospective researchers are forewarned that small and large unmanned aircraft systems should be treated like any other piece of heavy machinery. In this case the payload was rigidly attached to the base of the aircraft frame. Upon spool-up of the engine, the head speed transitioned into the natural frequency of the airframe, with the most flexible components of the system being the side frames of the aircraft. In less than a second the aircraft entered a resonant vibration mode. This resulted in a tail-boom strike by the main blades. The main shaft shattered, projecting the upper main bearing block, which struck the pilot over thirty feet away. Airframe resonance is particularly dangerous in all rotary aircraft, from small unmanned systems to large heavy-lift commercial and military helicopters.

A Fast Fourier Transform (FFT) of the accelerometer measurements shows very distinct spikes on all axes at specific frequencies, as shown in Figure 6-1.
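Producing a spectrum like Figure 6-1 amounts to taking the magnitude of the FFT of each accelerometer axis. A sketch follows, with synthetic data standing in for the logged signal (the sample rate and frequencies are illustrative, not the flight hardware's):

```python
import numpy as np

fs = 100.0                            # sample rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
# synthetic accelerometer channel: a slow maneuver plus a 20 Hz vibration spike
accel_x = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.8 * np.sin(2 * np.pi * 20.0 * t)

spectrum = np.abs(np.fft.rfft(accel_x)) / len(accel_x)   # one-sided magnitude
freqs = np.fft.rfftfreq(len(accel_x), d=1 / fs)

# locate the dominant vibration component above the maneuvering band
mask = freqs > 5.0
peak = freqs[mask][np.argmax(spectrum[mask])]
print(peak)  # 20.0
```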


Figure 6-1. Fast Fourier Transform of raw accelerometer data

Figure 6-2. Fast Fourier Transform of raw accelerometer data after low-pass filter

Strategic filtering at the major vibration frequencies can improve the attitude estimates while still allowing the aircraft dynamics to be measured. Also, by attenuating only specific


frequency bands, the noise can be reduced while still producing a fast signal response. A discrete low-pass Butterworth IIR filter was used, with a 5 Hz pass band and a 10 Hz stop band, to filter the high frequency noise evident between 15-25 Hz. The raw accelerometer FFT response using the low-pass filter is shown in Figure 6-2.

The FFT of the raw accelerometer data shows that the high frequency noise is attenuated beyond 5 Hz, thereby eliminating the major effects caused by the power train or by high frequency electrical interference. Before the low-pass filter was applied, the roll and pitch measurements were almost unusable, as shown in Figure 6-3.

Figure 6-3. Roll and Pitch measurement prior to applying low-pass filter


After the low-pass filter was applied, the measurements produced much more viable results, as shown in Figure 6-4.

Figure 6-4. Roll and Pitch measurement after applying low-pass filter

These results indicate the importance of proper vehicle maintenance and assembly. More rigorous balancing and tuning of the vehicle can produce much better system performance and reduce the work required to compensate for vibration in sensor data.

Heading Estimation Using Magnetometer Measurements

The heading of unmanned ground and air vehicles is commonly estimated by measuring the local magnetic field of the earth. The magnetic north or compass bearing has been used for hundreds of years for navigation and mapping. By measuring the local magnetic field, an


estimate of the northern magnetic field vector can be obtained. The error between true north, as measured relative to latitude and longitude, and magnetic north varies with location on the globe. For a given location, the variation between true and magnetic north is known and can be compensated. Alternative methods for determining the heading of unmanned systems exist, including the use of highly accurate rate gyros: by precisely measuring the angular rate of a static vehicle, the angular rate induced by the rotation of the earth can be used to estimate heading. This requires extremely high precision rate gyros, which are currently too expensive, large, and sensitive for small unmanned systems.

Normally all three axes of the magnetometer would be used for heading estimation, but because the aircraft does not perform any radical roll or pitch maneuvers, only the lateral and longitudinal magnetometer measurements are required, as shown in Figure 6-5.

Figure 6-5. Magnetic heading estimate
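A minimal two-axis heading computation consistent with the description above might look like this; the NED body-axis sign conventions and the declination handling are assumptions for the sketch, not taken from the dissertation:

```python
import numpy as np

def mag_heading(m_x, m_y, declination_deg=0.0):
    """Heading from body-frame longitudinal (m_x) and lateral (m_y)
    magnetometer readings, valid for near-level flight (no roll/pitch
    compensation).  declination_deg corrects magnetic north to true
    north for the local site; the value is site-specific.
    """
    heading = np.degrees(np.arctan2(-m_y, m_x)) + declination_deg
    return heading % 360.0

print(mag_heading(1.0, 0.0))    # 0.0  (facing magnetic north)
print(mag_heading(0.0, -1.0))   # 90.0 (facing east)
```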


UGV State Estimation

The geo-positioning equations derived in the previous chapters are restated below:

$$ \begin{pmatrix} x_G \\ y_G \end{pmatrix} = z_C\, A\, b $$   (6-3)

where

$$ A = \begin{pmatrix} R_{11} & R_{12} & R_{13} & {}^{G}P_{Cox} \\ R_{21} & R_{22} & R_{23} & {}^{G}P_{Coy} \end{pmatrix} $$   (6-4)

$$ b = \begin{pmatrix} u_n & v_n & 1 & 1/z_C \end{pmatrix}^T $$   (6-5)

$$ z_C = \frac{-{}^{G}P_{Coz}}{R_{31} u_n + R_{32} v_n + R_{33}} $$   (6-6)

By identifying two unique points fixed to the UGV, the direction vector can be defined:

$$ h = \begin{pmatrix} \cos\psi \\ \sin\psi \end{pmatrix} = \frac{1}{\sqrt{(x_{G2} - x_{G1})^2 + (y_{G2} - y_{G1})^2}} \begin{pmatrix} x_{G2} - x_{G1} \\ y_{G2} - y_{G1} \end{pmatrix} $$   (6-7)

The heading of the vehicle can then be found using:

$$ \psi = \operatorname{atan2}\bigl(\sin\psi,\ \cos\psi\bigr) $$   (6-8)

The kinematic motion of the vehicle can be described by the linear and angular velocity terms. In the 2D case, the UGV is constrained to move in the x-y plane with only a z component in the angular velocity vector. Hence the state equation is shown below:

$$ \begin{pmatrix} \dot x \\ \dot y \\ \dot\psi \end{pmatrix} = \begin{pmatrix} \cos\psi & 0 \\ \sin\psi & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} v \\ \omega \end{pmatrix} $$   (6-9)
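Equations 6-7 and 6-8 can be sketched directly from two geo-positioned points; the function name and point ordering convention are illustrative:

```python
import numpy as np

def ugv_heading(p1, p2):
    """Heading of the UGV from two geo-positioned features fixed to the
    vehicle (Equations 6-7 and 6-8), with p2 assumed forward of p1.

    p1, p2 : global (x, y) positions; returns the heading in radians.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = np.hypot(dx, dy)
    cos_psi, sin_psi = dx / norm, dy / norm   # direction vector h (Eq 6-7)
    return np.arctan2(sin_psi, cos_psi)       # Equation 6-8

print(ugv_heading((0.0, 0.0), (1.0, 1.0)))   # pi/4
```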


In [27] the researchers define the kinematic equations for an Ackermann style UGV. Using these equations, the kinematic equations are restated in our notation:

$$ \begin{pmatrix} \dot x \\ \dot y \end{pmatrix} = \begin{pmatrix} \cos\psi & -\frac{L}{2}\sin\psi \\ \sin\psi & \frac{L}{2}\cos\psi \end{pmatrix} \begin{pmatrix} v \\ \omega \end{pmatrix} $$   (6-10)

Equation 6-10 follows the structure outlined in [28] and is rewritten in the form:

$$ z = H x + v, \qquad H = \begin{pmatrix} \cos\psi & -\frac{L}{2}\sin\psi \\ \sin\psi & \frac{L}{2}\cos\psi \end{pmatrix} $$   (6-11)

where $z$ is the measurement vector, $x$ is the state vector, and $v$ is the additive measurement error. The measurement error can be isolated and the squared error can be written in the form:

$$ v = z - Hx, \qquad v^T v = (z - Hx)^T (z - Hx) $$   (6-12)

The measurement estimate is written in the form:

$$ \hat z = H \hat x $$   (6-13)

Hence the sum of the squares of the measurement variations $z - \hat z$ is represented by:

$$ J = (z - H\hat x)^T (z - H\hat x) $$   (6-14)

The sum of the squares of the measurement variations is minimized with respect to the state estimate as shown:

$$ \frac{\partial J}{\partial \hat x} = \frac{\partial}{\partial \hat x}(z - H\hat x)^T (z - H\hat x) = 0 \;\;\Rightarrow\;\; -2 H^T (z - H\hat x) = 0 \;\;\Rightarrow\;\; H^T H\, \hat x = H^T z \;\;\Rightarrow\;\; \hat x = \bigl(H^T H\bigr)^{-1} H^T z $$   (6-15)

Therefore the state estimate can be expressed as:


$$ \hat x = \begin{pmatrix} \hat v \\ \hat\omega \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{L^2}{4} \end{pmatrix}^{-1} \begin{pmatrix} \cos\psi & \sin\psi \\ -\frac{L}{2}\sin\psi & \frac{L}{2}\cos\psi \end{pmatrix} \begin{pmatrix} \dot x \\ \dot y \end{pmatrix} $$   (6-16)

Equation 6-16 can be rewritten in the form:

$$ \begin{pmatrix} \hat v \\ \hat\omega \end{pmatrix} = \begin{pmatrix} \cos\psi & \sin\psi \\ -\frac{2}{L}\sin\psi & \frac{2}{L}\cos\psi \end{pmatrix} \begin{pmatrix} \dot x \\ \dot y \end{pmatrix} $$   (6-17)

This chapter has discussed the use of onboard sensors for aircraft and ground vehicle state estimation. These techniques will be used in the following chapter to determine the sensor noise models for the sensitivity analysis. They will also be used for the validation of the geo-positioning algorithm and comparison with simulation results.
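The closed-form estimator of Equation 6-17 reduces to a 2x2 matrix multiply; a brief sketch (function name and test values are illustrative):

```python
import numpy as np

def ugv_rates(x_dot, y_dot, psi, L):
    """Least-squares estimate of Equation 6-17: recover the UGV linear
    speed v and angular rate omega from the observed global velocity
    (x_dot, y_dot), given the vehicle heading psi and length term L.
    """
    c, s = np.cos(psi), np.sin(psi)
    v = c * x_dot + s * y_dot
    omega = (-2.0 / L) * s * x_dot + (2.0 / L) * c * y_dot
    return v, omega

# Pure forward motion at 2 m/s on a 45-degree heading: omega should vanish.
v, omega = ugv_rates(2 * np.cos(np.pi / 4), 2 * np.sin(np.pi / 4),
                     np.pi / 4, L=1.0)
print(round(float(v), 6), round(float(omega), 6))  # 2.0 0.0
```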


CHAPTER 7
RESULTS

This chapter presents the results derived from the experiments performed using the experimental aircraft and payload systems. It also presents the results of applying this research to solve several engineering problems.

Geo-Positioning Sensitivity Analysis

The sensitivity of the error variance to the error in each parameter is highly coupled with the other parameters in the positioning solution, and the mapping of the error variance sensitivity to the parameters is highly nonlinear. In order to achieve a concise qualitative representation of the effect of each parameter's error on the positioning solution, a localized sensitivity analysis is performed. This entails using common operating parameters and experimentally observed values for the parameter errors. When these are substituted into the error variance partial differential equations, the respective sensitivity of each parameter is observed for common testing conditions.

The error models for the various geo-positioning parameters were obtained using manufacturer specifications and empirically derived noise measurements. The main aircraft used for the empirical noise analysis was the Miniature Aircraft Gas Xcell platform. This aircraft was equipped with the testing payload as discussed previously, and sensor measurements were recorded with the aircraft on the ground but with the engine on and the head speed just below takeoff speed.

The sensor used for measuring the global position of the camera was a WAAS-enabled Garmin 16A model GPS. This GPS provides a 5 Hz positioning solution. The manufacturer-specified horizontal and vertical positioning accuracy is less than 3 meters. For the sensitivity analysis the lateral and longitudinal error distribution was defined using a uniform


radial error distribution bounded by a three meter range. The error distribution parameters for the horizontal and vertical positioning measurements are stated in Table 7-1.

Parameter                               Value
${}^{G}P_{Cox}$, ${}^{G}P_{Coy}$        3 m
${}^{G}P_{Coz}$                         1.5 m

Table 7-1. Parameter standard deviations for the horizontal and vertical position

The sensor used for measuring the orientation of the camera was a Microstrain 3DMG orientation sensor. This sensor provides three-axis measurements of linear acceleration, angular rate, and magnetic field. The sensor measurements used for determining the roll and pitch angles of the camera were the lateral and longitudinal linear accelerations. The roll and pitch angles were calculated using these measurements as described previously. The roll and pitch measurements used for defining the error distribution are shown in Figure 7-1.


Figure 7-1. Roll and Pitch measurements used for defining error distribution

The Microstrain 3DMG contains a three-axis magnetometer for estimating vehicle heading. The measurements made to estimate the heading error distribution for the sensitivity analysis are shown in Figure 7-2.

Figure 7-2. Heading measurements used for defining error distribution

Using this data set, the standard deviations for the roll, pitch, and yaw were calculated and are shown in Table 7-2.

Parameter        Value
$\phi$ (roll)    4.4
$\theta$ (pitch) 6.8
$\psi$ (yaw)     0.9

Table 7-2. Parameter standard deviations for the roll, pitch, and yaw angles


The error distributions for the normalized pixel coordinates were calculated using a series of images of a triangular placard taken from various elevations, as shown in Figure 7-3.

Figure 7-3. Image of triangular placard used for geo-positioning experiments

Figure 7-4. Results of x and y pixel error calculations


It was difficult to quantify the expected error distribution for the normalized pixel coordinates. The error distributions for the x and y components of the normalized pixel coordinates were estimated by comparing the detected vertex points of the placard with the calculated centroid of the volume. The resulting variation is shown in Figure 7-4. The pixel errors were then converted to normalized pixel errors and are shown in Table 7-3.

Parameter    Value
$u_n$        0.0021
$v_n$        0.0070

Table 7-3. Normalized pixel coordinate standard deviations used during sensitivity analysis

A summary of the parameters for the sensor error distributions used in the following sensitivity analysis is shown in Table 7-4.

Parameter          Value
${}^{G}P_{Cox}$    3 m
${}^{G}P_{Coy}$    3 m
${}^{G}P_{Coz}$    1.5 m
$\phi$             4.4
$\theta$           6.8
$\psi$             0.9
$u_n$              0.0021
$v_n$              0.0070

Table 7-4. Parameter standard deviations used during sensitivity analysis

The Monte Carlo method was used to evaluate each sensitivity equation. In order to demonstrate the significance of each parameter in the Monte Carlo analysis, each parameter is perturbed by a uniform error distribution based on experimentally derived measurements. This


analysis seeks to show the differences between the positioning errors attributable to each varying parameter. The key element of this analysis is that the error sensitivity for each parameter is calculated while including errors from the other parameters. This allows the nonlinear and coupled relationships between the parameters to propagate through the sensitivity analysis. The results of this analysis determine the rank of the dominance of each parameter in causing positioning error. The error sensitivity is used in the subsequent analysis and is restated in Equation 7-1.

$$ S_\varepsilon = \frac{\partial\, e^T e}{\partial \varepsilon} $$   (7-1)

The error sensitivity is evaluated using the common parameter values perturbed by a uniform error distribution. The range of the error distribution is defined using experimentally derived data. A uniform distribution was chosen instead of a normal distribution for the Monte Carlo simulation: it was found that the normal distribution took too long to converge during testing. The normal distribution also had a larger search space which, combined with the nonlinear coupling between the parameters, caused the processing times to become unmanageable. The uniform distribution provides firm limits on the error distribution and quickly traverses the search space. This provided a quick yet fruitful analysis.

In order to quantify the errors in position attributable to each parameter, Equation 7-1 was modified as shown in Equation 7-2.

$$ \bigl(e^T e\bigr)\Big|_{p} - \bigl(e^T e\bigr)\Big|_{\bar p_\varepsilon} \approx S_\varepsilon\, e_\varepsilon $$   (7-2)

where

$p$: parameter vector with all elements perturbed by the associated uniform error distributions
$\bar p_\varepsilon$: parameter vector with all elements but $\varepsilon$ perturbed by the associated uniform error distributions
$e_\varepsilon$: parameter error.
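The perturbation scheme of Equation 7-2 can be sketched as follows. The error-metric function below is a deliberately simple placeholder for the geo-positioning error variance, and the conversion of the tabulated angle deviations to radians is an assumption about units:

```python
import numpy as np

rng = np.random.default_rng(0)

# uniform half-widths for (P_Cox, P_Coy, P_Coz, phi, theta, psi, u_n, v_n),
# taken from the tabulated standard deviations (angles assumed in degrees)
bounds = np.array([3.0, 3.0, 1.5,
                   np.deg2rad(4.4), np.deg2rad(6.8), np.deg2rad(0.9),
                   0.0021, 0.0070])
p_nominal = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0])

def error_variance(p):
    """Placeholder error metric standing in for e^T e of Chapter 4."""
    return np.sum((p - p_nominal) ** 2)

def mc_contribution(k, trials=2000):
    """Error-variance contribution attributed to parameter k (Eq 7-2):
    evaluate with every parameter perturbed, then with all but k perturbed,
    and difference the two."""
    diffs = np.empty(trials)
    for i in range(trials):
        eps = rng.uniform(-bounds, bounds)
        p_all = p_nominal + eps                 # every element perturbed
        eps_bar = eps.copy()
        eps_bar[k] = 0.0
        p_but_k = p_nominal + eps_bar           # all but parameter k perturbed
        diffs[i] = error_variance(p_all) - error_variance(p_but_k)
    return diffs

# histogram data for the altitude parameter P_Coz (index 2)
print(np.mean(np.abs(mc_contribution(2))) > 0.0)  # True
```

Replacing `error_variance` with the actual squared geo-positioning error of Chapter 4 yields the per-parameter histograms described below.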


This formulation allows the Monte Carlo simulation to calculate the error variance distribution associated with each parameter while using all of the parameter error distributions. This allows not only the coupling between the different parameters to affect the positioning error, but also the various parameter error distributions to affect the results. As with many complex systems, the observations are affected not only by the inherent relationships between the various parameters but also by the measurement errors of those parameters.

The simulation uses the empirically derived error distributions and the target system configuration. For this analysis, the target system is a hovering unmanned rotorcraft operating at a 10 meter elevation. The parameter values used for this analysis are shown in Equation 7-3.

p = [P_x  P_y  P_z  φ  θ  ψ  u_n  v_n]^T = [0.0 m  0.0 m  10 m  0.0°  0.0°  0.0°  0.0  0.0]^T  (7-3)

The error distribution is defined as a uniform distribution with bounds at the standard deviations defined previously for the geo-positioning algorithm parameters. The histograms of the error variance relative to each parameter are shown in Figure 7-5.
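The per-parameter attribution of Equation 7-2 can be sketched in a short Monte Carlo simulation. Everything below is illustrative: `geo_position` is a hypothetical flat-ground pinhole stand-in for the dissertation's geo-positioning algorithm, and the uniform half-widths are taken from the Table 7-4 standard deviations (angles converted to radians).

```python
import math
import random

# Hypothetical stand-in for the geo-positioning algorithm: a flat-ground
# projection of a normalized pixel (un, vn) from a downward-looking camera
# at (Px, Py, Pz) with small attitude angles. NOT the thesis's exact model.
def geo_position(p):
    Px, Py, Pz, phi, theta, psi, un, vn = p
    gx = Px + Pz * math.tan(theta + un)
    gy = Py + Pz * math.tan(phi + vn)
    return (gx, gy)

# Uniform half-widths from the Table 7-4 standard deviations.
SIGMA = [3.0, 3.0, 1.5,
         math.radians(4.4), math.radians(6.8), math.radians(0.9),
         0.0021, 0.0070]
NOMINAL = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0]

def perturb(freeze, rng):
    """Perturb every parameter uniformly except a frozen index (Eq. 7-2)."""
    return [x if i == freeze else x + rng.uniform(-s, s)
            for i, (x, s) in enumerate(zip(NOMINAL, SIGMA))]

def error_variance(freeze=None, trials=20000, seed=1):
    rng = random.Random(seed)
    tx, ty = geo_position(NOMINAL)
    sq = 0.0
    for _ in range(trials):
        gx, gy = geo_position(perturb(freeze, rng))
        sq += (gx - tx) ** 2 + (gy - ty) ** 2
    return sq / trials

# Variance attributable to parameter i: all perturbed minus all-but-i
# perturbed. Tiny negative values are Monte Carlo noise.
names = ["Px", "Py", "Pz", "phi", "theta", "psi", "un", "vn"]
full = error_variance()
for i, name in enumerate(names):
    print(f"{name:6s} contribution ~ {full - error_variance(freeze=i):8.4f} m^2")
```

Under this toy model the contribution of P_x converges to the variance of its uniform perturbation, (2·3 m)²/12 = 3 m², which is a useful sanity check on the attribution scheme.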


Figure 7-5. Error variance histograms for the respective parameter errors

The bounds of the results with respect to the parameter standard deviations are shown in Table 7-5. For the given parameter error distributions and system configuration, the results show that the order of significance is P_z, P_x, P_y, φ, θ, v_n, u_n, and ψ. The most significant term, P_z, demonstrates the importance of the altitude data in the geo-positioning calculations.

This simulation has shown the process by which the geo-positioning parameter ranking was calculated using empirically derived sensor noise distributions and a specified system configuration. By simply adapting the sensor noise distributions and system configuration values, this process can be applied to any given system to provide insight into the dominance of geo-positioning error sources.


Parameter   Maximum error variance
P_x         18.00 m²
P_y         18.00 m²
P_z         20.53 m²
φ           1.594 m²
θ           0.3130 m²
ψ           0.0001524 m²
u_n         0.001240 m²
v_n         0.01316 m²
Table 7-5. Comparison of Monte Carlo method results

Comparison of Empirical Versus Simulated Geo-Positioning Errors

The experimental results obtained using the Gas Xcell aircraft equipped with a downward-facing camera and the experimental payload discussed earlier were compared with simulation results using the estimated error distributions from the Monte Carlo analysis. The testing conditions used for the simulation analysis are shown in Equation 7-4. The results show that the geo-positioning errors from simulation closely match the geo-positioning results obtained using the experimental vehicle/payload setup. The geo-positioning results are shown in Figure 7-6.


p = [P_x  P_y  P_z  φ  θ  ψ  u_n  v_n]^T = [0.0 m  0.0 m  10 m  0.0°  0.0°  0.0°  0.0  0.0]^T  (7-4)

Figure 7-6. Experimental and simulation geo-position results

The use of a uniform error distribution for the simulation produces different results compared with a normal distribution. While the simulation results vary slightly from the


experimental results, the uniform distribution provides more of an absolute bound for the error distribution.

Applied Work

Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV

This research investigated the automatic detection and geo-positioning of unexploded ordnance using VTOL UAVs. Personnel at the University of Florida, in conjunction with those at the Air Force Research Laboratory at Tyndall Air Force Base, Florida, have developed a sensor payload capable of gathering image, attitude, and position information during flight. A software suite has also been developed that processes the image data in order to identify unexploded ordnance (UXO). These images are then geo-referenced so that the absolute positions of the UXO can be determined in terms of the ground reference frame. This sensor payload was outfitted on a Yamaha RMAX aircraft, and several experiments were conducted in simulated and live bomb testing ranges. This section discusses the object recognition and classification techniques used to extract the UXO from the images and presents the results from the simulated and live bombing range experiments.

Figure 7-7. BLU97 submunition

Researchers have used aerial imagery obtained from small unmanned VTOL aircraft for control, remote sensing, and mapping experiments [1,2,3]. In these experiments, it was necessary


to detect a particular type of ordnance. The primary UXO of interest in these experiments was the BLU97. After deployment, this ordnance has a yellow main body with a circular decelerator. The BLU97 is shown in Figure 7-7.

Experimentation

VTOL Aircraft

The UXO experiments were conducted using several aircraft in order to demonstrate the modularity of the sensor payload and to determine the capabilities of each aircraft. The first aircraft used for testing was a Miniature Aircraft Gas Xcell RC helicopter. The aircraft was configured for heavy-lift applications and has a payload capacity of 10-15 lbs. The typical flight time for this aircraft is 15 minutes, and it provided a smaller VTOL aircraft for experiments at UF and the Air Force Research Laboratory. The Xcell helicopter is shown in Figure 7-8.

Figure 7-8. Miniature Aircraft Gas Xcell helicopter

The second aircraft used for testing was a Yamaha RMAX unmanned helicopter. With a payload capacity of 60 lbs and a runtime of 20 minutes, this platform provided a more robust and capable testing platform for range clearance operations. The RMAX is shown in Figure 7-9.

Figure 7-9. Yamaha RMAX unmanned helicopter


Sensor Payload

Several sensor payloads were developed for the various UAV experiments. Each payload was constructed modularly so as to enable attachment to various aircraft. The system schematic for the sensor payload is shown in Figure 7-10.

Figure 7-10. Sensor payload system schematic (industrial CPU, DC/DC converters, LiPo battery, digital stereovision cameras, compact flash data storage, Novatel RT2 differential GPS, digital compass, and wireless Ethernet)

The detection sensor used for these experiments was a pair of digital cameras operating in the visible spectrum. These cameras provided high-resolution imagery in a low-weight package. These experiments also sought to explore and quantify the effectiveness of this sensor for UXO detection.

Maximum Likelihood UXO Detection Algorithm

A statistical color model was used to differentiate the pixels in the image that compose the UXO. The maximum likelihood (ML) UXO detection algorithm used a priori knowledge of the color distribution of the surface of the BLU97 in order to detect ordnance in an image. The color model was constructed using the RGB color space. The feature vector was defined as

x̃ = [r  g  b]^T  (7-5)

where r, g, and b are the eight-bit color values for each pixel.
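The color model and classification metric described next (Equations 7-6 through 7-8) can be sketched directly from their definitions. The training pixels below are hypothetical stand-ins for hand-selected BLU97 pixels; only the mean, covariance, and exponential metric follow the text.

```python
import math
import random

def mean_vec(X):
    """Mean color vector of Eq. 7-6."""
    n = len(X)
    return [sum(x[i] for x in X) / n for i in range(3)]

def cov_mat(X, mu):
    """Covariance matrix of Eq. 7-7."""
    n = len(X)
    C = [[0.0] * 3 for _ in range(3)]
    for x in X:
        d = [x[i] - mu[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                C[i][j] += d[i] * d[j] / n
    return C

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

def metric(x, mu, Sinv):
    """Classification metric of Eq. 7-8: a Gaussian likelihood without the
    pre-scaling coefficient, so its value lies in (0, 1]."""
    d = [x[i] - mu[i] for i in range(3)]
    q = sum(d[i] * Sinv[i][j] * d[j] for i in range(3) for j in range(3))
    return math.exp(-q)

# Hypothetical training pixels clustered around a BLU97-like yellow.
rng = random.Random(0)
train = [(200 + rng.gauss(0, 8), 190 + rng.gauss(0, 8), 40 + rng.gauss(0, 8))
         for _ in range(500)]
mu = mean_vec(train)
Sinv = inv3(cov_mat(train, mu))

print(metric((200, 190, 40), mu, Sinv) > 0.9)   # on-model pixel -> True
print(metric((30, 100, 30), mu, Sinv) < 1e-6)   # background pixel -> True
```

Thresholding `metric` then classifies each pixel as UXO or background, which is the screening step the text describes.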


Using the K-means segmentation algorithm [18], an image containing a UXO was segmented and a region selected. A segmented image is shown in Figure 7-11. This implementation used a 5-D feature vector for each pixel, which allowed for clustering using both spatial and color parameters. Results varied depending on the relative scaling of the feature vector components.

Figure 7-11. Segmentation software (segmented ordnance highlighted)

The distribution of the UXO pixels was assumed to be Gaussian [18]; therefore, the maximum likelihood method was used to approximate the UXO color model. The region containing the UXO pixels was selected and the color model was calculated. The mean color vector is calculated as

μ̃ = (1/n) ∑_{i=1}^{n} x̃_i  (7-6)

where n is the number of pixels in the selected region. The covariance matrix was then calculated as

Σ = (1/n) ∑_{i=1}^{n} (x̃_i − μ̃)(x̃_i − μ̃)^T  (7-7)

The mean and covariance of the UXO pixels were then used to develop a classification model. This classification model described the location and the distribution of the training data within the RGB color space. The equation used for the classification metric was


p = exp( −(x̃ − μ̃)^T Σ^{−1} (x̃ − μ̃) )  (7-8)

The classification metric is similar to the likelihood probability except that it lacks the pre-scaling coefficient required by Gaussian pdfs. The pre-scaling coefficient was removed in order to optimize the performance of the classification algorithm. This allows the classification metric value to range from 0 to 1. The analysis was performed by selecting a threshold for the classification metric in order to classify UXO pixels in the image. This allowed images to be screened for UXO detection and the pixel coordinate location of the UXO to be identified in the image.

Initial experimentation using simulated UXO and the ML UXO detection algorithm provided successful classification of UXO in images obtained using both aircraft. As expected, the performance of the algorithm deteriorated when there were variations in the color of the surface of the UXO or in the contrast between the UXO and the background. In terms of the data, the ML UXO detection algorithm failed when either the actual UXO color distribution fell far from the modeled distribution in RGB space or the background distribution closely encompassed the actual UXO color distribution. In these cases, the variations caused both false positives and false negatives when using the classification algorithm. The use of an expanded training data set and multiple Gaussian distributions for modeling was investigated; it was found to slightly improve UXO detection rates but to greatly increase false positive readings from background pixels. The algorithm performance was also extremely sensitive to the likelihood threshold, thereby introducing another tunable parameter to the algorithm.

Spatial Statistics UXO Detection Algorithm

Previous experimental results showed that when the background of the image closely resembled the UXO color, the ML UXO detection performance degraded. In order to perform more robust


UXO detection, an algorithm was developed whose parameters were based solely on the dimensions of the UXO and not a trained color model. A more sophisticated pattern recognition approach was used, as shown in Figure 7-12.

Figure 7-12. Pattern recognition process (capture image, pre-filtering, segmentation, classification)

The spatial statistics UXO detection algorithm was designed to segment like-colored/shaded objects and classify them based on their dimensions. This allows for robust performance in varying lighting, color, and background conditions. The assumptions made for this algorithm were that the UXO was of continuous color/shading and that the UXO region would have the scaled spatial properties of an actual UXO. Based on the measured above-ground level of the aircraft and the projective properties of the imaging device, the algorithm parameters would be auto-tuned to accommodate the scaling introduced by the imaging process.

In order to reduce the dimensionality of the data set, the color space was first converted from RGB to HSV. By inspection, it was found that the saturation channel provided the greatest contrast between the background and the UXO. The raw RGB image and the saturation channel images are shown in Figure 7-13.


Figure 7-13. Raw RGB and saturation images of UXO

The pre-filtering process consisted of histogram equalization of the saturation image. This improved the contrast between the UXO pixels and the background and improved segmentation. The segmentation process was conducted by segmenting the pre-filtered image using the k-means algorithm, as shown in Figure 7-14.

Figure 7-14. Segmented image

Each region was analyzed and classified using the scaled spatial statistics of the UXO. Properties such as the major/minor axis lengths of the region were used to classify the regions. Regions whose spatial properties closely matched those of the UXO were classified as UXO and highlighted in the final image, as shown in Figure 7-15.

Figure 7-15. Raw image with highlighted UXO
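The saturation/equalization/segmentation/shape-classification pipeline can be sketched end to end. This is a simplified illustration: the synthetic scene, the fixed threshold (standing in for the k-means segmentation step), and the 2:1 axis-ratio acceptance band are all assumptions, not values from the thesis.

```python
import math

def saturation(rgb):
    """HSV saturation channel of an RGB pixel."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def equalize(vals, bins=256):
    """Histogram equalization of values in [0, 1] (the pre-filtering step)."""
    hist = [0] * bins
    for v in vals:
        hist[min(bins - 1, int(v * bins))] += 1
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc / len(vals))
    return [cdf[min(bins - 1, int(v * bins))] for v in vals]

def regions(mask, w, h):
    """4-connected components of a boolean mask (flood fill)."""
    seen, out = set(), []
    for start in range(w * h):
        if mask[start] and start not in seen:
            stack, comp = [start], []
            seen.add(start)
            while stack:
                p = stack.pop()
                comp.append(p)
                x, y = p % w, p // w
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    q = ny * w + nx
                    if 0 <= nx < w and 0 <= ny < h and mask[q] and q not in seen:
                        seen.add(q)
                        stack.append(q)
            out.append(comp)
    return out

def axis_lengths(comp, w):
    """Major/minor axis lengths from the region's second central moments."""
    xs = [p % w for p in comp]; ys = [p // w for p in comp]
    n = len(comp)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    common = math.sqrt(max(0.0, (sxx - syy) ** 2 + 4 * sxy ** 2))
    return (2 * math.sqrt(2 * (sxx + syy + common)),
            2 * math.sqrt(max(0.0, 2 * (sxx + syy - common))))

# Synthetic 16x16 scene: a bright 6x3 UXO-like patch on a dull background.
w = h = 16
img = [(60, 90, 50)] * (w * h)
for y in range(6, 9):
    for x in range(4, 10):
        img[y * w + x] = (240, 220, 40)

sat = equalize([saturation(p) for p in img])
mask = [v > 0.95 for v in sat]          # threshold stands in for k-means
for comp in regions(mask, w, h):
    major, minor = axis_lengths(comp, w)
    # Accept regions whose axis ratio matches an assumed ~2:1 UXO shape.
    if minor > 0 and 1.5 < major / minor < 3.0:
        print("UXO candidate:", len(comp), "pixels")  # -> UXO candidate: 18 pixels
```

In the real system the acceptance band would be auto-tuned from the aircraft's above-ground level and the camera's projective geometry, as the text describes.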


Collaborative UAV/UGV Control

Recently, unmanned aerial vehicles (UAVs) have been used more extensively in military operations. The improved perception abilities of UAVs compared with unmanned ground vehicles (UGVs) make them more attractive for surveying and reconnaissance applications. A combined UAV/UGV multiple vehicle system can provide aerial imagery, perception, and target tracking along with ground target manipulation and inspection capabilities. This experiment was conducted to demonstrate the application of a UAV/UGV system for simulated mine disposal operations.

The experiment was conducted by surveying the target area with the UAV and creating a map of the area. The aerial map was transmitted to the base station and post-processed to extract the locations of the targets and to develop waypoints for the ground vehicle to navigate. The ground vehicle then proceeded to each of the targets, which simulated the validation and disposal of the ordnance. Results include the aerial map, processed images of the extracted ordnance, and the ground vehicle's ability to navigate to the target points. The platforms used for the collaborative control experiments are shown in Figure 7-16.

Figure 7-16. TailGator and HeliGator platforms


Waypoint Surveying

In order to evaluate the performance of the UAV/UGV system, the waypoints were surveyed using a Novatel RT-2 differential GPS. This system provided two-centimeter accuracy or better when provided with a base station correction signal. Accurate surveying of the visited waypoints provided a baseline for comparison of the results obtained from the helicopter and the corresponding path the ground vehicle traversed.

The UXOs were simulated to resemble BLU-97 ordnance. Aerial photographs of the ordnance, as shown in Figure 7-17, were collected along with the camera position and orientation. Using the transformation described previously, the global coordinates of the UXOs were calculated. The calculated UXO positions were compared with the precision survey data.

Figure 7-17. Aerial photograph of all simulated UXO
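The image-to-global transformation is developed in an earlier chapter; a simplified flat-ground version can be sketched as follows. The frame conventions here (Z-Y-X Euler angles, camera optical axis along body-z, ground at z = 0) are assumptions for illustration, not the thesis's exact formulation.

```python
import math

def rot_zyx(phi, theta, psi):
    """Rotation matrix from camera/body frame to global frame (Z-Y-X Euler)."""
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cps, sps = math.cos(psi), math.sin(psi)
    return [
        [cps * cth, cps * sth * sph - sps * cph, cps * sth * cph + sps * sph],
        [sps * cth, sps * sth * sph + cps * cph, sps * sth * cph - cps * sph],
        [-sth,      cth * sph,                   cth * cph],
    ]

def pixel_to_ground(un, vn, cam_pos, phi, theta, psi):
    """Intersect the ray through normalized pixel (un, vn) with the ground.

    Assumes a downward-looking camera and flat ground at the camera's
    altitude below it; a generic model, not the thesis's exact transform."""
    R = rot_zyx(phi, theta, psi)
    ray_cam = [un, vn, 1.0]                   # normalized pixel ray, camera frame
    ray_g = [sum(R[i][j] * ray_cam[j] for j in range(3)) for i in range(3)]
    px, py, pz = cam_pos
    t = pz / ray_g[2] if ray_g[2] != 0 else float("inf")  # scale to reach ground
    return (px + t * ray_g[0], py + t * ray_g[1])

# Hovering level at 10 m: the image center maps to the point directly below.
print(pixel_to_ground(0.0, 0.0, (5.0, 3.0, 10.0), 0.0, 0.0, 0.0))  # -> (5.0, 3.0)
```

Small errors in attitude or normalized pixel coordinates enter through the rotated ray and are amplified by the altitude-dependent scale factor t, which foreshadows the elevation trends seen in the sensitivity analysis.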


Local Map

A local map of the operating region was generated using the precision survey data. This local map, shown in Figure 7-18, provided a baseline for all of the position comparisons throughout this task.

Figure 7-18. Local map generated with Novatel differential GPS (differential waypoints and boundaries plotted as Easting vs. Northing in meters)

The data collected compares the positioning ability of the UGV and the ability of the UAV sensor system to accurately calculate the UXO positions. While both the UGV and UAV use WAAS-enabled GPS, there is some inherent error due to vehicle motion and environmental effects. The UGV's control feedback was based on waypoint-to-waypoint control rather than a path-following control algorithm.


Once a set of waypoints was provided by the UAV, the UGV was programmed to visit every waypoint, as if to simulate an automated recovery/disposal process for the UXOs. The recovery/disposal process was optimized by ordering the waypoints in a manner that would minimize the total distance traveled by the UGV. This problem is similar to the traveling salesman optimization problem, in which a set of cities must each be visited once while minimizing the total distance traveled. An A* search algorithm was implemented in order to solve this problem.

The A* search algorithm operates by creating a decision graph and traversing the graph from node to node until the goal is reached. For the problem of waypoint order optimization, the current path distance g, the estimated distance to the final waypoint h, and the estimated total distance f were evaluated for each node:

g = length of the straight-line segments through all predecessor waypoints
h = (minimum distance between any two of the successor and current waypoints) × (number of successors)

f = g + h  (7-10)

The A* algorithm's requirement that the heuristic h be admissible is fulfilled because there exists no path from the current node n to a goal node with a distance less than h. Therefore, the heuristic provides the minimum bound required by the A* algorithm and guarantees optimality should a path exist.

The UGV was commanded to come within a specified threshold of a waypoint before switching to the next waypoint, as shown in Figure 7-19. The UGV consistently traveled within three meters or less of each of the desired waypoints, which is within the error envelope of typical WAAS GPS accuracy.
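The waypoint-ordering search described above can be sketched with a priority queue. The state is (current point, set of visited waypoints); g and h follow the definitions leading to Equation 7-10. The waypoint coordinates are hypothetical.

```python
import heapq
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def order_waypoints(start, waypoints):
    """A* over partial tours; returns the optimal visiting order and length.

    h multiplies the minimum pairwise distance among the current point and
    the remaining waypoints by the number of remaining waypoints, which
    never overestimates the remaining path, so the heuristic is admissible."""
    def h(cur, remaining):
        if not remaining:
            return 0.0
        pts = [cur] + list(remaining)
        return min(dist(a, b) for a, b in itertools.combinations(pts, 2)) \
            * len(remaining)

    all_wp = frozenset(waypoints)
    counter = itertools.count()          # heap tie-breaker
    open_set = [(h(start, all_wp), next(counter), 0.0, start, frozenset(), [])]
    best_g = {}
    while open_set:
        _, _, g, cur, visited, path = heapq.heappop(open_set)
        if visited == all_wp:
            return path, g               # first complete tour popped is optimal
        key = (cur, visited)
        if best_g.get(key, float("inf")) <= g:
            continue
        best_g[key] = g
        for wp in all_wp - visited:
            g2 = g + dist(cur, wp)
            v2 = visited | {wp}
            heapq.heappush(open_set, (g2 + h(wp, all_wp - v2), next(counter),
                                      g2, wp, v2, path + [wp]))
    return [], 0.0

# Hypothetical UXO waypoints in a local frame (meters).
route, length = order_waypoints((0, 0), [(0, 10), (10, 10), (10, 0), (0, 20)])
print(route, length)  # -> [(10, 0), (10, 10), (0, 10), (0, 20)] 40.0
```

Because the tour does not return to the start, this is the open variant of the traveling salesman problem; the state space is exponential in the number of waypoints, which is acceptable for the small waypoint sets produced by a single aerial survey.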


Figure 7-19. A comparison of the UGV's path to the differential waypoints

The UAV calculates the waypoints based on its sensors, and these points are compared with the surveyed waypoints. There is an offset in the UAV's data due to the GPS being used and due to error in the transformation from image coordinates to global coordinates, as shown in Figure 7-20. The UGV is able to navigate within several meters of the waypoints; however, it is limited by the vehicle kinematics. Further work involves a waypoint sorting algorithm that accounts for the turning radius of the vehicle.


Figure 7-20. UAV waypoints vs. UGV path

Citrus Yield Estimation

Within the USA, Florida is the dominant state for citrus production, producing over two-thirds of the USA's tonnage, even in the hurricane-damaged 2004-2005 crop year. The citrus crops of most importance to Florida are oranges and grapefruit, with tangerines and other citrus being of less importance.

With contemporary globalization, citrus production and marketing are highly internationalized, especially for frozen juice concentrates, so there is great competition between countries. Tables 7-6 and 7-7 show the five most important countries for production of oranges and grapefruit in two crop years. Production can vary significantly from year to year due to weather, especially hurricanes. Note the dominance of Brazil in oranges and the rise of China in both crops.


Country           2000-2001 Crop Year   2004-2005 Crop Year
Brazil            14,729                16,606
USA               11,139                8,293
China             2,635                 4,200
Mexico            3,885                 4,120
Spain             2,688                 2,700
Other Countries   9,512                 9,515
World Total       44,588                45,434
Table 7-6. Production of oranges (1000s metric tons) (based on NASS, 2006)

Country           2000-2001 Crop Year   2004-2005 Crop Year
China             0                     1,724
USA               2,233                 914
Mexico            320                   310
South Africa      288                   270
Israel            286                   247
Other Countries   680*                  330
World Total       3,807                 3,795
*Cuba produced a very significant 310 (1000 metric tons) in 2000-2001
Table 7-7. Production of grapefruit (1000s metric tons) (based on NASS, 2006)

The costs of labor, land, and environmental compliance are generally lower in most of these countries than in the USA. Labor is the largest cost for citrus production in the USA, even though many workers, especially harvesters, are migrants. In order for producers from the USA to be competitive, they must have advantages in productivity, efficiency, or quality to counteract the higher costs. This need for productivity, efficiency, and quality translates into a need for better management.

One management advantage that USA producers can use to remain competitive is to utilize advanced technologies. Precision agriculture is one such set of technologies, which can be used to improve profitability and sustainability. Precision agriculture technologies were researched and applied to citrus later than to some other crops, but there has been successful precision agriculture research [19,20,21] and some commercial adoption [22].


Yield maps have been a very important part of precision agriculture for over twenty years [23]. They allow management to make appropriate decisions to maximize crop value (production quantity and quality) while minimizing costs and environmental impacts [24]. However, citrus yield maps, like most yield maps, can currently only be generated after the fruit is harvested because the production data is obtained during the harvesting process. It would be advantageous if the yield map were available before harvest, because this would allow better management, including better harvest scheduling and crop marketing.

There is a history of using machine vision to locate fruit on trees for robotic harvesting [25]. More recent work at the University of Florida has attempted to use machine vision techniques to perform on-tree yield mapping. Machine vision has been used to count the number of fruit on trees [26]. Other researchers not only counted the fruit, but used machine vision and ultrasonic sensors to determine fruit size [27]. This research has been extended to allow counting earlier in the season, when the fruit is still quite green [28].

However, these methods all require vehicles to travel down the alleys between the rows of trees to take the machine vision images. Researchers have demonstrated that a small remotely piloted mini-helicopter with machine vision hardware and software can be built and operated in citrus groves [29]. They also discuss some of the recent research on using mini-helicopters in agriculture, primarily conducted at Hokkaido University and the University of Illinois.

The objective of this research was to determine whether images taken from a mini-helicopter have the potential to be used to generate yield maps. If so, there might be a possibility of rapidly and flexibly producing citrus yield maps before harvest.
Materials and Methods

The orange trees used to test this concept were located at Water Conserv II, jointly owned by the City of Orlando and Orange County. The facility, located about 20 miles west of


Orlando, is the largest water reclamation project (over 100 million liters per day) of its type in the world, one that combines agricultural irrigation and rapid infiltration basins (RIBs). A block of Hamlin orange trees, an early-maturing variety (as opposed to the later-maturing Valencia variety), was chosen for study.

The spatial variability of citrus tree health and production can range from very small to extremely great depending upon local conditions. This block had some natural variability, probably due to its variable blight infestation and topography. Additional variability was introduced by the trees being subjected to irrigation depletion experiments. However, mainly due to substantial natural rainfall in the 2005-2006 growing season, the variation in the yield is within the bounds of what might be expected in contemporary commercial orange production, even with the depletion experiments.

The irrigation depletion treatment (percent of normal irrigation water NOT applied) was indicated by the treatment number. Irrigation depletion amounts were sometimes different for the Spring and the Fall/Winter parts of the growing season, as seen in Table 7-8 below. The replication was indicated by a letter suffix. Only 15 of the 42 trees (six treatments with seven replications each) were used for this mini-helicopter imaging effort. Treatment 6 had no irrigation except periodic fertigation, and the trees lived on rainfall alone.

Treatment   Spring Depletion (%)   Fall/Winter Depletion (%)
1           25                     25
2           25                     50
3           25                     75
4           50                     50
5           50                     75
6           100                    100
Table 7-8. Irrigation treatments


The mini-helicopter used for this work was a Gas Xcell model modified for increased payload by its manufacturer [30]. It was purchased in 2004 for about US$2000 and can fly up to 32 kph and carry a 6.8 kg payload. Its rotor is rated to 1800 rpm and has a diameter of less than 1.6 m. The instrumentation platform is described in MacArthur et al. (2005) and includes GPS with WAAS, two compact flash drives, a digital compass, and wireless Ethernet. The machine vision system uses a Videre model STH-MDCS-VAR-C stereovision sensor.

The mini-helicopter was flown at the Water Conserv II site on 10 January 2006, a mostly sunny day, shortly before noon. The helicopter generally hovered over each tree for a short period of time as it moved down the row, taking images with the Videre camera. The images were stored on the helicopter, and some were simultaneously transferred to a laptop computer over the wireless Ethernet. In addition, a Canon PowerShot S2 IS five-megapixel digital camera was used to take photos of the trees (in north-south rows) from the east and west sides.

The fruit on the individual trees were hand harvested by professional pickers on 13 February 2006. The fruit from each tree was weighed and converted to the industry-standard measurement unit of field boxes. A field box is defined as 40.8 kg (90 lbs.).

The images were later processed manually. A best image of each tree was selected, generally on the basis of lighting and complete coverage of the tree. Each overhead image was cropped into a square that enclosed the entire tree and scaled to 960 by 960 pixels. The pixel data from several oranges were collected from several representative images in the data set. The data were assumed to be normally distributed, so the probability function was calculated for each orange pixel dataset. Using a "Mixture of Gaussians" to represent the orange class model, the images were analyzed and a threshold was established based on the color model.
The number of "orange" pixels was then calculated in each image and used in the subsequent analysis.
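The mixture-of-Gaussians scoring and pixel counting can be sketched as follows. The component weights, means, and variances below are hypothetical placeholders; the thesis fit its components to hand-selected orange pixels from representative images.

```python
import math

def gauss3(x, mu, var):
    """Spherical 3-D Gaussian density (one shared variance per component)."""
    q = sum((a - b) ** 2 for a, b in zip(x, mu)) / var
    return math.exp(-0.5 * q) / ((2 * math.pi * var) ** 1.5)

# Hypothetical two-component orange class model in RGB.
COMPONENTS = [
    {"w": 0.6, "mu": (235, 140, 30), "var": 400.0},   # sunlit orange
    {"w": 0.4, "mu": (180, 90, 25),  "var": 600.0},   # shaded orange
]

def orange_score(pixel):
    """Mixture density: weighted sum of the component Gaussians."""
    return sum(c["w"] * gauss3(pixel, c["mu"], c["var"]) for c in COMPONENTS)

def count_orange(pixels, threshold=1e-9):
    """Count pixels whose mixture density clears the model threshold."""
    return sum(1 for px in pixels if orange_score(px) > threshold)

# Toy "image": a few orange pixels among green canopy pixels.
pixels = [(230, 135, 35), (185, 95, 20), (40, 120, 45),
          (60, 140, 50), (228, 150, 28)]
print(count_orange(pixels))  # -> 3
```

The per-image count produced this way is the "orange pixels" column of Table 7-9; the threshold is the single tunable parameter of the method.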


Results

The results of the image processing and the individual-tree harvesting of the 15 trees studied in this work are presented in Table 7-9. As Figure 7-21 illustrates, only irrigation depletion treatment 6 had a great effect on the individual tree yields. Treatment 6 was 100% depletion, or no irrigation. The natural rainfall was such in this production year that the other treatments produced yields of at least four boxes per tree.

Treatment   Replication   Orange Pixels   Boxes of Fruit
1           B             13990           7
1           G             6391            6
2           B             11065           8
2           C             2202            4
2           E             5884            5
2           F             17522           7.5
3           B             2778            6
4           A             4433            6.2
4           B             5516            4.8
4           E             5002            4
4           F             11559           4.3
5           B             9069            7
5           C             17088           6.8
6           B             5376            2.5
6           D             6296            1
Table 7-9. Results from image processing and individual tree harvesting

Figure 7-21. Individual tree yields (boxes per tree) as affected by irrigation depletion treatment
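The yield-versus-pixel relationship in Table 7-9 can be examined with an ordinary least-squares fit; a minimal sketch using the table's data:

```python
def linfit(xs, ys):
    """Ordinary least-squares line y = a*x + b plus R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Orange-pixel counts and harvested boxes from Table 7-9.
pixels = [13990, 6391, 11065, 2202, 5884, 17522, 2778, 4433,
          5516, 5002, 11559, 9069, 17088, 5376, 6296]
boxes = [7, 6, 8, 4, 5, 7.5, 6, 6.2, 4.8, 4, 4.3, 7, 6.8, 2.5, 1]

a, b, r2 = linfit(pixels, boxes)
print(f"y = {a:.4g}x + {b:.4g}, R^2 = {r2:.4g}")
```

The modest coefficient of determination quantifies the scatter discussed below: pixel count carries a real but noisy signal about yield.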


The images were treated by the process discussed above. The number of orange pixels varied from 2202 to 17,522. More pixels should indicate more fruit. However, as Figure 7-22 shows, there was substantial scatter in the data (linear fit y = 0.0002x + 3.5919, R² = 0.2835). The fit improves somewhat with the removal of the nonirrigated treatment 6, as shown in Figure 7-23 (y = 0.0002x + 4.5087, R² = 0.373).

Figure 7-22. Individual tree yield as a function of orange pixels in image

Figure 7-23. Individual tree yield as a function of orange pixels with nonirrigated trees removed

Discussion

This work showed that good overhead images of citrus trees could be taken by a mini-helicopter and processed to obtain some correlation with the individual tree yield. A tree with fewer oranges should have fewer orange-colored pixels in its image. For example, tree


2C had only 2202 pixels and 4 boxes of fruit, while tree 2F had 17,522 pixels and 7.5 boxes of fruit. These are shown in Figures 7-24 and 7-25 below, in which the oranges in the "After" photo are enhanced to indicate their detection by the image processing algorithm.

Figure 7-24. Image of tree 2C before and after image processing

Figure 7-25. Image of tree 2F before and after image processing

The image processing used in this initial research was very simple. More sophisticated techniques would likely improve the ability to separate oranges from other elements in the images. The strong sunlight likely contributed to some of the errors. Again, the use of more


sophisticated techniques from other previous research, especially the techniques developed for yield mapping of citrus from the ground, would likely improve the performance in overhead yield mapping.

A major assumption in this work is that the number of orange pixels visible is proportional to the tree yield. However, the tree canopy (leaves, branches, other fruit, etc.) does hide some of the fruit, and differing percentages of the fruit may be visible on different trees. This is quite apparent with the treatment 6 trees. Figure 7-26 shows the images for tree 6D. This tree, obviously greatly affected by the lack of irrigation and a blight disease, has 6296 orange pixels but yielded only one box of fruit. The poor health of the tree meant that there were not many leaves to hide the interior oranges; hence, a falsely high estimate of the yield was given.

Figure 7-27 shows the images taken from the ground of trees 6D and 2E. Even though they had similar numbers of orange pixels in the images taken from the helicopter, tree 2E had five times the number of fruit. The more vigorous vegetation, especially the leaves, meant that the visible oranges on tree 2E represented a smaller percentage of the total tree yield.

Figure 7-26. Image of tree 6D before and after image processing


Figure 7-27. Ground images of tree 6D and tree 2E

Mini-helicopters are smaller and less expensive than piloted aircraft. Accordingly, the financial investment in them may be justifiable to growers and small industry firms. Mini-helicopters would give their owners the flexibility of being able to take images on their own schedule. They also do not cause a big disturbance in the fruit grove: the noise and wind are moderate, and they can operate in a rather inconspicuous manner, as shown in Figure 7-28.

Figure 7-28. Mini-helicopter operating at Water Conserv II


While the yield mapping results of this work may appear a little disappointing at first glance, they do indicate that there is potential. Getting accurate yield estimates by mini-helicopter image processing will be a somewhat complex image processing task, but this is a start. The images acquired by the helicopter are not that different, other than the direction, from those acquired from the ground. Hence, the techniques developed for ground-based yield mapping might be applicable.


CHAPTER 8
CONCLUSIONS

This work has presented the theory and equipment used for tracking and state estimation of an unmanned ground vehicle system using an unmanned aerial vehicle system. This research is unique in that it presents a comprehensive system description and analysis from the sensor and hardware level to the system dynamics. This work also couples the dynamics and kinematics of two agents to form a robust state estimate using completely passive sensor technology. A sensitivity analysis of the geo-positioning algorithm was performed, which identified the significance of the parameters used in the algorithm.

The simulation results showed that the elevation error was the most dominant parameter in the geo-positioning algorithm. Assuming that the error distributions do not change dramatically across varying system configurations, it seems intuitively obvious that errors in the system position would dominate at low altitudes due to the close mapping of errors in system position to target position. This was shown in the results by the dominance of the three position parameters relative to all other parameters. It was hypothesized that as the elevation of the aircraft increases, the dominance of the horizontal position errors would diminish and the orientation and pixel errors would begin to dominate. While the errors attributed to the horizontal position parameters would remain relatively constant, the errors attributed to the orientation and pixel parameters would increase due to the projective nature of the geo-positioning algorithm. This hypothesis was tested by performing the sensitivity analysis again at varying elevations. The sensitivity analysis was performed from 10 meters to 100 meters to show the dominance trend of the various parameters. The results are shown in Figure 8-1.


Figure 8-1. Simulated error calculation versus elevation. (Maximum variance, m^2, plotted against elevation, m, on logarithmic axes for the parameters Px, Py, Pz, Phi, Theta, Psi, u, and v.)

These results show how the geo-positioning error attributable to the horizontal position error stays constant as the elevation increases, while all orientation and pixel parameters grow in dominance with elevation. These results validate the hypothesis that the horizontal position parameters dominate at low altitude and the orientation and pixel parameters dominate at higher altitudes. This finding is significant in that it shows the usefulness of this analysis in predicting which parameters are most dominant in a given system. Moreover, this analysis can guide the prospective researcher toward the sensor specifications that would most benefit their application given the anticipated system operating conditions. For example, a low-altitude UAV application should employ a high-accuracy horizontal and vertical positioning system. Conversely, a high-altitude reconnaissance UAV


should employ a high-accuracy IMU and camera system. Depending on the application, the emphasis should shift toward the sensors that would most benefit system performance. This analysis can also predict the system performance for a given configuration, giving the researcher a valuable tool for system-level design.

These results were also validated experimentally. The experimental aircraft system was flown at various altitudes. Using the data previously obtained for the geo-position error analysis, the target position error was evaluated relative to the aircraft elevation. These results show that the error distribution widened with increasing altitude, as shown in Figure 8-2.

Figure 8-2. Geo-position error versus elevation
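The trend in Figure 8-2 can be reproduced qualitatively with a small Monte Carlo sketch. The small-angle nadir-camera model and every noise magnitude below are assumptions chosen for illustration, not the dissertation's experimental values.

```python
import numpy as np

def rms_geo_error(elev, sig_pos=0.5, sig_att=0.01, sig_pix=1.0,
                  f=1000.0, n=20000, seed=0):
    """Monte Carlo RMS ground-target position error for a nadir camera
    at a given elevation, using a small-angle flat-terrain model."""
    rng = np.random.default_rng(seed)
    px = rng.normal(0.0, sig_pos, n)          # aircraft position noise
    py = rng.normal(0.0, sig_pos, n)
    pz = elev + rng.normal(0.0, sig_pos, n)   # altitude with noise
    phi = rng.normal(0.0, sig_att, n)         # attitude noise (rad)
    theta = rng.normal(0.0, sig_att, n)
    u = rng.normal(0.0, sig_pix, n)           # pixel noise
    v = rng.normal(0.0, sig_pix, n)
    # Small-angle ground intercept; the true target sits at the origin,
    # so (tx, ty) is directly the geo-positioning error.
    tx = px + pz * (u / f - theta)
    ty = py + pz * (v / f + phi)
    return float(np.sqrt(np.mean(tx ** 2 + ty ** 2)))

for elev in (10.0, 50.0, 100.0):
    print(f"{elev:5.0f} m  RMS error {rms_geo_error(elev):.2f} m")
```

Because the attitude and pixel terms scale with altitude while the position term does not, the sampled RMS error grows with elevation, matching the widening error distribution observed in the flight experiments.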


This work can be extended by improving the tracking and state estimation techniques to further reduce system errors, and by performing the sensitivity analysis for a given system configuration and set of parameter error statistics. Future work can also include autonomous control of the aircraft by way of UGV tracking to form collaborative heterogeneous control strategies.


BIOGRAPHICAL SKETCH

Donald Kawika MacArthur was born in Miami, Florida. He attended the Maritime and Science Technology (MAST) High School. He attended the University of Florida, where he graduated summa cum laude with a B.S. in mechanical engineering. He pursued graduate research at the University of Florida and received his master's degree and ultimately his Ph.D. His research has spanned various vehicle automation technologies, including computer vision, autonomous ground vehicle control and navigation, sensor systems for guidance, navigation, and control, unmanned aircraft automation, and embedded hardware and software design.