A FLIGHT TESTBED WITH VIRTUAL ENVIRONMENT CAPABILITIES FOR
DEVELOPING AUTONOMOUS MICRO AIR VEHICLES
JASON WESLEY GRZYWNA
A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
UNIVERSITY OF FLORIDA
ACKNOWLEDGMENTS
I would like to thank Dr. Michael C. Nechyba for his guidance and support of my
research for this thesis. As my advisor, Dr. Nechyba has motivated me through his
leadership and his ability to cultivate a synergistic work environment. I would also
like to thank Dr. A. Antonio Arroyo for his passion for education and his belief in
me. He invited me into his lab and pushed me to reach the next level. Dr. Eric
Schwartz taught the classes that led me into robotics. He is a great friend and an
honest mentor. I would also like to thank Dr. Peter Ifju and his students. They
build the platforms which enable the work that I do.
Special thanks go to Jason Plew, an invaluable research partner; Ashish Jain,
a friend wise beyond his years; Uriel Rodriguez, a friend who always had a way;
Sinisa Todorovic, a mentor who provided invaluable insight; Shalom Darmanjian,
a friend who always made me laugh; and Mujahid Abdulrahim, a great friend.
Finally, I would like to thank my family for their unending support of my work
and belief that I would always succeed. I especially want to thank Jennifer, the girl
who owns my heart. She is my best friend and my inspiration.
TABLE OF CONTENTS
LIST OF FIGURES
1 INTRODUCTION
1.1 Micro Air Vehicles
1.1.1 Challenges in Developing Vision-based Autonomy
1.1.2 Utilizing a Virtual Environment
1.2 Overview of the Proposed MAV Testbed
1.3 Overview of the Thesis
2 MICRO AIR VEHICLE PLATFORM
2.1 Advantages and Limitations
2.2 Construction Techniques
2.3 Propulsion System Design
2.4 Integrating Vision
3 VISION-BASED CONTROL
3.1 Flight Stability
3.2 Object Tracking
3.3 Controller
4 TESTBED IMPLEMENTATION
4.1 Architecture of the System
4.2 Virtual Environment Simulation
4.3 Testbed Hardware
5 EXPERIMENTAL RESULTS
5.1 Flight Testing Procedures
5.2 Simple Stabilization Experiment
5.3 Object Tracking
5.4 Autonomous Landing: Virtual Environment
5.5 Autonomous Landing: Real-flight Experiments
6 CONCLUSION
REFERENCES
BIOGRAPHICAL SKETCH
LIST OF FIGURES
1.1 UF HILS facility currently under construction: concept diagram.
1.2 Try-by-flying approach: Feedback from the flight test.
1.3 Testbed architecture overview.
2.1 Adaptive washout in action.
2.2 MAV platform.
3.1 Horizon tracking: (a) original image; (b) optimization criterion J as a function of bank angle and pitch percentage; (c) resulting classification of sky and ground pixels in RGB space.
3.2 In object tracking, the search region for the next frame is a function of the object location in the current frame.
3.3 Controller for vision-based stabilization and object tracking.
4.1 Testbed system overview.
4.2 Some sample virtual scenes: (a) field, trees and mountains, (b) simple urban, (c) urban with features, and (d) complex urban.
5.1 Stabilization results: (a) direct RC-piloted flight, and (b) horizon-stabilized (human-directed) flight. Maneuvers for flight trajectory (b) were executed to mimic flight trajectory (a) as closely as possible.
5.2 Object tracking: (a) virtual testbed, and (b) real-flight image sequence.
5.3 Autonomous landing in a virtual environment: four sample frames.
5.4 Roll, pitch and tracking command for virtual autonomous landing in Figure 5.3.
5.5 Real-flight autonomous landing in field testing: four sample frames.
5.6 Roll, pitch and tracking command for real-flight autonomous landing in Figure 5.5.
Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science
A FLIGHT TESTBED WITH VIRTUAL ENVIRONMENT CAPABILITIES FOR
DEVELOPING AUTONOMOUS MICRO AIR VEHICLES
Jason Wesley Grzywna
Chair: A. Antonio Arroyo
Major Department: Electrical and Computer Engineering
We seek to develop vision-based autonomy for small-scale aircraft known as
Micro Air Vehicles (MAVs). Development of such autonomy presents significant
challenges, in no small measure because of the inherent instability of these flight
vehicles and the try-by-flying practices in use today. Therefore, in this thesis,
we propose a flight testbed system that seeks to mitigate these challenges by
facilitating the rapid development of new vision-based control algorithms that
would have been, in the testbed's absence, substantially more difficult to transition
to successful flight testing. The proposed testbed system provides a complete
architecture, built from custom-designed hardware and software, for developing
autonomous behaviors for MAVs using a camera as the primary sensor. This
system bridges the gap between theory and flight testing through the integration
of a new virtual testing environment. This virtual environment allows the system
to be tailored to a number of different mission profiles through its ability to
perform test flights in a multitude of virtual locations. The virtual environment
presented in this thesis is a precursor to a more complex Hardware-in-the-Loop
Simulation (HILS) facility currently being constructed at the University of Florida.
HILS systems allow us to experiment with vision-based algorithms in controlled
laboratory settings, thereby minimizing loss-of-vehicle risks associated with actual
flight testing. Along with a virtual testing environment, the proposed system
optionally allows a human in the control loop. In this thesis, we first discuss the
background work done with MAVs and give an overview of the testbed system
architecture. Second, we present our vision-based approaches to MAV stabilization,
object tracking, and autonomous landing. Third, we present details of the proposed
system and show how the work done mitigates the problems and challenges of
implementing vision-based flight controllers. Finally, we report experimental
flight results and discuss how the presented system facilitates the development of
autonomous MAVs.
INTRODUCTION
Over the past several years, Unmanned Air Vehicles (UAVs) have begun to
take on missions that had previously been reserved exclusively for manned aircraft,
as evidenced in part by the much publicized deployment of the Global Hawk and
Predator UAVs in the recent Afghan and Iraqi conflicts. While these vehicles
demonstrate remarkable advances in UAV technology, their deployment is largely
limited to high-altitude surveillance and munitions deployment, due to their size
and limited autonomous capabilities. Moreover, while such UAV missions can
prevent unnecessary loss of human life, at costs of $70 million and $4.5 million for
the Global Hawk and Predator, respectively, these UAVs cannot be considered expendable.
1.1 Micro Air Vehicles
Interest has grown in a different class of small-scale UAVs, known as Micro
Air Vehicles (MAVs), that overcome the limitations of larger and more expensive
UAVs. At the University of Florida, our on-going research efforts have led to the
development of a large number of MAV platforms, ranging in maximum dimension
from 5 to 24 inches [2, 3].1 Given their small size, weight, and cost (approximately
1 Recent development of bendable wings allows even larger MAVs to fit inside
containers with diameters as small as 4 inches.
$1,000/vehicle), MAVs allow for missions that are not possible for larger UAVs.
For example, such small-scale aircraft could safely be deployed at low altitudes in
complex urban environments, and could be carried and deployed by individual
soldiers for remote surveillance and reconnaissance of potentially hostile areas.
While MAVs present great possibilities, they also present great challenges
beyond those of larger UAVs. First, even basic flight stability and control present
unique challenges. The low moments of inertia of MAVs make them vulnerable
to rapid angular accelerations, a problem further complicated by the fact that
aerodynamic damping of angular rates decreases with a reduction in wingspan.
Another potential source of instability for MAVs is the relative magnitudes of wind
gusts, which are much higher at the MAV scale than for larger aircraft. In fact,
wind gusts can typically be equal to or greater than the forward airspeed of the
MAV itself. Thus, an average wind gust can immediately effect a dramatic change
in the vehicle's flight path.
Second, MAVs, due to severe weight restrictions, cannot necessarily make use
of the same sensor suite as larger UAVs. While some MAVs recently developed
have seen the incorporation of miniature on-board INS and GPS [5, 6], such sensors
may not be the best allocation of payload capacity. For many potential MAV
missions, vision is the only practical sensor that can achieve required and/or
desirable autonomous behaviors, as is the case, for example, for flight in urban
environments below roof-top altitudes. Furthermore, given that surveillance
has been identified as one of their primary missions, MAVs must necessarily be
equipped with on-board imaging sensors, such as cameras or infrared arrays. Thus,
computer-vision techniques can exploit already present sensors, rich in information
content, to significantly extend the capabilities of MAVs, without increasing their
required payload.
When additional sensors are present that do not compromise weight and size
constraints, more state information can be derived from the system and fused with
the data extracted with computer vision techniques for an overall more robust
system. In this thesis we do not rule out the use of additional sensors, we just treat
vision as the primary, and the only necessary, sensor for autonomous flight.
1.1.1 Challenges in Developing Vision-based Autonomy
In this thesis, we seek to build on our previous success in vision-based flight
stability and control [8, 9], on the MAV scale, to achieve more complex vision-
based autonomous behaviors, such as urban environment survival. Development
of such behaviors does, however, present some difficult challenges. First, dedicated
flight test locations typically do not exhibit the type of scene diversity likely to
be encountered in deployment scenarios. 2 Second, closed-loop, vision-based
approaches must operate within a tight computational budget for real-time
performance, and require extensive flight testing for robust performance in many
different scenarios. Because of the complexity involved, simple errors in software
development can often lead to critical failures that result in crashes and loss of
the MAV airframe and payload. This in turn introduces substantial delays in the
2 Our typical flight test location would be a featureless open field. This is a
sharp contrast to a deployment scene consisting of structures and other vertical obstacles.
development cycle for intelligent, autonomous MAVs. It is also apparent that
having a human-control capability in the control loop would be advantageous to
mitigate scenarios where the airframe is in peril.
1.1.2 Utilizing a Virtual Environment
To address the challenges discussed above, we are currently constructing a
Hardware-In-the-Loop Simulation (HILS) facility, expected to be completed by
the spring of 2005, that will enable testing and debugging of complex vision-based
behaviors without risking destruction of the MAV flight vehicles. As conceived
and depicted in Figure 1.1, the HILS facility will simulate the flight of a single
MAV through diverse photo-realistic virtual worlds (e.g., urban environments), by
measuring and modeling aerodynamic flight characteristics in a wind tunnel in real
time. The virtual display will render the correct perspective of the virtual world as
the MAV's trajectory is computed from its dynamic model.
Figure 1.1: UF HILS facility currently under construction: concept diagram.
1.2 Overview of the Proposed MAV Testbed
In this thesis, we present a flight testbed system that allows for rapid development
of vision-based autonomous MAVs. The proposed testbed system provides a
complete architecture, built from custom-designed hardware and software, for devel-
oping autonomous behaviors for MAVs using a camera as the primary sensor. This
system bridges the gap between theory and flight testing through the integration of
a new virtual testing environment.
The virtual environment simulation component serves as a precursor to the
HILS facility being constructed. This simulated environment provides (1) a diverse
scenery set as well as vehicle models, including a realistic physics engine; (2) the
ability to define additional scenery and models externally; (3) full support for
collision detection and simulation of partial vehicle damage; and (4) environmental
factors such as wind or radio noise. These features are enough to perform precur-
sory experiments in a virtual environment but are only a subset of what the full
facility would offer.
Employing vision-based stability and navigation algorithms for UAV control is
an emerging science. Systems exist that utilize vision on larger UAV
platforms [10, 11], but none that allow for the safe and rapid development of vision-
based control on the scale of a MAV. Traditional MAV development approaches
involve a try-by-flying approach, shown in Figure 1.2, since the aircraft are small
and easy to repair in most cases. Try-by-flying works for simple tasks (e.g., PID
loop tuning), but a more sophisticated approach is needed for tuning complicated
vision algorithms. Larger aircraft (e.g., F-16s) use rigorous hardware-in-the-loop
and wind tunnel facilities for complete system verification before the aircraft leaves
the ground. We do not have the time to rigorously test our algorithms in a similar
manner. Therefore, we need to develop hardware that can be used in both a testing
situation and in a real flight. In addition, we need that system to provide at least
some level of hardware-in-the-loop verification.
Figure 1.2: Try-by-flying approach: Feedback from the flight test.
This thesis proposes such a system, shown in Figure 1.3. The testbed is
divided into the on-board components (carried in the airframe), a virtual
environment simulation (for laboratory verification and testing), and the off-board
components, located on the ground (the interface to the flight vehicle). The ground
station interface to the "flight vehicle" does not change. That is, the ground sta-
tion is completely interchangeable between the real flight vehicle and the virtual
environment flight vehicle, so that code, controllers, and hardware developed in one
environment are immediately transferable to the other.
Instead of developing a control algorithm and going directly to flight testing,
as done in the past, we will develop that algorithm under the framework of the
presented testbed, which includes utilizing the virtual environment simulation.
Once the virtual environment testing has been completed and the algorithm has
been verified in a wide range of environmental conditions, we can then deploy that
technology to a real flight test with little risk to the aircraft.
Figure 1.3: Testbed architecture overview.
1.3 Overview of the Thesis
In the following chapters we discuss the main components of our flight testbed
system, shown in Figure 1.3.
First, in Chapter 2, we discuss the MAV platform and the integration of vision.
Next, in Chapter 3, we present our vision-based approaches to MAV stabilization,
object tracking, and autonomous landing. Then, in Chapter 4, we discuss the
testbed architecture in detail, including the virtual environment simulation and
the hardware. Next, in Chapter 5, we report experimental flight results for both
the virtual environment and for flight tests in the field, and discuss how
algorithms developed in the virtual environment were seamlessly transitioned to
real flight testing. Finally, in Chapter 6, we give our conclusions.
MICRO AIR VEHICLE PLATFORM
2.1 Advantages and Limitations
There are numerous challenges that prevent technology developed for larger
vehicles from being applied directly to MAVs. This section
will discuss some of these issues. On the MAV scale, there is a severe Reynolds
number dependent degradation in aerodynamic efficiency. This degradation requires
that MAVs fly at much lower wing loading, thus placing a premium on vehicle
weight. Traditional airframe design has limited applicability to MAVs. Control is
more difficult since the small mass moment of inertia requires increased control
input bandwidth. Disturbances (e.g., wind gusts) have an exaggerated effect on
the flight path since the vehicle speed is on the same order as the disturbance.
Additionally, off-the-shelf components (e.g., servos, electronics, and video cameras)
are not specifically designed for MAVs. Finally, supplying reliable and efficient
propulsion is a serious challenge.
Given these inherent technical obstacles, a series of MAVs and small UAVs,
that incorporate a number of advances, have been produced at the University of
Florida.1 A unique, thin, undercambered, flexible wing that is more aerodynamically
efficient than traditional airfoils has been developed. The airframes are
made from carbon fiber, durable plastic films, and latex rubber, giving them high
specific strength.
The flexible wing, shown in Figure 2.1, exhibits advantages over traditional
rigid wings in gusty wind conditions. When a traditional aircraft encounters a
wind gust, the airspeed increases (head-on gust) and, subsequently, the wing
lift increases. With vehicles of low inertia, such as MAVs, there is an almost
immediate altitude change. In erratic conditions (e.g., frequent gusts), the aircraft
becomes extremely difficult to control. The flexible wing on our MAVs incorporates
a passive mechanism, called "adaptive washout," that is designed to produce
smoother flight. The wing deforms with the increase in air pressure associated with
a gust, creating near-constant lift [13, 14, 15]. In erratic conditions, these vehicles
fly smoothly, making them easier to control and excellent camera platforms.
The overall MAV platform design is biologically inspired by small flying
creatures, such as birds and bats. These animals have thin, flexible wings and
virtually silent flight mechanisms. MAVs are designed to mimic these creatures.
They benefit from a similar visual likeness due to their small size and dark carbon
fiber fuselages. MAVs also use electric motors, which are much less noisy than
combustion engines, and are nearly silent at a distance. These characteristics allow
a MAV to operate with a high degree of stealth, making them difficult to detect.
1 These include airframes that range in size from a 4.5 inch maximum diameter
to small UAVs with a 24 inch maximum dimension.
Figure 2.1: Adaptive washout in action.
2.2 Construction Techniques
The airframe is constructed from layers of bidirectional carbon fiber. The
composite is formed to a foam mold and cured in an autoclave to form a rigid
structure. Because the aircraft is designed without landing gear, an
additional layer, composed of Kevlar, is interwoven into the bottom half of the
airframe to add strength.
The thin, under-cambered wing consists of a carbon-fiber skeleton that is
then covered with a wing skin.2 The leading edge of the wing is made thicker
to maintain the integrity of the airfoil by supplying additional reinforcement. The
tail empennage, also constructed from carbon-fiber, and sometimes fiberglass, is
connected to the fuselage by a carbon-fiber boom that runs concentrically through
the pusher-prop assembly. Tails on non-pusher-prop designs are molded into the fuselage.
2 The wing skin is typically made from polystyrene or parachute material.
2.3 Propulsion System Design
Typical small-scale aircraft have their drive systems mounted in the nose of
the aircraft. In this configuration, the forward view, along the center-line of the
airframe, is obscured by the propeller when spinning. This propeller interference,
known as prop wash, forces any cameras to be placed off-center, typically on a
wing, to avoid the aliasing effects that arise when capturing images through a
spinning propeller. Consequently, mounting the camera on the wing introduces a significant
amount of geometric complexity, since the center-of-mass view would
need to be recovered mathematically. To simplify the camera geometry,
the new versions of our test platform are being designed with a rear-mounted drive
system, as shown in Figure 2.2. This allows a forward-looking camera to be placed
directly on the center-line of the airframe. Not only does the pusher-prop system
allow for a clear line-of-sight from the front of the aircraft, it increases lift on the
wing by reducing skin-friction drag, and provides channeled airflow over the tail of the aircraft.
The conventional pusher-prop configuration has many advantages, but it also
has disadvantages. Overall, it increases the envelope size of the airplane and creates
issues with propellor clearance during flight. These issues were initially dealt with
by utilizing a gearing system and a foldable propellor to reduce the overall size of
the drive system. That configuration was complicated due to the need to mount
and maintain correct alignment of the gears. New aircraft, using a direct-drive
system and a foldable prop, are now being developed. Their overall envelope is
slightly larger than their geared counterparts; however, the trade-off for simplicity is
invaluable. Additionally, the reduction in moving parts makes the aircraft quieter
and easier to repair.

Figure 2.2: MAV platform.
2.4 Integrating Vision
For many potential MAV missions, vision is the only practical sensor that
can achieve required and/or desirable autonomous behaviors, as is the case when
flying in urban environments below roof-top altitudes. Furthermore, given that
surveillance has been identified as one of their primary missions, MAVs must
necessarily be equipped with on-board imaging sensors, such as cameras or infrared
arrays. Thus, computer-vision techniques can exploit already present sensors, rich
in information content, to significantly extend the capabilities of MAVs, without
increasing their required payload.
Vision is the most desirable sensor because it is very versatile. Traditional
aircraft sensors, like accelerometers and gyroscopes, are limited to measuring only
the current state of the system, while vision measures information about the
environment. This information can be used to make the system react to its
surrounding environment in an anticipatory manner, such as object tracking and
path planning. Another advantage of vision is that it can also be used to measure
the vehicle's current state by analyzing the aircraft's motion and location in the
environment. Using optical flow techniques and 3D vision, the position, orientation,
and trajectory of the aircraft can be estimated over time [17, 18]. Although these
estimates alone could potentially be used to replace traditional aircraft sensors,
a more reasonable approach would be to correlate the traditional sensors with
the information extracted through vision. Many techniques have been developed
to enable data from many different sources to be utilized together to make very
accurate estimates about the state of the aircraft [19, 20].
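To give a flavor of such techniques, the following is a minimal Lucas-Kanade sketch in Python (illustrative only; the thesis does not implement this, and the window size and grayscale-frame format are assumptions):

import numpy as np

def lucas_kanade_flow(prev_gray, next_gray, x, y, w=7):
    """Estimate the optical flow (dx, dy) at pixel (x, y) between two
    grayscale frames, via the basic Lucas-Kanade least-squares solution
    over a (2w+1) x (2w+1) window (assumed fully inside both frames)."""
    win_prev = prev_gray[y - w:y + w + 1, x - w:x + w + 1].astype(np.float64)
    win_next = next_gray[y - w:y + w + 1, x - w:x + w + 1].astype(np.float64)
    Iy, Ix = np.gradient(win_prev)      # spatial image gradients
    It = win_next - win_prev            # temporal gradient between frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # Solve A [dx, dy]^T = b in the least-squares sense.
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

Estimates like these, accumulated over many image points and frames, are what feed the position, orientation, and trajectory computations cited above.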
Placing imaging sensors on-board the aircraft is cost effective in both payload
and time. Processing the data they are capable of gathering is very computation-
ally expensive and non-trivial to implement on-board a MAV-size platform. To
address this issue, a transmitter is installed along with the camera. This trans-
mitter allows the video signal to be broadcast to the ground station where a more
powerful computer can perform the computer vision calculations.
VISION-BASED CONTROL
3.1 Flight Stability
Fundamentally, flight stability and control requires measurement of the MAV's
angular orientation. The two degrees of freedom critical for stability (i.e., the bank
angle φ and the pitch angle θ¹) can be derived from a line corresponding
the horizon as seen from a forward facing camera on the aircraft. Below, we briefly
summarize the horizon-detection algorithm used in our experiments (further details
can be found in [9, 21]).
For a given hypothesized horizon line dividing the current flight image into a
sky and a ground region, we define the following optimization criterion J:

J = (μ_s − μ_g)^T (Σ_s + Σ_g)^{−1} (μ_s − μ_g)    (3.1)

where μ_s and μ_g denote the mean vectors, and Σ_s and Σ_g denote the covariance
matrices in RGB color space of all the pixels in the sky and ground regions,
respectively. Since J represents the Mahalanobis distance between the color
distributions of the two regions, the true horizon should yield the maximum value
of J, as is illustrated for a sample flight image in Figure 3.1.
1 Instead of the pitch angle θ, we actually recover the closely related pitch percentage σ, which measures the percentage of the image below the horizon line.
Figure 3.1: Horizon tracking: (a) original image; (b) optimization criterion J as a
function of bank angle and pitch percentage; (c) resulting classification of sky and
ground pixels in RGB space.
Given J, horizon detection proceeds as follows for a video frame at resolution X_H × Y_H:
1. Down-sample the image to X_L × Y_L, where X_L ≪ X_H and Y_L ≪ Y_H.
2. Evaluate J on the down-sampled image for n × n hypothesized horizon lines (n bank angles by n pitch percentages).
3. Select (φ*, σ*) such that J is maximized over all hypothesized horizon lines.
4. Perform a bisection search on the high-resolution image to fine-tune the values of (φ*, σ*).
For the experiments reported in this thesis, we use the following parameters: X_H ×
Y_H = 320 × 240, X_L × Y_L = 20 × 15, and n = 60. Also, the precise value of the pitch
percentage (σ) that results in level flight (i.e., no change in altitude) is dependent
on the trim settings for a particular aircraft. For our experiments, we assume a
perfectly aligned forward-looking camera (see Figure 2.2), such that a value of σ = 0.5
corresponds to level flight.
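To make the coarse search concrete, the following Python sketch evaluates J for a candidate horizon line and scans the n × n hypothesis grid. This is hypothetical illustration code, not the thesis implementation: the line parameterization, the ±60° bank-angle range, and the array shapes are our assumptions.

import numpy as np

def horizon_criterion(img, phi, sigma):
    """Evaluate the optimization criterion J of Eq. (3.1) for one
    hypothesized horizon line.

    img   : (H, W, 3) float array of RGB pixels (down-sampled frame)
    phi   : bank angle of the hypothesized line (radians)
    sigma : pitch percentage, i.e., fraction of the image below the line
    """
    H, W, _ = img.shape
    xs = np.arange(W) - W / 2.0
    # Model the horizon as a line; pixels above it are labeled sky.
    line_y = H * (1.0 - sigma) + np.tan(phi) * xs
    sky_mask = np.arange(H)[:, None] < line_y[None, :]

    sky = img[sky_mask]
    ground = img[~sky_mask]
    if len(sky) < 4 or len(ground) < 4:
        return -np.inf  # degenerate split, reject

    d = sky.mean(axis=0) - ground.mean(axis=0)
    S = np.cov(sky, rowvar=False) + np.cov(ground, rowvar=False)
    # Mahalanobis distance between the sky and ground color distributions
    return float(d @ np.linalg.solve(S, d))

def detect_horizon(img, n=60):
    """Coarse search over n x n (phi, sigma) hypotheses; returns (phi*, sigma*)."""
    phis = np.linspace(-np.pi / 3, np.pi / 3, n)
    sigmas = np.linspace(0.05, 0.95, n)
    scored = ((horizon_criterion(img, p, s), p, s) for p in phis for s in sigmas)
    _, phi_star, sigma_star = max(scored, key=lambda t: t[0])
    return phi_star, sigma_star

Step 4's bisection refinement on the full-resolution frame would follow the coarse maximum; at X_L × Y_L = 20 × 15, the 3,600-hypothesis scan stays cheap enough for real-time use.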
3.2 Object Tracking
Object tracking is a well-studied problem in computer vision [22, 23]; our
intent here is to use object tracking to allow a user to easily control the flight
vehicle's heading (instead of, for example, GPS).2 We specifically do not perform
autonomous target recognition, since we want to be able to dynamically change
what ground region the MAV tracks. As such, a user can select which ground
region (i.e., object) to track by clicking on the live video with a mouse. This
action selects an M x M region to track, centered at the (x, y) coordinates of the
mouse click. For the experiments reported in Chapter 5, we set M = 15 for video
resolutions of X_H × Y_H.
We employ template matching in RGB color space for our object tracking
over successive video frames. Our criterion is the sum of square differences (SSD),
a widely used correlation technique in stereo vision, structure from motion, and
egomotion estimation. Our approach differs from some of that work in that we
compute the SSD for RGB instead of intensity, since tracking results are much
better with full color information than intensity alone. To deal with varying image
intensities as environmental factors (e.g., clouds) or the MAV's attitude with
respect to the sun changes, we also update the M x M template to be the matched
2 The object tracking algorithm described in this section was developed by
Ashish Jain at the Machine Intelligence Lab during the Spring semester of 2004.
region for the current frame prior to searching for a new match in subsequent video
frames. Furthermore, since ground objects move relatively slowly in the image
plane from one frame to the next, due to the MAV's altitude above the ground, we
constrain the search region for subsequent frames to be in an N x N neighborhood
(N = 25 ≪ X_H, Y_H) centered around the current ground-object location (x, y),
as illustrated in Figure 3.2. This reduces the computational complexity from
O(M^2 X_H Y_H) to O(M^2 N^2), and allows us to perform both horizon tracking for
stabilization and object tracking for heading control in real time (30 frames/sec).
In fact, with the PowerPC G4 AltiVec unit, we are able to dramatically reduce
CPU loads to as little as 35% with both vision-processing algorithms running simultaneously.
Below, we briefly summarize the object-tracking algorithm:
1. User selects the image location (x, y) to be tracked for frame t.
2. The template T is set to correspond to the M x M square centered at (x, y)
for frame t.
3. The search region R for frame t + 1 is set to the N × N square centered at (x, y).
4. The location (x, y) of the object for frame t + 1 is computed as the minimum
SSD between T and the image frame within search region R.
5. Go to step 2.
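A minimal Python sketch of this loop is given below. It is hypothetical code (the thesis lists no implementation), so the frame format and function boundaries are our assumptions:

import numpy as np

def track_object(frames, x, y, M=15, N=25):
    """Track an M x M RGB template through a frame sequence with SSD matching.

    frames : iterable of (H, W, 3) RGB images
    (x, y) : initial object location selected by the user
    Yields the matched (x, y) location in each subsequent frame.
    """
    frames = iter(frames)
    img = np.asarray(next(frames), dtype=np.float64)
    h, r = M // 2, N // 2
    template = img[y - h:y + h + 1, x - h:x + h + 1]

    for frame in frames:
        img = np.asarray(frame, dtype=np.float64)
        best = (np.inf, x, y)
        # Search only the N x N neighborhood around the last known location.
        for cy in range(max(h, y - r), min(img.shape[0] - h, y + r + 1)):
            for cx in range(max(h, x - r), min(img.shape[1] - h, x + r + 1)):
                patch = img[cy - h:cy + h + 1, cx - h:cx + h + 1]
                ssd = np.sum((patch - template) ** 2)  # SSD over all RGB channels
                if ssd < best[0]:
                    best = (ssd, cx, cy)
        _, x, y = best
        # Update the template to the matched region so slow lighting and
        # attitude changes do not break the match (step 2 of the loop).
        template = img[y - h:y + h + 1, x - h:x + h + 1]
        yield x, y

The two bounded loops make the per-frame cost O(M^2 N^2), independent of frame size, which is what lets object tracking share the 30 frames/sec budget with horizon detection.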
Figure 3.2: In object tracking, the search region for the next frame is a function of the object location in the current frame.

3.3 Controller

A controller is necessary to generate actuator movements, based on feedback,
to perform the mission at hand. Here, we describe the controller architecture that
takes the information extracted from horizon and object tracking and converts it
into control-surface commands to direct the flight path of the aircraft. This control
architecture is shown in Figure 3.3.
There are two possible inputs to the system from a ground-station user:
(1) a human-directed input that commands a desired bank angle (φ) and pitch
percentage (σ), and (2) the desired location x_des of the ground object to be tracked.
In the absence of object tracking, the human-directed input serves as the primary
heading control; with object tracking, the human-directed input is typically not
engaged, such that the trim settings (φ, σ)_des = (0, 0.5) are active. The two outputs
of the controller are δ_1 and δ_2, corresponding to the differential elevator surfaces
controlled by two independent servos.
The bank angle φ and pitch percentage σ are treated as independent from one
another, and for both parameters we implement a PD (proportional-derivative)
controller. The gains Kp and Kd were determined experimentally in virtual
environment trials. Because of the differential elevator configuration, the control
signals δ_1 and δ_2 will obviously be coupled. For tracking, a P (proportional)
controller is used. When engaged (on activation of object tracking), the controller
adjusts the bank angle (φ) in proportion to the distance between the center of the
tracked target and the center of the current field of view. As before, the gain
(K_p) is also determined experimentally in the virtual environment.

Figure 3.3: Controller for vision-based stabilization and object tracking.
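The structure of this controller can be sketched as follows (hypothetical Python; the gain values, sign conventions, and elevator mixing are illustrative assumptions, not the thesis's tuned parameters):

class VisionController:
    """PD control of bank angle (phi) and pitch percentage (sigma), with an
    optional P term from object tracking, mixed into differential elevator
    commands (delta1, delta2)."""

    def __init__(self, kp_phi=0.5, kd_phi=0.1, kp_sig=0.5, kd_sig=0.1,
                 kp_track=0.01, dt=1.0 / 30.0):
        self.kp_phi, self.kd_phi = kp_phi, kd_phi
        self.kp_sig, self.kd_sig = kp_sig, kd_sig
        self.kp_track, self.dt = kp_track, dt
        self.prev_e_phi, self.prev_e_sig = 0.0, 0.0

    def update(self, phi, sigma, phi_des=0.0, sigma_des=0.5,
               x_obj=None, x_center=160):
        # When tracking is engaged, bank toward the target in proportion
        # to its horizontal offset from the image center (P control).
        if x_obj is not None:
            phi_des = phi_des + self.kp_track * (x_obj - x_center)

        e_phi, e_sig = phi_des - phi, sigma_des - sigma
        u_phi = self.kp_phi * e_phi + self.kd_phi * (e_phi - self.prev_e_phi) / self.dt
        u_sig = self.kp_sig * e_sig + self.kd_sig * (e_sig - self.prev_e_sig) / self.dt
        self.prev_e_phi, self.prev_e_sig = e_phi, e_sig

        # Differential elevators: the pitch command is common-mode and the
        # roll command differential, so delta1 and delta2 are coupled.
        delta1 = u_sig + u_phi
        delta2 = u_sig - u_phi
        return delta1, delta2

With x_obj left at None, this reduces to pure horizon stabilization about the trim settings (φ, σ)_des = (0, 0.5), matching the two supervised-control modes described next.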
Thus, there are two possible modes of supervised control: (1) direct heading
control through a human-directed input or (2) indirect heading control through
object tracking. The first case allows users who are not experienced in flying RC
aircraft to stably command the trajectory of the flight vehicle. This is especially
critical for MAVs, because it is substantially more difficult to learn direct RC
control of MAVs than larger, more stable RC model airplanes. In the second case,
commanding trajectories for the MAV is even simpler and reduces to point-and-
click targeting on the flight video ground display. Either way, the controller will not
permit "ui- I. flight trajectories that could potentially lead to a crash.
TESTBED IMPLEMENTATION
The paramount goal of this research is to develop vision-based autonomy
for MAVs. Development of such autonomy presents significant challenges, in no
small measure, because of the inherent instability of these flight vehicles. In this
section we present the details of a flight testbed system that seeks to mitigate
these challenges by facilitating the rapid development of new vision-based control
algorithms in two ways: (1) through the use of a virtual environment simulation
and (2) through custom-designed flight hardware that is unified between the virtual and real environments.
The proposed testbed system provides a complete architecture, built from
custom-designed hardware and software, for developing autonomous behaviors
for MAVs using a camera as the primary sensor. Thus, the presented testbed
effectively bridges the gap between designing vision-based algorithms for MAVs
and deploying them in the real world. The virtual environment allows the system
to be tailored to a number of different mission profiles through its ability to
perform flight tests in a multitude of virtual locations. Once the algorithm has
been tuned in the virtual environment, the unified hardware architecture is
interchangeable from that environment to a real-world deployment situation.
That is, the ground station, which performs the computation and control, is
completely interchangeable, so that code, controllers, and hardware developed in
one environment are immediately transferable to the other.
4.1 Architecture of the System
Instead of developing a control algorithm and going directly to flight testing,
we will now develop that algorithm under the framework of the presented testbed,
which includes utilizing a virtual environment simulation. Once the virtual en-
vironment testing has been completed and the algorithm has been verified in a
wide range of environmental conditions, we can then deploy that technology to a
real flight test with little risk to the aircraft. Experiments are shown in Chapter 5
where vision-based algorithms are prototyped using the virtual environment and
flown unmodified in a real test flight.
The complete testbed system is shown in Figure 4.1. The testbed architecture
is divided into three categories: (1) the on-board components (carried in the
airframe), (2) a virtual environment simulation (for laboratory verification and
testing), and (3) the off-board components, located on the ground (the interface
to the flight vehicle). The on-board components include a camera and a micropro-
cessor controlled multi-rate sensor board that includes inertial sensors, a GPS, an
altimeter, and an airspeed sensor. Also, a transceiver is placed on-board for inter-
action with the off-board components. The ground station components, consisting
of a laptop, a transceiver, and an optional human operable remote control, supply
machine-vision and control-processing capabilities not possible on-board the air-
craft. A virtual environment simulator was constructed using flight-trainer software
and a projection screen. The ground station was interfaced to that environment
and the aircraft, with its forward-mounted camera, was positioned to observe the
visual output of the simulator. Altogether, this complete flight testbed system
allows flight in either a real or a simulated environment. In the following sections
we will discuss each of the categories of the testbed architecture in detail.

Figure 4.1: Testbed system overview.
4.2 Virtual Environment Simulation
The virtual environment simulation component serves as a precursor to the
full HILS facility being constructed at the University of Florida. This facility will
include a wind tunnel and a photo-realistic world. Our current virtual testbed
offers only a subset of the features that the full facility would offer, focusing mainly
on the visualization aspect of the simulation and the position of the aircraft.
The features that the current system offers are enough to perform precursory
experiments and are discussed below. The virtual environment simulator is based
on an off-the-shelf remote-control airplane simulation package. The advantages of
this software are: (1) it contains a diverse set of scenery as well as vehicle models,
including a realistic-physics engine; (2) additional scenery and vehicle models can
be defined externally; (3) it supports full collision detection and simulation of
partial vehicle damage (e.g., loss of a wing); and, finally, (4) environmental factors
such as wind or radio noise, for example, can also be incorporated. Figure 4.2
illustrates a few examples of the type of scenery supported by the software package;
note that the types of scenery available are significantly more diverse than what is
easily accessible for real test flights of our MAVs.

Figure 4.2: Some sample virtual scenes: (a) field, trees and mountains, (b) simple urban, (c) urban with features, and (d) complex urban.
The only additional hardware required for the virtual testbed (as opposed to
the real flight vehicle) is a small interface board that converts control outputs from
the ground station into simulator-specific syntax. As such, the ground station does
not distinguish between virtual and real-flight experiments, since the inputs and
outputs to it remain the same in both environments.
Following the development of the virtual testbed, virtual flight experiments
proceed as follows. First, the flight simulator displays a high-resolution image
which reflects the current field-of-view of the simulated aircraft at a particular
position, orientation, and altitude. Then, a video camera, which is identical to
the one mounted on the actual MAV, is fixed in front of the display to record
that image. The resulting signal from this video camera is then processed on the
ground-station laptop. Next, the extracted information from the vision algorithms
being tested is passed to the controller, which generates control commands to
maintain flight-vehicle stability and user-desired heading (depending, for example,
on ground-object tracking). These control commands are digitized and fed into the
flight simulator. Finally, the simulator updates the current position, orientation,
and altitude of the aircraft, and a new image is displayed for image capture and processing.
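The closed loop just described can be summarized in a short sketch (hypothetical Python; every name is a stand-in for the custom ground-station hardware and software, not an actual API):

def ground_station_loop(camera, vision, controller, vehicle_link):
    """One iteration per video frame, identical for virtual and real flight.

    camera       : captures the simulator's display, or the on-board video feed
    vision       : horizon detection plus optional object tracking
    controller   : maps (phi, sigma) and the tracked target to (delta1, delta2)
    vehicle_link : interface board to the simulator, or RC link to the MAV
    """
    while True:
        frame = camera.grab()                      # image of the current view
        phi, sigma = vision.detect_horizon(frame)  # attitude from video alone
        target_x = vision.track_target(frame)      # None if not engaged
        delta1, delta2 = controller.update(phi, sigma, x_obj=target_x)
        vehicle_link.send(delta1, delta2)          # pose updates, next view renders

Swapping vehicle_link between the simulator interface board and the real RC transmitter link is the only change between laboratory and field operation, which is precisely the interchangeability the testbed depends on.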
Note that this system allows us to experiment with vision algorithms in a
stable laboratory environment prior to actual flight testing. This means that we
can not only develop and debug algorithms without risking loss of the flight vehicle,
but we can also experiment with complex 3D environments well before risking
collisions of MAVs with real buildings in field testing. While the scenes in our
current prototype system are not as photo-realistic as desirable, even with this
limitation, we were able to develop significant vision-based autonomous capabilities
in real flight tests without a single crash (Chapter 5). Moreover, our larger-scale
HILS facility will have substantially more computing power for rendering photo-
realistic views of complex natural and urban settings.
4.3 Testbed Hardware
The on-board components of the testbed (top right in Figure 4.1) include
a camera and a microprocessor controlled multi-rate sensor board. The camera,
a color CMOS array, is mounted in the nose of the airframe, along the center-
line. A 2.4GHz transmitter is used to broadcast the video stream to the ground
station. The sensor board, still under development during our experiments, includes
inertial sensors, a GPS, an altimeter, and an airspeed sensor. It also contains a
transceiver for interaction with the off-board components. The details of the sensor
board are beyond the scope of this thesis, as we have developed it fully in other
works [24, 25].
The ground station (bottom center in Figure 4.1) consists of: (1) a 2.4 GHz
video-patch antenna (not pictured), (2) a video-capture device from the Imaging
Source (formerly a Sony Video Walkman) for NTSC-to-firewire video conversion,
(3) a 12" G4 laptop (1GB/1GHz), (4) a custom-designed Futaba-compatible signal
generator for converting computer-generated control commands to PWM Futaba-
readable signals, and (5) a standard Futaba RC controller. Video is input to the
computer in uncompressed YUV format, then converted to RGB for subsequent
processing. The Futaba transmitter, the traditional remote-control mechanism for
piloting RC aircraft, is interfaced to the laptop computer through a Keyspan serial-
to-USB adapter and has a pass-through trainer switch that allows commands from
another transmitter to be selectively relayed to the aircraft. Our custom-designed
Futaba-compatible signal generator lets the laptop emulate that other transmitter,
and, therefore, allows for instantaneous switching between computer control and
human-piloted remote control of the flight vehicle during testing.
EXPERIMENTAL RESULTS
5.1 Flight Testing Procedures
In this section we describe several experiments conducted using the proposed
testbed system. First, we contrast direct RC control with horizon-stabilized human-
directed control and illustrate object tracking on some sample image sequences.
Then, we apply the object tracking framework to develop autonomous landing
capabilities, first in the virtual environment simulator and then in field testing.
The principal difference in testing procedures between the virtual and real-flight
configurations occurs at take-off. In the virtual environment, the aircraft takes off
from a simulated runway, while in field testing, our MAVs are hand-launched. After
take-off, however, testing is essentially the same for both environments. Initially,
the aircraft is under direct RC control from a human pilot until a safe altitude is
reached. Once the desired altitude has been attained, the controller is enabled.
Throughout our test flights, both virtual and real, throttle control is typically set to
a constant level.
5.2 Simple Stabilization Experiment
Here we illustrate simple horizon-based stabilization and contrast it to direct
RC control in the virtual testbed; similar experiments have previously been carried
out in field testing [8, 9]. Figure 5.1 illustrates some simple roll and pitch
trials for: (a) direct RC-piloted and (b) horizon-stabilized (human-directed) flight trajectories.

Figure 5.1: Stabilization results: (a) Direct RC-piloted flight, and (b) horizon-stabilized (human-directed) flight. Maneuvers for flight trajectory (b) were executed to mimic flight trajectory (a) as closely as possible.

As can be observed from Figure 5.1, horizon-stabilized control tends
to do a better job of maintaining steady roll and pitch than direct RC flight;
this phenomenon has previously been observed in field testing. Not only does
horizon stabilization lead to smoother flights, but no special training is required to
command the flight vehicle when horizon stabilization is engaged.
5.3 Object Tracking
Here we report results on ground object tracking on some sample flight
sequences for both virtual and real-flight videos. Figure 5.2 illustrates some sample
frames that illustrate typical tracking results for: (a) a virtual sequence and (b)
a real-flight sequence; complete videos are available at http://mil.ufl.edu/~number9/mav_visualization.
Once we had determined that tracking was sufficiently robust for both virtual
and real-flight videos, we proceeded to engage the tracking controller in the virtual
testbed and verified that the aircraft was correctly turning toward the user-selected
targets. This led us to formulate autonomous landing as a ground object-tracking
problem, where the "object" to be tracked is the landing zone. We first developed
and verified autonomous landing in the virtual environment simulation and then,
without any modifications of the developed code, successfully executed several
autonomous landings in real-flight field testing. We describe our experiments in
autonomous landing in further detail in the next section.

Figure 5.2: Object tracking: (a) virtual testbed, and (b) real-flight image sequence.
5.4 Autonomous Landing: Virtual Environment
An aircraft without a power source is basically a glider, as long as roll and
pitch stability are maintained. It will land somewhere, but, without any heading
control, yaw drift can make the landing location very unpredictable. However, using
our object tracking technique, we are able to exercise heading control and execute a
predictable landing. Landing at a specified location requires knowledge of the glide
slope (i.e., the altitude and distance to the landing location). Since we currently do
not have access to this data in our virtual environment simulation, we assume that
we can visually approximate these values. Although somewhat crude, this method
works well in practice and is replicable.
We proceed as follows. First, the horizon-stabilized aircraft is oriented so that
the runway (or landing site) is within the field of view. The user then selects a
location on the runway to be tracked, and the throttle is disengaged. Once tracking
is activated, the plane glides downward, adjusting its heading while maintaining
level flight. In our virtual environment, mountains are visible, introducing some
error in horizon estimates at low altitudes. As the plane nears ground level during
its descent, these errors become increasingly pronounced, causing slight roll and
pitch anomalies to occur. Nevertheless, the aircraft continues to glide forward,
successfully landing on the runway in repeated trials. Sample frames from one
autonomous landing are shown in Figure 5.3, while the roll, pitch and tracking
command are plotted in Figure 5.4 for that landing. (As before, complete videos
are available at http://mil.ufl.edu/~number9/mav_visualization.)
Figure 5.3: Autonomous landing in a virtual environment: four sample frames.

Figure 5.4: Roll, pitch and tracking command for virtual autonomous landing in Figure 5.3.
Figure 5.5: Real-flight autonomous landing in field testing: four sample frames.
5.5 Autonomous Landing: Real-flight Experiments
In real-flight testing of autonomous landing, we did not have access to the
same ground feature (i.e., a runway) as in the virtual environment. Our MAVs do
not have landing gear and do not typically land on a runway. Instead, they are
typically landed in large grass fields. As such, we sought to first identify ground
features in our test field that would be robustly trackable. We settled on a gated
area near a fence where the ground consisted mostly of sandy dirt, which provided
a good contrast to the surrounding field and good features for tracking.
During the flight testing, the horizon-stabilized MAV is oriented such that
the sandy area is within the field of view. The user then selects a location at
the edge of the sandy area to be tracked, and the throttle is disengaged. As in
the virtual environment, the MAV glides downward toward the target, adjusting
its heading to keep the target in the center of the image while maintaining level
flight. When the aircraft approaches ground level, the target being tracked may
fall out of view. However, if the target is lost at this point, the plane will still
land successfully. This occurs because the maximum allowable turn command
generated by the object tracking controller, at that speed, will not cause the
plane to roll significantly. Once on the ground, the MAV skids to a halt on its
smooth underbelly. In several repeated trials, we landed the MAV within 10 meters
of the target location. Sample frames from one of those autonomous landings
are shown in Figure 5.5 (along with ground views of the MAV during landing).
Figure 5.6 depicts the roll, pitch, and tracking commands for that landing. (As
before, complete videos are available at http://mil.ufl.edu/~number9/mav_visualization.)

Figure 5.6: Roll, pitch and tracking command for real-flight autonomous landing in Figure 5.5.
CONCLUSION
Flight testing of MAVs is difficult in general because of the inherent instability
of these flight vehicles, and even more so when implementing complex vision-
based behaviors. Over the years, many planes have been destroyed in crashes due
to relatively simple errors in coding or algorithmic weaknesses. The proposed
testbed system described in this thesis was developed, in large measure, to deal
with these problems and to investigate potential uses of the full-scale UF HILS
facility currently under construction. It is virtually inconceivable that we could
have developed object tracking and autonomous landing without any crashes in the
absence of the virtual testbed. In the coming months, we plan to extend the use of
the virtual testbed facility to more complex vision problems, such as, for example,
3D scene estimation within complex urban environments, a problem which we are
now actively investigating.
REFERENCES
[1] GlobalSecurity.org, "RQ-4A Global Hawk (Tier II+ HAE UAV)," World Wide Web.
[2] P. G. Ifju, S. Ettinger, D. A. Jenkins, Y. Lian, W. Shyy, and M. R. Waszak, "Flexible-wing-based Micro Air Vehicles," in Proc. 40th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 2002, paper no. 2002-0705.
[3] P. G. Ifju, S. Ettinger, D. A. Jenkins, and L. Martinez, "Composite materials for Micro Air Vehicles," in Presentation at SAMPE Conference, Long Beach, California, May 2001.
[4] J. M. McMichael and Col. M. S. Francis, "Micro Air Vehicles - Toward a new dimension in flight," World Wide Web, http://www.darpa.mil/tto/mav/mav_auvsi.html, December 1997.
[5] J. W. Grzywna, J. Plew, M. C. Nechyba, and P. Ifju, "Enabling autonomous flight," in Proc. Florida Conference on Recent Advances in Robotics, Miami, Florida, April 2003, vol. 16, sec. TA3, pp. 1-3.
[6] J. M. Grasmeyer and M. T. Keennon, "Development of the Black Widow Micro Air Vehicle," in Proc. 39th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 2001, paper no. 2001-0127.
[7] A. Kurdila, M. C. Nechyba, R. Lind, P. Ifju, W. Dahmen, R. DeVore, and R. Sharpley, "Vision-based control of Micro Air Vehicles: Progress and problems in estimation," in Presentation at IEEE Int'l Conference on Decision and Control, Nassau, Bahamas, December 2004.
[8] S. M. Ettinger, M. C. Nechyba, P. G. Ifju, and M. Waszak, "Vision-guided flight stability and control for Micro Air Vehicles," in Proc. IEEE Int'l Conference on Intelligent Robots and Systems, Lausanne, October 2002, vol. 3.
[9] S. Ettinger, M. C. Nechyba, P. G. Ifju, and M. Waszak, "Vision-guided flight stability and control for Micro Air Vehicles," in Journal of Advanced Robotics, 2003, vol. 17, no. 3, pp. 617-40.
[10] C. S. Sharp, O. Shakernia, and S. Sastry, "A vision system for landing an Unmanned Aerial Vehicle," in Proc. IEEE Int'l Conf. on Robotics and Automation, Seoul, Korea, May 2001, pp. 1720-27.
[11] B. Sinopoli, M. Micheli, G. Donato, and T. J. Koo, "Vision based navigation for an unmanned aerial vehicle," in Proc. IEEE Int'l Conf. on Robotics and Automation, Seoul, Korea, May 2001, pp. 1757-65.
[12] T. J. Mueller, "The influence of laminar separation and transition on low Reynolds number airfoil hysteresis," in Journal of Aircraft, 1985, vol. 22.
[13] D. A. Jenkins, P. G. Ifju, M. Abdulrahim, and S. Olipra, "Assessment of controllability of Micro Air Vehicles," in Presentation at 16th Int'l Conf. on Unmanned Air Vehicle Systems, Bristol, United Kingdom, April 2001.
[14] W. Shyy, D. A. Jenkins, and R. W. Smith, "Study of adaptive shape airfoils at low Reynolds number in oscillatory flows," in AIAA Journal, 1997, vol. 35.
[15] R. W. Smith and W. Shyy, "Computation of aerodynamic coefficients for a flexible membrane airfoil in turbulent flow: A comparison with classical theory," in Phys. Fluids, 1996, vol. 8, no. 12, pp. 3346-53.
[16] P. R. Ehrlich, D. S. Dobkin, and D. Wheye, "Adaptions for flight," World Wide Web, http://www.stanfordalumni.org/birdsite/text/essays/Adaptions.html, June 2001.
[17] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. 7th Int'l Joint Conf. on Artificial Intelligence, 1981, pp. 674-79.
[18] T. Kanade, "Recovery of the three-dimensional shape of an object from a single view," in Journal of Artificial Intelligence, 1981, vol. 17, pp. 409-60.
[19] L. Armesto, S. Chroust, M. Vincze, and J. Tornero, "Multi-rate fusion with vision and inertial sensors," in Proc. IEEE Int'l Conf. on Robotics and Automation, April 2004, vol. 1, pp. 193-99.
[20] R. Meier, T. Fong, C. Thorpe, and C. Baur, "Sensor fusion based user interface for vehicle teleoperation," in Presentation at Int'l Conf. on Field and Service Robotics, August 1999.
[21] S. M. Ettinger, "Design and implementation of autonomous vision-guided Micro Air Vehicles," M.S. thesis, University of Florida, May 2001.
[22] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Int'l Conf. on Computer Vision and Pattern Recognition, Seattle, Washington, June 1994.
[23] L. G. Brown, "A survey of image registration techniques," in ACM Computing Surveys, December 1992, vol. 24, no. 4, pp. 325-76.
[24] J. Plew, J. W. Grzywna, M. C. Nechyba, and P. Ifju, "Recent progress in the development of on-board electronics for Micro Air Vehicles," in Proc. Florida Conference on Recent Advances in Robotics, Orlando, Florida, April 2004, vol. 17, sec. FP3, pp. 1-6.
[25] J. Plew, "Development of a flight avionics system for autonomous MAV control," M.S. thesis, University of Florida, December 2004.
BIOGRAPHICAL SKETCH
In 2002, Jason W. Grzywna graduated from the University of Florida with
dual Bachelor of Science degrees in Electrical and Computer Engineering. Con-
tinuing his education, Jason was admitted to the master's degree program in the
summer of 2002, at the University of Florida. His main fields of interest, through-
out his graduate studies, were developing intelligent systems for autonomous
vehicles and robotics. During his time at the University of Florida, Jason was part
of many other MAV research projects, including immediate bomb damage assess-
ment, the small folding wing PocketMAV with inertial and vision stabilization, and
MAV deployment from a Pointer.