
Vision-Based Navigation Using Multi-Rate Feedback from Optic Flow and Scene Reconstruction


VISION-BASED NAVIGATION USING MULTI-RATE FEEDBACK FROM OPTIC FLOW AND SCENE RECONSTRUCTION

By

AMANDA ARVAI

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2005

Copyright 2005 by Amanda Arvai

I dedicate this work to the best thing that ever happened to me, my husband, Bryan.

ACKNOWLEDGMENTS

This work was supported jointly by the Air Force Research Laboratory and the Air Force Office of Scientific Research under F49620-03-1-0381 with Johnny Evers, Neal Glassman, Sharon Heise, and Robert Sierakowski as project monitors. I would also like to sincerely thank my advisor, Dr. Rick Lind, for his invaluable guidance and support throughout my time at the University of Florida. Special thanks also to my supervisory committee members, Dr. Warren Dixon and Dr. Carl Crane, for their time and consideration. This work would not be possible without the members of the Flight Controls Lab, Joe Kehoe, Ryan Causey, Mujahid Abdulrahim, and Adam Watkins, who have always been ready with a helping hand. Finally, I would like to thank my father, Denny Roderick, who gave me a love for aerospace; my mother, Mary Roderick, who taught me the dedication needed to complete this work; and my sister, Suzanne Noe, who always gave me a model to aspire toward.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF FIGURES
ABSTRACT

CHAPTER
1 INTRODUCTION
  1.1 Motivation
  1.2 Background
  1.3 Overview
2 AIRCRAFT EQUATIONS OF MOTION
3 VISION-BASED CONTROL USING FEATURE POINTS
4 SCENE RECONSTRUCTION
  4.1 Concept
  4.2 Strategy
  4.3 Advantages and Risks
5 OPTIC FLOW
  5.1 Concept
  5.2 Strategy
  5.3 Advantages and Risks
6 MULTI-RATE CONTROLLER
  6.1 Concept
  6.2 Strategy
7 EXAMPLE 1
  7.1 Setup
  7.2 Actuator Controllers
  7.3 Control based on Optic Flow
  7.4 Control based on Scene Reconstruction
  7.5 Multi-Rate Control

8 EXAMPLE 2
  8.1 Setup
  8.2 Control based on Optic Flow
  8.3 Control based on Scene Reconstruction
  8.4 Multi-Rate Control
9 CONCLUSION

REFERENCES
BIOGRAPHICAL SKETCH

LIST OF FIGURES

2–1 Coordinate Systems
3–1 Feature Mapping from 3D Space to 2D Image Plane
3–2 Vector Diagram
4–1 Control Scheme using Scene Reconstruction
5–1 Camera Position
5–2 Feature Point Positions in Image Plane at Timesteps 1 (Left) and 2 (Right) during Straight and Level Flight
5–3 Corresponding Optic Flow during Straight and Level Flight
5–4 Feature Point Locations at Timesteps 1 (Left) and 2 (Right) during Roll Maneuver
5–5 Corresponding Optic Flow during Roll Maneuver
5–6 Optic Flow (Left) and Scaled Optic Flow with Rotational Components Removed (Right) during Roll Maneuver
5–7 Optic Flow (Left) and Optic Flow with Rotational Components Removed (Right) of Straight and Level Flight
5–8 Velocity Vector Projected on the Image Plane
5–9 Optic Flow with Rotational Components Removed (Left) and Cost Function 3D Plot (Right)
5–10 Feature Point Location at Timestep 1 (Left) and Timestep 2 (Right)
5–11 Resulting Optic Flow
6–1 Closed-Loop System with Multi-Rate Control
7–1 Simulation Environment
7–2 Altitude Hold Controller
7–3 Turn Controller
7–4 Speed Controller

7–5 Optic Flow Results
7–6 Scene Reconstruction Results
7–7 Camera Image at T=0
7–8 Environment as Assumed by Scene Reconstruction from Input Taken at T=0
7–9 Multi-Rate Controller Results
8–1 Simulation Environment
8–2 Optic Flow Results
8–3 Scene Reconstruction Results
8–4 Environment as Assumed by Scene Reconstruction from Input Taken at T=0
8–5 Aircraft Position at T=0 (Left) and T=45 (Right)
8–6 Multi-Rate Controller Results

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

VISION-BASED NAVIGATION USING MULTI-RATE FEEDBACK FROM OPTIC FLOW AND SCENE RECONSTRUCTION

By Amanda Arvai

December 2005

Chair: Richard C. Lind, Jr.
Major Department: Mechanical and Aerospace Engineering

Due to an increasing demand for autonomous vehicles, considerable attention has been focused on vision-based control. Cameras are small, lightweight, and relatively inexpensive, making them an attractive alternative to other, more traditional sensors such as infrared and radar. Cameras provide a rich stream of data to describe the vehicle's environment. These data can be analyzed to provide the controller with information such as the relative size and location of obstacles. Intelligent control decisions can then be made using this information in order to navigate the vehicle safely through the environment.

This thesis focuses upon two fairly established vision-based control methodologies, optic flow and scene reconstruction. The advantages and disadvantages of each approach are analyzed. A multi-rate controller which merges these two approaches is introduced. It attempts to emphasize the advantages of each approach by alternating between the two, based on the characteristics of the environment. Two simulations validate the benefits of this multi-rate controller for the purposes of reactive obstacle avoidance and navigation. These simulations include a nonlinear F-16 model flying through a virtual, scaled-up urban environment. Optic flow, scene reconstruction, and the multi-rate control approaches are applied to autonomously

control the aircraft. The multi-rate controller is singularly capable of achieving the mission objectives by reaching the desired destination while simultaneously avoiding obstacles in its path.

CHAPTER 1
INTRODUCTION

1.1 Motivation

In today's technology-driven world, an increasing emphasis has been placed on autonomous systems. From industrial applications to mobile robots, autonomy allows for minimal human effort and oftentimes leads to results which are superior to human-in-the-loop systems. Autonomy can be used for a variety of applications, most of which are focused on either protecting human life or increasing its quality. Many of man's most treacherous missions can be accomplished by machines. Robots can excavate minefields. Drones can survey war zones. Already such machines have protected many lives. As autonomy technology advances, it will undoubtedly be applied to many more applications for similar purposes. If a machine can successfully accomplish a job, it is not necessary to risk precious human life. This motivation is a driving force for autonomy.

Other missions require autonomy because a human is simply incapable of accomplishing them alone. These missions often require high precision, large calculations, or fast response times. For example, an experiment may require an exact balance of chemicals. A manufacturing process may require an extremely precise measurement. These systems demand a level of precision that is unreachable when allowing for human error; they require autonomy. Other systems are simply too complicated for a human to process. Coordinating a swarm of drones, for example, requires the evaluation of extremely large amounts of data. The tracking of several different targets is also very computationally intensive. Computers are better suited than humans to account for these large amounts of data. Finally, human reaction times cause unacceptable delays for some systems. The time for a human to see, process, and begin to respond

to data averages around 0.6 s [11]. For systems operating at high rates, such as fast-maneuvering air vehicles or tracking torpedoes, this delay can be devastating. These types of systems also demand autonomy. All of the above reasons have been driving causes for advancing autonomous technology.

A topic of particular interest to this thesis is the applicability of autonomy to unmanned aerial vehicles (UAVs). UAVs are being aggressively pursued for both military and commercial applications. For defense work, they are well suited for enemy targeting and monitoring. They have recently been implemented in the Iraq war effort. Commercially, their applications vary greatly. They are used to survey and regulate forest fires, monitor high-value crops such as the thermal imaging of grape vineyards, and even accomplish aquatic search-and-rescue missions. Most of these missions have been controlled remotely. Allowing these UAVs to operate autonomously would enable humans to otherwise apply their time to further support their missions.

A subset of UAVs in large demand is micro aerial vehicles (MAVs). Autonomous MAVs would be ideal for detecting biological agents throughout a city. They are agile enough to weave through buildings, granting them access to areas that were previously unattainable. MAVs could also be used to monitor a nearby enemy that would otherwise be hidden from sight. Since they are lightweight, they are convenient to transport. They would be conducive to being stored in a police car's trunk, to be launched at a moment's notice for surveillance and tracking of a suspect. Furthermore, MAVs could be used to delicately place a lightweight sensor in enemy territory. Given their agility and stealth, the applications are vast, thus driving the desire for such technology.

Research has also been employed to create teams of MAVs and UAVs in conjunction with autonomous ground vehicles. Oftentimes these teams use the aerial vehicles to detect and monitor a target and use a ground vehicle to physically approach the target to perform a given mission. Independent of MAVs and UAVs, autonomous

ground vehicles are sought for excavating land mines and for bomb disposal. They are applicable to work in nuclear engineering. More mundane applications include the mowing of large fields and automated parallel parking.

Among the various approaches taken in the advancement of autopilots for autonomous vehicles, this thesis will focus upon vision-based control. Vision is perhaps the primary sensor used by human pilots. By seeing the world around them, pilots navigate throughout their environment with the ability to travel in a logical path and avoid obstacles. These navigation decisions seem only natural to a human pilot. Autonomous vehicles with vision-based control use a camera. The same information is provided to the vision-based autopilot as it is to the human pilot. The task is to create a method for the autopilot to interpret the images to make logical control decisions.

There are several advantages to vision-based control. The sensors are relatively small and lightweight, making them appealing to vehicles with small payloads. Compared to alternative sensors, cameras are relatively inexpensive. Cameras also provide a real-time stream of information about their environment. The cameras can be rotated about their axes to provide an increased field of view. The field of view can also be expanded through the use of multiple cameras, a technique known as stereo-vision.

Vision-based control is applicable for enhancing many of the applications previously mentioned. For example, it can be applied to enemy tracking and targeting, path planning and obstacle avoidance, crop monitoring, and forest fire regulation, among several others. It is also very applicable to MAVs, which have an extremely small payload due to their size and weight restrictions. MAVs simply cannot afford to carry all of the sophisticated radar, GPS, sonar, gyros, accelerometers, altimeters, and other types of sensors commonly available to other aircraft. Furthermore, they have significantly reduced processing power available onboard. Since most current commercially-available autopilots rely on several sensors in conjunction with hefty

processors, these traditional autopilots are not very applicable for MAVs. Instead, vision can be used to provide extensive data about the environment. Overall, vision-based control is very applicable for use onboard autonomous vehicles. As image processing techniques have progressed, the application possibilities have increased. It is now desired to further advance the control theory for vision-based autopilots to make them more suitable for real-world applications.

1.2 Background

Vision-based navigation techniques have been explored via many different avenues. Most of these techniques use image processing to extract information about the environment, which is then used for control purposes. Among these techniques, the approaches related to optic flow and scene reconstruction are of direct interest to this thesis.

The concept of optic flow was first introduced by B.D. Lucas and T. Kanade [23] in conjunction with B.K. Horn and B.G. Schunck [16] in 1981. Since then, techniques employing the concept have been widely investigated. It has been used for various applications, the majority of which include the navigation and control of autonomous vehicles. Generally, optic flow is induced by relative motion between the camera and the surroundings. However, different methods, such as an optical zoom, have been investigated [25]. Most approaches strive to be independent of previous knowledge of the environment, although others still require topographical maps, model optic flow fields, etc. [14, 33].

Optic flow is commonly incorporated into the theory behind ground vehicle autonomous control [21]. By analyzing the peripheral optic flow, these vehicles can navigate through corridors [1, 8]. This is oftentimes accomplished by balancing the optic flow on the left and the right of the image plane. Analyzing the magnitude of the peripheral optic flow permits speed control [1]. The slope, and consequently traversability, of terrain can also be computed using optic flow techniques [37, 43].

This is commonly applied to bipeds and other mobile robots. In addition, the optic flow located in the center of the image is often used to compute the time to contact of a feature [1, 34, 38]. This allows for obstacle avoidance, which is necessary for all autonomous vehicles. Perhaps a more unusual application includes guide robots for the visually impaired [34].

Similar technologies have been applied to unmanned aerial vehicles. Optic flow enables aircraft to fly through urban canyons [17, 18] and even to intercept objects mid-flight [28]. It is possible to calculate the collision points in the image plane based on the optic flow and the projected path of motion [7]. Assumptions on the time to contact are used for the autonomous landing of aerial vehicles [30, 43]. Similarly, optic flow has been applied to helicopters for terrain-following [30, 31] and hovering [13]. Optic flow sensors are also very fast and usually lightweight, making them very applicable to micro air vehicles [2, 3, 4, 31].

Oftentimes the inspiration for the optic flow technology has come from biology. Barrows uses inspiration from the biology of flying insects in his application of optic flow to micro air vehicles [2, 3, 4]. Similar inspiration has been used for terrain following on micro-helicopters [31]. Also, a space-variant map used to reduce peripheral optic flow resolution was inspired by feline and primate retinas [1].

Other research has focused on the optimization of optical flow algorithms. Intensity gradients as features for pattern matching have been used in combination with a brightness constraint in order to create a fast optic flow algorithm [34]. It has been shown that compressed peripheral optical flow can reduce input data for faster computation [1]. Kalman filters have also been used for more robust feature point tracking applications in relation to optic flow [13, 14].

Feature point analysis encompasses all methods of vision-based navigation which rely on extracting points of interest in an image, tracking them throughout a period of time, and extracting information from the images. This information can then

be used for control purposes. Advancement in this field includes research concerning feature point detection, tracking, and algorithms for analysis. Feature points are points of interest in an image and generally correspond to corners, edges, or sharp color gradients. Specifically, some approaches extract feature points using a corner detection algorithm [40]. Others detect points based on brightness gradients; red, green, and blue (RGB) color; and hue and saturation values (HSV) [22]. After feature points have been detected, it is often desired to track them between frames. Advanced probability methods have been used to estimate correspondence [24]. This probability also indicates the likelihood of obstacles in that location. This technique eliminates the requirement for optic flow at each feature point [24]. Tracking has also been accomplished using a sample-based representation instead of a traditional Gaussian representation of the feature point uncertainty [6]. Many feature point tracking methods employ Kalman filters [36, 40, 42].

The other vision-based control methodology to be addressed, in addition to optic flow, is scene reconstruction, or more formally, Structure from Motion (SFM). The concept was popularized during the 1990's and general procedures are fairly well established. However, research is still ongoing to achieve further automation and precision [10]. The concept of SFM is to essentially map the 2-D points on the image plane into a virtual 3-D space. First, the 2-D coordinates are translated into 3-D coordinates based on their optic flow and estimated time-to-contact. Next, the 3-D points are connected to form surfaces which correspond to objects in the real world. Path planning algorithms can then be implemented on the virtual environment.

Due to the extensive computation required in SFM, these algorithms generally run at slow rates. Therefore, some algorithms for mobile robots use a "start, move, stop, move again" approach to allow for this processing delay [40]. While this technique is applicable for ground vehicles, it poses an issue for aerial vehicles. Thus many feature point analyses have first been applied to ground vehicles [36, 40].

Several approaches have been taken to implement the general SFM concept, a sample of which are referenced here. One approach used geometry and the velocity field to approximate the shape index of objects in the image [9]. Structure from motion has also been accomplished by keeping the fixation point of the camera still throughout the camera's motion [32]. Line segments have been used to create SFM within an office environment [41]. A least squares approach using vertical lines has been investigated for the special case when the feature points and camera position are confined to a 2-D plane [39]. Also, extracting the vertical motion of edges provides relative movement information, allowing a ground robot to gradually stop so as to avoid collision [26]. The majority of the above SFM algorithms required the optic flow calculation at each feature point. If these calculations are not available due to noise, it is possible to still create SFM using likely optical flow values and their associated probabilities [24].

A structure from motion application of particular interest to this thesis is air vehicles. An autonomous blimp used a version of the Lucas-Kanade algorithm to detect and track feature points and employed SFM for improved state estimation [12]. Another approach used prior knowledge of the environment to create and update a virtual 3-D model of the environment for the navigation of an unmanned aerial vehicle [33]. Helicopter guidance has been accomplished using range information and spatial relations from the static image in order to group feature points into objects [35]. This thesis seeks to incorporate optic flow with structure from motion, thus amending processing delay issues. Another approach addressing the processing delay commanded a loitering maneuver while results were processed. This resulted in a flight path which delayed the overall progression of the UAV throughout the environment [29].

1.3 Overview

This thesis will demonstrate an approach to create a controller which integrates the two established vision-based control techniques of optic flow and scene reconstruction. This controller is inherently multi-rate, with a fast loop running an optic flow algorithm for obstacle avoidance and a slower loop running scene reconstruction analysis for general navigation.

First, the two techniques will be investigated in detail to determine their individual strengths and weaknesses. It is generally established that scene reconstruction analysis provides reliable path-planning. It involves large computations, though, and is therefore performed at slow rates. The slow rates cause a delay between data acquisition and the implementation of the control decision, causing the control decision to potentially be based on outdated information. Some vehicles, such as ground vehicles or helicopters, will either stop or hover until the new information is processed and it is considered safe to continue. Fixed-wing aircraft do not have this ability, making the delay all the more detrimental.

Comparatively, optic flow is capable of running at higher rates than scene reconstruction but provides less detailed information to be used for navigation and control purposes. The information is assumed sufficient, though, for detecting and avoiding obstacles in a real-time fashion. This thesis proposes using optic flow for obstacle detection and avoidance in the circumstances in which scene reconstruction analysis is too slow for safe navigation. A switch will determine which of the two loops is active. To demonstrate the capability of the controller, it is tested in simulation with a nonlinear F-16 model flying in a virtual environment.
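As a rough illustration of this architecture, the following Python sketch shows one way the switching between the two loops could be organized. It is only a structural outline under assumptions made here: the class and method names, the stored waypoint list, and the switching test are placeholders, not the controller developed in the later chapters.

```python
class MultiRateVisionController:
    """Skeleton of a two-rate vision-based controller.

    A fast loop runs an optic flow obstacle-avoidance law every frame,
    while a slow loop runs scene reconstruction (SFM) and path planning.
    A switch selects which loop commands the aircraft.
    """

    def __init__(self, optic_flow_law, scene_reconstructor, planner):
        self.optic_flow_law = optic_flow_law            # fast, reactive
        self.scene_reconstructor = scene_reconstructor  # slow, detailed
        self.planner = planner
        self.waypoints = []

    def fast_update(self, flow_vectors, states):
        """Called at the camera frame rate."""
        return self.optic_flow_law(flow_vectors, states)

    def slow_update(self, image_sequence, states):
        """Called whenever a scene reconstruction finishes."""
        scene = self.scene_reconstructor(image_sequence, states)
        self.waypoints = self.planner(scene, states)

    def command(self, flow_vectors, states, threat_detected):
        # Placeholder switching rule: fall back to the reactive loop
        # whenever the optic flow indicates an immediate threat.
        if threat_detected or not self.waypoints:
            return self.fast_update(flow_vectors, states)
        return self.waypoints[0]
```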

CHAPTER 2
AIRCRAFT EQUATIONS OF MOTION

Three coordinate systems will be used throughout the course of this thesis. These systems include the inertial, or earth-fixed, basis, which is defined as $E$. Its axes are chosen as north for $\hat{e}_1$, east for $\hat{e}_2$, and down for $\hat{e}_3$. The body basis, $B$, is fixed to the center of gravity of the aircraft. It is aligned such that $\hat{b}_1$ is in the plane of symmetry, pointing out the nose of the aircraft. The $\hat{b}_2$ axis is perpendicular to the plane of symmetry, pointing to the right of the nose of the aircraft. The $\hat{b}_3$ axis is perpendicular to both the $\hat{b}_1$ and $\hat{b}_2$ axes, in the plane of symmetry, pointing downward. It is important to note that the position of the fixed, or inertial, $E$ frame was chosen such that at $t = 0$ it coincided with the $B$ frame. Finally, the camera basis, $C$, is also fixed to the center of gravity of the aircraft. However, the camera axes are aligned such that $\hat{c}_1$ is pointed downward, coinciding with $\hat{b}_3$. The $\hat{c}_2$ axis is perpendicular to the plane of symmetry, pointing to the left of the nose of the aircraft. The $\hat{c}_3$ axis, which is the camera's optic axis, is in the plane of symmetry, pointing out the nose of the aircraft, also coinciding with $\hat{b}_1$. These frames are shown in Figure 2–1.

Figure 2–1: Coordinate Systems

This thesis will focus on creating a vision-based autopilot for unmanned aerial vehicles. Before analyzing the system, the aircraft rigid body equations of motion (EOM) must first be determined. These equations are very well documented in the literature. An aircraft has six degrees of freedom, including three position components and three angular components. These components, along with their derivatives, are the states of the aircraft. $z_E$ is the vector to the center of gravity of the aircraft from the earth basis. Its components are defined in Equation 2.1. The angular components, $\phi$, $\theta$, and $\psi$, correspond to the roll, pitch, and yaw angles of the aircraft. The components of the linear and angular velocities, $\dot{z}_B$ and ${}^E\omega^B$, are given in Equations 2.2 and 2.3, respectively. Note that $z_E$ is defined in the earth frame, whereas $\dot{z}_B$ is defined in the body frame. ${}^E\omega^B$ is defined as the relative angular velocity between the body frame and the earth frame.

$$z_E = x\,\hat{e}_1 + y\,\hat{e}_2 + z\,\hat{e}_3 \qquad (2.1)$$

$$\dot{z}_B = u\,\hat{b}_1 + v\,\hat{b}_2 + w\,\hat{b}_3 \qquad (2.2)$$

$${}^E\omega^B = p\,\hat{b}_1 + q\,\hat{b}_2 + r\,\hat{b}_3 \qquad (2.3)$$

Using Newton's laws, the first six EOM were derived. Equations 2.4 through 2.6 are force equations and Equations 2.7 through 2.9 are moment equations. The variables $F_x$, $F_y$, and $F_z$ are the aerodynamic forces; $L$, $M$, and $N$ are the aerodynamic moments; $m$ is the mass of the aircraft; $I_x$, $I_y$, $I_z$, $I_{xy}$, $I_{yz}$, and $I_{xz}$ are the aircraft's inertias; and $g$ is the gravitational constant. Standard aircraft notation was used to denote $\phi$, $\theta$, and $\psi$ as the aircraft's roll, pitch, and yaw angles; $p$, $q$, and $r$ as the aircraft's roll, pitch, and yaw rates; and $u$, $v$, and $w$ as the aircraft's velocities as expressed in the aircraft basis. Subscripts denote the basis in which a vector is expressed. Equations 2.10 through 2.12 use Euler angles and rates to describe the body angular velocities. Equations 2.13 through 2.15 use Euler angles and body angular velocities to determine

the Euler rates. Finally, Equation 2.16 defines the velocity of the aircraft in the earth frame using Euler angles and the velocity components from the body frame. Clearly, these equations are highly coupled and nonlinear.

$$F_x - mg\sin\theta = m(\dot{u} + qw - rv) \qquad (2.4)$$

$$F_y + mg\cos\theta\sin\phi = m(\dot{v} + ru - pw) \qquad (2.5)$$

$$F_z + mg\cos\theta\cos\phi = m(\dot{w} + pv - qu) \qquad (2.6)$$

$$L = I_x\dot{p} - I_{xz}\dot{r} + qr(I_z - I_y) - I_{xz}pq \qquad (2.7)$$

$$M = I_y\dot{q} + rp(I_x - I_z) + I_{xz}(p^2 - r^2) \qquad (2.8)$$

$$N = -I_{xz}\dot{p} + I_z\dot{r} + pq(I_y - I_x) + I_{xz}qr \qquad (2.9)$$

$$p = \dot{\phi} - \dot{\psi}\sin\theta \qquad (2.10)$$

$$q = \dot{\theta}\cos\phi + \dot{\psi}\cos\theta\sin\phi \qquad (2.11)$$

$$r = \dot{\psi}\cos\theta\cos\phi - \dot{\theta}\sin\phi \qquad (2.12)$$

$$\dot{\theta} = q\cos\phi - r\sin\phi \qquad (2.13)$$

$$\dot{\phi} = p + q\sin\phi\tan\theta + r\cos\phi\tan\theta \qquad (2.14)$$

$$\dot{\psi} = (q\sin\phi + r\cos\phi)\sec\theta \qquad (2.15)$$

$$\begin{bmatrix} \frac{dx}{dt} \\[2pt] \frac{dy}{dt} \\[2pt] \frac{dz}{dt} \end{bmatrix}_E = \begin{bmatrix} C_\theta C_\psi & S_\phi S_\theta C_\psi - C_\phi S_\psi & C_\phi S_\theta C_\psi + S_\phi S_\psi \\ C_\theta S_\psi & S_\phi S_\theta S_\psi + C_\phi C_\psi & C_\phi S_\theta S_\psi - S_\phi C_\psi \\ -S_\theta & S_\phi C_\theta & C_\phi C_\theta \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix}_B \qquad (2.16)$$

where $C$ and $S$ denote the cosine and sine of the subscripted angle.
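The rigid-body relations above can be evaluated numerically. The sketch below is a minimal Python illustration written for this summary; the function names and argument ordering are choices made here, and the aerodynamic forces are assumed to be supplied externally. It implements the translational force equations (Equations 2.4 through 2.6) solved for the body-axis accelerations, the Euler-rate kinematics (Equations 2.13 through 2.15), and the inertial velocity of Equation 2.16.

```python
import numpy as np

def body_accelerations(Fx, Fy, Fz, m, g, phi, theta, u, v, w, p, q, r):
    """Equations 2.4-2.6 solved for the body-axis accelerations."""
    u_dot = r * v - q * w - g * np.sin(theta) + Fx / m
    v_dot = p * w - r * u + g * np.cos(theta) * np.sin(phi) + Fy / m
    w_dot = q * u - p * v + g * np.cos(theta) * np.cos(phi) + Fz / m
    return u_dot, v_dot, w_dot

def euler_rates(phi, theta, p, q, r):
    """Euler angle rates from body angular rates (Equations 2.13-2.15)."""
    theta_dot = q * np.cos(phi) - r * np.sin(phi)
    phi_dot = p + (q * np.sin(phi) + r * np.cos(phi)) * np.tan(theta)
    psi_dot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return phi_dot, theta_dot, psi_dot

def inertial_velocity(phi, theta, psi, u, v, w):
    """Inertial velocity from body-axis velocity (Equation 2.16)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    R_EB = np.array([  # body-to-earth rotation matrix
        [cth * cps, sph * sth * cps - cph * sps, cph * sth * cps + sph * sps],
        [cth * sps, sph * sth * sps + cph * cps, cph * sth * sps - sph * cps],
        [-sth,      sph * cth,                   cph * cth                 ]])
    return R_EB @ np.array([u, v, w])
```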

CHAPTER 3
VISION-BASED CONTROL USING FEATURE POINTS

Vision-based control is an active avenue for the pursuit of vehicle autonomy. Considering that vision is perhaps a pilot's most utilized sensor, it is logical to assume that vision has applications onboard an autonomous vehicle. When a person is piloting a craft, the human brain inputs information from the eyes and uses that information to make assumptions concerning the environment. These assumptions are used to determine control decisions to keep the person and the craft along a safe trajectory. The same task is presented to autonomous vision-based control. A camera is placed onboard a moving vehicle. The camera then projects its environment onto an image plane and transmits that information to a controller. The controller interprets this information and determines assumedly safe control decisions. The control theory used to analyze the image data encompasses the vast field of vision-based control.

This thesis will focus upon the particular area of vision-based control using feature points. Feature points are defined as points of special significance in the 3D environment. These points, along with the rest of the camera's environment, are then projected onto a camera's image plane. These points can be extracted from the image using a variety of methods including edge detection, color distribution, intensity variation, or basic differentiation of image properties. Corners, edges, and light sources are thus obvious possibilities for feature points. These points are then tracked within the image plane throughout time. Vision-based control makes logical assumptions concerning the location and movement of feature points to make larger interpretations about the environment.

A group of feature points can be used to provide information about the overall environment. For example, the feature points located on the corners of a building can

be used to interpret information concerning the whole building. This information can then be used for control purposes.

A camera, in mathematical terms, essentially maps the 3D environment onto a 2D image plane. The image plane is defined as the plane normal to the camera's central, or optic, axis, located the focal length $f$ distance away from the camera basis [5]. Figure 3–1 portrays a feature's projection from 3D space to the camera's 2D image plane. It defines the vector $\eta$ as spanning from the lens of the camera to a feature point. Its components, $\eta_1$, $\eta_2$, and $\eta_3$, are expressed in the camera basis $C$.

Figure 3–1: Feature Mapping from 3D Space to 2D Image Plane

Equations 3.1 and 3.2 give the standard pin-hole camera model. It is assumed that there is no offset of the camera basis from the center of the camera lens. This model effectively maps the camera's surroundings onto the image plane.

$$\mu = f\,\frac{\eta_1}{\eta_3} \qquad (3.1)$$

$$\nu = f\,\frac{\eta_2}{\eta_3} \qquad (3.2)$$

Figure 3–2 portrays the vectors $x$ and $z$ as the position of the feature point and the center of mass of the aircraft, respectively, both in the inertial earth frame $E$. The vector $\eta$ can then be described in terms of $x$ and $z$ as shown in Equation 3.3.

Figure 3–2: Vector Diagram

$$\eta_E = x_E - z_E \qquad (3.3)$$

Each of the terms in Equation 3.3 can be expressed in the camera basis by using the appropriate Euler transformation. The result is shown in Equation 3.4. $R^C_B$ is the Euler transformation between the camera basis $C$ and the body basis $B$; $R^B_E$ is the transformation between the body basis and the earth basis. Again, it is assumed that the camera is fixed to the aircraft with zero offset from its center of gravity.

$$\eta_C = R^C_B R^B_E\,(x_E - z_E) \qquad (3.4)$$
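As a concrete illustration of Equations 3.1 through 3.4, the following Python sketch projects a feature point known in the earth frame onto the image plane of a camera fixed at the aircraft center of gravity. The function name and argument layout are illustrative choices made here, not part of the thesis.

```python
import numpy as np

def project_feature(x_E, z_E, R_CB, R_BE, f=1.0):
    """Pin-hole projection of a feature point (Equations 3.1-3.4).

    x_E  : feature point position in the earth frame
    z_E  : aircraft center-of-gravity position in the earth frame
    R_CB : rotation from the body basis to the camera basis
    R_BE : rotation from the earth basis to the body basis
    f    : focal length
    """
    # Relative position expressed in the camera basis (Equation 3.4)
    eta_C = R_CB @ R_BE @ (np.asarray(x_E) - np.asarray(z_E))
    # Pin-hole camera model (Equations 3.1 and 3.2)
    mu = f * eta_C[0] / eta_C[2]
    nu = f * eta_C[1] / eta_C[2]
    return mu, nu
```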

CHAPTER 4
SCENE RECONSTRUCTION

4.1 Concept

Various vision-based feedback approaches utilizing feature points exist and are documented in the literature. Among these, structure from motion (SFM) will be focused upon throughout this thesis. The general concept behind this approach has already been extensively demonstrated in papers, so this section will summarize that previous work [20].

The process of structure from motion describes an approach using feature points to estimate the relative location of a vehicle and its environment. Assuming a stationary environment, SFM uses known aircraft states and the locations of feature points in the image plane to create a virtual 3D scene of the environment. Conversely, using a known stationary environment and unknown aircraft states, SFM estimates the position and attitude of the vehicle. Research is currently being conducted concerning identifying both unknown vehicle states and an unknown environment. This thesis will focus upon the situation of known aircraft states and an unknown stationary environment.

SFM essentially creates a virtual 3D scene from a series of 2D images. For this process, SFM analyzes the position of each feature point throughout the various images. SFM extracts depth information for each feature point by using the known aircraft states. The 3D coordinates for a feature point are determined using its depth and image plane coordinates in addition to the aircraft states. SFM then has the relative location of each feature point. The feature points are then combined into groups, forming surfaces. The result is a virtual 3D reconstruction of the camera's environment.

4.2 Strategy

Structure from motion relies on feature points and their tracking in the image plane throughout time. A series of images may present a great disparity in feature points. Some of these feature points can be grouped together to represent an object, providing relevant information for SFM. Other feature points represent noise, not providing any relevant or consistent information for SFM. The standard approach for feature point tracking is the Lucas-Kanade algorithm, which uses template registration to identify a set of feature points in a series of images.

Assuming that a set of feature points can be found to correspond between two images, the problem statement of SFM is to extract the three-dimensional coordinates for each feature point. This is approached by minimizing a cost function. Although linear cost functions offer appealingly simple mathematical solutions, the cost function is generally accepted as inherently nonlinear. This requires iterative optimization and addressing local minima; however, nonlinear cost functions provide more accurate results.

Most nonlinear approaches are derived from Horn's relative-orientation problem [15]. Using two frames from the same moving camera and assuming a stationary environment, the technique recovers the environment's 3D structure. The camera basis at timestep $k+1$ can be defined as a translation and rotation of the camera basis at timestep $k$. Therefore, the projection of point $P$ at timestep $k+1$ can be defined as a translation and rotation of its projection at timestep $k$. Assuming the aircraft has knowledge of its relative rotation $R$ and translation $t$ from the aircraft states, Equation 4.1 can be used to calculate the relative $\eta$ vector at timestep $k+1$ from $\eta$ at timestep $k$.

$$\begin{bmatrix} \eta_{1,k+1} \\ \eta_{2,k+1} \\ \eta_{3,k+1} \end{bmatrix} = R \begin{bmatrix} \eta_{1,k} \\ \eta_{2,k} \\ \eta_{3,k} \end{bmatrix} + t \qquad (4.1)$$

By combining Equations 3.1 and 3.2 with Equation 4.1, Equation 4.2 can be derived.

$$\begin{bmatrix} \mu_{k+1}\,\dfrac{\eta_{3,k+1}}{\eta_{3,k}} \\[6pt] \nu_{k+1}\,\dfrac{\eta_{3,k+1}}{\eta_{3,k}} \\[6pt] f\,\dfrac{\eta_{3,k+1}}{\eta_{3,k}} \end{bmatrix} = R \begin{bmatrix} \mu_k \\ \nu_k \\ f \end{bmatrix} + t\,\frac{f}{\eta_{3,k}} \qquad (4.2)$$

Each point tracked between the two frames provides the system with three equations and two unknowns, $\eta_{3,k}$ and $\eta_{3,k+1}$, which correspond to the depths of the feature point with respect to the camera at timesteps $k$ and $k+1$. A least-squares approach is used for accuracy. The $\eta_{3,k}$ and $\eta_{3,k+1}$ values for each feature point are then recovered, creating a 3D structure [19]. A least-squares approach is also used for video sequences greater in length than two images. This iterative nonlinear optimization minimizes the error for each feature point's 3D coordinates.
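One way to set up the two-frame least-squares depth recovery implied by Equation 4.2 is sketched below in Python. This is an illustration consistent with the reconstruction above, not the thesis's exact implementation; the function name and the rearrangement into a linear system are choices made here.

```python
import numpy as np

def recover_depths(mu_k, nu_k, mu_k1, nu_k1, R, t, f=1.0):
    """Least-squares depth recovery for one tracked feature point.

    Solves Equation 4.2 for the depths eta3 at timesteps k and k+1,
    given the known camera rotation R and translation t between frames.
    """
    m_k  = np.array([mu_k  / f, nu_k  / f, 1.0])   # bearing at timestep k
    m_k1 = np.array([mu_k1 / f, nu_k1 / f, 1.0])   # bearing at timestep k+1
    # eta3_k * (R m_k) - eta3_k1 * m_k1 = -t : three equations, two unknowns
    A = np.column_stack((R @ m_k, -m_k1))
    depths, *_ = np.linalg.lstsq(A, -np.asarray(t, dtype=float), rcond=None)
    eta3_k, eta3_k1 = depths
    # 3D coordinates of the feature in the camera basis at timestep k
    eta_k = eta3_k * m_k
    return eta3_k, eta3_k1, eta_k
```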

SFM, using vision-based feedback, provides the system with information that completely describes the flight environment. This known environment can then be fused with GPS data. This provides the controller with information concerning the environment with respect to the aircraft and the desired destination. Various path-planning approaches can then be implemented to maneuver the aircraft through the known environment toward its goal.

The desired trajectory can be described in the form of waypoints. Waypoints are points in 3D inertial space that are targets for the aircraft to fly toward. When flown toward in sequential order, the resulting flight path forms a trajectory similar to the desired trajectory from the SFM data. In this approach, the outputted waypoints represent the desired path through the reconstructed scene.

Figure 4–1 demonstrates a structure for the closed-loop system. This system includes two compensator elements: a path planner and a maneuver tracker. The path planner creates a desired, or optimal, trajectory through the virtual environment. This can be implemented using a classical optimization approach or a receding horizon approach, among others. The maneuver tracker ensures that the aircraft remains along the optimal trajectory. This tracker can be any type of controller, such as a common PID or LQR.

Figure 4–1: Control Scheme using Scene Reconstruction (blocks: MAV, camera, feature extraction, structure from motion, terrain mapping, path planning, maneuver tracking, environment)

4.3 Advantages and Risks

The results of structure from motion provide a very reliable and very detailed description of the environment. Compared to an optic flow calculation, which provides a rough inference on the direction of obstacles, the output of structure from motion gives an approximate size and relative location of each obstacle. There is a significant difference in the level of detail between the two algorithms.

This additional detail requires additional processing time, which leads to a time lapse associated with scene reconstruction data. Due to weight and power restrictions, the processor on a MAV, for example, has limited onboard computational capabilities. Several minutes may be required for scene reconstruction. This requirement creates a

time lag between data acquisition and data processing which leaves the aircraft flying with outdated information. The airplane obviously cannot stop and hover during computation, so the vehicle will be flying without any new information for several minutes.

A significant problem then arises when an obstacle is hidden from view at the point of data acquisition. Theoretically, the scene reconstruction analysis could then create a flight path which coincided with the obstacle. Although the obstacle would most likely come into view before collision, the analysis is so slow that it may not have time to react. This risk is a serious limitation for scene reconstruction.

CHAPTER 5
OPTIC FLOW

5.1 Concept

As an aircraft flies through its environment, there is relative motion between the camera and its surroundings. For an individual feature point, this motion alters its $\eta$ vector, potentially changing its $\mu$ and $\nu$ coordinates. This movement is optic flow. More formally, optic flow is the two-dimensional motion of feature points in the image plane [43]. An optic flow vector can be defined as a feature point's change in image plane position between two consecutive timesteps. Using the derivative quotient rule on Equations 3.1 and 3.2, the optic flow is given in Equations 5.1 and 5.2.

$$\dot{\mu} = f\,\frac{\dot{\eta}_1\eta_3 - \eta_1\dot{\eta}_3}{\eta_3^2} \qquad (5.1)$$

$$\dot{\nu} = f\,\frac{\dot{\eta}_2\eta_3 - \eta_2\dot{\eta}_3}{\eta_3^2} \qquad (5.2)$$

Consider a camera fixed to an aircraft flying straight and level through the environment shown in Figure 5–1. The stars along the edges of the buildings denote the feature points that will be tracked. Figure 5–2 portrays the locations of these feature points in the image plane as visible within the field of view of the camera at timesteps 1 and 2. The displacement of the feature points between timesteps 1 and 2 from Figure 5–2 yields the optic flow vectors in Figure 5–3. For visibility, these vectors have been scaled by a factor of 25.

Figure 5–1: Camera Position

Figure 5–2: Feature Point Positions in Image Plane at Timesteps 1 (Left) and 2 (Right) during Straight and Level Flight

Figure 5–3: Corresponding Optic Flow during Straight and Level Flight
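The displacement-based definition above translates directly into code. The following minimal Python sketch is illustrative only; the function name and the per-feature array layout are assumptions made here. It forms optic flow vectors from feature point positions tracked across two frames.

```python
import numpy as np

def optic_flow_from_frames(mu_1, nu_1, mu_2, nu_2, dt=1.0):
    """Finite-difference optic flow between two consecutive frames.

    mu_1, nu_1 : image-plane coordinates of tracked features at timestep 1
    mu_2, nu_2 : coordinates of the same features at timestep 2
    dt         : time between the two frames
    """
    mu_dot = (np.asarray(mu_2) - np.asarray(mu_1)) / dt
    nu_dot = (np.asarray(nu_2) - np.asarray(nu_1)) / dt
    return mu_dot, nu_dot
```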

Although the above buildings are centered in Euclidean space about the aircraft's central body axis $\hat{b}_1$, they do not appear centered in the image plane. The feature points of the building on the right have comparatively smaller $\eta_3$ values, indicating that it

is closer in proximity. From Equations 3.1 and 3.2, this causes its feature points to be located comparatively further from the center of the image plane in both $\mu$ and $\nu$ coordinates. In contrast, the building on the left has a larger $\eta_3$ value, indicating that it is further in the distance. The building is therefore located closer to the center of the image plane. This phenomenon can also be noted by the number of feature points visible on each building. Each building is lined with feature points at the same equal interval. However, a smaller number of feature points from the building on the right are visible than from the building on the left. This characteristic is again because the feature points of the building on the right are located further from the center of the image plane, many of which are out of the camera's field of view.

The nature of optic flow allows estimation of the relative position of the camera's surroundings. It can be seen from Equations 5.1 and 5.2 that, for all other parameters being equal, large $\eta_3$ values cause smaller magnitude $\dot{\mu}$ and $\dot{\nu}$ values than do small $\eta_3$ values. Therefore, assuming a stationary environment, relatively large optic flow vectors correlate to nearby objects whereas smaller optic flow vectors correlate to objects further away. From Figure 5–3, it can be seen that the vectors on the right-hand side of the image plane are larger in magnitude than those on the left. This property implies that the corresponding feature points are closer in vicinity. This assumption is confirmed by Figure 5–1.

The camera's view is intrinsically linked to the aircraft's motion. Their joint equations of motion have been investigated by Causey and Lind [5]. Equation 5.3 can be derived by differentiating Equation 3.4 with respect to the camera frame. This relationship defines the velocity of the camera relative to the feature point in the $C$ frame in terms of the aircraft's velocities, the feature point's position, and the relative angular velocity between the $E$ and $C$ frames, ${}^E\omega^C$, as defined in Equation 5.4. The aircraft's roll, pitch, and yaw rates are defined as $p$, $q$, and $r$, respectively.

$$\dot{\eta}_C = -R^C_B R^B_E\,\dot{z}_E - {}^E\omega^C \times \eta_C \qquad (5.3)$$

$${}^E\omega^C = r\,\hat{c}_1 - q\,\hat{c}_2 + p\,\hat{c}_3 \qquad (5.4)$$

It is then desired to express the optic flow as functions of the aircraft states, feature point states, and the intrinsic camera properties, such as $f$. It can be shown that Equations 5.5 and 5.6 can be derived by using Equations 5.1, 5.2, 5.3, and 5.4. The terms $u$, $v$, and $w$ are the velocities of the aircraft's center of mass as expressed in the body frame from Equation 2.2.

$$\dot{\mu} = fq + p\nu + \frac{q\mu^2 + r\mu\nu}{f} + \frac{\mu u - fw}{\eta_3} \qquad (5.5)$$

$$\dot{\nu} = fr - p\mu + \frac{r\nu^2 + q\mu\nu}{f} + \frac{\nu u + fv}{\eta_3} \qquad (5.6)$$

From Equations 5.5 and 5.6, it can be seen that the optic flow is a function of the image plane coordinates and $\eta_3$, as well as two other categories: the translational and the angular velocities of the aircraft. Therefore, the optic flow vector can be defined as the summation of two other vectors, one defined by the optic flow produced only by the translational velocities and the other defined by the optic flow produced only by the rotational velocities.

It is easy to conceptualize the expected optic flow due to only the translational velocities. As one approaches an obstacle at an offset, flying in a straight line, the feature points accelerate away from the focus of expansion. The focus of expansion is defined as the point in the image plane from which the optic flow vectors diverge, and its $\mu$ and $\nu$ coordinates are aligned with the aircraft's velocity vector. The results of the example in Figures 5–1, 5–2, and 5–3 are indicative of the optic flow produced by the translational velocities.

The optic flow due to the angular velocities causes significantly more complicated results. If the aircraft has a large yaw rate, the magnitude of the $\dot{\nu}$ vectors escalates.

Similarly, the magnitude of the $\dot{\mu}$ vectors increases dramatically with the magnitude of the pitch rate. Large roll rates create a swirl-like effect on the optic flow.

It is desired to use the magnitude of the optic flow vectors as an indication of a feature point's proximity. It is therefore necessary to establish consistency. The angular velocities cause large changes in magnitude which are not representative of the change in the aircraft's position, but rather its orientation. For the purpose of obstacle avoidance, it is the position of the aircraft's center of mass which is of direct concern. An aircraft collision implies that the aircraft's position coincided with that of an obstacle. The orientation of the aircraft is irrelevant. Therefore, the effects of the angular velocities will be removed from the optic flow. This decomposition will enable a more direct correlation between the magnitude of an optic flow vector and its proximity.

The rotational components will be extracted by subtracting the appropriate combination of aircraft states and feature coordinates. The resulting optic flow components, $\dot{\mu}_T$ and $\dot{\nu}_T$, can then be derived as shown in Equations 5.7 and 5.8, where $\dot{\mu}$ and $\dot{\nu}$ are the true optic flow components as given by the optic flow sensors. The results are a function of only the translational velocities, $\eta_3$, the focal length, and the image plane coordinates.

$$\dot{\mu}_T = \dot{\mu} - fq - p\nu - \frac{q\mu^2 + r\mu\nu}{f} \qquad (5.7)$$

$$\dot{\nu}_T = \dot{\nu} - fr + p\mu - \frac{r\nu^2 + q\mu\nu}{f} \qquad (5.8)$$
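A small Python sketch of this derotation step follows. It is an illustration of Equations 5.7 and 5.8 as reconstructed above; the function name and the use of NumPy arrays are choices made here, not taken from the thesis.

```python
import numpy as np

def remove_rotational_flow(mu, nu, mu_dot, nu_dot, p, q, r, f=1.0):
    """Subtract the angular-rate contribution from measured optic flow
    (Equations 5.7 and 5.8). Image-plane arguments may be arrays, one
    entry per tracked feature point."""
    mu, nu = np.asarray(mu), np.asarray(nu)
    mu_dot_T = mu_dot - f * q - p * nu - (q * mu**2 + r * mu * nu) / f
    nu_dot_T = nu_dot - f * r + p * mu - (r * nu**2 + q * mu * nu) / f
    return mu_dot_T, nu_dot_T
```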

An example is used to demonstrate the effects of removing the rotational components from the optic flow. The setup is identical to the previous example shown in Figures 5–1, 5–2, and 5–3, with the exception that the aircraft is in the middle of a counter-clockwise roll maneuver. The feature point locations at timesteps 1 and 2 are shown in Figure 5–4. The corresponding optic flow is given in Figure 5–5. These vectors have not been scaled.

Figure 5–4: Feature Point Locations at Timesteps 1 (Left) and 2 (Right) during Roll Maneuver

Figure 5–5: Corresponding Optic Flow during Roll Maneuver

At timestep 1, the roll rate was -2.608 rad/s, the pitch rate was 0.0126 rad/s, and the yaw rate was -0.1176 rad/s. Clearly, the total optic flow is highly coupled with the angular velocities. Using Equations 5.7 and 5.8, $\dot{\mu}_T$ and $\dot{\nu}_T$ can be derived. These results are plotted in Figure 5–6 alongside the total optic flow from Figure 5–5. In this plot, $\dot{\mu}_T$ and $\dot{\nu}_T$ have been scaled by a factor of 25. This newly derived optic flow is independent of the angular velocities. Note the correlation between the translational optic flow and the optic flow of Figure 5–3, where the angular velocities were negligible.

For comparison, the rotational components of the optic flow from the straight and level flight of Figure 5–3 have also been removed. At timestep 1 of that example, all of the angular velocities were nearly zero, and the rotational components were negligible.

Figure 5–6: Optic Flow (Left) and Scaled Optic Flow with Rotational Components Removed (Right) during Roll Maneuver

The resulting optic flow is essentially identical to the original optic flow, as shown in Figure 5–7. Therefore, the removal of the angular components of the optic flow does not cause a significant impact at times of low angular velocities.

Figure 5–7: Optic Flow (Left) and Optic Flow with Rotational Components Removed (Right) of Straight and Level Flight

5.2 Strategy

Optic flow controllers onboard moving vehicles make use of the phenomenon of optic flow to make assumptions about their environment. They then use these assumptions to make control decisions for navigation. For the purpose of this thesis, optic flow will be used for reactive obstacle avoidance. The goal is a controller which is able to sense impending obstacles in the flight path and effectively avoid them in a

real-time manner. This objective will be accomplished by using the correlation between optic flow vector magnitude and the distance between the aircraft and the feature point.

Every point on the image plane correlates to an aircraft heading which can then be used to control the aircraft. The general strategy employed is to determine the assumed safest point in the image plane, given as $\mu_{opt}$ and $\nu_{opt}$, and then use its corresponding heading for control purposes. Selecting the assumed safest point involves a cost function which analyzes the optic flow vectors in the image plane. Using the assumption that large optic flow vectors correlate to nearby obstacles, it can be determined that the regions of the image plane containing these vectors are dangerous to fly toward. Comparatively, regions in the image plane far away from these large vectors are assumed to be far away from nearby obstacles and are thus less dangerous.

The cost function is designed to quantify the overall "threat" level of a point on the image plane. The cost, or threat level, correlates to the assumed danger of flying toward that point. It is designed such that large costs denote a point which is either located on or dangerously close to a nearby obstacle. A low cost denotes that the point is far away from all nearby obstacles. The image plane is divided into a grid of discrete points, each of which represents its surrounding region. The cost function is then evaluated for each of these points. The result is an overall estimation of the high-threat and low-threat regions of the image plane. It is designed such that the largest costs should correlate to the regions of the image plane containing the closest obstacles.

Discretizing the image plane into a grid of points helps to minimize the cost function computations. Dense grids create a more precise estimation whereas sparser grids are less computationally intensive. The density of the grid can be adjusted according to the environment and the available processor. An interval of 0.05 between points on a grid ranging over $-0.7 \le \mu \le 0.7$ and $-0.7 \le \nu \le 0.7$ has been found to provide adequate results for several flight simulations.

Before defining the cost function, it is necessary to first define some parameters. For a feature point $n$ with coordinates $(\mu_n, \nu_n)$ and its corresponding optic flow vector with components $(\dot{\mu}_n, \dot{\nu}_n)$, its velocity is defined as $V_n$ in Equation 5.9.

$$V_n = \sqrt{\dot{\mu}_n^2 + \dot{\nu}_n^2} \qquad (5.9)$$

The distance from an individual image plane point $(\mu, \nu)$ to a feature point is defined as $d_n(\mu,\nu)$ in Equation 5.10.

$$d_n(\mu,\nu) = \sqrt{(\mu_n - \mu)^2 + (\nu_n - \nu)^2} \qquad (5.10)$$

Each image plane point is evaluated with respect to each of the optic flow vectors. A cost $J_n(\mu,\nu)$ is then associated with the point and vector as defined in Equation 5.11. This cost represents the threat of the optic flow vector with respect to the image plane point being evaluated.

$$J_n(\mu,\nu) = \begin{cases} \dfrac{V_n^4}{d_n(\mu,\nu)}, & d_n(\mu,\nu) > 0 \\[6pt] \infty, & d_n(\mu,\nu) = 0 \end{cases} \qquad (5.11)$$

The cost is inversely proportional to the distance between the image plane point and the feature point. The cost also rises with the magnitude of the vector. The quartic of the magnitude is used in order to place emphasis on the largest vectors. These large vectors correlate to the most immediate threats, and it is logical that they should dominate the imminent navigation decisions. The power of 4 was found to give sufficient emphasis without making smaller vectors obsolete. Small vectors may indicate a faraway obstacle which should still be taken into account during the navigation decisions. The total cost $J(\mu,\nu)$ for a point is the sum of the cost of that point with respect to each of the vectors. See Equation 5.12, where $N$ is the total number of optic flow vectors.

$$J(\mu,\nu) = \sum_{n=1}^{N} J_n(\mu,\nu) \qquad (5.12)$$

Points on the image plane with high cost are generally close to feature points with large magnitude optic flow vectors. These points are assumed to possess an obstacle in close vicinity to the camera. This obstacle is a high threat to the moving vehicle. Therefore, these points should be avoided when navigating the vehicle. Points on the image plane with low cost are sufficiently far from the large magnitude vectors. These points are assumed to possess no obstacles, or obstacles in the far distance. In comparison, these points are a lower threat to the vehicle. The point corresponding to the minimum cost is defined as the optimal point, having coordinates $(\mu_{opt}, \nu_{opt})$, as shown in Equation 5.13. These coordinates correlate to the line of sight that is assumed to pose the least threat to the aircraft. The variables $\mu_{min}$, $\mu_{max}$, $\nu_{min}$, and $\nu_{max}$ represent the minimum and maximum $\mu$ and $\nu$ values, respectively, within the field of view.

$$(\mu_{opt}, \nu_{opt}) = \underset{\mu_{min} \le \mu \le \mu_{max},\ \nu_{min} \le \nu \le \nu_{max}}{\arg\min}\ J(\mu,\nu) \qquad (5.13)$$

The optimal point in the image plane correlates to a heading which can then be used to control the aircraft. This heading is defined as a change in pitch, $\Delta\theta_c$, and a change in yaw, $\Delta\psi_c$, from the current aircraft states. These are geometric relationships which can be extracted from Figures 2–1 and 3–1. It is assumed that there is a constant angle of attack during the timestep and a negligible sideslip. These two values are the control inputs.

$$\Delta\theta_c = \tan^{-1}\!\left(\tan\alpha - \frac{\mu_{opt}}{f}\right) \qquad (5.14)$$

$$\Delta\psi_c = \tan^{-1}\!\left(\frac{\nu_{opt}}{f}\right) \qquad (5.15)$$
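The grid search in Equations 5.9 through 5.15 is straightforward to express in code. The following Python sketch is one possible implementation consistent with the reconstruction above; the function name, the default grid spacing and bounds, and the guard for the zero-distance case are illustrative choices, not taken verbatim from the thesis.

```python
import numpy as np

def safest_heading(mu_f, nu_f, mu_dot_T, nu_dot_T, alpha, f=1.0,
                   bound=0.7, step=0.05):
    """Evaluate the threat cost over an image-plane grid (Equations 5.9-5.12)
    and return the commanded pitch and yaw changes (Equations 5.13-5.15).

    mu_f, nu_f         : feature point image coordinates (arrays)
    mu_dot_T, nu_dot_T : derotated optic flow components (arrays)
    alpha              : current angle of attack [rad]
    """
    V4 = (mu_dot_T**2 + nu_dot_T**2)**2            # V_n^4 (Equation 5.9)
    grid = np.arange(-bound, bound + step, step)   # candidate mu, nu values
    best = (np.inf, 0.0, 0.0)
    for mu in grid:
        for nu in grid:
            d = np.hypot(mu_f - mu, nu_f - nu)     # distances (Equation 5.10)
            d = np.where(d > 0.0, d, 1e-9)         # guard the d = 0 case
            J = np.sum(V4 / d)                     # total cost (Equations 5.11-5.12)
            if J < best[0]:
                best = (J, mu, nu)
    _, mu_opt, nu_opt = best
    dtheta_c = np.arctan(np.tan(alpha) - mu_opt / f)   # Equation 5.14
    dpsi_c = np.arctan(nu_opt / f)                     # Equation 5.15
    return dtheta_c, dpsi_c, (mu_opt, nu_opt)
```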

In the event that it is desired to maintain a given altitude, the control inputs can be defined as the commanded altitude $h_c$ and the commanded change in heading $\Delta\psi_c$. This procedure effectively reduces the 3D obstacle avoidance problem to a 2D obstacle avoidance problem. It also reduces computation time significantly. Here, $h_c$ assumes the role of $\Delta\theta_c$. The controller uses actuator deflections which in turn determine $\gamma$, the angle the velocity vector makes with the horizon in the vertical plane. An inner-loop controller is used to maintain the desired altitude [37]. Assuming reasonable $\gamma$ and $\alpha$ values, the points in the image plane corresponding to this desired altitude are contained within a parallelogram-shaped region. The desired, or commanded, change in yaw $\Delta\psi_c$ can then be found by evaluating the points within this region. If the aircraft is already at the desired altitude, this region reduces to a line. If the image plane is unrotated about its roll angle, this line is horizontal in the image plane. This line has a constant value of $\mu$ defined as $\mu_v$. Figure 5–8 defines this projection.

Figure 5–8: Velocity Vector Projected on the Image Plane

Equation 5.16 defines $\mu_v$ in terms of the focal length $f$ and the angle of attack $\alpha$. In this instance, the image plane grid can then be reduced to a line of discrete points located along the line of constant $\mu_v$. The point of minimum cost along this line is then

the optimum point in the image plane. It correlates to the desired change in heading, $\Delta\psi_c$, which can then be extracted using Equation 5.15.

$$\mu_v = f\tan\alpha \qquad (5.16)$$

The cost function has been applied to the previous example shown in Figures 5–1, 5–2, and 5–3. The optic flow due only to the translational velocities, scaled by a factor of 25, is shown in Figure 5–9. These optic flow vectors were evaluated for each point in the image plane grid using the cost function. The results are also plotted in Figure 5–9. The peaks in the cost plot correlate to the tracked feature points of the right building. The surrounding regions of high cost correlate to the locations of this building. The building on the right has the highest cost since it had the largest magnitude optic flow vectors and is therefore considered to be the most immediate threat. The left side of the image plane had a comparatively lower cost. Upon close examination, it can be seen that there is a spike in cost around the left building, but nothing substantial compared to the building on the right.

It is desired to maintain the current altitude for this example. The aircraft is currently flying straight and level at the desired altitude with an angle of attack of 2.67 deg. Therefore, Equation 5.16 is applicable, from which we derive the value of $\mu_v = 0.046$ for $f = 1$. Since it is desired to maintain the current altitude, $\mu_{opt} = 0.046$. Equation 5.14 shows that $\Delta\theta_c = 0$, and thus no change in pitch is necessary. The point of lowest cost along the horizontal line $\mu_{opt} = 0.046$ correlates to $\nu_{opt} = -0.7$. This point correlates to a change in yaw, $\Delta\psi_c$, of -35 deg, or -0.61 rad, from Equation 5.15. The optic flow controller is commanding the aircraft to steer left, away from the impending building on the right. It detects the building on the right to be the most immediate threat. Although there are no obstacles in the center of the image plane, directly in front of the aircraft, the controller determines that this flight path is too close to obstacles for safe flight. This left turn would effectively avoid the closest obstacle, as shown in Figure 5–1.
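A quick numerical check of this worked example follows, using the reconstructed Equations 5.14 through 5.16; the values are simply recomputed here as a sanity check, not additional results from the thesis.

```python
import numpy as np

alpha = np.radians(2.67)        # angle of attack
f = 1.0                         # focal length
mu_v = f * np.tan(alpha)        # Equation 5.16 -> roughly 0.046
mu_opt = mu_v                   # altitude hold: stay on the mu_v line
nu_opt = -0.7                   # lowest-cost point along that line
dtheta_c = np.arctan(np.tan(alpha) - mu_opt / f)  # Equation 5.14 -> 0.0 rad
dpsi_c = np.arctan(nu_opt / f)                    # Equation 5.15 -> about -0.61 rad (-35 deg)
print(mu_v, dtheta_c, np.degrees(dpsi_c))
```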


Figure 5–9: Optic Flow with Rotational Components Removed (Left) and Cost Function 3D plot (Right)

5.3 Advantages and Risks

A strong advantage of optic flow is its capability for real-time reactive control for obstacle avoidance. Optic flow sensors run at very high speeds, and the control decisions can be computed fairly quickly. Thus, a vehicle can begin reacting to a threat almost immediately after it is detected. This speed is very advantageous, if not essential, for fast-moving aircraft within dense environments. Since optic flow is based on relative motion, it also has the potential to account for moving obstacles, making its applications more versatile. The algorithm presented here is based on the assumption of stationary obstacles; however, it is possible to adapt the algorithm for other applications. The direction of the optic flow vector would be of greater significance when accounting for moving obstacles. An optic flow vector pointing toward the FOE generally indicates that either the aircraft is turning toward the obstacle or the obstacle is approaching the aircraft's flight path. Conversely, an optic flow vector pointing away from the FOE suggests the obstacle is either moving away from the flight path or the aircraft is flying away from it. These general rules assume that the aircraft is not flying directly toward the obstacle. There is no optic flow when flying directly toward an obstacle since the feature point is then located at the focus of expansion. Any surrounding feature points would still diverge.


A disadvantage to optic flow as used in this capacity is its lack of path planning. The above algorithm is relatively unsophisticated in that it reacts only to the most immediate threat without substantial consideration of future threats. Thus it may guide the vehicle through an overall more treacherous path simply due to an initial maneuver to avoid the first obstacle.

The algorithm does have its assumptions and risks. Optic flow control in this capacity assumed the ability to track feature points on obstacles. It also assumed correspondence of the points between frames. That is, it requires knowledge of which point corresponds to which point in the previous timeframe. This knowledge creates the optic flow. These assumptions are fairly demanding for practical applications. Feature points are difficult to reliably extract from an environment. In practice, even some of the most accepted feature point extraction methods, such as Lucas-Kanade, are often noisy and include large errors.

Another risk concerns the location of feature points. Since most feature point extraction algorithms are based on contrast and texture gradients, the points are typically located at corners and edges. A smooth building may only have feature points along its edges and none in its center. The controller then cannot distinguish between the center of the smooth building and empty space. This is a large risk which may lead to collision. For this thesis, it will be assumed that the obstacles are textured enough to avoid this issue.

Another risk involves the focus of expansion. Recall that the focus of expansion is defined as the point in the image plane corresponding with the heading of the velocity vector. Equations 5.17 and 5.18 define the FOE's coordinates, μ_FOE and ν_FOE, in terms of the angle of attack α, angle of sideslip β, and focal length f. These equations were derived using geometric relationships from Figure 3–1. It should be noted that if an aircraft is flying straight and level, with no sideslip or angle of attack, then the FOE is located at the center of the image plane.


μ_FOE = f tan(α)    (5.17)

ν_FOE = f tan(β)    (5.18)

An issue arises when the aircraft is flying in a straight line directly toward a feature point. In this instance, the feature point coincides with the FOE. If the aircraft remains on that flight path, the feature point remains on the FOE, not moving in the image plane. Since the feature point does not change location, there is no optic flow. An example is used to demonstrate this concept. Figure 5–10 portrays the location of the feature point at timesteps 1 and 2, where each is located at exactly μ = 0.1, ν = 0. The aircraft is flying straight with a constant angle of attack of 5.7 deg and zero sideslip. Using Equations 5.17 and 5.18, the FOE is also located at μ_FOE = 0.1 and ν_FOE = 0, thus coinciding with the feature point. Assuming that the aircraft continues flying in this direction, the feature point would remain in the same position at timestep 2. Since the feature point did not move in the image plane, there is no optic flow, as seen in Figure 5–11.

Figure 5–10: Feature Point Location at Timestep 1 (Left) and Timestep 2 (Right)

Without optic flow, the previously described algorithm does not recognize the presence of an obstacle. It is then conceivable that the aircraft would continue flying toward the obstacle, resulting in a collision. This behavior is a primary risk for many approaches based on optic flow.
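A small sketch of Equations 5.17 and 5.18, using the same 5.7 deg example, is given below; f, α, and β are the focal length, angle of attack, and sideslip angle defined above, and the function name is illustrative only.

    import numpy as np

    def focus_of_expansion(alpha_deg, beta_deg, f=1.0):
        """FOE image coordinates per Equations 5.17 and 5.18."""
        mu_foe = f * np.tan(np.radians(alpha_deg))   # vertical image coordinate
        nu_foe = f * np.tan(np.radians(beta_deg))    # horizontal image coordinate
        return mu_foe, nu_foe

    # Example above: alpha = 5.7 deg and beta = 0 place the FOE near (0.1, 0.0).
    print(focus_of_expansion(5.7, 0.0))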


Figure 5–11: Resulting Optic Flow

For the purpose of this thesis, it will be assumed that every obstacle consists of multiple feature points. If properly oriented, these feature points will appear in the regions surrounding the FOE, thus creating optic flow, however small. It is assumed that this optic flow will be sufficient for indicating the presence of an obstacle in this region. Another method of avoiding this singularity is to assume that the aircraft is always flying at least at a slight offset to any obstacle.


CHAPTER 6
MULTI-RATE CONTROLLER

6.1 Concept

Clearly both scene reconstruction and optic flow have their advantages and disadvantages. Scene reconstruction provides reliable, detailed information for path planning purposes; however, it requires large computation times. Optic flow operates at much faster rates, which is necessary for navigating through dense environments. These fast computations, though, provide only rough inferences about the obstacles, resulting in a simplistic method of navigation. As such, there is an innate trade-off between the level of information provided and the corresponding computation time.

The level of information involved in scene reconstruction makes its navigation control seemingly more reliable than that of optic flow. It is therefore desired to use this dependable data whenever available. The time lapse between data acquisition and control implementation associated with scene reconstruction renders its information potentially outdated, though. This threat is most prominent in environments that are densely populated with obstacles. For these situations, the reaction time may not be fast enough to safely maneuver to avoid obstacles. This is a significant limitation for scene reconstruction.

The goal of the multi-rate controller is to emphasize the advantages of each type of vision-based control while mitigating the disadvantages. Namely, it is to use scene reconstruction's reliable path planning capability and optic flow's fast-rate obstacle avoidance capability. The multi-rate controller utilizes each of these capabilities by using the fast-rate capability of optic flow to cover the disadvantageous time lapse that occurs while scene reconstruction data is being processed. Essentially, it alternates between the two control strategies based on the characteristics of the current environment.


Scene reconstruction control is used when obstacles are sparse and sufficient time is allocated for detailed path planning. Optic flow control is used when a nearby obstacle is detected and fast reaction times are required.

6.2 Strategy

The control scheme is inherently a multi-rate design with a slow loop involving scene reconstruction for navigation and a fast loop involving optic flow for obstacle avoidance. The basics of this scheme are shown in Figure 6–1. The values t_1 and t_2 are the update times for the optic flow and scene reconstruction algorithms, respectively. Since optic flow runs at a faster rate than scene reconstruction, t_1 ≪ t_2. Both loops, scene reconstruction and optic flow, are constantly running. Each outputs a commanded change in heading Δψ_c and change in pitch Δθ_c. A switch determines which of the control outputs is passed through. For times when nearby obstacles are detected, optic flow control decisions are used to provide fast-rate obstacle avoidance; otherwise, scene reconstruction analysis retains control.

Figure 6–1: Closed-Loop System with Multi-Rate Control

The algorithm commences with scene reconstruction controlling the aircraft. Meanwhile, a switch, or "trigger", based on optic flow principles, searches for impending obstacles in the flight path. If an obstacle is detected, the switch activates the optic flow control loop. This loop is better suited to immediately respond to the obstacle and navigate the aircraft to safety.


The optic flow loop remains active for a predetermined amount of time which is deemed sufficient to allow the optic flow controller to safely maneuver around any impending obstacles. After this time, the scene reconstruction analysis loop regains control of the aircraft. If an obstacle is again detected, optic flow resumes control. This process is repeated until the aircraft reaches its desired destination.

The switch is based on a numerical quantification, defined as F, of the total optic flow in the image plane. This function is designed such that it produces high values when large-magnitude optic flow vectors, and therefore nearby objects, are present. The function produces low values when small-magnitude optic flow vectors, and therefore distant objects, are present. The function for F, given in Equation 6.1, is the sum of the optic flow magnitudes to the fourth power. The magnitudes are raised to the fourth power, as was the case with the optic flow cost function, to give emphasis to the largest optic flow vectors.

F = Σ_{n=0}^{N−1} m_n^4    (6.1)

Here m_n is the magnitude of the nth optic flow vector and N is the number of tracked vectors. For low values of F, it is assumed that the aircraft's path is reasonably safe from approaching obstacles. Conversely, high values imply an impending threat. For these circumstances, it is desired for the controller to initiate optic flow control, which is better equipped to make quick control decisions for obstacle avoidance. Since both control loops output the same command type, the switch merely alters which command is passed through. In order to control the switch, a predefined threshold, defined as σ, is used in Equation 6.2. When F exceeds σ, the switch is activated and optic flow attains control of the aircraft. If F remains below σ, the aircraft remains under scene reconstruction control. Since F is a function of the camera, feature point tracking parameters, and environment, it is necessary to customize the value of σ for a given mission.
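A minimal sketch of Equation 6.1 follows; it assumes the tracked optic flow vectors are available as an array of per-feature displacements, and the array contents shown are purely illustrative.

    import numpy as np

    def flow_threat_measure(flow_vectors):
        """Equation 6.1: sum of the optic flow magnitudes raised to the fourth power."""
        magnitudes = np.linalg.norm(np.asarray(flow_vectors, dtype=float), axis=1)
        return float(np.sum(magnitudes ** 4))

    # The fourth power lets nearby obstacles (large flow) dominate distant ones.
    near_obstacle = [[0.04, 0.01], [0.05, 0.00]]
    distant_obstacle = [[0.004, 0.001], [0.005, 0.000]]
    print(flow_threat_measure(near_obstacle), flow_threat_measure(distant_obstacle))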


(Δψ_c, Δθ_c) = output of optic flow,            if F > σ
               output of scene reconstruction,  else           (6.2)

Overall, this multi-rate controller, shown in Figure 6–1, is particularly well suited to autonomous aircraft operation. The slow scene reconstruction loop provides for detailed, reliable path planning. The high-rate optic flow switch provides additional information during the minutes devoted to scene reconstruction. Should a threat be detected, optic flow control is used for real-time obstacle avoidance. In this way, the system uses both types of vision-based feedback to continually provide information for guidance and navigation.
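The switching rule of Equation 6.2, together with the hold time described above, can be sketched as follows. The threshold and hold time shown are hypothetical placeholders rather than the values used in the thesis simulations, and the class and signal names are assumptions for illustration.

    class MultiRateSwitch:
        """Pass optic flow commands through while a threat persists (Equation 6.2),
        then return control to scene reconstruction after a fixed hold time."""

        def __init__(self, sigma, hold_time):
            self.sigma = sigma             # threat threshold on F
            self.hold_time = hold_time     # seconds that optic flow keeps control
            self._optic_until = -1.0       # time at which optic flow control expires

        def select(self, t, F, cmd_optic_flow, cmd_scene_recon):
            if F > self.sigma:                     # impending obstacle detected
                self._optic_until = t + self.hold_time
            if t < self._optic_until:              # optic flow retains control
                return cmd_optic_flow
            return cmd_scene_recon                 # otherwise follow the planned path

    # Hypothetical threshold; commands are (delta_psi_c, delta_theta_c) pairs.
    switch = MultiRateSwitch(sigma=4e-6, hold_time=10.0)
    print(switch.select(t=60.0, F=9e-6,
                        cmd_optic_flow=(-35.0, 0.0), cmd_scene_recon=(5.0, 0.0)))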


CHAPTER 7
EXAMPLE 1

7.1 Setup

A simulation is used to demonstrate the vision-based controllers. The simulation uses a high-fidelity nonlinear model of an F-16 which accurately represents the low-speed flight dynamics. A control augmentation system is included to enable the system to track desired changes in heading and desired altitudes [37]. It also includes an inner-loop stabilizer to smooth the flight path. This simulation was designed to verify closed-loop performance. Optimal implementations of either the optic flow or scene reconstruction analysis are unnecessary. Therefore, the actual computation time will be ignored and the controllers will be updated at rates found in published literature. For this example, the optic flow algorithm will be running at 100 Hz and the scene reconstruction algorithm will be running at 0.008 Hz.

The F-16 will fly through an environment designed to demonstrate the advantages of the multi-rate controller. The environment is shown in Figure 7–1. A building is situated behind the center building such that it is initially hidden from view. Therefore, it is not accounted for by the initial SFM calculations. The mission objective seeks a path to the north toward a target GPS waypoint at a constant altitude. The goal is to arrive at the desired waypoint in a timely fashion while avoiding obstacles in its path.

This simulation was run using feature points lining the edges of buildings at an even interval. A camera is mounted at the center of gravity of the F-16, aligned parallel with the aircraft's b̂_1 axis. At each timestep, the simulation projects the feature points onto the image plane of the camera in relation to the aircraft's position and orientation.
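A rough sketch of that projection step is given below. It uses a generic pinhole model assumed to be consistent with the mapping of Figure 3–1: the camera looks along b̂_1, R_ib rotates inertial vectors into the body frame, and μ and ν are the vertical and horizontal image coordinates. The function name and frame conventions are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def project_feature(p_feature, p_aircraft, R_ib, f=1.0):
        """Project an inertial feature point onto the image plane of a camera
        aligned with the b1 body axis (generic pinhole sketch)."""
        rel = R_ib @ (np.asarray(p_feature, dtype=float) - np.asarray(p_aircraft, dtype=float))
        x, y, z = rel             # x forward along b1, y out the right wing, z down
        if x <= 0.0:
            return None           # point lies behind the image plane
        mu = f * z / x            # vertical image coordinate
        nu = f * y / x            # horizontal image coordinate
        return mu, nu

    # A point 1000 ft ahead and 100 ft below a level aircraft maps near mu = 0.1.
    print(project_feature([1000.0, 0.0, 100.0], [0.0, 0.0, 0.0], np.eye(3)))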


Figure 7–1: Simulation Environment

To demonstrate the advantages of the multi-rate controller, the results using only the optic flow controller and only a waypoint controller will first be presented. The multi-rate controller flying through the same environment will follow.

The optic flow algorithms are actively running. However, the scene reconstruction has been replaced by representative path planning. Scene reconstruction techniques are fairly well established in the literature. Developments are still being pursued in research, but the overall concept remains constant. This thesis does not present any new developments on the topic. Therefore, in place of an active scene reconstruction loop, waypoints will be used to represent the output of scene reconstruction. An outer-loop guidance and navigation system allows the vehicle to follow these waypoints. It utilizes a vision-based homing controller to track toward a particular point in the image plane. The waypoints are chosen to mimic the expected results of ongoing SFM research performed at the University of Florida and the University of South Carolina. This approach is deemed sufficient since scene reconstruction is not the focus of this thesis. The nature of the scene reconstruction controller is still represented.
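As a minimal illustration of such a homing controller, the sketch below simply steers so that the tracked point's horizontal image coordinate is driven toward zero; the proportional gain and the use of ν alone are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def homing_heading_command(nu_target, f=1.0, gain=1.0):
        """Command a heading change that centers a tracked image point laterally."""
        return gain * np.degrees(np.arctan2(nu_target, f))

    # A waypoint projected slightly right of center commands a small right turn.
    print(homing_heading_command(0.08))   # approximately 4.6 deg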


For this simulation, it is assumed that perfect feature point extraction and tracking is available. There is also assumed to be perfect state estimation, perfect terrain mapping, and perfect path planning. Clearly these assumptions are unrealistic but will suffice to demonstrate the differences between the various controllers.

This simulation could also be used to represent the nonlinear dynamics of a micro air vehicle by scaling down the environment. An autonomous micro air vehicle with the capability of GPS waypoint navigation with reactive obstacle avoidance would have many practical applications. A credible micro air vehicle model, unfortunately, is not available. The nonlinear F-16 model in a scaled-up urban environment will instead be used to demonstrate the concept.

7.2 Actuator Controllers

Three controllers were used to determine the actuator deflection commands [37]. The elevator received its commands from the altitude controller. The ailerons were controlled by the turn controller. Finally, the thrust was controlled by the speed controller. No rudder was used during this simulation.

The altitude hold controller used is shown in Figure 7–2. This system inputs the desired altitude provided by the vision-based controllers and outputs the actual altitude. It is a negative-feedback proportional-integral controller also utilizing the aircraft states of pitch angle θ and pitch rate q. The block A_e represents the elevator actuator. The block P represents the F-16 plant model. The variable δ_e is a constant trim condition equal to the value of -2.677. For this simulation, the desired altitude h_c remained constant at 15,000 ft. This controller proved to be satisfactory by remaining within an envelope of 2%, or 300 ft, of the nominal desired value of 15,000 ft throughout all of the simulations.

The turn controller tracks a commanded heading Δψ_c by altering the roll command. This system also uses the plant outputs of roll angle φ and roll rate p. The input value of Δψ_c was constantly changing due to the output of the vision-based algorithms.
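A minimal sketch of the altitude-hold structure just described is shown below. The gains, sign conventions, and signal blending are hypothetical placeholders; only the overall form (PI action on altitude error plus pitch-attitude and pitch-rate feedback around the elevator trim) follows the description of Figure 7–2.

    class AltitudeHoldPI:
        """Proportional-integral altitude hold with pitch damping (sketch only)."""

        def __init__(self, kp, ki, k_theta, k_q, delta_e_trim=-2.677):
            self.kp, self.ki = kp, ki              # PI gains on altitude error
            self.k_theta, self.k_q = k_theta, k_q  # pitch attitude and rate feedback
            self.delta_e_trim = delta_e_trim       # elevator trim value
            self.integral = 0.0

        def elevator_command(self, h_c, h, theta, q, dt):
            err = h_c - h                          # altitude error, ft
            self.integral += err * dt              # integrated altitude error
            return (self.delta_e_trim
                    + self.kp * err
                    + self.ki * self.integral
                    - self.k_theta * theta         # pitch attitude feedback
                    - self.k_q * q)                # pitch rate damping

    # Hypothetical gains; the commanded altitude in this example is 15,000 ft.
    controller = AltitudeHoldPI(kp=0.01, ki=0.001, k_theta=2.0, k_q=1.0)
    print(controller.elevator_command(h_c=15000.0, h=14950.0, theta=0.02, q=0.0, dt=0.01))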


Figure 7–2: Altitude Hold Controller

Again, the P block represents the F-16 plant model. The A_a block represents the aileron actuators.

Figure 7–3: Turn Controller

The speed controller is a negative-feedback proportional controller. It tracks the commanded velocity v_c. The block A_t represents the thrust actuators. The block P denotes the F-16 plant model. A constant value of 600 ft/s was used for v_c.

7.3 Control based on Optic Flow

The flight path of the F-16 in Figure 7–5 was formulated using only the optic flow controller. Since it is desired to maintain a constant altitude, the control inputs of Δψ_c and h_c are used. The aircraft is initially positioned at the desired altitude.


Figure 7–4: Speed Controller

This altitude is maintained by the altitude controller. Therefore the mission is a 2D obstacle avoidance problem instead of a 3D obstacle avoidance problem. This reduction allows for the assumption that the trapezoidal region of the image plane which encompasses the desired altitude is reduced to a line.

The results shown in Figure 7–5 were created using the following algorithm. For every timestep of 0.01 sec, the controller evaluates the optic flow in the image plane. It computes the cost for each point along the μ_v line in the unrotated image. The point of lowest cost along this line is then chosen as the optimal point. It is the assumed safest direction toward which to navigate the aircraft. The change in heading Δψ is then calculated using this point.

In the resulting path of Figure 7–5, the vehicle obviously avoids the obstacles but does not even approach the desired waypoint. It detected a large amount of optic flow in the right half of the image plane and consequently turned left. By not flying toward the desired destination, this controller did not meet mission objectives. The path is not entirely unexpected given the simplistic nature of the implemented controller. Several approaches could be used that are much more advanced than simply steering toward the least flow. However, this simplistic controller is used because it can operate at an extremely high rate. As such, the results are not indicative of a limitation in optic flow as much as they are indicative of a limitation of this particular high-rate optic flow algorithm.


Figure 7–5: Optic Flow Results

7.4 Control based on Scene Reconstruction

Representing the scene reconstruction algorithm is a set of waypoints. These waypoints, when flown sequentially, form a trajectory which could be developed by a scene reconstruction analysis. The resulting flight path is shown in Figure 7–6. This result is only understood by noting that the SFM updates are computed at the green points noted on the path. The vehicle records an image, shown in Figure 7–7, at the initial time T = 0. Only the building immediately in front of the aircraft and the building to its slight right are visible at this time. It has no data indicating the presence of the third, furthest building. Assuming perfect scene reconstruction based on the information provided to the controller at T = 0, the controller assumes that the environment is as pictured in Figure 7–8. It creates an optimal trajectory for this assumed environment. Assuming that the scene reconstruction takes 2 min, or 120 sec, a resulting trajectory for this image is not created until T = 120. The third building comes into view at approximately T = 45; however, a trajectory resulting from this new information could not be computed until T = 165.


The aircraft would have already collided with the building before the data was processed. The lag associated with the computation time for the analysis is a great risk for the aircraft. For this environment, the reaction is too slow and results in a collision.

Figure 7–6: Scene Reconstruction Results

The flight path in Figure 7–6 is meant to indicate a possible problem with scene reconstruction for aircraft. The approach has been used with considerable success for some systems, such as ground vehicles and even helicopters, that can stop and/or hover. A fixed-wing aircraft, however, is continually moving forward. It cannot remain in a known safe zone until the results are processed; it instead ventures into an unknown environment. The lack of information about the environment can result in collision. This scenario makes the computational delay potentially devastating to aircraft.

7.5 Multi-Rate Control

The multi-rate control scheme is also introduced to the simulation. This simulation uses the low-rate scene reconstruction to compute a path but watches for impending obstacles using the high-rate optic flow while following that path.


Figure 7–7: Camera Image at T=0

Should a threat be detected, optic flow navigates the aircraft to safety. The initial commands from the two schemes, as shown in Figure 7–5 and Figure 7–6, try to steer the vehicle in different directions. So, the threshold σ = 2×10^-5.4 is used such that the optic flow controller does not change the vehicle path until the magnitude of flow passes a critical value. In this way, the vehicle follows the scene reconstruction control but avoids obstacles.

Figure 7–9 presents the results of the multi-rate controller. The path is initially the same as for the scene reconstruction controller. At T = 0, the aircraft extracts the feature point information to input into the scene reconstruction algorithm. The path is then followed until approximately T = 60, at which point the optic flow subroutine senses an impending threat in the flight path. Optic flow assumes control of the aircraft as it navigates through the obstacles. After a predetermined amount of time, chosen here to be 10 sec, scene reconstruction analysis resumes control of the aircraft and attempts to again fly toward the mission target. The time of 10 sec was chosen as sufficient to allow the optic flow controller to make a maneuver to avoid any nearby danger.


Figure 7–8: Environment as Assumed by Scene Reconstruction from Input Taken at T=0

This multi-rate control strategy provided the best overall solution. The optic flow controller lacked the ability to arrive at a desired destination. The scene reconstruction controller was vulnerable to collision due to the computational delay. The multi-rate controller, however, proved able to direct the aircraft in the desired north direction while simultaneously avoiding obstacles in a real-time fashion. It successfully balanced all of the mission objectives, indicating that the multi-rate controller is well suited for this simulation.


Figure 7–9: Multi-Rate Controller Results


CHAPTER 8
EXAMPLE 2

8.1 Setup

The purpose of this example is to demonstrate the multi-rate controller's capability when placed within more complicated surroundings. The simulation used in this example is identical to Example 1 with the exception of the environment. The same nonlinear F-16 model, algorithms, and controllers were used. However, in this example, the environment was more densely populated by obstacles. This example posed a more complicated situation in which a large number of buildings were originally obstructed from view. The environment is shown in Figure 8–1. The aircraft's mission was to advance toward the indicated waypoint while avoiding obstacles along the way. This mission implied flying north and then turning west once the aircraft had sufficiently cleared the long wall.

Figure 8–1: Simulation Environment

This simulation assumes that feature points are located at intervals small enough for the vision algorithms to sufficiently distinguish the center of an obstacle from empty space.


Due to the parameters of this simulation, the feature points are located at an interval of 5000 ft along the obstacles' edges.

8.2 Control based on Optic Flow

The flight path resulting from the optic flow controller is shown in Figure 8–2. The vehicle initially sees the optic flow from the long wall on the left and consequently turns right. At this point, there are no obstacles in its path, so the aircraft continues to fly straight. These results are to be expected of the simplistic controller; it simply navigated the aircraft away from the obstacles on the left. It did not balance the mission objectives of avoiding obstacles while approaching the desired destination. In fact, it flew away from the desired location. Therefore, this controller alone is not well suited for the intended mission.

Figure 8–2: Optic Flow Results

8.3 Control based on Scene Reconstruction

Figure 8–3 portrays the resulting flight path of the scene reconstruction controller. The timesteps are important to understanding these results. It is assumed that at T = 0 the aircraft has previous data informing it of the long wall to its left. The controller then fuses this information with GPS data of its desired destination to create an optimal path toward this destination. This flight path involves flying north around the wall and then turning west toward the destination.


The controller also inputs images at T = 0 to analyze for structure from motion. From the information available to the aircraft at that time, the scene reconstruction analysis assumes the environment to look as it is pictured in Figure 8–4. As shown in Figure 8–5, the seven obstacles located to the west of the wall are not visible in the image. These buildings were therefore not accounted for in the scene reconstruction. Assuming again that the SFM analysis takes 2 min, the results are not available until T = 120. By this time, the aircraft would have already collided with a building. Even if the controller had been fortunate enough to input images at the moment the obstacles first came into view, approximately around T = 45 (see Figure 8–5), the processing lag is still too great to avoid a collision. Therefore, although the controller does attempt to direct the aircraft toward the desired destination, it is not successful in avoiding obstacles. Individually, this controller was not capable of meeting mission objectives.

Figure 8–3: Scene Reconstruction Results

8.4 Multi-Rate Control

Lastly, the results of the multi-rate controller are demonstrated in Figure 8–6. The controller began in scene reconstruction analysis mode, following the same flight path as the scene reconstruction controller. During this time, the optic-flow-based trigger searched for impending obstacles in the flight path.


Figure 8–4: Environment as Assumed by Scene Reconstruction from Input Taken at T=0

Figure 8–5: Aircraft Position at T=0 (Left) and T=45 (Right)

Danger was detected at T = 57, so the switch began passing through the optic flow controller's commands. The optic flow controller directed the aircraft toward the south, away from the obstacle in its path to the west. The switch continued to pass through the optic flow controller's information. After a predetermined amount of time, the aircraft resumed flying toward the target destination. This continued until T = 86, at which point the optic flow controller commanded a swerve to the right to avoid a building on its left. The flight concluded under scene reconstruction control. This flight path demonstrates the capability of the multi-rate controller to achieve the mission objectives of reaching a desired destination while avoiding obstacles in the path.


Figure 8–6: Multi-Rate Controller Results


CHAPTER 9
CONCLUSION

Vision-based control is a viable approach toward vehicle autonomy. Cameras are relatively small and lightweight, making them especially well suited for vehicles with small payloads. They provide a rich stream of data describing their environment. Vision-based controllers can then analyze these data to make intelligent control decisions.

The amount of information produced by a vision-based analysis is traded against the length of time taken to conduct the analysis. Rough inferences about the environment can be made quickly, as demonstrated by the optic flow controller. More intensive investigations, such as scene reconstruction, require more time. The additional detail provided by scene reconstruction causes a lag between data acquisition and the implementation of the control decision. Vehicles such as airplanes, which are continually in motion, are then forced to navigate with outdated information during this delay. This time lag may cause a collision in dense environments for which the controller does not have adequate reaction time.

Path-planning optimization algorithms for navigation are most reliable when in-depth information about the environment is provided. It is therefore desirable to use a detailed vision-based analysis, such as scene reconstruction, whenever possible. The time lag associated with scene reconstruction causes this information to be potentially outdated, though, especially in dense environments. Optic flow provides less detailed information but operates at much higher rates, making its data readily available for analysis.

This thesis introduces a multi-rate controller which demonstrates the value of utilizing different control methodologies based on the characteristics of the environment.


The controller uses slow scene reconstruction analysis for reliable path planning in sparse environments. It uses fast optic flow control for obstacle avoidance in dense environments and when obstacles are within close range. Optic flow operates at fast rates and is better suited to quickly react to impending threats. The two controllers are monitored by a switch which evaluates the current environment and selects the most suitable controller.

The benefits of the multi-rate controller are demonstrated through two simulations. These simulations apply the controller to a nonlinear F-16 model flying within a scaled-up urban environment. The simulations are limited by the assumptions of perfect feature point detection and tracking, but suffice to demonstrate the applicability of the multi-rate controller. First, each vision-based feedback controller, optic flow and scene reconstruction, is independently implemented and analyzed. Both fail to meet mission objectives in each case. Next, the multi-rate controller is implemented. Its flight paths demonstrate the ability to autonomously achieve obstacle avoidance while still maintaining a mission objective for navigation purposes.


REFERENCES

[1] G. Baratoff, C. Toepfer, M. Wende, and H. Neumann, "Real-Time Navigation and Obstacle Avoidance from Optic Flow on a Space-Variant Map," IEEE International Symposium on Intelligent Control, Gaithersburg, MD, September 1998, pp. 289-294.
[2] G.L. Barrows, "Future Visual Microsensors for Mini/Micro-UAV Applications," 7th IEEE International Workshop on Cellular Neural Networks and their Applications, July 2002, pp. 498-506.
[3] G.L. Barrows, J.S. Chahl, and M.V. Srinivasan, "Biomimetic Visual Sensing and Flight Control," Presented at the 2002 Bristol UAV Conference, Bristol, UK, April 2002.
[4] G.L. Barrows and C. Neely, "Mixed-Mode VLSI Optic Flow Sensors for In-Flight Control of a Micro Air Vehicle," Presented at the SPIE 45th Annual Meeting, San Diego, CA, July 2000.
[5] R.S. Causey and R. Lind, "Aircraft-Camera Equations of Motion," AIAA Journal of Aircraft, submitted.
[6] P. Chang and M. Herbert, "Robust Tracking and Structure from Motion with Sample Based Uncertainty Representation," IEEE International Conference on Robotics and Automation, Washington, D.C., May 2002, pp. 3030-3037.
[7] A. Dev, B. Krose, and F. Groen, "Heading Direction for a Mobile Robot from Optical Flow," IEEE International Conference on Robotics and Automation, Leuven, May 1998, Volume 2, pp. 1578-1583.
[8] A. Dev, B. Krose, and F. Groen, "Navigation of a Mobile Robot on the Temporal Development of the Optic Flow," Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, September 1997, Volume 2, pp. 558-563.
[9] T.M. Dijkstra, P.R. Snoeren, and C.C. Gielen, "Extraction of 3D Shape from Optic Flow: a Geometric Approach," IEEE Proceedings of Computer Vision and Pattern Recognition, Seattle, WA, June 1994, pp. 35-140.
[10] C. Fermuller, P. Baker, and Y. Aloimonos, "Visual Space-Time Geometry-A Tool for Perception and the Imagination," Proceedings of the IEEE, July 2002, Volume 90, Issue 7, pp. 1113-1135.


[11] "Human Reaction Time," http://www.gecdsb.on.ca/sub/projects/psl/senior/24/reaction.htm. Accessed June 20, 2005.
[12] T. Fukao, K. Fujitani, and T. Kanade, "An Autonomous Blimp for a Surveillance System," IEEE International Conference on Intelligent Robots and Systems, October 2003, Volume 2, pp. 1920-1825.
[13] M.A. Garratt and J.S. Chahl, "Visual Control of an Autonomous Helicopter," Proceedings of the 41st Aerospace Sciences Meeting and Exhibit, Reno, NV, January 2003.
[14] E. Hagen and E. Heyerdahl, "Navigation by Optical Flow," 11th IAPR International Conference on Pattern Recognition, The Hague, Netherlands, August 1992, Volume 1, pp. 700-703.
[15] B.K. Horn, Robot Vision, MIT Press, Cambridge, MA, 1986.
[16] B.K. Horn and B.G. Schunck, "Determining Optical Flow," Artificial Intelligence, 1981, Volume 17, pp. 185-203.
[17] S. Hrabar and G. Sukhatme, "A Comparison of Two Camera Configurations for Optic-Flow Based Navigation of a UAV through Urban Canyons," IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2004, Volume 3, pp. 2673-2680.
[18] S. Hrabar, G. Sukhatme, P. Corke, K. Usher, and J. Roberts, "Combined Optic-Flow and Stereo-Based Navigation of Urban Canyons for a UAV," Submitted to the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.
[19] T. Jebara, A. Azarbayejani, and A. Pentland, "3D Structure from 2D Motion," IEEE Signal Processing Magazine, May 1999, pp. 66-84.
[20] T. Kanade, O. Amidi, and Q. Ke, "Real-Time and 3D Vision for Autonomous Small and Micro Air Vehicles," IEEE Conference on Decision and Control, December 2004, Volume 2, pp. 1655-1662.
[21] M.A. Lewis, "Detecting Surface Features During Locomotion Using Optic Flow," IEEE International Conference on Robotics and Automation, May 2002, Volume 1, pp. 305-310.
[22] L.M. Lorigo, R.A. Brooks, and W.E.L. Grimson, "Visually-Guided Obstacle Avoidance in Unstructured Environments," IEEE International Conference on Intelligent Robots and Systems, Grenoble, September 1997, Volume 1, pp. 373-379.


[23] B. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the DARPA Image Understanding Workshop, Washington, D.C., 1981, pp. 121-130.
[24] P.C. Merrell, D.J. Lee, and R.W. Beard, "Obstacle Avoidance for Unmanned Air Vehicles Using Optical Flow Probability Distributions," SPIE Optics East, Robotics Technologies and Architectures, Mobile Robot XVII, Philadelphia, PA, October 2004, Volume 5609-04.
[25] B.G. Mobasseri, "Virtual Motion: 3-D Scene Recovery Using Focal Length-Induced Optic Flow," IEEE International Conference on Image Processing, Austin, TX, November 1994, Volume 3, pp. 78-82.
[26] D. Nair and J.K. Aggarwal, "Moving Obstacle Detection from a Navigating Robot," IEEE Transactions on Robotics and Automation, June 1998, Volume 14, Issue 3, pp. 404-416.
[27] R.C. Nelson, Flight Stability and Automatic Control, Second Edition, WCB/McGraw-Hill Publishing, Boston, MA, 1998.
[28] Z. Rahman, R. Inigo, and E.S. McVey, "Algorithms for Autonomous Visual Flight Control," International Joint Conference on Neural Networks, Washington, DC, June 1989, Volume 2, pp. 619.
[29] S. Rathinam and R. Sengupta, "Safe UAV Navigation with Sensor Processing Delays in an Unknown Environment," IEEE Conference on Decision and Control, December 2004, Volume 1, pp. 1081-1086.
[30] F. Ruffier and N. Franceschini, "Visually Guided Micro-Aerial Vehicle: Automatic Take Off, Terrain Following, Landing and Wind Reaction," IEEE International Conference on Robotics and Automation, April 2004, Volume 3, pp. 2339-2346.
[31] F. Ruffier, S. Viollet, S. Amic, and N. Franceschini, "Bio-Inspired Optical Flow Circuits for the Visual Guidance of Micro-Air Vehicles," International Symposium on Circuits and Systems, May 2003, Volume 3, pp. 846-849.
[32] G. Sandini, V. Tagliasco, and M. Tistarelli, "Analysis of Object Motion and Camera Motion in Real Scenes," IEEE International Conference on Robotics and Automation, April 1986, Volume 3, pp. 627-633.
[33] B. Sinopoli, M. Micheli, G. Donato, and T.J. Koo, "Vision Based Navigation for an Unmanned Aerial Vehicle," IEEE International Conference on Robotics and Automation, 2001, Volume 2, pp. 1757-1764.
[34] K.-T. Song and J.-H. Huang, "Fast Optical Flow Estimation and its Application to Real-time Obstacle Avoidance," IEEE International Conference on Robotics and Automation, Seoul, Korea, May 2001, Volume 3, pp. 2891-2896.


[35] B. Sridhar and G.B. Chatterji, "Vision-Based Obstacle Detection and Grouping for Helicopter Guidance," AIAA Journal of Guidance, Control, and Dynamics, September 1994, Volume 17, Number 5, pp. 908-914.
[36] M.J. Stephens, R.J. Blissett, D. Charnley, E.P. Sparks, and J.M. Pike, "Outdoor Vehicle Navigation Using Passive 3D Vision," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, June 1989, pp. 556-562.
[37] B.L. Stevens and F.L. Lewis, Aircraft Control and Simulation, Wiley, Hoboken, NJ, 2003.
[38] N.O. Stoffler, T. Burkert, and G. Farber, "Real-Time Obstacle Avoidance Using an MPEG-Processor-based Optic Flow Sensor," IEEE International Conference on Pattern Recognition, Barcelona, September 2000, Volume 4, pp. 161-166.
[39] C. Taylor, D. Kriegman, and P. Anandan, "Structure and Motion in Two Dimensions from Multiple Images: A Least Squares Approach," Proceedings of the IEEE Workshop on Visual Motion, Princeton, NJ, October 1991, pp. 242-248.
[40] H. Wang and M. Brady, "A Structure-from-Motion Algorithm for Robot Vehicle Guidance," Proceedings of the Intelligent Vehicles 1992 Symposium, Detroit, MI, July 1992, pp. 30-35.
[41] W.M. Wells, "Visual Estimation of 3-D Line Segments from Motion-A Mobile Robot Vision System," IEEE Transactions on Robotics and Automation, December 1989, Volume 5, Issue 6, pp. 820-825.
[42] Y.-S. Yao and R. Chellappa, "Dynamic Feature Point Tracking in an Image Sequence," Proceedings of the 12th IAPR International Conference on Pattern Recognition, Jerusalem, Israel, October 1994, Volume 1, pp. 654-657.
[43] G.-S. Young, T.-H. Hong, M. Herman, and J.C.S. Yang, "Obstacle Detection for a Vehicle Using Optical Flow," Proceedings of the Intelligent Vehicles 1992 Symposium, Detroit, MI, June 1992, pp. 185-190.


BIOGRAPHICAL SKETCH

Amanda Arvai was born in Patuxent River, Maryland, on January 17, 1982. Her family moved around the country for her first several years before finally returning to southern Maryland. Most of her childhood hours were spent playing soccer and softball. This carried over to her high school years, when she began running on the track team. After graduating from Leonardtown High School in 2000, she attended the University of Notre Dame, in snowy South Bend, Indiana. Go Irish! She spent her summers working on the Patuxent River Naval Base for Veridan Engineering on the F/A-18 Hornet Team. She also spent a summer working at Honeywell in South Bend, Indiana, where she worked on the Joint Strike Fighter auxiliary power unit fuel control. In 2004, she graduated from Notre Dame with a degree in mechanical engineering. She attended the University of Florida for her master's degree, where she worked under the advisement of Dr. Rick Lind. She married her husband, Bryan Arvai, in August of 2005. They are currently moving to Redondo Beach, CA, where Amanda will work for Northrop Grumman Space Technology.


Permanent Link: http://ufdc.ufl.edu/UFE0013261/00001

Material Information

Title: Vision-Based Navigation Using Multi-Rate Feedback from Optic Flow and Scene Reconstruction
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0013261:00001

Permanent Link: http://ufdc.ufl.edu/UFE0013261/00001

Material Information

Title: Vision-Based Navigation Using Multi-Rate Feedback from Optic Flow and Scene Reconstruction
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0013261:00001


This item has the following downloads:


Full Text











VISION-BASED NAVIGATION USING MULTI-RATE FEEDBACK
FROM OPTIC FLOW AND SCENE RECONSTRUCTION

















By

AMANDA ARVAI

















A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2005

































Copyright 2005

by

Amanda Arvai
















I dedicate this work to the best thing that ever happened to me,

my husband, Bryan.















ACKNOWLEDGMENTS

This work was supported jointly by the Air Force Research Laboratory and the

Air Force Office of Scientific Research under F49620-03-1-0381 with Johnny Evers,

Neal Glassman, Sharon Heise, and Robert Sierakowski as project monitors. I would

also like to sincerely thank my advisor, Dr. Rick Lind, for his invaluable guidance and

support throughout my time at the University of Florida. Special thanks also to my

supervisory committee members, Dr. Warren Dixon and Dr. Carl Crane, for their time

and consideration. This work would not be possible without the members of the Flight

Controls Lab, Joe Kehoe, Ryan Causey, Mujahid Abdulrahim, and Adam Watkins, who

have always been ready with a helping hand. Finally, I would like to thank my father,

Denny Roderick, who gave me a love for aerospace; my mother, Mary Roderick, who

taught me the dedication needed to complete this work; and my sister, Suzanne Noe,

who always gave me a model to aspire toward.















TABLE OF CONTENTS
page

ACKNOWLEDGMENTS ............................ iv

LIST OF FIGURES ................... ............ vii

ABSTRACT .................... ......... ....... ix

CHAPTER

1 INTRODUCTION ................... ......... 1

1.1 Motivation ................... ............ 1
1.2 Background ................... .......... 4
1.3 Overview ................... ............ 8

2 AIRCRAFT EQUATIONS OF MOTION ......... ........ .... 9

3 VISION BASED CONTROL USING FEATURE POINTS .......... 13

4 SCENE RECONSTRUCTION ................... ...... 16

4.1 Concept ................... ........... 16
4.2 Strategy ......... ................ ....... 17
4.3 Advantages and Risks ................... ....... 19

5 OPTIC FLOW .......... ................ ....... 21

5.1 Concept ................... ............. 21
5.2 Strategy ...... .... ................ ...... 27
5.3 Advantages and Risks ................... ....... 33

6 MULTI-RATE CONTROLLER ......... ................ 37

6.1 Concept ......... ................ ....... 37
6.2 Strategy ......... ................ ....... 38

7 EXAMPLE 1 ...... .... ................. ...... 41

7.1 Setup ....... ............ ........ ...... 41
7.2 Actuator Controllers ................... ...... 43
7.3 Control based on Optic Flow .......... ....... ...... 44
7.4 Control based on Scene Reconstruction ....... ......... 46
7.5 Multi-Rate Control ................... ........ 47









8 EXAMPLE 2 .............. .. ..................... 51

8.1 Setup ........ ........... ............... 51
8.2 Control based on Optic Flow ............... . .. 52
8.3 Control based on Scene Reconstruction . . . 52
8.4 Multi-Rate Control .................. ......... 53

9 CONCLUSION .................. . . . ..56

REFERENCES ................... .... .... ....... 58

BIOGRAPHICAL SKETCH .............. . . . 62















LIST OF FIGURES
Figure page

2-1 Coordinate Systems .............. . . .... 9

3-1 Feature Mapping from 3D Space to 2D Image Plane . . ... 14

3-2 Vector Diagram ................ . . .... 15

4-1 Control Scheme using Scene Reconstruction ..... . ..... 19

5-1 Camera Position ................ . . ...... 22

5-2 Feature Point Positions in Image Plane at Timesteps 1 (Left) and 2 (Right)
during Straight and Level Flight ................... . 22

5-3 Corresponding Optic Flow during Straight and Level Flight . ... 22

5-4 Feature Point Locations at Timesteps 1 (Left) and 2 (Right) during Roll
Maneuver .................. . . . ..26

5-5 Corresponding Optic Flow during Roll Maneuver . . .... 26

5-6 Optic Flow (Left) and Scaled Optic Flow with Rotational Components
Removed (Right) during Roll Maneuver ................ ..27

5-7 Optic Flow (Left) and Optic Flow with Rotational Components Removed
(Right) of Straight and Level Flight ................. ..27

5-8 Velocity Vector Projected on the Image Plane . . ..... 31

5-9 Optic Flow with Rotational Components Removed (Left) and Cost Func-
tion 3D plot (Right) .................. ......... .. 33

5-10 Feature Point Location at Timestep 1 (Left) and Timestep 2 (Right) . 35

5-11 Resulting Optic Flow ............. . . .... 36

6-1 Closed-Loop System with Multi-Rate Control . . ..... 38

7-1 Simulation Environment .................. ....... 42

7-2 Altitude Hold Controller .................. ....... 44

7-3 Turn Controller .................. ............. .. 44

7-4 Speed Controller .................. .......... .. .. 45









7-5 Optic Flow Results .................. .......... .. 46

7-6 Scene Reconstruction Results ............... ..... ..47

7-7 Camera Image at T=0 ............. . . ..... 48

7-8 Environment as Assumed by Scene Reconstruction from Input Taken at
T=0 .... ..... ............ ... .... .. 49

7-9 Multi-Rate Controller Results .................. ...... 50

8-1 Simulation Environment .................. ....... 51

8-2 Optic Flow Results .................. ........... .. 52

8-3 Scene Reconstruction Results .................. ...... ..53

8-4 Environment as Assumed by Scene Reconstruction from Input Taken at
T=0 . . . . . . . 54

8-5 Aircraft Position at T=0 (Left) and T=45 (Right) . . .... 54

8-6 Multi-Rate Controller Results .................. ...... 55















Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

VISION-BASED NAVIGATION USING MULTI-RATE FEEDBACK
FROM OPTIC FLOW AND SCENE RECONSTRUCTION

By

Amanda Arvai

December 2005

Chair: Richard C. Lind, Jr.
Major Department: Mechanical and Aerospace Engineering

Due to an increasing demand for autonomous vehicles, considerable attention has

been focused on vision-based control. Cameras are small, lightweight, and relatively

inexpensive, making them an attractive alternative to other more traditional sensors,

such as infrared and radar. Cameras provide a rich stream of data to describe the

vehicle's environment. These data can be analyzed to provide the controller with

information such as the relative size and location of obstacles. Intelligent control

decisions can then be made using this information in order to navigate the vehicle

safely through the environment. This thesis focuses upon two fairly established vision-

based control methodologies, optic flow and scene reconstruction. The advantages

and disadvantages of each approach are analyzed. A multi-rate controller which

merges these two approaches is introduced. It attempts to emphasize the advantages

of each approach by alternating between the two, based on the characteristics of the

environment. Two simulations validate the benefits of this multi-rate controller for the

purposes of reactive obstacle avoidance and navigation. These simulations include a

nonlinear F-16 model flying through a virtual scaled-up urban environment. Optic flow,

scene reconstruction, and the multi-rate control approaches are applied to autonomously









control the aircraft. The multi-rate controller is singularly capable of achieving the

mission objectives by reaching the desired destination while simultaneously avoiding

obstacles in its path.















CHAPTER 1
INTRODUCTION

1.1 Motivation

In today's technology-driven world, an increasing emphasis has been placed on

autonomous systems. From industrial applications to mobile robots, autonomy allows

for minimal human effort and oftentimes leads to results which are superior to human-

in-the-loop systems. Autonomy can be used for a variety of applications, most of

which are focused on either protecting human life or increasing its quality.

Many of man's most treacherous missions can be accomplished by machines.

Robots can excavate mine-fields. Drones can survey war zones. Already such ma-

chines have protected many lives. As autonomy technology advances, it will undoubt-

edly be applied to many more applications for similar purposes. If a machine can

successfully accomplish a job, it is not necessary to risk precious human life. This

motivation is a driving force for autonomy.

Other missions require autonomy because a human is simply incapable of accom-

plishing them alone. These missions often require high precision, large calculations,

or fast response times. For example, an experiment may require an exact balance of

chemicals. A manufacturing process may require an extremely precise measurement.

These systems demand a level of precision that is unreachable when allowing for

human error; they require autonomy. Other systems are simply too complicated for a

human to process. Coordinating a swarm of drones, for example, requires the evalua-

tion of extremely large amounts of data. The tracking of several different targets is also

very computationally intensive. Computers are better suited than humans to account

for these large amounts of data. Finally, human reaction times cause unacceptable

delays for some systems. The time for a human to see, process, and begin to respond









to data averages around 0.6 s [11]. For systems operating at high rates, such as fast-

maneuvering air vehicles or tracking torpedoes, this delay can be devastating. These

types of systems also demand autonomy.

All of the above reasons have been driving causes for advancing autonomous

technology. A topic of particular interest to this thesis is the applicability of autonomy

to unmanned aerial vehicles (UAVs). UAVs are being aggressively pursued for both

military and commercial applications. For defense work, they are well suited for

enemy targeting and monitoring. They have recently been implemented in the Iraq

war effort. Commercially, their applications vary greatly. They are used to survey and

regulate forest fires, monitor high value crops such as the thermal imaging of grape

vineyards, and even accomplish aquatic search-and-rescue missions. Most of these

missions have been controlled remotely. Allowing these UAVs to operate autonomously

would enable humans to otherwise apply their time to further support their missions.

A subset of UAVs in large demand is micro aerial vehicles (MAVs). Autonomous

MAVs would be ideal for detecting biological agents throughout a city. They are

agile enough to weave through buildings, granting them access to areas that were

previously unattainable. MAVs could also be used to monitor a nearby enemy that

would otherwise be hidden from sight. Since they are lightweight, they are convenient

to transport. They would be conducive to being stored in a police car's trunk, to be

launched at a moment's notice for surveillance and tracking of a suspect. Furthermore,

MAVs could be used to delicately place a lightweight sensor in enemy territory. Given

their agility and stealth, the applications are vast, thus driving the desire for such

technology.

Research has also been employed to create teams of MAVs and UAVs in con-

junction with autonomous ground vehicles. Oftentimes these teams use the aerial

vehicles to detect and monitor a target and use a ground vehicle to physically approach

the target to perform a given mission. Independent of MAVs and UAVs, autonomous









ground vehicles are sought for excavating land mines and for bomb disposal. They

are applicable to work in nuclear engineering. More mundane applications include the

mowing of large fields and automated parallel parking.

Among the various approaches taken in the advancement of autopilots for

autonomous vehicles, this thesis will focus upon vision-based control. Vision is

perhaps the primary sensor used by human pilots. By seeing the world around them,

pilots navigate throughout their environment with the ability to travel in a logical path

and avoid obstacles. These navigation decisions seem only natural to a human pilot.

Autonomous vehicles with vision-based control use a camera. The same information is

provided to the vision-based autopilot as it is to the human pilot. The task is to create

a method for the autopilot to interpret the images to make logical control decisions.

There are several advantages to vision-based control. The sensors are relatively

small and lightweight, making them appealing to vehicles with small payloads.

Compared to alternative sensors, cameras are relatively inexpensive. Cameras also

provide a real-time stream of information about their environment. The cameras can

be rotated about their axes to provide an increased field of view. The field of view

can also be expanded through the use of multiple cameras, a technique known as

stereo-vision.

Vision-based control is applicable for enhancing many of the applications pre-

viously mentioned. For example, it can be applied to enemy tracking and targeting,

path planning and obstacle avoidance, crop monitoring, and forest fire regulation,

among several others. It is also very applicable to MAVs, which have an extremely

small payload due to their size and weight restrictions. MAVs simply cannot afford

to carry all of the sophisticated radar, GPS, sonar, gyros, accelerometers, altimeters,

and other types of sensors commonly available to other aircraft. Furthermore, they

have significantly reduced processing power available onboard. Since most current

commercially-available autopilots rely on several sensors in conjunction with hefty









processors, these traditional autopilots are not very applicable for MAVs. Instead,

vision can be used to provide extensive data about the environment.

Overall, vision-based control is very applicable for use onboard autonomous

vehicles. As image processing techniques have progressed, the application possibilities

have increased. It is now desired to further advance the control theory for vision-based

autopilots to make them more suitable for real-world applications.

1.2 Background

Vision-based navigation techniques have been explored via many different

avenues. Most of these techniques use image processing to extract information about

the environment which is then used for control purposes. Among these techniques, the

approaches related to optic flow and scene reconstruction are of direct interest to this

thesis.

The concept of optic flow was first introduced by B.D. Lucas and T. Kanade [23]

in conjunction with B.K. Horn and B.G. Schunck [16] in 1981. Since then, techniques

employing the concept have been widely investigated. It has been used for various

applications, the majority of which include the navigation and control of autonomous

vehicles. Generally, optic flow is induced by relative motion between the camera and

the surroundings. However, different methods, such as an optical zoom, have been

investigated [25]. Most approaches strive to be independent of previous knowledge of

the environment, although others still require topographical maps, model optic flow

fields, etc. [14,33].

Optic flow is commonly incorporated into the theory behind ground vehicle

autonomous control [21]. By analyzing the peripheral optic flow, these vehicles can

navigate through corridors [1, 8]. This is oftentimes accomplished by balancing the

optic flow on the left and the right of the image plane. Analyzing the magnitude

of the peripheral optic flow permits speed control [1]. The slope and consequently

traversability of terrain can also be computed using optic flow techniques [37, 43].









This is commonly applied to bipeds and other mobile robots. In addition, the optic

flow located in the center of the image is often used to compute the time to contact

of a feature [1,34, 38]. This allows for obstacle avoidance which is necessary for all

autonomous vehicles. Perhaps a more unusual application includes guide robots for the

visually impaired [34].

Similar technologies have been applied to unmanned aerial vehicles. Optic flow

enables aircraft to fly through urban canyons [17, 18] and even to intercept objects

mid-flight [28]. It is possible to calculate the collision points in the image plane based

on the optic flow and the projected path of motion [7]. Assumptions on the time to

contact are used for the autonomous landing of aerial vehicles [30,43]. Similarly, optic

flow has been applied to helicopters for terrain-following [30, 31] and hovering [13].

Optic flow sensors are also very fast and usually lightweight, making them very

applicable to micro air vehicles [2,3,4,31].

Oftentimes the inspiration for the optic flow technology has come from biology.

Barrows uses inspiration from the biology of flying insects in his application of optic

flow to micro air vehicles [2, 3, 4]. Similar inspiration has been used for terrain fol-

lowing on micro-helicopters [31]. Also, a space-variant map used to reduce peripheral

optic flow resolution was inspired by feline and primate retinas [1].

Other research has focused on the optimization of optical flow algorithms.

Intensity gradients as features for pattern matching have been used in combination

with a brightness constraint in order to create a fast optic flow algorithm [34]. It has

been shown that compressed peripheral optical flow can reduce input data for faster

computation [1]. Kalman filters have also been used for more robust feature point

tracking applications in relation to optic flow [13, 14].

Feature point analysis encompasses all methods of vision based navigation which

rely on the extracting of points of interest in an image, tracking them throughout a

period of time, and extracting information from the images. This information can then









be used for control purposes. Advancement in this field includes research concerning

feature point detection, tracking, and algorithms for analysis.

Feature points are points of interest in an image and generally correspond to

corners, edges, or sharp color gradients. Specifically, some approaches extract feature

points using a corner detection algorithm [40]. Others detect points based on brightness

gradients; red, green, and blue (RGB) color; and hue and saturation values (HSV) [22].

After feature points have been detected, it is often desired to track them between

frames. Advanced probability methods have been used to estimate correspondence [24].

This probability also indicates the likelihood of obstacles in that location. This

technique eliminates the requirement for optic flow at each feature point [24]. Tracking

has also been accomplished using sample-based representation instead of a traditional

Gaussian representation of the feature point uncertainty [6]. Many feature point

tracking methods employ Kalman filters [36,40,42].

The other vision-based control methodology to be addressed in addition to optic

flow is scene reconstruction, or, more formally, Structure from Motion (SFM). The

concept was popularized during the 1990s and general procedures are fairly well

established. However, research is still ongoing to achieve further automation and

precision [10]. The concept of SFM is to essentially map the 2-D points on the

image plane into a virtual 3-D space. First, the 2-D coordinates are translated into

3-D coordinates based on their optic flow and estimated time-to-contact. Next, the

3-D points are connected to form surfaces which correspond to objects in the real

world. Path planning algorithms can then be implemented on the virtual environment.

Due to the extensive computation required in SFM, these algorithms generally run

at slow rates. Therefore, some algorithms for mobile robots use a "start, move, stop,

move again" approach to allow for this processing delay [40]. While this technique is

applicable for ground vehicles, it poses an issue for aerial vehicles. Thus many feature

point analyses have first been applied to ground vehicles [36,40].









Several approaches have been taken to implement the general SFM concept, a

sample of which are referenced here. One approach used geometry and the velocity

field to approximate the shape index of objects in the image [9]. Structure from

motion has also been accomplished by keeping the fixation point of the camera still

throughout the camera's motion [32]. Line segments have been used to create SFM

within an office environment [41]. A least squares approach using vertical lines has

been investigated for the special case when the feature points and camera position are

confined to a 2-D plane [39]. Also, extracting the vertical motion of edges provides

relative movement information, allowing a ground robot to gradually stop so as to

avoid collision [26]. The majority of the above SFM algorithms required the optic-flow

calculation at each feature point. If these calculations are not available due to noise,

it is possible to still create SFM using likely optical flow values and their associated

probabilities [24].

A structure from motion application of particular interest to this thesis is air

vehicles. An autonomous blimp used a version of the Lucas-Kanade algorithm to

detect and track feature points and employed SFM for improved state estimation [12].

Another approach used prior knowledge of the environment to create and update

a virtual 3-D model of the environment for the navigation of an unmanned aerial

vehicle [33]. Helicopter guidance has been accomplished using range information and

spatial relations from the static image in order to group feature points into objects [35].

This thesis seeks to incorporate optic flow with structure from motion, thereby mitigating the processing delay issue. Another approach addressing the processing

delay commanded a loitering maneuver while results were processed. This resulted

in a flight path which delayed the overall progression of the UAV throughout the

environment [29].









1.3 Overview

This thesis will demonstrate an approach to create a controller which integrates the

two established vision-based control techniques of optic flow and scene reconstruction.

This controller is inherently multi-rate with a fast loop running an optic flow algorithm

for obstacle avoidance and a slower loop running scene reconstruction analysis for

general navigation. First, the two techniques will be investigated in detail to determine

their individual strengths and weaknesses. It is generally established that scene

reconstruction analysis provides reliable path-planning. It involves large computations

though and is therefore performed at slow rates. The slow rates cause a delay between

data acquisition and the implementation of the control decision, causing the control

decision to potentially be based on outdated information. Some vehicles, such as

ground vehicles or helicopters, will either stop or hover until the new information

is processed and it is considered safe to continue. Fixed-wing aircraft do not have

this ability, making the delay all the more detrimental. Comparatively, optic flow

is capable of running at higher rates than scene reconstruction but provides less

detailed information to be used for navigation and control purposes. That information is nonetheless assumed sufficient for detecting and avoiding obstacles in a real-time fashion.

This thesis proposes using optic flow for obstacle detection and avoidance in the

circumstances in which scene reconstruction analysis is too slow for safe navigation. A

switch will determine which of the two loops is active. To demonstrate the capability

of the controller, it is tested in simulation with a nonlinear F-16 model flying in a

virtual environment.















CHAPTER 2
AIRCRAFT EQUATIONS OF MOTION

Three coordinate systems will be used throughout the course of this thesis. These

systems include the inertial, or earth-fixed, basis, which is defined as E. Its axes are chosen as north for e1, east for e2, and down for e3. The body basis B is fixed to the

center of gravity of the aircraft. It is aligned such that b1 is in the plane of symmetry,

pointing out the nose of the aircraft. The b2 axis is perpendicular to the plane of

symmetry, pointing to the right of the nose of the aircraft. The b3 axis is perpendicular

to both the b1 and b2 axes, in the plane of symmetry, pointing downward. It is

important to note that the position of the fixed, or inertial, E frame was chosen such

that at t = 0 it coincided with the B frame. Finally, the camera basis, C, is also fixed to

the center of gravity of the aircraft. However, the camera basis is aligned such that c1 points downward, coinciding with b3. The c2 axis is perpendicular to the plane of symmetry, pointing to the left of the nose of the aircraft. The c3 axis, which is the camera's optic axis, is in the plane of symmetry, pointing out the nose of the aircraft,

also coinciding with b1. These frames are shown in Figure 2-1.






Figure 2-1: Coordinate Systems









This thesis will focus on creating a vision-based autopilot for unmanned aerial

vehicles. Before analyzing the system, the aircraft rigid body equations of motion

(EOM) must first be determined. These equations are very well documented in

literature. An aircraft has six degrees of freedom, including three position components

and three angular components. These components, along with their derivatives, are the

states of the aircraft. χ_E is the vector from the origin of the earth basis to the center of gravity of the aircraft. Its components are defined in Equation 2.1. The angular components φ, θ, and ψ correspond to the roll, pitch, and yaw angles of the aircraft. The components of the linear and angular velocities, V_B and ω^{B/E}, are given in Equations 2.2 and 2.3 respectively. Note that χ_E is defined in the earth frame, whereas V_B is defined in the body frame. ω^{B/E} is defined as the relative angular velocity between the body frame and the earth frame.


\chi_E = x e_1 + y e_2 + z e_3   (2.1)

V_B = u b_1 + v b_2 + w b_3   (2.2)

\omega^{B/E} = p b_1 + q b_2 + r b_3   (2.3)

Using Newton's laws, the first six EOM can be derived. Equations 2.4 through 2.6 are force equations and Equations 2.7 through 2.9 are moment equations. The variables F_x, F_y, and F_z are the aerodynamic forces, L, M, and N are the aerodynamic moments, m is the mass of the aircraft, I_x, I_y, I_z, and I_xz are the aircraft's inertias, and g is the gravitational constant. Standard aircraft notation is used to denote φ, θ, and ψ as the aircraft's roll, pitch, and yaw angles, p, q, and r as the aircraft's roll, pitch, and yaw rates, and u, v, and w as the aircraft's velocities as expressed in the aircraft basis. Subscripts denote the basis in which a vector is expressed. Equations 2.10

through 2.12 use Euler angles and rates to describe the body angular velocities.

Equations 2.13 through 2.15 use Euler angles and body angular velocities to determine









the Euler rates. Finally, Equation 2.16 defines the velocity of the aircraft from the earth

frame using Euler angles and the velocity components from the body frame. Clearly

these equations are highly coupled and nonlinear.


F_x - mg\sin\theta = m(\dot{u} + qw - rv)   (2.4)

F_y + mg\cos\theta\sin\phi = m(\dot{v} + ru - pw)   (2.5)

F_z + mg\cos\theta\cos\phi = m(\dot{w} + pv - qu)   (2.6)

L = I_x\dot{p} - I_{xz}\dot{r} + qr(I_z - I_y) - I_{xz}pq   (2.7)

M = I_y\dot{q} + rp(I_x - I_z) + I_{xz}(p^2 - r^2)   (2.8)

N = -I_{xz}\dot{p} + I_z\dot{r} + pq(I_y - I_x) + I_{xz}qr   (2.9)

p = \dot{\phi} - \dot{\psi}\sin\theta   (2.10)

q = \dot{\theta}\cos\phi + \dot{\psi}\cos\theta\sin\phi   (2.11)

r = \dot{\psi}\cos\theta\cos\phi - \dot{\theta}\sin\phi   (2.12)

\dot{\phi} = p + q\sin\phi\tan\theta + r\cos\phi\tan\theta   (2.13)

\dot{\theta} = q\cos\phi - r\sin\phi   (2.14)

\dot{\psi} = (q\sin\phi + r\cos\phi)\sec\theta   (2.15)

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} =
\begin{bmatrix}
c\theta c\psi & s\phi s\theta c\psi - c\phi s\psi & c\phi s\theta c\psi + s\phi s\psi \\
c\theta s\psi & s\phi s\theta s\psi + c\phi c\psi & c\phi s\theta s\psi - s\phi c\psi \\
-s\theta & s\phi c\theta & c\phi c\theta
\end{bmatrix}
\begin{bmatrix} u \\ v \\ w \end{bmatrix}   (2.16)

where c and s denote the cosine and sine of the indicated angle.
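To make these relationships concrete, a minimal Python sketch of the state derivatives defined by Equations 2.4 through 2.16 is given below. The function name, the argument list, and the assumption that the aerodynamic forces and moments are supplied by an external model are illustrative choices, not part of the F-16 model used later in this thesis.

```python
import numpy as np

def rigid_body_derivatives(state, forces, moments, m, Ix, Iy, Iz, Ixz, g=32.174):
    """State derivatives from Equations 2.4-2.16.

    state   = [x, y, z, phi, theta, psi, u, v, w, p, q, r]
    forces  = (Fx, Fy, Fz) and moments = (L, M, N) in the body frame.
    """
    x, y, z, phi, theta, psi, u, v, w, p, q, r = state
    Fx, Fy, Fz = forces
    L, M, N = moments

    # Force equations (2.4-2.6) solved for the body-axis accelerations
    udot = Fx / m - g * np.sin(theta) - q * w + r * v
    vdot = Fy / m + g * np.cos(theta) * np.sin(phi) - r * u + p * w
    wdot = Fz / m + g * np.cos(theta) * np.cos(phi) - p * v + q * u

    # Moment equations: 2.7 and 2.9 are coupled in pdot and rdot
    A = np.array([[Ix, -Ixz], [-Ixz, Iz]])
    b = np.array([L - q * r * (Iz - Iy) + Ixz * p * q,
                  N - p * q * (Iy - Ix) - Ixz * q * r])
    pdot, rdot = np.linalg.solve(A, b)
    qdot = (M - r * p * (Ix - Iz) - Ixz * (p**2 - r**2)) / Iy   # Equation 2.8

    # Euler-angle kinematics (2.13-2.15)
    phidot = p + q * np.sin(phi) * np.tan(theta) + r * np.cos(phi) * np.tan(theta)
    thetadot = q * np.cos(phi) - r * np.sin(phi)
    psidot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)

    # Position kinematics (2.16): rotate body-frame velocities into the earth frame
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    R_EB = np.array([[cth*cps, sph*sth*cps - cph*sps, cph*sth*cps + sph*sps],
                     [cth*sps, sph*sth*sps + cph*cps, cph*sth*sps - sph*cps],
                     [-sth,    sph*cth,               cph*cth]])
    xdot, ydot, zdot = R_EB @ np.array([u, v, w])

    return np.array([xdot, ydot, zdot, phidot, thetadot, psidot,
                     udot, vdot, wdot, pdot, qdot, rdot])
```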















CHAPTER 3
VISION BASED CONTROL USING FEATURE POINTS

Vision-based control is an active avenue for the pursuit of vehicle autonomy.

Considering that vision is perhaps a pilot's most utilized sensor, it is logical to assume

that vision has applications onboard an autonomous vehicle. When a person is piloting

a craft, the human brain inputs information from the eyes and uses that information

to make assumptions concerning the environment. These assumptions are used to

determine control decisions to keep the person and the craft along a safe trajectory.

The same task is presented to autonomous vision-based control. A camera is placed

onboard a moving vehicle. The camera then projects its environment onto an image

plane and transmits that information to a controller. The controller interprets this

information and determines assumedly safe control decisions. The control theory used

to analyze the image data encompasses the vast field of vision-based control.

This thesis will focus upon the particular area of vision-based control using

feature points. Feature points are defined as points of special significance in the 3D

environment. These points, along with the rest of the camera's environment, are

then projected onto a camera's image plane. These points can be extracted from the

image using a variety of methods including edge detection, color distribution, intensity

variation, or basic differentiation of image properties. Corners, edges, and light sources

are thus obvious possibilities for feature points. These points are then tracked within

the image plane throughout time.

Vision-based control makes logical assumptions concerning the location and

movement of feature points to make larger interpretations about the environment.

A group of feature points can be used to provide information about the overall

environment. For example, the feature points located on the corners of a building can










be used to interpret information concerning the whole building. This information can

then be used for control purposes.

A camera in mathematical terms essentially maps the 3D environment onto a 2D

image plane. The image plane is defined as the plane normal to the camera's central,

or optic, axis, located the focal length f distance away from the camera basis [5].

Figure 3-1 portrays a feature's projection from 3D space to the camera's 2D image

plane. It defines the vector η as spanning from the lens of the camera to a feature point. Its components, η1, η2, and η3, are expressed in the camera basis C.


Figure 3-1: Feature Mapping from 3D Space to 2D Image Plane


Equations 3.1 and 3.2 give the standard pin-hole camera model. It is assumed

that there is no offset of the camera basis to the center of the camera lens. This model

effectively maps the camera's surroundings onto the image plane.


\mu = f\,\frac{\eta_1}{\eta_3}   (3.1)

\nu = f\,\frac{\eta_2}{\eta_3}   (3.2)
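As a minimal sketch (the function and the example numbers below are illustrative, not taken from the thesis), the projection of Equations 3.1 and 3.2 is simply:

```python
def project(eta1, eta2, eta3, f=1.0):
    """Pin-hole projection (Equations 3.1 and 3.2): image-plane coordinates
    (mu, nu) of a feature point with camera-basis components (eta1, eta2, eta3)."""
    return f * eta1 / eta3, f * eta2 / eta3

# e.g., a feature 100 ft along the optic axis, 10 ft along c1 and 5 ft along c2
print(project(10.0, 5.0, 100.0))   # -> (0.1, 0.05)
```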

Figure 3-2 portrays the vectors ξ and χ as the positions of the feature point and the center of mass of the aircraft respectively, both expressed in the inertial earth frame E. The vector η can then be described in terms of ξ and χ, as shown in Equation 3.3.












Figure 3-2: Vector Diagram



\eta_E = \xi_E - \chi_E   (3.3)

Each of the terms in Equation 3.3 can be expressed in the camera basis by using

the appropriate Euler transformation. The result is shown in Equation 3.4. R_CB is the transformation from the body basis B to the camera basis C. R_BE is the transformation from the earth basis E to the body basis B. Again, it is assumed that

the camera is fixed to the aircraft with zero offset from its center of gravity.


\eta_C = R_{CB} R_{BE} (\xi_E - \chi_E)   (3.4)
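A rough sketch of evaluating Equation 3.4 is shown below. The rotation matrices are built from the basis definitions of Chapter 2 under the assumed camera mounting (c1 = b3, c2 = −b2, c3 = b1); the function names and the 3-2-1 Euler construction of R_BE are illustrative choices.

```python
import numpy as np

def R_BE(phi, theta, psi):
    """Rotation from the earth basis E to the body basis B (3-2-1 Euler sequence)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([[cth*cps,               cth*sps,               -sth],
                     [sph*sth*cps - cph*sps, sph*sth*sps + cph*cps,  sph*cth],
                     [cph*sth*cps + sph*sps, cph*sth*sps - sph*cps,  cph*cth]])

# Assumed fixed mounting from Chapter 2: c1 = b3, c2 = -b2, c3 = b1
R_CB = np.array([[0.0,  0.0, 1.0],
                 [0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0]])

def feature_in_camera(xi_E, chi_E, phi, theta, psi):
    """Equation 3.4: relative position of a feature point in the camera basis."""
    return R_CB @ R_BE(phi, theta, psi) @ (np.asarray(xi_E) - np.asarray(chi_E))
```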















CHAPTER 4
SCENE RECONSTRUCTION

4.1 Concept

Various vision-based feedback approaches utilizing feature points exist and are

documented in literature. Among these, structure from motion (SFM) will be focused

upon throughout this thesis. The general concept behind this approach has already

been extensively demonstrated in papers so this section will summarize that previous

work [20].

The process of structure from motion describes an approach using feature points to

estimate the relative location of a vehicle and its environment. Assuming a stationary

environment, SFM uses known aircraft states and the locations of feature points in

the image plane to create a virtual 3D scene of the environment. Conversely, using

a known stationary environment and unknown aircraft states, SFM estimates the

position and attitude of the vehicle. Research is currently being conducted concerning

identifying both unknown vehicle states and an unknown environment.

This thesis will focus upon the situation of known aircraft states and an unknown

stationary environment. SFM essentially creates a virtual 3D scene from a series of 2D

images. For this process, SFM analyzes the position of each feature point throughout

the various images. SFM extracts depth information for each feature point by using

the known aircraft states. The 3D coordinates for a feature point are determined using

its depth and image plane coordinates in addition to the aircraft states. SFM then has

the relative location of each feature point. The feature points are then combined into

groups, forming surfaces. The result is a virtual 3D reconstruction of the camera's

environment.









4.2 Strategy

Structure from motion relies on feature points and their tracking in the image

plane throughout time. A series of images may present a great disparity in feature

points. Some of these feature points can be grouped together to represent an object,

providing relevant information for SFM. Other feature points represent noise, not

providing any relevant or consistent information for SFM. The standard approach for

feature point tracking is the Lucas-Kanade algorithm, which uses template registration

to identify a set of feature points in a series of images.

Assuming that a set of feature points can be found to correspond between two im-

ages, the problem statement of SFM is to extract the three-dimensional coordinates for

each feature point. This is approached by minimizing a cost function. Although linear

cost functions offer appealing simplified mathematical solutions, the cost function is

generally accepted as inherently nonlinear. This requires iterative optimization and

addressing local minima; however, the results of the nonlinear cost functions provide

more accurate results.

Most nonlinear approaches are derived from Horn's relative-orientation prob-

lem [15]. Using two frames from the same moving camera and by assuming a

stationary environment, the technique recovers the environment's 3D structure. The

camera basis at timestep k + 1 can be defined as a translation and rotation of the

camera basis at timestep k. Therefore the projection of point P at timestep k + 1 can

be defined as a translation and rotation of its projection at timestep k. Assuming the

aircraft has knowledge of its relative rotation R and translation t from the aircraft

states, Equation 4.1 can be used to calculate the relative vector η at timestep k + 1 from η at timestep k.












\begin{bmatrix} \eta_{1,k+1} \\ \eta_{2,k+1} \\ \eta_{3,k+1} \end{bmatrix} = R \begin{bmatrix} \eta_{1,k} \\ \eta_{2,k} \\ \eta_{3,k} \end{bmatrix} + t   (4.1)

By combining Equations 3.1 and 3.2 with Equation 4.1, Equation 4.2 can be

derived.


\frac{\eta_{3,k+1}}{f}\begin{bmatrix} \mu_{k+1} \\ \nu_{k+1} \\ f \end{bmatrix} = R\,\frac{\eta_{3,k}}{f}\begin{bmatrix} \mu_{k} \\ \nu_{k} \\ f \end{bmatrix} + t   (4.2)

Each point tracked between the two frames provides the system with three equations and two unknowns, η3,k and η3,k+1, which correspond to the depths of the feature point with respect to the camera at timesteps k and k + 1. A least-squares approach is used for accuracy. The η3,k and η3,k+1 values for each feature point are then recovered, creating a 3D structure [19]. A least-squares approach is also used for video sequences longer than two images. This iterative nonlinear optimization minimizes the error in each feature point's 3D coordinates.
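A minimal per-feature sketch of this depth recovery is shown below, assuming the rotation R and translation t between the two camera frames are known from the aircraft states. The function is illustrative only; the iterative nonlinear refinement over longer image sequences described above is not shown.

```python
import numpy as np

def recover_depths(mu_k, nu_k, mu_k1, nu_k1, R, t, f=1.0):
    """Least-squares depth recovery for one tracked feature (Equation 4.2).

    (mu_k, nu_k) and (mu_k1, nu_k1) are the image-plane coordinates of the same
    feature at timesteps k and k+1; R and t are the known camera rotation and
    translation between the two timesteps.  Returns eta3 at timesteps k and k+1.
    """
    b = np.array([mu_k / f, nu_k / f, 1.0])      # bearing at timestep k
    a = np.array([mu_k1 / f, nu_k1 / f, 1.0])    # bearing at timestep k+1
    # eta3_k1 * a - eta3_k * (R @ b) = t  ->  three equations, two unknowns
    A = np.column_stack((-R @ b, a))
    depths, *_ = np.linalg.lstsq(A, np.asarray(t, dtype=float), rcond=None)
    eta3_k, eta3_k1 = depths
    return eta3_k, eta3_k1
```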

SFM, using vision-based feedback, provides the system with information that

completely describes the flight environment. This known environment can then be

fused with GPS data. This provides the controller with information concerning the

environment with respect to the aircraft and the desired destination. Various path-

planning approaches can then be implemented to maneuver the aircraft through the

known environment toward its goal.

The desired trajectory can be described in the form of waypoints. Waypoints

are points in 3D inertial space that are targets for the aircraft to fly toward. When

flown toward in sequential order, the resulting flight path forms a trajectory similar to










the desired trajectory from the SFM data. In this approach, the outputted waypoints

represent the desired path through the reconstructed scene.

Figure 4-1 demonstrates a structure for the closed loop system. This system

includes two compensator elements: a path planner and a maneuver tracker. The path

planner creates a desired or optimal trajectory through the virtual environment. This

can be implemented using a classical optimization approach or a receding horizon

approach, among others. The maneuver tracker ensures that the aircraft remains along

the optimal trajectory. This tracker can be any type of controller, such as a common

PID or LQR.

Figure 4-1: Control Scheme using Scene Reconstruction (camera, feature extraction, structure from motion, terrain mapping, path planning, and maneuver tracking blocks in a loop around the MAV)

4.3 Advantages and Risks

The results of structure from motion provide a very reliable and very detailed

description of the environment. Compared to an optic flow calculation, which provides

a rough inference on the direction of obstacles, the output of structure from motion

gives an approximate size and relative location of each obstacle. There is a significant

difference in the level of detail between the two algorithms.

This additional detail requires additional processing time, which leads to a time

lapse associated with scene reconstruction data. Due to weight and power restrictions,

the processor on a MAV, for example, has limited onboard computational capabilities.

Several minutes may be required for scene reconstruction. This requirement creates a








time lag between data acquisition and data processing which leaves the aircraft flying

with outdated information. The airplane obviously can not stop and hover during

computation so the vehicle will be flying without any new information for several

minutes.

A significant problem then arises when an obstacle is hidden from view at the

point of data acquisition. Theoretically, the scene reconstruction analysis could then

create a flight path which coincided with the obstacle. Although the obstacle would

most likely come into view before collision, the analysis is so slow that it may not

have time to react. This risk is a serious limitation for scene reconstruction.















CHAPTER 5
OPTIC FLOW

5.1 Concept

As an aircraft flies through its environment, there is relative motion between the

camera and its surroundings. For an individual feature point, this motion alters its η vector, potentially changing its μ and ν coordinates. This movement is optic flow.

More formally, optic flow is the two dimensional motion of feature points in the image

plane [43].

An optic flow vector can be defined as a feature point's change in image plane

position between two consecutive timesteps. Using the derivative quotient rule on

Equations 3.1 and 3.2, the optic flow is given in Equations 5.1 and 5.2.


\dot{\mu} = f\,\frac{\dot{\eta}_1\eta_3 - \eta_1\dot{\eta}_3}{\eta_3^2}   (5.1)

\dot{\nu} = f\,\frac{\dot{\eta}_2\eta_3 - \eta_2\dot{\eta}_3}{\eta_3^2}   (5.2)

Consider a camera fixed to an aircraft flying straight and level through the

environment shown in Figure 5-1. The stars along the edges of the buildings denote

the feature points that will be tracked. Figure 5-2 portrays the locations of these

feature points in the image plane as visible within the field of view of the camera at

timesteps 1 and 2. The displacement of the feature points between timesteps 1 and

2 from Figure 5-2 yields the optic flow vectors in Figure 5-3. For visibility, these

vectors have been scaled by a factor of 25.

Although the above buildings are centered in Euclidean space about the aircraft's

central body axis b1, they do not appear centered in the image plane. The building

on the right's feature points have comparatively smaller r13 values, indicating that it
































Figure 5-1: Camera Position



Figure 5-2: Feature Point Positions in Image Plane at Timesteps 1 (Left) and 2 (Right) during Straight and Level Flight



Figure 5-3: Corresponding Optic Flow during Straight and Level Flight









is closer in proximity. From Equations 3.1 and 3.2, this causes its feature points to

be located comparatively further from the center of the image plane in both μ and ν coordinates. In contrast, the building on the left has a larger η3 value, indicating that

it is further in the distance. The building is therefore located closer to the center of

the image plane. This phenomenon can also be noted by the number of feature points

visible on each building. Each building is lined with feature points at the same equal

interval. However, a smaller number of feature points from the building on the right

are visible than from the building on the left. This characteristic is again because the

building on the right's feature points are located further from the center of the image

plane, many of which are out of the camera's field of view.

The nature of optic flow allows estimation of the relative position of the camera's

surroundings. It can be seen from Equations 5.1 and 5.2 that for all other parameters

being equal, large η3 values cause smaller magnitude μ̇ and ν̇ values than do small η3 values. Therefore, assuming a stationary environment, relatively large optic flow

vectors correlate to nearby objects whereas smaller optic flow vectors correlate to

further away objects. From Figure 5-3, it can be seen that the vectors on the right-

hand side of the image plane are larger in magnitude than those on the left. This

property implies that the corresponding feature points are closer in vicinity. This

assumption is confirmed by Figure 5-1.

The camera's view is intrinsically linked to the aircraft's motion. Their joint

equations of motion have been investigated by Causey and Lind [5]. Equation 5.3

can be derived by differentiating Equation 3.4 with respect to the camera frame. This

relationship defines the velocity of the feature point relative to the camera in the C frame in terms of the aircraft's velocities, the feature point's position, and the relative angular velocity between the E and C frames, ω^{C/E}, as defined in Equation 5.4. The aircraft's

roll, pitch, and yaw rates are defined as p, q, and r respectively.


\dot{\eta}_C = -R_{CB}R_{BE}\dot{\chi}_E - \omega^{C/E} \times \eta_C   (5.3)











\omega^{C/E} = \begin{bmatrix} r & -q & p \end{bmatrix}^T   (5.4)

It is then desired to express optic flow as functions of the aircraft states, feature

point states, and the intrinsic camera properties, such as f. It can be shown that

Equations 5.5 and 5.6 can be derived by using Equations 5.1, 5.2, 5.3, and 5.4. The

terms u, v, and w are the velocities of the aircraft's center of mass as expressed in the

body frame from Equation 2.2.



\dot{\mu} = f\left(q + p\nu + q\mu^2 + r\mu\nu\right) + \frac{u\mu - fw}{\eta_3}   (5.5)

\dot{\nu} = f\left(r - p\mu + r\nu^2 + q\mu\nu\right) + \frac{u\nu + fv}{\eta_3}   (5.6)

From Equations 5.5 and 5.6, it can be seen that the optic flow is a function of the

image plane coordinates and η3 as well as two other categories: the translational and

the angular velocities of the aircraft. Therefore, the optic flow vector can be defined as

the summation of two other vectors, one which is defined by the optic flow produced

only by the translational velocities and the other which is defined by the optic flow

produced only by the rotational velocities.

It is easy to conceptualize the expected optic flow due to only the translational

velocities. As one approaches an obstacle at an offset, flying in a straight line, the

feature points accelerate away from the focus of expansion (FOE). The FOE is defined as the point in the image plane from which the optic flow vectors diverge; its μ and ν coordinates are aligned with the aircraft's velocity vector. The results of the

example in Figures 5-1, 5-2, and 5-3 are indicative of the optic flow produced by the

translational velocities.

The optic flow due to the angular velocities causes significantly more complicated

results. If the aircraft has a large yaw rate, the magnitude of the ν̇ components escalates.
Similarly, the magnitude of the μ̇ components increases dramatically with the magnitude of

the pitch rate. Large roll rates create a swirl-like effect on the optic flow.

It is desired to use the magnitude of the optic flow vectors as an indication of

a feature point's proximity. It is therefore necessary to establish consistency. The

angular velocities cause large changes in magnitude which are not representative of the

change in the aircraft's position, but rather its orientation. For the purpose of obstacle

avoidance, it is the position of the aircraft's center of mass which is of direct concern.

An aircraft collision implies that the position of the aircraft coincided with

that of an obstacle. The orientation of the aircraft is irrelevant. Therefore, the effects

of the angular velocities will be removed from the optic flow. This decomposition will

enable a more direct correlation between the magnitude of an optic flow vector and its

proximity.

The rotational components will be extracted by subtracting the appropriate

combination of aircraft states and feature coordinates. The resulting optic flow components, \hat{\dot{\mu}} and \hat{\dot{\nu}}, can then be derived as shown in Equations 5.7 and 5.8, where \dot{\mu} and \dot{\nu} are the true optic flow components as given by the optic flow sensors. The results are a function of only the translational velocities, η3, the focal length, and the image plane coordinates.



\hat{\dot{\mu}} = \dot{\mu} - f\left(q + p\nu + q\mu^2 + r\mu\nu\right)   (5.7)

\hat{\dot{\nu}} = \dot{\nu} - f\left(r - p\mu + r\nu^2 + q\mu\nu\right)   (5.8)
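A minimal sketch of this derotation step, mirroring Equations 5.7 and 5.8 as written above (the function name and signature are illustrative):

```python
import numpy as np

def remove_rotation(mu, nu, mu_dot, nu_dot, p, q, r, f=1.0):
    """Subtract the rotational component of the measured optic flow
    (Equations 5.7 and 5.8), leaving the translational component whose
    magnitude scales inversely with the feature depth eta3."""
    mu_hat = mu_dot - f * (q + p * nu + q * mu**2 + r * mu * nu)
    nu_hat = nu_dot - f * (r - p * mu + r * nu**2 + q * mu * nu)
    return mu_hat, nu_hat
```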

An example is used to demonstrate the effects of removing the rotational com-

ponents from the optic flow. The setup is identical to the previous example shown

in Figures 5-1, 5-2, and 5-3, with the exception that the aircraft is in the middle of

a counter-clockwise roll maneuver. The feature point locations at timesteps 1 and 2

are shown in Figure 5-4. The corresponding optic flow is given in Figure 5-5. These

vectors have not been scaled.












Figure 5-4: Feature Point Locations at Timesteps 1 (Left) and 2 (Right) during Roll Maneuver


Figure 5-5: Corresponding Optic Flow during Roll Maneuver


At timestep 1, the roll rate was -2.608 rad/s, pitch rate was 0.0126 rad/s, and the

yaw rate was -0.1176 rad/s. Clearly the total optic flow is highly coupled with the

angular velocities. Using Equations 5.7 and 5.8, i and V can be derived. These results

are plotted in Figure 5-6 alongside the total optic flow from Figure 5-5. In this plot, p

and v have been scaled by a factor of 25. This newly derived optic flow is independent

of the angular velocities. Note the correlation between the translational optic flow and

the optic flow of Figure 5-3 where the angular velocities were negligible.

For comparison, the rotational components of the optic flow from the straight and

level flight of Figure 5-3 have been removed. At timestep 1 of this example, all of

the angular velocities were nearly zero. The rotational components were negligible.












Figure 5-6: Optic Flow (Left) and Scaled Optic Flow with Rotational Components Removed (Right) during Roll Maneuver


The resulting optic flow is essentially identical to the original optic flow, as shown in

Figure 5-7. Therefore the removal of the angular components of the optic flow does

not cause a significant impact at times of low angular velocities.



Figure 5-7: Optic Flow (Left) and Optic Flow with Rotational Components Removed (Right) of Straight and Level Flight


5.2 Strategy

Optic flow controllers onboard moving vehicles make use of the phenomenon

of optic flow to make assumptions about their environment. They then use these

assumptions to make control decisions for navigation. For the purpose of this thesis,

optic flow will be used for reactive obstacle avoidance. The goal is a controller which

is able to sense impending obstacles in the flight path and effectively avoid them in a









real-time manner. This objective will be accomplished by using the correlation between

optic flow vector magnitude and the distance between the aircraft and the feature point.

Every point on the image plane correlates to an aircraft heading which can then be

used to control the aircraft. The general strategy employed is to determine the assumed

safest point in the image plane, given as μ_opt and ν_opt, and then use its corresponding

heading for control purposes.

Selecting the assumed safest point involves a cost function which analyzes the

optic flow vectors in the image plane. Using the assumption that large optic flow

vectors correlate to nearby obstacles, it can be determined that the regions of the image

plane containing these vectors are dangerous to fly toward. Comparatively, regions in

the image plane far away from these large vectors are assumed to be far away from

nearby obstacles and are thus less dangerous. The cost function is designed to quantify

the overall "threat" level of a point on the image plane. The cost, or threat level,

correlates to the assumed danger of flying toward that point. It is designed such that

large costs denote a point which is either located on or dangerously close to a nearby

obstacle. A low cost denotes that the point is far away from all nearby obstacles.

The image plane is divided into a grid of discrete points, each of which represents

its surrounding region. The cost function is then evaluated for each of these points.

The result is an overall estimation of the high-threat and low-threat regions of the

image plane. It is designed such that the largest costs should correlate to the regions of

the image plane containing the closest obstacles.

Discretizing the image plane into a finite set of points limits the number of cost function evaluations. Dense grids create a more precise estimation whereas

sparser grids are less computationally intensive. The density of the grid can be adjusted

according to the environment and processor available. An interval of 0.05 between

points on a grid ranging from −0.7 ≤ μ ≤ 0.7 and −0.7 ≤ ν ≤ 0.7 has been found to provide

adequate results for several flight simulations.









Before defining the cost function, it is necessary to first define some parameters.

For a feature point with coordinates (μ_n, ν_n) and its corresponding optic flow vector with components (\hat{\dot{\mu}}_n, \hat{\dot{\nu}}_n), the magnitude of its optic flow is defined as V_n in Equation 5.9.



V_n = \sqrt{\hat{\dot{\mu}}_n^2 + \hat{\dot{\nu}}_n^2}   (5.9)


The distance from an individual image plane point to a feature point is defined as

d_n(μ, ν) in Equation 5.10.



d_n(\mu,\nu) = \sqrt{(\nu - \nu_n)^2 + (\mu - \mu_n)^2}   (5.10)


Each image plane point is evaluated with respect to each of the optic flow

vectors. A cost J_n(μ, ν) is then associated with the point and vector, as defined in

Equation 5.11. This cost represents the threat of the optic flow vector with respect to

the image plane point being evaluated.

J_n(\mu,\nu) = \begin{cases} \dfrac{V_n^4}{d_n(\mu,\nu)}, & d_n(\mu,\nu) > 0 \\ \infty, & d_n(\mu,\nu) = 0 \end{cases}   (5.11)

The cost is inversely proportional to the distance between the image plane point

and the feature point. The cost also rises with the magnitude of the vector. The quartic

of the magnitude is used in order to place emphasis on the largest vectors. These

large vectors correlate to the most immediate threats and it is logical that they should

dominate the imminent navigation decisions. The power of 4 was found as sufficient

emphasis without making smaller vectors obsolete. Small vectors may indicate a far

away obstacle which should still be taken into account during the navigation decisions.

The total cost J(μ, ν) for a point is the sum of the cost of that point with respect

to each of the vectors. See Equation 5.12 where N is the total number of optic flow

vectors.










J(\mu,\nu) = \sum_{n=1}^{N} J_n(\mu,\nu)   (5.12)
Points on the image plane with high cost are generally close to feature points with

large magnitude optic flow vectors. These points are assumed to possess an obstacle

in close vicinity to the camera. This obstacle is a high threat to the moving vehicle.

Therefore these points should be avoided when navigating the vehicle. Points on

the image plane with low cost are sufficiently far from the large magnitude vectors.

These points are assumed to possess no obstacles or obstacles in the far distance. In

comparison, these points are a lower threat to the vehicle. The point of minimum total cost is defined as the optimal point, having coordinates [μ_opt, ν_opt], as shown in Equation 5.13. These coordinates correlate to the line of sight that is assumed to pose the least threat to the aircraft. The variables μ_min, μ_max, ν_min, and ν_max represent the minimum and maximum μ and ν values respectively within the field of view.


[\mu_{opt}, \nu_{opt}] = \arg\min_{\mu \in [\mu_{min},\,\mu_{max}],\; \nu \in [\nu_{min},\,\nu_{max}]} J(\mu, \nu)   (5.13)
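A minimal sketch of the grid search defined by Equations 5.9 through 5.13 is given below, assuming the derotated optic flow vectors are available for each tracked feature. The grid bounds and spacing repeat the values suggested earlier in this section; the function itself is an illustrative choice, not code from the thesis.

```python
import numpy as np

def safest_point(feature_pts, flow, mu_range=(-0.7, 0.7), nu_range=(-0.7, 0.7),
                 spacing=0.05):
    """Evaluate the cost of Equations 5.9-5.12 on a discrete grid and return
    the minimum-cost (optimal) point of Equation 5.13.

    feature_pts: (N, 2) array of (mu_n, nu_n); flow: (N, 2) array of the
    derotated optic flow components for the same features.
    """
    V4 = (flow[:, 0]**2 + flow[:, 1]**2)**2            # V_n**4, Equation 5.9
    mus = np.arange(mu_range[0], mu_range[1] + 1e-9, spacing)
    nus = np.arange(nu_range[0], nu_range[1] + 1e-9, spacing)

    best = (np.inf, 0.0, 0.0)
    for mu in mus:
        for nu in nus:
            d = np.hypot(nu - feature_pts[:, 1], mu - feature_pts[:, 0])  # Eq 5.10
            if np.any(d == 0.0):        # point coincides with a feature: J = infinity
                continue
            J = np.sum(V4 / d)          # Equations 5.11 and 5.12
            if J < best[0]:
                best = (J, mu, nu)
    return best[1], best[2]             # (mu_opt, nu_opt), Equation 5.13
```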

The optimal point in the image plane correlates to a heading which can then be

used to control the aircraft. This heading is defined as a change in pitch Δθ and a change in yaw Δψ from the current aircraft states. These are geometric relationships

which can be extracted from Figures 2-1 and 3-1. It is assumed that there is a

constant angle of attack during the timestep and a negligible sideslip. These two values

are the control inputs.


\Delta\theta_c = \tan^{-1}\!\left(\tan\alpha - \frac{\mu_{opt}}{f}\right)   (5.14)

\Delta\psi_c = -\tan^{-1}\!\left(\frac{\nu_{opt}}{f}\right)   (5.15)
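These commands could be computed as in the following sketch, which mirrors Equations 5.14 and 5.15 as written above; the function name is illustrative, and the sign conventions are those used in the worked example later in this section.

```python
import numpy as np

def heading_commands(mu_opt, nu_opt, alpha, f=1.0):
    """Commanded pitch and heading changes for the optimal image-plane point,
    assuming constant angle of attack and negligible sideslip."""
    dtheta_c = np.arctan(np.tan(alpha) - mu_opt / f)   # Equation 5.14
    dpsi_c = -np.arctan(nu_opt / f)                    # Equation 5.15
    return dtheta_c, dpsi_c
```

For the values used in that example (α = 2.67 deg, μ_opt = 0.046, ν_opt = 0.7, f = 1), this returns Δθ_c ≈ 0 and Δψ_c ≈ −0.61 rad.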










In the event that it is desired to maintain a given altitude, the control inputs can be defined as the commanded altitude h_c and the commanded change in heading Δψ_c. This procedure effectively reduces the 3D obstacle avoidance problem to a 2D obstacle avoidance problem. It also reduces computation time significantly. Here h_c assumes the role of Δθ_c. The controller uses actuator deflections which in turn determine γ, the angle the velocity vector makes with the horizon in the vertical plane. An inner-loop controller is used to maintain the desired altitude [37]. Assuming reasonable γ and α values, the points in the image plane corresponding to this desired altitude are contained within a parallelogram-shaped region. The desired or commanded change in yaw Δψ_c can then be found by evaluating the points within this region. If the aircraft is already at the desired altitude, this region reduces to a line. If the image plane is unrotated about its roll angle, this line is horizontal in the image plane. This line has a constant value of μ defined as μ_v. Figure 5-8 defines this projection.





Figure 5-8: Velocity Vector Projected on the Image Plane

Equation 5.16 defines μ_v in terms of the focal length f and the angle of attack α. In this instance, the image plane grid can then be reduced to a line of discrete points located along the line of constant μ_v. The point of minimum cost along this line is then









the optimum point in the image plane. It correlates to the desired change in heading

Δψ_c which can then be extracted using Equation 5.15.


\mu_v = f\tan\alpha   (5.16)

The cost function has been applied to the previous example shown in Fig-

ures 5-1, 5-2, and 5-3. The optic flow due only to the translational velocities, scaled

by a factor of 25, is shown in Figure 5-9. These optic flow vectors were evaluated

for each point in the image plane grid using the cost function. The results are also

plotted in Figure 5-9. The peaks in the cost plot correlate to the tracked feature points

of the right building. The surrounding regions of high cost correlate to the locations

of this building. The building on the right has the highest cost since it had the largest

magnitude optic flow vectors and is therefore considered to be the most immediate

threat. The left side of the image plane had a comparatively lower cost. Upon close

examination it can be seen that there is a spike in cost around the left building, but

nothing substantial compared to the building on the right. It is desired to maintain the

current altitude for this example. The aircraft is currently flying straight and level at

the desired altitude with an angle of attack of 2.67 deg. Therefore Equation 5.16 is

applicable, from which we derive the value of u, = 0.046 for f = 1. Since it is desired

to maintain the current altitude Popt = 0.046. Equation 5.14 shows that AOc = 0 and

thus no change in pitch is necessary. The point of lowest cost along the horizontal line

loupt = 0.046 correlates to Vopt = .7. This point correlates to a change in yaw AVc of

-35 deg or -0.61 rad from Equation 5.15. The optic flow controller is commanding

the aircraft to steer left, away from the impending building on the right. It detects the

building on the right to be the most immediate threat. Although there are no obstacles

in the center of the image plane, directly in front of the aircraft, the controller feels that

this flight path is too close to obstacles for safe flight. This left turn would effectively

avoid the closest obstacle as shown in Figure 5-1.











Figure 5-9: Optic Flow with Rotational Components Removed (Left) and Cost Function 3D plot (Right)

5.3 Advantages and Risks

A strong advantage of optic flow is its real-time ability of reactive control for

obstacle avoidance. Optic flow sensors run at very high speeds and the control

decisions can be computed fairly quickly. Thus, a vehicle can begin reacting to a

threat almost immediately after it is detected. This speed is very advantageous, if not

essential, for fast moving aircraft within dense environments.

Since optic flow is based on relative motion, it also has the potential to account

for moving obstacles, making the applications more versatile. The algorithm presented

is based on the assumption of stationary obstacles; however, it is possible to adapt

the algorithm for other applications. The direction of the optic flow vector would be

of greater significance when accounting for moving obstacles. An optic flow vector

pointing toward the FOE generally indicates that either the aircraft is turning toward

the obstacle or the obstacle is approaching the aircraft's flight path. Conversely, an

optic flow vector pointing away from the FOE suggests the obstacle is either moving

away from the flight path or the aircraft is flying away from it. These general rules

assume that you are not flying directly toward the obstacle. There is no optic flow

when flying directly toward an obstacle since the feature point is then located in the

focus of expansion. Any surrounding feature points would still diverge.









A disadvantage to optic flow as used in this capacity is its lack of path-planning.

The above algorithm is relatively unsophisticated as it only reacts to the most imme-

diate threat without substantial consideration of future threats. Thus it may guide the

vehicle through an overall more treacherous path simply due to an initial maneuver to

avoid the first obstacle.

The algorithm does have its assumptions and risks. Optic flow control in this

capacity assumed the ability to track feature points on obstacles. It also assumed cor-

respondence of the points between frames. That is, it requires the knowledge of which

point corresponds to which point in the previous timeframe. This knowledge creates

the optic flow. These assumptions are fairly demanding for practical applications.

Feature points are difficult to reliably extract from an environment. In practice, even

some of the most accepted feature point extraction methods, such as Lucas-Kanade, are

often noisy and include large errors.

Another risk concerns the location of feature points. Since most feature point

extraction algorithms are based on contrast and texture gradients, they are typically

located at corners and edges. A smooth building may only have feature points along

its edges and none in its center. The controller then cannot distinguish between the

center of the smooth building and empty space. This is a large risk which may lead to

collision. For this thesis it will be assumed that the obstacles are textured enough to

avoid this issue.

Another risk involves the focus of expansion. Recall that the focus of expansion is

defined as the point in the image plane corresponding with the heading of the velocity

vector. Equations 5.17 and 5.18 define the FOE's coordinates, UFOE and VFOE in terms

of the angle of attack a, angle of sideslip 3, and focal length f. These equations were

derived using geometric relationships from Figure 3-1. It should be noted that if an

aircraft is flying straight and level, with no sideslip or angle of attack, then the feature

point is located in the center of the image plane.












\mu_{FOE} = f\tan\alpha   (5.17)

\nu_{FOE} = -f\tan\beta   (5.18)

An issue arises when the aircraft is flying in a straight line directly toward a

feature point. In this instance, the feature point coincides with the FOE. If the aircraft

remains on that flight path, the feature point remains on the FOE, not moving in the

image plane. Since the feature point does not change location, there is no optic flow.

An example is used to demonstrate this concept. Figure 5-10 portrays the location

of the feature point at timesteps 1 and 2, where each is located at exactly μ = 0.1 and ν = 0. The aircraft is flying straight with a constant angle of attack of 5.7 deg and zero sideslip. Using Equations 5.17 and 5.18, the FOE is also located at μ_FOE = 0.1 and ν_FOE = 0, thus coinciding with the feature point. Assuming that the aircraft continues

flying in this direction, the feature point would remain in the same position at timestep

2. Since the feature point did not move in the image plane, there is no optic flow, as

seen from Figure 5-11.


Figure 5-10: Feature Point Location at Timestep 1 (Left) and Timestep 2 (Right)


Without optic flow, the previously described algorithm does not recognize the

presence of an obstacle. It is then conceivable that the aircraft would continue flying

toward the obstacle, resulting in a collision. This behavior is a primary risk for many












Figure 5-11: Resulting Optic Flow


approaches based on optic flow. For the purpose of this thesis, it will be assumed

that every obstacle consists of multiple feature points. If properly oriented, these

feature points will appear in the regions surrounding the FOE, thus creating optic flow,

however small. It is assumed that this optic flow will be sufficient for indicating the

presence of an obstacle in this region. Another method of avoiding this singularity

is the assumption that the aircraft is always flying at least at a slight offset to any

obstacle.















CHAPTER 6
MULTI-RATE CONTROLLER

6.1 Concept

Clearly both scene reconstruction and optic flow have their advantages and

disadvantages. Scene reconstruction provides reliable, detailed information for path

planning purposes; however, it requires large computation times. Optic flow operates at

much faster rates which is necessary for navigating through dense environments. These

fast computations though only provided rough inferences about the obstacles, resulting

in a simplistic method of navigation. As such, there is an innate trade-off between the

level of information provided and the corresponding computation time.

The level of information involved in scene reconstruction makes its navigation control more reliable than that of optic flow. It is therefore desired to use

this dependable data whenever available. The time lapse between data acquisition and

control implementation associated with scene reconstruction means, though, that its information is potentially outdated. This threat is most prominent in environments that are

densely populated with obstacles. For these situations, the reaction time may not be

fast enough to safely maneuver to avoid obstacles. This is a significant limitation for

scene reconstruction.

The goal of the multi-rate controller is to emphasize the advantages of each type

of vision-based control while mitigating the disadvantages. Namely, it is to use scene

reconstruction's reliable path planning capability and optic flow's fast-rate obstacle

avoidance capability. The multi-rate controller utilizes each of these capabilities by

using the fast-rate capability of optic flow to supplement the disadvantageous time

lapse that occurs during the processing of scene reconstruction data. Essentially, it

alternates between the two control strategies based on the characteristics of the current










environment. Scene reconstruction control is used when obstacles are sparse and

sufficient time is allocated for detailed path-planning. Optic flow control is used when

a nearby obstacle is detected and fast reaction times are required.

6.2 Strategy

The control scheme is inherently a multi-rate design with a slow loop involving

scene reconstruction for navigation and a fast loop involving optic flow for obstacle

avoidance. The basics of this scheme are shown in Figure 6-1. The values T1 and T2 are the update times for the optic flow and scene reconstruction algorithms respectively. Since optic flow runs at a faster rate than scene reconstruction, T1 < T2. Both loops, scene reconstruction and optic flow, are constantly running. Each outputs a commanded change in heading Δψ_c and change in pitch Δθ_c. A switch determines which of the

control outputs are passed through. For times when nearby obstacles are detected, optic

flow control decisions are used to provide fast-rate obstacle avoidance; otherwise, scene

reconstruction analysis retains control.


Figure 6-1: Closed-Loop System with Multi-Rate Control (the camera feeds both the optic flow and scene reconstruction loops; a switch selects which loop's commands reach the maneuver control and the MAV)


The algorithm commences with scene reconstruction controlling the aircraft.

Meanwhile, a switch, or "trigger", based on optic flow principles, searches for impend-

ing obstacles in the flight path. If an obstacle is detected, the switch activates the optic

flow control loop. This loop is better suited to immediately respond to the obstacle and










navigate the aircraft to safety. The optic flow loop remains active for a predetermined

amount of time which is deemed sufficient to allow the optic flow controller to safely

maneuver around any impending obstacles. After this time, the scene reconstruction

analysis loop regains control of the aircraft. If again an obstacle is detected, the optic

flow resumes control. This process is repeated until the aircraft reaches its desired

destination.

The switch is based on a numerical quantification, defined as F, of the total optic

flow in the image plane. This function is designed such that it produces high values

when large magnitude optic flow vectors, and therefore nearby objects, are present.

The function produces low values when small magnitude optic flow vectors, and

therefore distant objects, are present. The function for F, given in Equation 6.1, is the

sum of the optic flow magnitudes to the fourth power. The magnitudes are again raised

to the fourth power, as was the case with the optic flow cost function, to give emphasis

to the largest optic flow vectors.

F = \sum_{n=1}^{N} V_n^4   (6.1)
For low values of F, it is assumed that the aircraft's path is reasonably safe from

approaching obstacles. Conversely, high values imply an impending threat. For these

circumstances, it is desired for the controller to initiate optic flow control, which is

better equipped to make quick control decisions for obstacle avoidance. Since both

control loops output the same command type, the switch merely alters which command

is passed through. In order to control the switch, a predefined threshold defined as σ is used in Equation 6.2. When F exceeds σ, the switch is activated and optic flow attains control of the aircraft. If F remains below σ, the aircraft remains in scene reconstruction control. Since F is a function of the camera, feature point tracking parameters, and environment, it is necessary to customize the value of σ for a given

mission.











\begin{bmatrix} \Delta\psi_c \\ \Delta\theta_c \end{bmatrix} = \begin{cases} \text{output of optic flow control}, & F > \sigma \\ \text{output of scene reconstruction control}, & \text{otherwise} \end{cases}   (6.2)
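A minimal sketch of this switching logic is given below. The hold-timer length and the dictionary used to carry the timer between calls are illustrative assumptions; the thesis specifies only that optic flow retains control for a predetermined amount of time after a detection.

```python
import numpy as np

def multirate_switch(flow, scene_cmd, optic_cmd, sigma, state, hold_steps=50):
    """Switching logic of Equations 6.1 and 6.2: optic flow control takes over
    whenever the flow metric F exceeds the threshold sigma, and keeps control
    for a predetermined number of fast-loop steps afterward.

    flow: (N, 2) array of optic flow vectors; scene_cmd and optic_cmd hold the
    (dpsi_c, dtheta_c) outputs of the two loops.
    """
    F = np.sum((flow[:, 0]**2 + flow[:, 1]**2)**2)   # Equation 6.1: sum of V_n**4
    if F > sigma:
        state['hold'] = hold_steps                   # restart the hold timer
    if state.get('hold', 0) > 0:
        state['hold'] -= 1
        return optic_cmd                             # Equation 6.2: F > sigma
    return scene_cmd                                 # otherwise: scene reconstruction
```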

Overall, this multi-rate controller shown in Figure 6-1 is particularly well suited

to autonomous aircraft operation. The slow scene reconstruction loop provides for

detailed, reliable path planning. The high-rate optic flow switch provides additional

information during these minutes devoted to scene reconstruction. Should a threat be

detected, optic flow control is used for real-time obstacle avoidance. In this way, the

system uses both types of vision-based feedback to continually provide information for

guidance and navigation.















CHAPTER 7
EXAMPLE 1

7.1 Setup

A simulation is used to demonstrate the vision-based controllers. The simulation

uses a high-fidelity nonlinear model of an F-16 which accurately represents the low-

speed flight dynamics. A control augmentation system is included to enable the system

to track desired changes in heading and desired altitudes [37]. It also includes an

inner-loop stabilizer to smooth the flight path.

This simulation was designed to verify closed-loop performance. Optimal imple-

mentations of either the optic flow or scene reconstruction analysis are unnecessary.

Therefore, the actual computation time will be ignored and the controllers will be up-

dated at rates found in published literature. For this example, the optic flow algorithm

will be running at 100 Hz and the scene reconstruction algorithm will be running at 0.008 Hz.

The F-16 will fly through an environment designed to demonstrate the advantages

of the multi-rate controller. The environment is shown in Figure 7-1. A building is

situated behind the center building such that it is initially hidden from view. Therefore,

it is not accounted for by the initial SFM calculations. The mission objective seeks a

path to the north toward a target GPS waypoint at a constant altitude. The goal is to

arrive at the desired waypoint in a timely fashion while avoiding obstacles in its path.

This simulation was run using feature points lining the edges of buildings at an

even interval. There is a camera mounted on the center of gravity of the F-16, aligned

parallel with the plane's b1 axis. At each timestep, the simulation projects the feature

points onto the image plane of the camera in relation to the aircraft's position and

orientation. To demonstrate the advantages of the multi-rate controller, the results using












Figure 7-1: Simulation Environment

only the optic flow controller and only a waypoint controller will first be presented.

The multi-rate controller flying through the same environment will follow.

The optic flow algorithms are actively running. However, the scene reconstruction

has been replaced by representative path planning. Scene reconstruction techniques

are fairly well established in the literature. Refinements are still being pursued, but the overall concept remains constant. This thesis does not present any

new developments on the topic. Therefore, in place of an active scene reconstruction

loop, waypoints will be used to represent the output of scene reconstruction. An outer-

loop guidance and navigation system allows the vehicle to follow these waypoints.

It utilizes a vision-based homing controller to track toward a particular point in the

image plane. The waypoints are chosen to mimic the expected results of ongoing SFM

research performed at the University of Florida and the University of South Carolina.

This approach is deemed sufficient since scene reconstruction is not the focus of this

thesis. The nature of the scene reconstruction controller is still represented.
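As a simple stand-in for this outer guidance loop, the sketch below converts the aircraft position and the active waypoint into a commanded heading change in the North-East plane. It is a hypothetical helper for illustration only; the actual outer loop is the vision-based homing controller described above, and the function name, arguments, and angle convention here are assumptions.

```python
import math

def heading_change_to_waypoint(north, east, psi, wp_north, wp_east):
    """Commanded heading change toward the active waypoint.

    north, east: current position; psi: current heading in radians, measured
    from north toward east; wp_north, wp_east: waypoint position.  The result
    is wrapped to the interval [-pi, pi).
    """
    bearing = math.atan2(wp_east - east, wp_north - north)
    delta_psi = bearing - psi
    return (delta_psi + math.pi) % (2.0 * math.pi) - math.pi
```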









For this simulation, perfect feature point extraction and tracking are assumed to be available, along with perfect state estimation, perfect terrain

mapping, and perfect path planning. Clearly these assumptions are unrealistic but will

suffice to demonstrate the differences between the various controllers.

This simulation could also be used to represent the nonlinear dynamics of a micro

air vehicle by scaling down the environment. An autonomous micro air vehicle capable of GPS waypoint navigation and reactive obstacle avoidance would have many practical applications. Unfortunately, a credible micro air vehicle model is not available. The nonlinear F-16 model in a scaled-up urban environment will instead be

used to demonstrate the concept.

7.2 Actuator Controllers

Three controllers were used to determine the actuator deflection commands [37].

The elevator received its commands from the altitude controller. The ailerons were

controlled by the turn controller. Finally, the thrust was controlled by the speed

controller. No rudder was used during this simulation.

The altitude hold controller used is shown in Figure 7-2. This system inputs the desired altitude provided by the vision-based controllers and outputs the actual altitude. It is a negative-feedback proportional-integral controller that also utilizes the aircraft states of pitch angle θ and pitch rate q. The block Ae represents the elevator actuator. The block P represents the F-16 plant model. The variable δe is a constant trim condition equal to -2.677. For this simulation, the desired altitude hc remained constant at 15,000 ft. This controller proved satisfactory, remaining within an envelope of 2%, or 300 ft, of the nominal desired value of 15,000 ft throughout all of the simulations.
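The block diagram of Figure 7-2 can be summarized as a short control law. The sketch below is a minimal proportional-integral altitude hold of the same form, with pitch angle and pitch rate feedback for damping. The gains, sign conventions, and the exact blending of the feedback terms are placeholders for illustration; the simulations use the controller of Stevens and Lewis [37].

```python
class AltitudeHold:
    """Minimal PI altitude-hold sketch in the spirit of Figure 7-2."""

    def __init__(self, kp, ki, k_theta, k_q, delta_e_trim=-2.677):
        self.kp, self.ki = kp, ki                # gains on altitude error
        self.k_theta, self.k_q = k_theta, k_q    # pitch angle / pitch rate damping
        self.delta_e_trim = delta_e_trim         # constant elevator trim
        self.error_integral = 0.0

    def elevator_command(self, h_cmd, h, theta, q, dt):
        error = h_cmd - h                        # altitude error (ft)
        self.error_integral += error * dt        # integral state
        return (self.delta_e_trim
                + self.kp * error
                + self.ki * self.error_integral
                + self.k_theta * theta
                + self.k_q * q)
```

In the simulations, h_cmd is held at 15,000 ft while the vision-based loops command heading changes.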

The turn controller tracks a commanded heading change Δψc by altering the roll command. This system also uses the plant outputs of roll angle φ and roll rate p. The input value of Δψc was constantly changing due to the output of the vision-based algorithms.















Figure 7-2: Altitude Hold Controller

Again, the P block represents the F-16 plant model. The Aa block represents the

aileron actuators.




Figure 7-3: Turn Controller

The speed controller is a negative-feedback proportional controller. It tracks the

commanded velocity Vc. The block At represents the thrust actuators. The block P

denotes the F-16 plant model. A constant value of 600 ft/s was used for Vc.
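The turn and speed loops admit similarly compact sketches. The structure below mirrors Figures 7-3 and 7-4: the commanded heading change is converted into a bank command that the aileron loop tracks using roll angle and roll rate, and thrust follows a proportional law on velocity error. The gains and the exact cascade are illustrative assumptions only.

```python
def aileron_command(delta_psi_cmd, phi, p, k_psi, k_phi, k_p):
    """Turn controller sketch (Figure 7-3): bank toward the commanded heading
    change, with roll angle phi and roll rate p providing damping."""
    phi_cmd = k_psi * delta_psi_cmd
    return k_phi * (phi_cmd - phi) - k_p * p

def thrust_command(v_cmd, v, k_v):
    """Speed controller sketch (Figure 7-4): negative-feedback proportional law
    tracking the commanded velocity, held at 600 ft/s in this example."""
    return k_v * (v_cmd - v)
```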

7.3 Control based on Optic Flow

The flight path of the F-16 in Figure 7-5 was formulated using only the optic

flow controller. Since it is desired to maintain a constant altitude, the control inputs of

Δψc and hc are used. The aircraft is initially positioned at the desired altitude. This












Figure 7-4: Speed Controller

altitude is maintained by the altitude controller. Therefore the mission is a 2D obstacle

avoidance problem instead of a 3D obstacle avoidance problem. This reduction allows

for the assumption that the trapezoidal region of the image plane which encompasses

the desired altitude is reduced to a line.

The results shown in Figure 7-5 were created using the following algorithm. At every timestep of 0.01 sec, the controller evaluates the optic flow in the image plane. It computes the cost at each point along the constant-altitude line in the unrotated image. The point of lowest cost along this line is then chosen as the optimal point, the assumed safest direction toward which to navigate the aircraft. The change in heading Δψ is then calculated from this point.
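The following sketch illustrates this steering rule. Candidate directions are taken along the constant-altitude line in the unrotated image, each is scored by the nearby optic flow magnitudes raised to the fourth power, and the heading change points toward the lowest-cost direction. The Gaussian distance weighting, the variable names, and the unit focal length are assumptions standing in for the cost function of Chapter 5.

```python
import numpy as np

def steer_from_optic_flow(candidate_mu, flow_points, flow_vectors,
                          focal_length=1.0, sigma=0.2):
    """Return a heading change (rad) toward the lowest-cost direction.

    candidate_mu: horizontal image coordinates along the constant-altitude line;
    flow_points:  (N, 2) feature point locations in the image plane;
    flow_vectors: (N, 2) de-rotated optic flow at those locations.
    """
    emphasis = np.linalg.norm(flow_vectors, axis=1) ** 4
    costs = []
    for mu in candidate_mu:
        # flow near this candidate direction contributes most to its cost
        weights = np.exp(-0.5 * ((flow_points[:, 0] - mu) / sigma) ** 2)
        costs.append(np.sum(weights * emphasis))
    best_mu = candidate_mu[int(np.argmin(costs))]
    # map the chosen image coordinate to a heading change command
    return float(np.arctan2(best_mu, focal_length))
```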

In the resulting path of Figure 7-5, the vehicle obviously avoids the obstacles but

does not even approach the desired waypoint. It detected a large amount of optic flow

in the right half of the image plane and consequently turned left. By not flying toward

the desired destination, this controller did not meet mission objectives. The path is not

entirely unexpected given the simplistic nature of the implemented controller. Several

approaches could be used that are much more advanced than simply steering toward

the least flow. However, this simplistic controller is used because it can operate at

an extremely high rate. As such, the results are not indicative of a limitation in optic

flow as much as they are indicative of a limitation of this particular high-rate optic flow algorithm.












Figure 7-5: Optic Flow Results

7.4 Control based on Scene Reconstruction

Representing the scene reconstruction algorithm is a set of waypoints. These

waypoints, when flown sequentially, form a trajectory which could be developed by

a scene reconstruction analysis. The resulting flight path is shown in Figure 7-6.

This result is understood by noting where the SFM updates are computed, marked in green along the path. The vehicle records an image, shown in Figure 7-7,

at the initial time T = 0. Only the building immediately in front of the aircraft and

the building to its slight right are visible at this time. It has no data indicating the

presence of the third, farthest building. Assuming perfect scene reconstruction based on the information available at T=0, the controller assumes the environment is as pictured in Figure 7-8. It creates an optimal trajectory for this

assumed environment. Assuming that the scene reconstruction takes 2 min or 120

sec, a resulting trajectory for this image is not created until point T = 120. The third

building comes into view at approximately T = 45; however, a trajectory resulting from










this new information could not be computed until T = 165. The aircraft would have

already collided with the building before the data was processed. The lag associated

with the computation time for the analysis is a great risk for the aircraft. For this

environment, the reaction is too slow and results in a collision.



Figure 7-6: Scene Reconstruction Results


The flight path in Figure 7-6 is meant to indicate a possible problem with scene

reconstruction for aircraft. The approach has been used with considerable success for

some systems, such as ground vehicles and even helicopters, that can stop or hover. A fixed-wing aircraft, however, is continually moving forward. It cannot remain in a known safe zone until the results are processed; it instead ventures into an unknown environment. The lack of information about the environment can result in a collision.

This scenario makes the computational delay potentially devastating to aircraft.

7.5 Multi-Rate Control

The multi-rate control scheme is also introduced to the simulation. This simulation

uses the low-rate scene reconstruction to compute a path but watches for impending























Figure 7-7: Camera Image at T=0

obstacles using the high-rate optic flow while following that path. Should a threat be

detected, optic flow navigates the aircraft to safety.

The initial commands from the two schemes, as shown in Figure 7-5 and Figure 7-6, try to steer the vehicle in different directions. Therefore, the threshold o = 2 × 10⁻⁴ is used such that the optic flow controller does not change the vehicle path until the

magnitude of flow passes a critical value. In this way, the vehicle follows the scene

reconstruction control but avoids obstacles.

Figure 7-9 presents the results of the multi-rate controller. The path is initially

the same as the scene reconstruction controller. At T=0, the aircraft extracts the feature

point information to input into the scene reconstruction algorithm. The path is then

followed until approximately T=60, at which point the optic flow subroutine senses

an impending threat in the flight path. Optic flow assumes control of the aircraft as it navigates through the obstacles. After a predetermined amount of time, chosen here

to be 10 sec, scene reconstruction analysis resumes control of the aircraft and attempts

to again fly toward the mission target. The time of 10 sec was chosen as sufficient to

allow the optic flow controller to make a maneuver to avoid any nearby danger.
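The timing logic of this example can be summarized as a small supervisory state machine: scene reconstruction commands are followed by default, optic flow takes over when the flow measure F crosses the threshold, and control is handed back after the fixed 10 sec hold. The sketch below is illustrative; the class name and its structure are assumptions.

```python
class MultiRateSupervisor:
    """Select between scene reconstruction and optic flow heading commands."""

    def __init__(self, threshold=2e-4, hold_time=10.0):
        self.threshold = threshold     # switching threshold o
        self.hold_time = hold_time     # seconds that optic flow keeps control
        self.avoidance_start = None    # None while scene reconstruction has control

    def command(self, t, F, optic_flow_cmd, scene_recon_cmd):
        # threat detected: hand control to the optic flow loop
        if self.avoidance_start is None and F > self.threshold:
            self.avoidance_start = t
        if self.avoidance_start is not None:
            if t - self.avoidance_start < self.hold_time:
                return optic_flow_cmd
            # hold time expired: return control to scene reconstruction
            self.avoidance_start = None
        return scene_recon_cmd
```

This mirrors the behavior described above: the waypoint path is followed until roughly T=60, optic flow steers the aircraft clear of the hidden building, and the waypoint path is then resumed, as shown in Figure 7-9.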












Figure 7-8: Environment as Assumed by Scene Reconstruction from Input Taken at T=0

This multi-rate control strategy provided the best overall solution. The optic flow controller lacked the ability to arrive at the desired destination. The scene reconstruction controller was vulnerable to collision due to the computational delay. The multi-rate controller, however, demonstrated the ability to direct the aircraft in the desired north direction while simultaneously avoiding obstacles in real time. It successfully

balanced all of the mission objectives, indicating that the multi-rate controller is well

suited for this simulation.































Figure 7-9: Multi-Rate Controller Results


















CHAPTER 8
EXAMPLE 2

8.1 Setup

The purpose of this example is to demonstrate the multi-rate controller's capability when placed within more complicated surroundings. The simulation used in

this example is identical to Example 1 with the exception of the environment. The

same nonlinear F-16 model, algorithms, and controllers were used. However, in this

example, the environment was more densely populated by obstacles. This example

posed a more complicated situation in which a large number of buildings were originally obstructed from view. The environment is shown in Figure 8-1. The aircraft's

mission was to advance toward the indicated waypoint while avoiding obstacles along

the way. This mission implied flying north and then turning west once the aircraft had

sufficiently cleared the long wall.


Figure 8-1: Simulation Environment


This simulation assumes that feature points are located at intervals small enough

for the vision algorithms to sufficiently distinguish the center of the obstacle from











empty space. Due to the parameters of this simulation, the feature points are located at an interval of 5000 ft along the obstacles' edges.

8.2 Control based on Optic Flow

The flight path resulting from the optic flow controller is shown in Figure 8-2.

The vehicle initially sees the optic flow from the long wall on the left and consequently

turns right. At this point, there are no obstacles in its path so the aircraft continues

to fly straight. These results are to be expected of the simplistic controller; it simply

navigated the aircraft away from the obstacles on the left. It did not balance the

mission objectives of avoiding obstacles while approaching the desired destination. In

fact, it flew away from the desired location. Therefore, this controller alone is not well

suited for the intended mission.


Figure 8-2: Optic Flow Results


8.3 Control based on Scene Reconstruction

Figure 8-3 portrays the resulting flight path of the scene reconstruction controller.

The timesteps are important to understanding these results. It is assumed that at T=0

the aircraft has previous data informing it of the long wall to its left. The controller

then fuses this information with GPS data of its desired destination to create an optimal

path toward this destination. This flight path involves flying north around the wall and











then turning west toward the destination. The controller also inputs images at T=0 to

analyze for structure from motion. From the information available to the aircraft at

the time, the scene reconstruction analysis assumes the environment to look as it is

pictured in Figure 8-4. As shown in Figure 8-5, the 7 obstacles located to the west

of the wall are not visible in the image. These buildings were therefore not accounted

for in the scene reconstruction. Assuming again that the SFM analysis takes 2 min,

the results are not available until T=120. By this time, the aircraft would have already

collided with a building. Even if the controller were fortunate enough to input images at the moment the obstacles first came into view, at approximately T=45

(see Figure 8-5), the processing lag is still too great to avoid a collision. Therefore,

although the controller does attempt to direct the aircraft toward the desired destination,

it is not successful in avoiding obstacles. Individually, this controller was not capable

of meeting mission objectives.


Figure 8-3: Scene Reconstruction Results


8.4 Multi-Rate Control

Lastly, the results of the multi-rate controller are demonstrated in Figure 8-6. The

controller began in scene reconstruction analysis mode, following the same flight path

as the scene reconstruction controller. During this time, the optic flow based trigger










Figure 8-4: Environment as Assumed by Scene Reconstruction from Input Taken at T=0


Figure 8-5: Aircraft Position at T=0 (Left) and T=45 (Right)

searched for impending obstacles in the flight path. Danger was detected at T=57, so the switch began passing through the optic flow controller's commands. The optic flow

controller directed the aircraft toward the south, away from the obstacle in its path to

the west. The switch continued to pass through the optic flow controller's information.

After a predetermined amount of time, the aircraft resumed flying toward the target

destination. This continued until T=86, at which point the optic flow controller

commanded a swerve to the right to avoid a building on its left. The flight concluded

under scene reconstruction control. This flight path demonstrates the capability of the

multi-rate controller to achieve mission objectives of reaching a desired destination

while avoiding obstacles in the path.


Figure 8-6: Multi-Rate Controller Results















CHAPTER 9
CONCLUSION

Vision-based control is a viable approach toward vehicle autonomy. Cameras

are relatively small and lightweight, making them especially well suited for vehicles

with small payloads. They provide a rich stream of data describing their environment. Vision-based controllers can then analyze this data to make intelligent control

decisions.

The amount of information produced by a vision-based analysis is constrained by the length of time required to conduct the analysis. Rough inferences about the environment can be made quickly, as demonstrated by the optic flow controller.

More intensive investigations, such as scene reconstruction, require more time.

The additional detail provided by scene reconstruction causes a lag between data

acquisition and the implementation of the control decision. Vehicles such as airplanes,

which are continually in motion, are then forced to navigate with outdated information

during this delay. This time lag may cause a collision in dense environments for which

the controller does not have adequate reaction time.

Path-planning optimization algorithms for navigation are most reliable when

in-depth information about the environment is provided. It is therefore desired to use

a detailed vision-based analysis, such as scene reconstruction, whenever possible. The

time lag associated with scene reconstruction, however, can cause this information to be outdated, especially in dense environments. Optic flow provides less detailed

information but operates at much higher rates, making its data readily available for

analysis.

This thesis introduces a multi-rate controller which demonstrates the value of utilizing different control methodologies based on the characteristics of the environment.









The controller uses slow scene reconstruction analysis for reliable path planning in

sparse environments. It uses fast optic flow control for obstacle avoidance in dense

environments and when obstacles are within close range. Optic flow operates at fast

rates and is better suited to quickly react to impending threats. The two controllers are

monitored by a switch which evaluates the current environment and selects the most

suitable controller.

The benefits of the multi-rate controller are demonstrated through two simulations.

These simulations apply the controller to a nonlinear F-16 model flying within a

scaled-up urban environment. The simulations are limited by the assumptions of

perfect feature point detection and tracking, but suffice to demonstrate the applicability

of the multi-rate controller. First, each vision-based feedback controller, optic flow and scene reconstruction, is independently implemented and analyzed. Both fail to meet

mission objectives in each case. Next, the multi-rate controller is implemented. Its

flight paths demonstrate the ability to autonomously achieve obstacle avoidance while

still maintaining a mission objective for navigation purposes.















REFERENCES


[1] G. Baratoff, C. Toepfer, M. Wende and H. Neumann, "Real-Time Navigation
and Obstacle Avoidance from Optic Flow on a Space-Variant Map," IEEE
International Symposium on Intelligent Control, Gaithersburg, MD, September
1998, pp. 289-294.

[2] G.L. Barrows, "Future Visual Microsensors for Mini/Micro-UAV Applications,"
7th IEEE International Workshop on Cellular Neural Networks and their Applica-
tions, July 2002, pp. 498-506.

[3] G.L. Barrows, J.S. Chahl and M.V. Srinivasan, "Biomimetic Visual Sensing and
Flight Control," Presented at the 2002 Bristol UAV Conference, Bristol, UK,
April 2002.

[4] G.L. Barrows and C. Neely, "Mixed-Mode VLSI Optic Flow Sensors for In-Flight
Control of a Micro Air Vehicle," Presented at the SPIE 45th Annual Meeting, San
Diego, CA, July 2000.

[5] R.S. Causey and R. Lind, "Aircraft-Camera Equations of Motion," AIAA Journal
of Aircraft, Submitted.

[6] P. Chang and M. Hebert, "Robust Tracking and Structure from Motion with
Sample Based Uncertainty Representation," IEEE International Conference on
Robotics and Automation, Washington, D.C., May 2002, pp. 3030-3037.

[7] A. Dev, B. Krose, and F. Groen, "Heading Direction for a Mobile Robot from Optical
Flow," IEEE International Conference on Robotics and Automation, Leuven, May
1998, Volume 2, pp. 1578-1583.

[8] A. Dev, B. Krose, and F. Groen, "Navigation of a Mobile Robot on the Temporal
Development of the Optic Flow," Proceedings of the 1997 IEEE/RSJ International
Intelligent Robots and Systems, Grenoble, September 1997, Volume 2, pp. 558-
563.

[9] T.M. Dijkstra, P.R. Snoeren, and C.C. Gielen, "Extraction of 3D Shape from Optic
Flow: a Geometric Approach," IEEE Proceedings of Computer Vision and Pattern
Recognition, Seattle, WA, June 1994, pp. 35-140.

[10] C. Fermuller, P. Baker, and Y. Aloimonos, "Visual Space-Time Geometry-A Tool
for Perception and the Imagination," Proceedings of the IEEE, July 2002, Volume
90, Issue 7, pp. 1113-1135.









[11] "Human Reaction Time,"
http://www.gecdsb.on.ca/sub/projects/psl/senior/24/reaction.htm. Accessed
June 20, 2005.

[12] T. Fukao, K. Fujitani, and T. Kanade, "An Autonomous Blimp for a Surveillance
System," IEEE International Conference on Intelligent Robots and Systems,
October 2003, Volume 2, pp. 1920-1925.

[13] M.A. Garratt and J.S. Chahl, "Visual Control of an Autonomous Helicopter,"
Proceedings of the 41st Aerospace Sciences Meeting and Exhibit, Reno, NV,
January 2003.

[14] E. Hagen and E. Heyerdahl, "Navigation by Optical Flow," 11th IAPR Interna-
tional Conference on Pattern Recognition, The Hague, Netherlands, August 1992,
Volume 1, pp. 700-703.

[15] B.K. Horn, Robot Vision, MIT Press, Cambridge, MA, 1986.

[16] B.K. Horn and B.G. Schunck, "Determining Optical Flow," Artificial Intelligence,
1981, Volume 17, pp. 185-203.

[17] S. Hrabar and G. Sukhatme, "A Comparison of Two Camera Configurations for
Optic-Flow Based Navigation of a UAV through Urban Canyons," IEEE/RSJ
International Conference on Intelligent Robots and Systems, September 2004,
Volume 3, pp. 2673-2680.

[18] S. Hrabar, G. Sukhatme, P. Corke, K. Usher, and J. Roberts, "Combined Optic-
Flow and Stereo-Based Navigation of Urban Canyons for a UAV," Submitted to
the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems,
2005.

[19] T. Jebara, A. Azarbayejani, and A. Pentland, "3D Structure from 2D Motion,"
IEEE Signal Processing Magazine, May 1999, pp. 66-84.

[20] T. Kanade, O. Amidi and Q. Ke, "Real-Time and 3D Vision for Autonomous
Small and Micro Air Vehicles," IEEE Conference on Decision and Control,
December 2004, Volume 2, pp. 1655-1662.

[21] M. A. Lewis, "Detecting Surface Features During Locomotion Using Optic Flow,"
IEEE International Conference on Robotics and Automation, May 2002, Volume 1,
pp. 305-310.

[22] L.M. Lorigo, R.A. Brooks, and W.E.L. Grimson, "Visually-Guided Obstacle
Avoidance in Unstructured Environments," IEEE International Conference on
Intelligent Robots and Systems, Grenoble, September 1997, Volume 1, pp. 373-
379.









[23] B. Lucas and T. Kanade, "An Iterative Image Registration Technique with an
Application to Stereo Vision," Proceedings of the DARPA Image Understanding
Workshop, Washington, D.C., 1981, pp. 121-130.

[24] P.C. Merrell, D.J. Lee, and R.W. Beard, "Obstacle Avoidance for Unmanned
Air Vehicles Using Optical Flow Probability Distributions," SPIE Optics East,
Robotics Technologies and Architectures, Mobile Robot XVII, Philadelphia, PA,
October 2004, Volume 5609-04.

[25] B.G. Mobasseri, "Virtual Motion: 3-D Scene Recovery Using Focal Length-
Induced Optic Flow," IEEE International Conference on Image Processing,
Austin, TX, November 1994, Volume 3, pp. 78-82.

[26] D. Nair and J.K. Aggarwal, "Moving Obstacle Detection from a Navigating
Robot," IEEE Transactions on Robotics and Automation, June 1998, Volume 14,
Issue 3, pp. 404-416.

[27] R.C. Nelson, "Flight Stability and Automatic Control, Second Edition,"
WCB/McGraw-Hill Publishing, Boston, MA, 1998.

[28] Z. Rahman, R. Inigo, and E.S. McVey, "Algorithms for Autonomous Visual Flight
Control," International Joint Conference on Neural Networks, Washington, DC,
June 1989, Volume 2, pp. 619.

[29] S. Rathinam and R. Sengupta, "Safe UAV Navigation with Sensor Processing
Delays in an Unknown Environment," IEEE Conference on Decision and Control,
December 2004, Volume 1, pp. 1081-1086.

[30] F. Ruffier and N. Franceschini, "Visually Guided Micro-Aerial Vehicle: Automatic
Take Off, Terrain Following, Landing and Wind Reaction," IEEE International
Conference on Robotics and Automation, April 2004, Volume 3, pp. 2339-2346.

[31] F. Ruffier, S. Viollet, S. Amic, and N. Franceschini, "Bio-Inspired Optical Flow
Circuits for the Visual Guidance of Micro-Air Vehicles," International Symposium
on Circuits and Systems, May 2003, Volume 3, pp. 846-849.

[32] G. Sandini, V. Tagliasco, and M. Tistarelli, "Analysis of Object Motion and
Camera Motion in Real Scenes," IEEE International Conference on Robotics and
Automation, April 1986, Volume 3, pp. 627-633.

[33] B. Sinopoli, M. Micheli, G. Donato, and T.J. Koo, "Vision Based Navigation for
an Unmanned Aerial Vehicle," IEEE International Conference on Robotics and
Automation, 2001, Volume 2, pp. 1757-1764.

[34] K.-T. Song and J.-H. Huang, "Fast Optical Flow Estimation and its Application to
Real-time Obstacle Avoidance," IEEE International Conference on Robotics and
Automation, Seoul, Korea, May 2001, Volume 3, pp. 2891-2896.









[35] B. Sridhar and G.B. Chatterji, "Vision-Based Obstacle Detection and Grouping
for Helicopter Guidance," AIAA Journal of Guidance, Control, and Dynamics,
September 1994, Volume 17, Number 5, pp. 908-914.

[36] M.J. Stephens, R.J. Blissett, D. Charnley, E.P. Sparks and J.M. Pike, "Outdoor
Vehicle Navigation Using Passive 3D Vision," IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, San Diego, CA, June 1989, pp. 556-
562.

[37] B.L. Stevens and F.L. Lewis, Aircraft Control and Simulation, Wiley, Hoboken,
NJ, 2003.

[38] N.O. Stoffler, T. Burkert, and G. Farber, "Real-Time Obstacle Avoidance Using an
MPEG-Processor-based Optic Flow Sensor," IEEE International Conference on
Pattern Recognition, Barcelona, September 2000, Volume 4, pp. 161-166.

[39] C. Taylor, D. Kriegman, and P. Anandan, "Structure and Motion in Two Dimen-
sions from Multiple Images: A Least Squares Approach," Proceedings of the
IEEE Workshop on Visual Motion, Princeton, NJ, October 1991, pp. 242-248.

[40] H. Wang and M. Brady, "A Structure-from-Motion Algorithm for Robot Vehicle
Guidance," Proceedings of the Intelligent Vehicles 1992 Symposium, Detroit, MI,
July 1992, pp. 30-35.

[41] W.M. Wells, "Vision Estimation of 3-D Line Segments from Motion-A Mobile
Robot Vision System," IEEE Transactions on Robotics and Automation, December
1989, Volume 5, Issue 6, pp. 820-825.

[42] Y-S. Yao and R. Chellappa, "Dynamic Feature Point Tracking in an Image
Sequence," Proceedings of the 12th IAPR International Conference on Pattern
Recognition, Jerusalem, Israel, October 1994, Volume 1, pp. 654-657.

[43] G.-S. Young, T.-H. Hong, M. Herman, and J.C.S. Yang, "Obstacle
Detection for a Vehicle Using Optical Flow," Proceedings of the Intelligent
Vehicles 1992 Symposium, Detroit, MI, June 1992, pp. 185-190.















BIOGRAPHICAL SKETCH

Amanda Arvai was born in Patuxent River, Maryland, on January 17, 1982. Her

family moved around the country for her first several years before finally returning

to southern Maryland. Most of her childhood hours were spent playing soccer and

softball. This carried over to her high school years when she began running on the

track team. After graduating from Leonardtown High School in 2000, she attended

the University of Notre Dame, in snowy South Bend, Indiana. Go Irish! She spent her

summers working on the Patuxent River Naval Base for Veridian Engineering on the

F/A-18 Hornet Team. She also spent a summer working at Honeywell in South Bend,

Indiana, where she worked on the Joint Strike Fighter auxiliary power unit fuel control.

In 2004, she graduated from Notre Dame with a degree in mechanical engineering. She

attended the University of Florida for her master's degree, where she worked under the

advisement of Dr. Rick Lind. She married her husband, Bryan Arvai, in August of

2005. They are currently moving to Redondo Beach, CA, where Amanda will work for

Northrop Grumman Space Technology.