VISION-BASED STATE ESTIMATION FOR UNINHABITED AERIAL VEHICLES USING THE COPLANARITY CONSTRAINT

By

THOMAS PHILIP WEBB

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

© 2007 Thomas Philip Webb

ACKNOWLEDGMENTS

I wish to thank my supervisory committee chair (Dr. Andy Kurdila) for his guidance, direction, and support. He is, without doubt, a worldwide leader in this field, and I have been privileged to benefit from his extensive experience in both research and education. I am grateful for his time and effort despite his many other responsibilities. I thank Dr. Rick Lind for his advice and valuable contributions to this research. I also wish to thank Dr. Warren Dixon and Dr. Haniph Latchman for serving on my committee and providing their valuable input.

I would like to express my appreciation to Dr. Chad Prazenica, a researcher and visiting professor at the UF Research and Engineering Education Facility. He has coauthored papers with me, run numerous simulations for me in the vision laboratory, spent valuable time discussing ideas and techniques, and provided much encouragement. I could not have completed this effort without his kind support.

I wish to thank my employer, JE Sverdrup Technology, TEAS Group (Eglin Air Force Base, Florida) for providing my tuition and books during my studies through the Edwin "Bud" George Fellowship.

I thank my loving wife, Libby, for the support and encouragement she has given me during this long and seemingly endless endeavor. Finally, I thank God for the ability and opportunity to make any modest contribution I may have accomplished toward understanding and harnessing his creation.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION
  1.1 Previous Work in this Field
  1.2 Current Research Effort
    1.2.1 Objective
    1.2.2 Outline of Tasks Accomplished
    1.2.3 Contributions

2 THEORY AND CONCEPTS
  2.1 Coplanarity Constraint
  2.2 Scale Factor Ambiguity
  2.3 Implicit Extended Kalman Filter
  2.4 Development of Measurement Model
  2.5 Estimation Algorithm Summary

3 APPLICATION WITH STATISTICAL DYNAMIC MODEL
  3.1 Simulation Description
    3.1.1 Statistical Model
    3.1.2 Simulation of Focal Plane and Vision Processing Output
  3.2 Simulation Results
    3.2.1 Results for No Measurement Noise
    3.2.2 Results in the Presence of Measurement Noise
  3.3 Sensitivity of Algorithm to Measurement Noise
  3.4 Observations from Application with Statistical Model

4 APPLICATION WITH A UAV DYNAMIC MODEL
  4.1 Dynamic Model Description
  4.2 Observability Considerations
  4.3 Simulation Description
  4.4 Results With No Measurement Noise
  4.5 Results With Measurement Noise
  4.6 Degree of Observability
    4.6.1 Modal Observability
    4.6.2 The Measurement Matrix
    4.6.3 Modification of the IEKF Measurement Matrix
  4.7 Results With the Modified IEKF With Measurement Noise

5 ROBUSTNESS AND SENSITIVITY ASSESSMENT
  5.1 Robustness to Modeling Uncertainties
  5.2 Sensitivity to Design Parameters
    5.2.1 Camera Depression Angle
    5.2.2 Camera Field of View
    5.2.3 Measurement Interval
    5.2.4 Number of Feature Points
    5.2.5 Feature Point Location
  5.3 Effect of Measurement Noise
  5.4 Overall Comments on Robustness and Sensitivity

6 IEKF CLOSED LOOP PERFORMANCE
  6.1 Simulation with Computer Generated Feature Points
  6.2 Simulation With Synthetic Imagery

7 CONCLUSION
  7.1 Summary
  7.2 Follow-On Work

APPENDIX
A UAV TRUTH MODEL DESCRIPTION
B AUGMENTATION TO LINEARIZED AIRCRAFT EQUATIONS OF MOTION

LIST OF REFERENCES
BIOGRAPHICAL SKETCH
LIST OF TABLES

5-1 Straight and level trajectory Monte Carlo sets
5-2 Vertical S trajectory Monte Carlo sets
5-3 Horizontal S trajectory Monte Carlo sets
5-4 Camera angle Monte Carlo sets
5-5 Camera field of view Monte Carlo sets
5-6 Measurement interval Monte Carlo sets
5-7 Number of feature points Monte Carlo sets
5-8 Feature point location Monte Carlo sets
5-9 Measurement noise Monte Carlo sets
A-1 Aerodynamic and mass parameters

LIST OF FIGURES

1-1 Conceptual model
2-1 The perspective projection camera model
2-2 Geometry of the coplanarity constraint
2-3 Measurement model geometry
3-1 Trajectory for statistical dynamic model numerical simulation
3-2 Focal plane measurement
3-3 Feature point movement on focal plane for initial run with no measurement noise
3-5 Estimation results with .03 pixel measurement noise
3-6 Estimation results with .14 pixel measurement noise
3-7 Estimation results with .47 pixel measurement noise added at 6 seconds
3-8 Focal plane for results with measurement noise added at 6 seconds
3-9 Estimation results with extended baseline, .47 pixel measurement noise
4-1 Eight foot wingspan class aircraft simulated for this effort
4-2 Straight and level trajectory simulation, no measurement noise
4-3 Errors from straight and level trajectory simulation, no measurement noise
4-4 Vertical S trajectory simulation, no measurement noise
4-5 Errors from vertical S trajectory simulation, no measurement noise
4-6 Horizontal S trajectory simulation, no measurement noise
4-7 Errors from horizontal S trajectory simulation, no measurement noise
4-8 Errors from straight and level trajectory, one pixel measurement noise
4-9 Errors from vertical S trajectory, one pixel measurement noise
4-10 Errors from horizontal S trajectory, one pixel measurement noise
4-11 Monte Carlo runs, straight and level trajectory with measurement noise
4-12 Straight and level trajectory errors with measurement noise, increased measurement interval
4-13 Modal observability for straight and level trajectory, no measurement noise
4-14 Measurement matrix values for straight and level, no noise
4-15 Measurement matrix values for straight and level, one pixel measurement noise
4-16 Errors for modified IEKF, straight and level trajectory with measurement noise
4-17 Errors for modified IEKF, vertical S trajectory with measurement noise
4-18 Errors for modified IEKF, horizontal S trajectory with measurement noise
4-19 Modified IEKF Monte Carlo, straight and level with measurement noise
4-20 Modified IEKF Monte Carlo, vertical S with measurement noise
4-21 Modified IEKF Monte Carlo, horizontal S with measurement noise
5-1 Six run Monte Carlo set errors, case 1, straight and level trajectory
5-2 Case 1 straight and level trajectory, ensemble mean error and bounds
5-3 Case 1 vertical S trajectory, ensemble mean error and bounds
5-4 Case 1 horizontal S trajectory, ensemble mean error and bounds
5-5 Straight and level trajectory, case statistics
5-6 Vertical S trajectory, case statistics
5-7 Horizontal S trajectory, case statistics
5-8 Focal plane history for straight and level trajectory with camera looking down, intermediate baseline position, and level
5-9 Straight and level trajectory, camera angle statistics
5-10 Vertical S trajectory, camera angle statistics
5-11 Horizontal S trajectory, camera angle statistics
5-12 Straight and level trajectory, camera FOV statistics
5-13 Vertical S trajectory, camera FOV statistics
5-14 Horizontal S trajectory, camera FOV statistics
5-15 History of statistics from .5 second measurement interval ensemble, straight and level trajectory
5-16 Straight and level trajectory, measurement interval statistics
5-17 Vertical S trajectory, measurement interval statistics
5-18 Horizontal S trajectory, measurement interval statistics
5-19 Straight and level trajectory, number of feature points statistics
5-20 Vertical S trajectory, number of feature points statistics
5-21 Horizontal S trajectory, number of feature points statistics
5-22 Focal plane history for straight and level trajectory with area constraints
5-23 Focal plane history for straight and level trajectory with area constraints
5-24 Straight and level trajectory, feature point location statistics
5-25 Vertical S trajectory, feature point location statistics
5-26 Horizontal S trajectory, feature point location statistics
5-27 Straight and level trajectory, measurement noise statistics
5-28 Vertical S trajectory, measurement noise statistics
5-29 Horizontal S trajectory, measurement noise statistics
6-1 Closed loop simulation
6-2 Autopilot for closed loop simulation, computer generated feature points
6-3 Estimated and actual states, computer generated feature points
6-4 Horizontal trajectory, computer generated feature points
6-5 Vertical trajectory, computer generated feature points
6-6 Focal plane image from synthetic imagery simulation
6-7 Autopilot for closed loop simulation, synthetic imagery
6-8 Estimated and actual states, synthetic feature points
6-9 Horizontal trajectory, synthetic feature points
6-10 Vertical trajectory, synthetic feature points
6-11 Estimated and actual states, synthetic feature points, new guidance
6-12 Horizontal trajectory, synthetic feature points, new guidance
6-13 Vertical trajectory, synthetic feature points, new guidance
A-1 UAV truth model

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

VISION-BASED STATE ESTIMATION FOR UNINHABITED AERIAL VEHICLES USING THE COPLANARITY CONSTRAINT

By

Thomas Philip Webb

May 2007

Chair: Andrew J. Kurdila
Major: Aerospace Engineering

We developed and evaluated a vision-based state estimation algorithm for uninhabited aerial vehicles (UAVs) using the implicit extended Kalman filter (IEKF) and the coplanarity constraint (also known as the epipolar or essential constraint). The coplanarity constraint, a well-known property in the structure from motion (SFM) field, has advantages for this application in that the feature point locations in three-dimensional space do not have to be known and tracked, and in that feature points can be discarded and acquired as necessary. This reduces the computational load, which is important for real-time applications such as aircraft control. Advantages of the IEKF are that, in principle, the current estimate uses all previous information, not just the current observations, and that the estimate will propagate forward in an orderly fashion in the case of interrupted or reduced measurements. The dynamics of the aircraft are included in the process model, which improves the observability of the states and resolves the SFM scale factor ambiguity. The algorithm was implemented in a numerical simulation and exhibited divergence problems in the presence of measurement noise. These effects were particularly evident in the velocity estimates.
The problems were eliminated by zeroing out the small velocity-dependent terms in the measurement matrix. The algorithm was exercised in a Monte Carlo fashion and found to be robust to errors in the process model and to measurement noise. Sensitivities to filter and focal plane implementation parameters, including camera depression angle, field of view, measurement interval, and feature point location and number, were also assessed. The modified estimator was then employed in a closed loop UAV simulation to provide feedback to a simple autopilot. The simulation demonstrated that the state estimates provided were sufficiently accurate to allow control of the UAV through successful waypoint navigation. This simulation used feature points generated at random locations in the field of view. A second closed loop simulation was successfully run using synthetic imagery from the University of Florida's vision laboratory and a Lucas-Kanade feature point tracking algorithm.

CHAPTER 1
INTRODUCTION

Interest in autonomous flight of small uninhabited aircraft has been increasing over the past few years. Also, the demand and expectation for smaller and smaller vehicles have increased as the size and weight of sensors, actuators, microprocessors, and propulsion systems have decreased [1, 2]. Many of these aircraft employ a video camera as a sensor in accomplishing their mission (e.g., information gathering, surveillance, route documentation) or for manual control during some phases of flight using a remote human pilot. Dual use of video for state estimation in a flight control system as well as for mission accomplishment is attractive in that payload space and weight can be conserved, allowing for a smaller aircraft for a given mission. This research explores one approach for using visual information as the sole measurement for estimating an adequate number of aircraft states with enough accuracy to drive an autopilot. A conceptual model is shown in Figure 1-1.
Multiple use of components for different functions in air vehicle design is certainly not a new idea. A remarkable example in the extreme is the Gossamer Condor, in which designer Paul MacCready combined the propulsion power source, sensors, flight control computer, control actuators, and flight recorder into a single "off-the-shelf" package in the person of Bryan Allen [3].

1.1 Previous Work in this Field

Vision-based state estimation for control of aircraft has been around as long as the airplane, since vision is the sense most used by a human pilot. Of course, flying creatures have been doing this for much longer. The University of Florida has successfully flown micro air vehicles (MAVs) using an autopilot based on a vision-based horizon tracking scheme [4]. The Robotics Institute, Carnegie Mellon University has much experience with "structure from motion" (SFM) research and has applied it to aerial vehicles in recent years [5]. Georgia Institute of Technology has recently flown a glider that successfully homes to a target of known size, shape, and color using vision-only navigation, guidance, and control [6]. The technique used in my research is drawn from the body of SFM work.

The computer vision problem of SFM has been described as "estimating 3-D motion and structure of a rigid object from a sequence of monocular images" [8]. This is equivalent to estimating the motion of a moving camera and the structure of a fixed environment from a sequence of images. SFM methods can generally be categorized as causal or noncausal. Causal methods are those that, at a given point in time, use only current and past information. These methods can be recursive in nature. In noncausal schemes, batch processing is normally used after the fact to include all information for an estimate at any given time of interest. Any algorithm used for real-time control would obviously have to be causal [9].
Soatto and Perona [8] explore and compare the performance of a number of recursive methods implemented using a form of the extended Kalman filter (EKF). The basic measurement used in all these methods is the movement of "feature points" from frame to frame on the camera focal plane. A feature point is a fixed point in the environment that is identified and tracked as the camera moves. The methods examined use various constraints imposed on feature point image movement on the focal plane by geometry. Also, some methods attempt to estimate the positions of the feature points in the environment (the structure) at the same time as the camera motion variables are estimated, while other methods are able to decouple these operations. If feature point location (structure) is not important, the decoupled methods are attractive for real-time implementation because they are less demanding computationally, since the distance to feature points is not required to be known initially and updated. Also, new feature points can be added and old ones discarded at will. These schemes are able to deduce camera angular rates and direction of velocity (or, for the discrete case, incremental changes in angular orientation and direction of position change). The magnitude of the translational motion cannot be determined directly without some reference length. This is the well-known scale factor ambiguity, which comes from the fact that the camera model used cannot distinguish whether a feature point is close with slow relative motion or farther away with faster relative motion [5, 8, 10].

Azarbayejani and Pentland [11] developed and demonstrated an EKF method of estimating structure, camera motion, and camera focal length. It carries a state in the filter for each feature point, corresponding roughly to scaled range, as well as the incremental rotations and scaled translations. Chiuso et al. [9] improved upon this general approach by handling the problem of losing and adding feature points (feature point occlusion). The method allows for the use of an arbitrary dynamic model for the motion variables; however, the published examples included only statistical random walk models.

Soatto formalized the implicit EKF (IEKF) to implement recursive decoupled methods using two different implicit geometric constraints: the coplanarity constraint (also known as the epipolar or essential constraint) [12] and the subspace constraint [13]. These methods are attractive for use in estimating motion variables in real time since, as mentioned previously, filter states for the feature points do not have to be carried. The motion parameters estimated are formulated as angular rates and direction of velocity. Again, statistical models are implemented in the examples.

A causal, but non-EKF, method has been developed by Kanade et al. [5]. It links two-frame SFM solutions of the essential constraint computed by a conventional nonrecursive computer vision algorithm. Drawbacks to this method for the current application appear to be that, since there is no dynamic model, there is no mechanism for incorporating available information about the dynamic characteristics of the plant. Also, the current state estimate uses only information from the current and last focal plane measurements, while the EKF blends in information from all previous measurements. Even so, this method was shown to have promising results using vision data obtained during an actual model aircraft flight of short duration with good initialization. Kehoe et al. [14, 15] developed a method based on nonlinear optimization techniques applied to the optical flow of tracked feature points. This method is also not filter based and does not use any model information. The technique was demonstrated using simulation.
Gurfil and Rotstein [16] added an aircraft dynamic model to the subspace constraint IEKF method mentioned earlier [13]. Their numerical simulation results demonstrated estimation of aircraft angle of attack, sideslip, and angular rates for a single trajectory using camera focal plane measurements with a limited amount of measurement noise. They did not attempt to use the state estimates in aircraft control.

1.2 Current Research Effort

1.2.1 Objective

The objective of this research effort is to develop and demonstrate a causal, vision-based state estimation algorithm for autonomous uninhabited aerial vehicles (UAVs) capable of providing enough states of sufficient accuracy to reliably control the vehicle in an urban, dynamic environment. The approach used is based on the coplanarity constraint IEKF method mentioned earlier [12]. This method is attractive as a departure point for several reasons. It is decoupled from the structure, which decreases computational demands and allows feature points to be efficiently discarded and added as necessary as the field of view changes. It is not dependent on a discernible horizon, which is likely to be missing in an urban environment. If feature point measurements are reduced or even lost completely for a time, the Kalman filter will still provide state estimates that degrade gracefully, which contributes to the robustness and integrity of the control system. And finally, the IEKF lends itself to the incorporation of a specific plant model, which provides more information to the process, allowing the estimation of additional states.

It is important to note that we assumed that the feature point correspondence and tracking problem is solved. That is, the location of feature points on the focal plane from frame to frame is known within a certain level of accuracy. That is not to say that the location of the feature points in space is known. This vision processing problem is certainly not trivial and is the subject of much ongoing research. The Lucas-Kanade algorithm [5] is one proven method for this task.

1.2.2 Outline of Tasks Accomplished

A simulation of a small UAV was developed, and "truth" data sets for various trajectories were compiled. The IEKF was augmented with a simplified dynamic model representing the aircraft and a rigidly attached focal plane. The filter was exercised on the trajectories using randomly chosen feature points moving on a simulated focal plane in accordance with the particular trajectory being run. The filter exhibited some divergence problems in the velocity states when measurement noise was added. A modification to the filter measurement model was made which alleviated the problem. Filter parameters were tuned during this process, and state estimation accuracy was evaluated. Robustness of the filter to initialization errors, vehicle modeling errors, and measurement noise was addressed using Monte Carlo simulation runs. Sensitivity of its performance to camera orientation, field of view, measurement frequency, and feature point distribution and density was investigated. An autopilot tailored to the estimated states was designed, and a closed loop simulation including the IEKF, aircraft simulation, and focal plane simulation was developed to demonstrate the suitability of the filter for use in autonomous waypoint navigation for the UAV. The filter was also exercised using synthetic video produced by the University of Florida Research and Engineering Education Facility MAV vision laboratory. Some of this research has been documented previously in references [17] and [18].

1.2.3 Contributions

This research effort makes several contributions to the state of the art in vision-based state estimation and control for autonomous UAVs. The IEKF based on the coplanarity constraint was applied to an agile aerial vehicle.
A specific dynamic model was used in the IEKF, which expanded its capabilities in two respects. First, the plant dynamics allow the estimation of states in addition to angular rates and direction of velocity; in this case, they are the Euler angles of pitch and roll. Second, the scale factor ambiguity is resolved for the velocities. A final contribution is that this effort demonstrates that an autonomous UAV can be productively controlled using information derived solely from focal plane measurements in a no-horizon environment.

Figure 1-1. Conceptual model.

CHAPTER 2
THEORY AND CONCEPTS

2.1 Coplanarity Constraint

The fundamental geometric constraint we used as a measurement is the coplanarity constraint, also called the epipolar constraint or the essential constraint. This relationship, credited to H. C. Longuet-Higgins [19], uses the perspective projection or pinhole camera model (Figure 2-1). The coplanarity constraint is formulated from the principle that the position vectors of a feature point relative to the camera, projected onto the focal plane at two instants in time, must be coplanar with the translation vector of the origin of the camera frame. This is shown in Figure 2-2, where X and X′ denote the position vectors of feature point P in the camera reference frame, x and x′ represent the corresponding position vectors of the image of P on the focal plane, and T indicates the translation vector of the origin of the camera frame. The development of this form of the constraint equation is found throughout the SFM literature [12] and is repeated here for completeness. Since the scalar triple product of three coplanar vectors is zero, the coplanarity constraint for this feature point is expressed as

    X · (T × R X′) = 0    (2-1)

and, equivalently, for the focal plane,

    x · (T × R x′) = 0    (2-2)

since x is collinear with X and x′ is collinear with X′. Here, the common coordinate system is arbitrarily chosen to be that of the camera at time t.
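The geometry behind equations 2-1 and 2-2 can be checked numerically. The sketch below (Python/NumPy; the rotation rate, translation, and feature point coordinates are made-up values chosen purely for illustration) constructs a rigid frame-to-frame camera motion, expresses a feature point in the camera frame at both instants, and confirms that both triple products vanish.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x] so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(w):
    """Rotation matrix exp([w x]) via the Rodrigues closed form."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

# Hypothetical frame-to-frame motion: R maps camera coordinates at time
# t + dt into camera coordinates at time t; T is the translation of the
# camera origin expressed in the time-t frame.
R = rot(np.array([0.02, -0.01, 0.03]))
T = np.array([0.4, -0.1, 1.0])

X = np.array([2.0, 1.5, 10.0])   # feature point P in the time-t frame
Xp = R.T @ (X - T)               # the same point in the t + dt frame

x, xp = X / X[2], Xp / Xp[2]     # focal-plane projections [mu, nu, 1]

print(X @ np.cross(T, R @ Xp))   # Eq. 2-1: ~ 0 up to round-off
print(x @ np.cross(T, R @ xp))   # Eq. 2-2: ~ 0 as well
```

The key relation is X = R X′ + T: substituting it into the triple product makes the cancellation explicit, and dividing each vector by its third component shows why the same identity holds for the focal-plane projections.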
R is the orthonormal rotation matrix that transforms camera coordinates at time t + Δt to camera coordinates at time t. To handle multiple feature points, it is convenient to make the following definitions. Let

    x̃ ≜ [μ ν f]ᵀ

where μ and ν are the coordinate components in the focal plane as shown in Figure 2-1 and f is the camera focal length. If the components are normalized by the focal length, then let

    x ≜ [μ ν 1]ᵀ    and    x′ ≜ [μ′ ν′ 1]ᵀ

Also, a skew-symmetric matrix is formed from T to compute the cross product,

             [  0   −t₃   t₂ ]
    [T×] ≜   [  t₃    0   −t₁ ]
             [ −t₂   t₁    0  ]

and the matrix E is defined as

    E ≜ [T×] R    (2-3)

E belongs to a special class of matrices called "essential matrices" described in the SFM literature [10, 12]. Equation 2-2 can now be written as

    [μᵢ νᵢ 1] E [μ′ᵢ ν′ᵢ 1]ᵀ = 0

for the ith feature point. Carrying out the multiplication and making the subsequent definitions yields

    Cᵢ e = 0    (2-4)

where e is the 9×1 column vector composed of the stacked columns of E and

    Cᵢ ≜ [μᵢμ′ᵢ  νᵢμ′ᵢ  μ′ᵢ  μᵢν′ᵢ  νᵢν′ᵢ  ν′ᵢ  μᵢ  νᵢ  1]    (2-5)

The Cᵢ rows are stacked to form the constraint for all feature points,

    C e = 0    (2-6)

Note that C contains only the observations from the focal plane, each row for a different feature point, and e contains only camera movement parameters.

2.2 Scale Factor Ambiguity

Scale factor ambiguity was mentioned in chapter 1. For the coplanarity constraint, this limitation can be observed by considering the translation vector T in Figure 2-2. The coplanarity constraint provides information only about the orientation of T, not its magnitude, since the triple product will be zero for any magnitude of T. In applying the constraint to solve for motion, an additional constraint is placed on T. In the literature, the usual additional constraint is to set the magnitude of T to unity, with the understanding that it is the direction of translation that is produced. A similar condition will be imposed for this work.
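Both the assembly of C and the scale ambiguity can be illustrated in a few lines. The following Python/NumPy sketch uses made-up geometry (it is not the simulation used later in this work): each feature point contributes one row of C, built here as a Kronecker product matched to the stacked-columns ordering of e, and scaling T by any factor leaves the constraint satisfied.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical rigid motion between frames (pure translation keeps it short).
R = np.eye(3)
T = np.array([0.3, -0.2, 1.0])

# Made-up feature points in the time-t camera frame.
points = np.array([[2.0, 1.0, 8.0],
                   [-1.0, 0.5, 6.0],
                   [0.7, -1.2, 9.0]])

E = skew(T) @ R              # essential matrix, Eq. 2-3
e = E.flatten(order="F")     # 9x1 vector of the stacked columns of E

rows = []
for X in points:
    Xp = R.T @ (X - T)             # the point in the t + dt camera frame
    x, xp = X / X[2], Xp / Xp[2]   # [mu, nu, 1] and [mu', nu', 1]
    # One row per feature point: [mu,nu,1] E [mu',nu',1]^T equals row @ e.
    rows.append(np.kron(xp, x))
C = np.vstack(rows)

print(np.abs(C @ e).max())   # ~ 0: Eq. 2-6 holds for all feature points
# Scale ambiguity: any magnitude of T satisfies the same constraint.
print(np.abs(C @ (skew(5.0 * T) @ R).flatten(order="F")).max())   # also ~ 0
```

The `np.kron(xp, x)` ordering matters: with e taken as the stacked columns of E, entry 3j+i of the row must be x[i]·x′[j], which is exactly what the Kronecker product with x′ first produces.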
This ambiguity also extends to the lengths of the position vectors from the camera to the feature point, X and X' since it is the components of x and x' measured relative to the focal plane that are used. Scale ambiguity is the price paid for being able to decouple structure from motion which is arguably a good tradeoff when one is only interested in camera motion. Scale can be restored at any time if the magnitude of T is known or the magnitude of Xor X' is known for any single feature point since the three form the sides of a triangle whose angles are known. 2.3 Implicit Extended Kalman Filter A brief review of the discrete extended Kalman filter (EKF) algorithm is presented followed by an explanation of the modification to the measurement update that leads to the implicit extended Kalman filter (IEKF). The EKF algorithm is derived and can be found in any of a number of texts, for example Gelb et al. or Grewal and Andrews [20, 21]. For the discrete case the results are: * Dynamic model: xk =k1kl k1 Wk1 k ~ N(0, Qk ) (27) Where x is the state vector, u is an independent known control input vector, w is a white random vector sequence with properties denoted by matrix N(0, Q) meaning normal distribution, zero mean, and covariance matrix Q. The symbol represents a known vector function and k is shorthand for kcdt. * Measurement model: Zk =hk Xk +k, k ~ N(0, Rk ) (28) Note that this gives z, the measurement, explicitly. * State propagation: xk = fk1 1, u) (29) Where "" denotes a value prior to the measurement update and "+" is the value post update. The "^`" indicates that the value is an estimate. 
* Measurement prediction:
\[
\hat{z}_k = h_k(\hat{x}_k^-) \qquad (2\text{-}10)
\]

* State covariance propagation:
\[
P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T + Q_{k-1} \qquad (2\text{-}11)
\]
where the state transition matrix is
\[
\Phi_k = \left.\frac{\partial f_k}{\partial x}\right|_{\hat{x}_k} \qquad (2\text{-}12)
\]

* Gain computation:
\[
K_k = P_k^- H_k^T \left[ H_k P_k^- H_k^T + R_k \right]^{-1} \qquad (2\text{-}13)
\]
where the measurement matrix is
\[
H_k = \left.\frac{\partial h_k}{\partial x}\right|_{\hat{x}_k^-} \qquad (2\text{-}14)
\]

* State update:
\[
\hat{x}_k^+ = \hat{x}_k^- + K_k (z_k - \hat{z}_k) \qquad (2\text{-}15)
\]

* Covariance update:
\[
P_k^+ = [I - K_k H_k] P_k^- \qquad (2\text{-}16)
\]

In the IEKF, the explicit measurement model is replaced with an implicit constraint involving the measurements. For this case the coplanarity constraint, Equation 2-6, is used. This leads to the following modifications as given by Soatto [12]. The quantity Ce becomes the predicted "pseudo measurement", ẑ_k. The measurement z_k is replaced by zero, the true value of the constraint. The state update, Equation 2-15, now becomes

\[
\hat{x}_k^+ = \hat{x}_k^- - K_k\, C(\mu_1, \nu_1, \mu_1', \nu_1', \ldots, \mu_N, \nu_N, \mu_N', \nu_N')_k\, e(\hat{x}_k^-) \qquad (2\text{-}17)
\]

where N is the number of feature points. The measurement matrix H_k is more specifically defined as

\[
H_k \triangleq \frac{\partial (Ce)}{\partial x} = C \frac{\partial e}{\partial x} \qquad (2\text{-}18)
\]

Also, it is useful to express the measurement noise covariance term (R_k for the EKF) in terms of the covariance of the measurement error of the positions of the feature points on the focal plane, R_k (which is not the same as the covariance of the error in Ce). To first order, this is

\[
\tilde{R}_k = D_k R_k D_k^T \qquad (2\text{-}19)
\]

where

\[
R_k \triangleq \mathrm{cov}\,[\delta\mu_1\; \delta\nu_1\; \delta\mu_1'\; \delta\nu_1'\; \cdots\; \delta\mu_N\; \delta\nu_N\; \delta\mu_N'\; \delta\nu_N']^T \in \mathbb{R}^{4N\times 4N}
\]

δ denotes the measurement error of the particular parameter, N is the number of feature points, and

\[
D_k \triangleq \frac{\partial (Ce)}{\partial [\mu_1, \nu_1, \mu_1', \nu_1', \ldots, \mu_N, \nu_N, \mu_N', \nu_N']} \qquad (2\text{-}20)
\]

For this IEKF the gain, Equation 2-13, now becomes

\[
K_k = P_k^- H_k^T \left[ H_k P_k^- H_k^T + D_k R_k D_k^T \right]^{-1} \qquad (2\text{-}21)
\]

2.4 Development of Measurement Model

In this section, the specifics of the measurement model for the coplanarity constraint in the IEKF are developed in terms of the state variables. In reference [12], Soatto described two approaches with regard to the state variables used in the IEKF. The first assumed the usual quantities of linear and angular velocities, which he referred to as the "local coordinates estimator."
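The modified update of Equations 2-17 and 2-21 amounts to a standard Kalman update in which the innovation is the predicted constraint value Ce (the "true" measurement being identically zero) and the focal-plane noise is mapped through D. A minimal numerical sketch follows; the function and variable names are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def iekf_update(x_pred, P_pred, Ce, H, D, R_feat):
    """One IEKF measurement update (Equations 2-17, 2-19, 2-21).

    Ce     : predicted pseudo measurement C @ e (constraint residual)
    H      : d(Ce)/dx evaluated at the predicted state
    D      : d(Ce)/d(focal-plane coordinates)
    R_feat : covariance of the focal-plane measurement errors
    """
    S = H @ P_pred @ H.T + D @ R_feat @ D.T      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # gain, Equation 2-21
    x_upd = x_pred - K @ Ce                      # update toward zero, Equation 2-17
    P_upd = (np.eye(x_pred.size) - K @ H) @ P_pred
    return x_upd, P_upd
```

Since the constraint replaces an explicit measurement, the update drives Ce toward zero rather than tracking a sensed quantity directly.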
The second used the components of the essential matrix E as the state variables and was referred to as the "essential estimator." The essential estimator results in a linear Kalman filter; however, a nonlinear transformation is required to obtain the desired state variables. The local coordinates estimator yielded slightly better results according to Soatto [12] and is a more straightforward implementation. That is the approach used in this work and described here. In section 2.1 it was shown that the camera frame-to-frame movement information is contained in the essential matrix, E, which is comprised of the translation vector, T, and the rotation matrix, R. The objective then is to express these in terms of aircraft motion variables. T and R are determined by the linear and angular velocities of the camera with respect to the environment, denoted as vectors v_c and ω_c, respectively. These in turn are related to the velocity vector of the aircraft center of mass, V_b, and the aircraft angular velocity vector, ω_b. The six components of V_b and ω_b comprise the state vector for purposes of this section. The geometry for this is shown in Figure 2-3. The right-handed set of unit vectors {i_c, j_c, k_c} describes the camera-fixed reference frame, and likewise the right-handed set {i_b, j_b, k_b} describes the aircraft body-fixed frame. In the development that follows, R will be determined as a function of ω_c and T will be determined as a function of v_c and ω_c. Then ω_c and v_c will be expressed in terms of ω_b and V_b. It is convenient to make the following definitions for the velocity components,

\[
v_c \triangleq [u_c, v_c, w_c]^T, \qquad \omega_c \triangleq [p_c, q_c, r_c]^T \qquad (2\text{-}22)
\]

and for the skew-symmetric matrix,

\[
[\omega_c\times] \triangleq \begin{bmatrix} 0 & -r_c & q_c \\ r_c & 0 & -p_c \\ -q_c & p_c & 0 \end{bmatrix} \qquad (2\text{-}23)
\]

Note: as stated previously, these velocities are with respect to the environment even though they are coordinated in different frames. The rotation matrix R can be computed using the well-known exponential expression (e.g., from chapter two of reference [22]):

\[
R = e^{[\omega_c\times]\Delta t} \qquad (2\text{-}24)
\]

where Δt is the time increment between images.
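The matrix exponential of Equation 2-24 and its truncated Taylor series (Equation 2-25, next) are easy to check numerically against the closed-form Rodrigues formula. The sketch below is illustrative; the function names are assumptions of this example.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rotation_taylor(w, dt, terms=4):
    """Truncated Taylor series for R = exp([w x] dt) (Equations 2-24, 2-25)."""
    A = skew(w) * dt
    R = np.eye(3)
    term = np.eye(3)
    for k in range(1, terms):
        term = term @ A / k     # A^k / k!
        R = R + term
    return R

def rotation_rodrigues(w, dt):
    """Closed-form Rodrigues formula, used here only as a reference."""
    theta = np.linalg.norm(w) * dt
    if theta == 0.0:
        return np.eye(3)
    K = skew(np.asarray(w) / np.linalg.norm(w))
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

For the small rotation angles implied by a 20 Hz frame rate, a few Taylor terms already agree with the exact exponential to well below measurement-noise level.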
This assumes that ω_c is constant during Δt, which is not too bad an assumption if Δt is not too long. One popular method of evaluating this matrix exponential is Rodrigues' formula (see again chapter two of reference [22]). For this work the more general truncated Taylor series is used:

\[
R = e^{[\omega_c\times]\Delta t} = I + [\omega_c\times]\Delta t + \frac{([\omega_c\times]\Delta t)^2}{2!} + \frac{([\omega_c\times]\Delta t)^3}{3!} + \cdots \qquad (2\text{-}25)
\]

In computing the translation vector T, recall that it is to be expressed in terms of the camera frame of reference at the first of the two images. In general, the camera is rotating as it is translating, which means the reference frame for [u_c, v_c, w_c] is rotating as well. Defining τ as the time from the first image of the pair during Δt, an infinitesimal T on this interval is

\[
dT = R(\tau)\, v_c\, d\tau = e^{[\omega_c\times]\tau}\, v_c\, d\tau \qquad (2\text{-}26)
\]

In carrying out the integration it is assumed that the values for [u_c, v_c, w_c] as well as for ω_c are constant:

\[
T = \int_0^{\Delta t} e^{[\omega_c\times]\tau}\, v_c\, d\tau = \left( I\,\Delta t + \frac{[\omega_c\times]\Delta t^2}{2!} + \frac{[\omega_c\times]^2\Delta t^3}{3!} + \cdots \right) v_c \qquad (2\text{-}27)
\]

Equations 2-25 and 2-27 give R and T in terms of camera-relative velocities. For the IEKF measurement model these need to be written in terms of aircraft-relative velocities. This requires knowledge of the orientation and displacement of the camera reference frame with respect to the aircraft body reference frame. Let the orientation of the camera frame with respect to the aircraft frame be described by the usual ordered set of Euler angles ψ_c, θ_c, φ_c about the 3, 2, 1 axes, respectively, and let the corresponding rotation matrix that transforms from the aircraft frame to the camera frame be denoted by R_bc. Referring to Figure 2-3, these can be thought of as the camera gimbal angles. Providing for a changing orientation, let the angular rate of the camera with respect to the aircraft be defined in the aircraft frame as ω_bc ≜ [p_bc, q_bc, r_bc][i_b, j_b, k_b]^T. This would represent the camera slew rates. Let the displacement from the origin of the aircraft frame to that of the camera frame be defined in the aircraft frame as d_bc ≜ [x_bc, y_bc, z_bc][i_b, j_b, k_b]^T. Again, it is assumed that the values for ψ_c, θ_c, φ_c, ω_bc, and d_bc are known at all times.
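The term-by-term integral of Equation 2-27 can be sketched directly; for a pure rotation about one axis it can be compared with the closed-form integral. The function name and the test motion below are illustrative assumptions.

```python
import math
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def translation_series(v_c, w_c, dt, terms=3):
    """T = integral over [0, dt] of exp([w_c x] tau) v_c dtau, integrated
    term by term under the constant-velocity assumption (Equation 2-27)."""
    A = skew(w_c)
    Ak = np.eye(3)          # running power [w_c x]^k
    T = np.zeros(3)
    for k in range(terms):
        # integral of ([w_c x] tau)^k / k! over [0, dt] is [w_c x]^k dt^(k+1)/(k+1)!
        T = T + (Ak @ np.asarray(v_c, dtype=float)) * dt**(k + 1) / math.factorial(k + 1)
        Ak = Ak @ A
    return T
```

With zero angular rate the series collapses to T = v_c Δt, and for a rotation about the 3-axis it reproduces the familiar sin/(1 − cos) arc-segment integrals.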
The camera relative velocities can now be written as the following:

\[
R_{bc} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi_c & \sin\phi_c \\ 0 & -\sin\phi_c & \cos\phi_c \end{bmatrix} \begin{bmatrix} \cos\theta_c & 0 & -\sin\theta_c \\ 0 & 1 & 0 \\ \sin\theta_c & 0 & \cos\theta_c \end{bmatrix} \begin{bmatrix} \cos\psi_c & \sin\psi_c & 0 \\ -\sin\psi_c & \cos\psi_c & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2\text{-}28)
\]

\[
v_c = \begin{bmatrix} u_c \\ v_c \\ w_c \end{bmatrix} = R_{bc} \left( \begin{bmatrix} u_b \\ v_b \\ w_b \end{bmatrix} + \omega_b \times d_{bc} + \dot{d}_{bc} \right) \qquad (2\text{-}29)
\]

where the dot denotes the time rate of change of the coordinate, and

\[
\omega_c = \begin{bmatrix} p_c \\ q_c \\ r_c \end{bmatrix} = R_{bc} \left( \begin{bmatrix} p_b \\ q_b \\ r_b \end{bmatrix} + \begin{bmatrix} p_{bc} \\ q_{bc} \\ r_{bc} \end{bmatrix} \right) \qquad (2\text{-}30)
\]

The expressions in 2-29 and 2-30 can be substituted into 2-25 and 2-27 to obtain R and T in terms of aircraft motion variables, which are the IEKF state variables. At this point several simplifying assumptions will be made. The camera is assumed to be mounted rigidly to the aircraft, i.e., the camera orientation and position relative to the aircraft frame are fixed, so that ψ_c, θ_c, φ_c, and d_bc are constant and ω_bc is zero. Also, the camera is assumed to be mounted at the aircraft center of mass, so that d_bc is zero. Finally, the camera is mounted in the aircraft vertical plane at a fixed elevation (depression) angle, so that ψ_c and φ_c are zero. These assumptions simplify the computations and reduce the potential for errors but do not detract from the overall objectives and results of this effort. If at any time it is desirable to remove any of these assumptions, it would be a straightforward process to include the appropriate terms and reformulate the implementation used here. With these assumptions and the appropriate substitutions,

\[
[\omega_c\times] = \begin{bmatrix} 0 & -(p_b\sin\theta_c + r_b\cos\theta_c) & q_b \\ p_b\sin\theta_c + r_b\cos\theta_c & 0 & -(p_b\cos\theta_c - r_b\sin\theta_c) \\ -q_b & p_b\cos\theta_c - r_b\sin\theta_c & 0 \end{bmatrix} \qquad (2\text{-}31)
\]

\[
v_c = \begin{bmatrix} u_b\cos\theta_c - w_b\sin\theta_c \\ v_b \\ u_b\sin\theta_c + w_b\cos\theta_c \end{bmatrix} \qquad (2\text{-}32)
\]

From R and T, the essential matrix E can be calculated and the predicted pseudo measurement from section 2.3 can be expressed in terms of the available state variable estimates. The measurement matrix, H, must also be expressed in terms of the aircraft state variable estimates.
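Under the simplifying assumptions above, Equations 2-29 through 2-32 reduce to a single fixed rotation by the elevation angle θ_c about the 2-axis. A small sketch follows; the function names and the exact sign convention for θ_c are assumptions of this example and should be matched to the geometry of Figure 2-3.

```python
import numpy as np

def body_to_camera(theta_c):
    """Rotation from aircraft body axes to camera axes for a camera at a
    fixed elevation angle theta_c about the body 2-axis (psi_c = phi_c = 0)."""
    c, s = np.cos(theta_c), np.sin(theta_c)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def camera_rates(v_b, w_b, theta_c):
    """Camera linear and angular velocity for a rigidly mounted camera at the
    center of mass (Equations 2-29 and 2-30 with d_bc = 0 and omega_bc = 0)."""
    Rbc = body_to_camera(theta_c)
    return Rbc @ np.asarray(v_b, dtype=float), Rbc @ np.asarray(w_b, dtype=float)
```

A rotation about the 2-axis leaves the lateral components v and q unchanged, which is visible in Equations 2-31 and 2-32.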
As indicated in Equation 2-18, repeated below, determining H requires taking the partial derivatives of E with respect to the aircraft state variables,

\[
H_k \triangleq \frac{\partial (Ce)}{\partial x} = C \frac{\partial e}{\partial x} \qquad (2\text{-}18\ \text{repeated})
\]

where

\[
x \triangleq [u_b, v_b, w_b, p_b, q_b, r_b]^T \qquad (2\text{-}34)
\]

H is an N×6 matrix, with N corresponding to the number of feature points being tracked during a given frame interval. Matrix C is the N×9 matrix described earlier, and ∂e/∂x is the 9×6 Jacobian matrix of partial derivatives of e with respect to x. Each of its six columns is formed by stacking the columns of the partial derivative of the essential matrix E with respect to a different aircraft state variable. This requires the partials of T and R as indicated in Equation 2-35:

\[
\frac{\partial E}{\partial x_i} = \left[\frac{\partial T}{\partial x_i}\times\right] R + [T\times]\frac{\partial R}{\partial x_i} \qquad (2\text{-}35)
\]

These partials are approximated using only the first two terms of the series given in Equations 2-25 and 2-27. The result is the following:

\[
\frac{\partial T}{\partial u_b} = \begin{bmatrix} \Delta t\cos\theta_c + \Delta t^2 q_c\sin\theta_c/2 \\ \Delta t^2 r_c\cos\theta_c/2 - \Delta t^2 p_c\sin\theta_c/2 \\ -\Delta t^2 q_c\cos\theta_c/2 + \Delta t\sin\theta_c \end{bmatrix} \qquad (2\text{-}36)
\]

\[
\frac{\partial T}{\partial v_b} = \begin{bmatrix} -\Delta t^2 r_c/2 \\ \Delta t \\ \Delta t^2 p_c/2 \end{bmatrix} \qquad (2\text{-}37)
\]

\[
\frac{\partial T}{\partial w_b} = \begin{bmatrix} -\Delta t\sin\theta_c + \Delta t^2 q_c\cos\theta_c/2 \\ -\Delta t^2 r_c\sin\theta_c/2 - \Delta t^2 p_c\cos\theta_c/2 \\ \Delta t^2 q_c\sin\theta_c/2 + \Delta t\cos\theta_c \end{bmatrix} \qquad (2\text{-}38)
\]

\[
\frac{\partial T}{\partial p_b} = \begin{bmatrix} -\Delta t^2 v_c\sin\theta_c/2 \\ -\Delta t^2 w_c\cos\theta_c/2 + \Delta t^2 u_c\sin\theta_c/2 \\ \Delta t^2 v_c\cos\theta_c/2 \end{bmatrix} \qquad (2\text{-}39)
\]

\[
\frac{\partial T}{\partial q_b} = \begin{bmatrix} \Delta t^2 w_c/2 \\ 0 \\ -\Delta t^2 u_c/2 \end{bmatrix} \qquad (2\text{-}40)
\]

\[
\frac{\partial T}{\partial r_b} = \begin{bmatrix} -\Delta t^2 v_c\cos\theta_c/2 \\ \Delta t^2 w_c\sin\theta_c/2 + \Delta t^2 u_c\cos\theta_c/2 \\ -\Delta t^2 v_c\sin\theta_c/2 \end{bmatrix} \qquad (2\text{-}41)
\]

\[
\frac{\partial R}{\partial u_b} = \frac{\partial R}{\partial v_b} = \frac{\partial R}{\partial w_b} = 0 \qquad (2\text{-}42)
\]

\[
\frac{\partial R}{\partial p_b} = \begin{bmatrix} 0 & -\Delta t\sin\theta_c & 0 \\ \Delta t\sin\theta_c & 0 & -\Delta t\cos\theta_c \\ 0 & \Delta t\cos\theta_c & 0 \end{bmatrix} \qquad (2\text{-}43)
\]

\[
\frac{\partial R}{\partial q_b} = \begin{bmatrix} 0 & 0 & \Delta t \\ 0 & 0 & 0 \\ -\Delta t & 0 & 0 \end{bmatrix} \qquad (2\text{-}44)
\]

\[
\frac{\partial R}{\partial r_b} = \begin{bmatrix} 0 & -\Delta t\cos\theta_c & 0 \\ \Delta t\cos\theta_c & 0 & \Delta t\sin\theta_c \\ 0 & -\Delta t\sin\theta_c & 0 \end{bmatrix} \qquad (2\text{-}45)
\]

The [∂T/∂x_i ×] terms in Equation 2-35 are formed in the usual way with the skew-symmetric form of the ∂T/∂x_i vectors. The velocity components in the expressions are intentionally left in camera frame coordinates for compactness. It is now necessary to impose the additional constraint on T discussed in section 2.2 due to the scale factor ambiguity. For this work the constraint will be that the magnitude of the aircraft velocity, |V_b|, will remain constant during the state update. This means that only two components of the velocity vector can vary independently.
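Because the analytic partials above keep only the first two series terms, it is worth validating them against finite differences during implementation. A generic central-difference Jacobian sketch (the function name is an illustrative assumption):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of a vector-valued f at x, useful for
    sanity-checking analytic partials such as Equations 2-36 through 2-45."""
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2.0 * eps)
    return J
```

In practice one would compare this against the stacked analytic columns of ∂e/∂x for a few representative states and flag any column whose discrepancy exceeds the series-truncation error.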
For this work v_b and w_b are chosen to be the independent components and u_b is chosen as the dependent component. The reason for these choices will become apparent in a later chapter. This additional constraint is then

\[
u_b = \left( |V_b|^2 - v_b^2 - w_b^2 \right)^{1/2} \qquad (2\text{-}46)
\]

The partials affected by this are Equations 2-36 through 2-38. For this adjustment the partial derivatives of u_b with respect to v_b and w_b are required:

\[
\frac{\partial u_b}{\partial v_b} = -\frac{v_b}{u_b}, \qquad \frac{\partial u_b}{\partial w_b} = -\frac{w_b}{u_b} \qquad (2\text{-}47)
\]

An assumption is made here that u_b is positive. This should always be the case for purposes of this effort using a fixed-wing aircraft. If in a later application this is not the case (e.g., for a helicopter or balloon), then the signs on the expressions in 2-47 will reverse for a negative u_b. Equation 2-37 now becomes

\[
\left.\frac{\partial T}{\partial v_b}\right|_{|V_b|} = \frac{\partial T}{\partial v_b} + \frac{\partial u_b}{\partial v_b}\frac{\partial T}{\partial u_b} = \frac{\partial T}{\partial v_b} - \frac{v_b}{u_b}\frac{\partial T}{\partial u_b} \qquad (2\text{-}48)
\]

Equation 2-38 becomes

\[
\left.\frac{\partial T}{\partial w_b}\right|_{|V_b|} = \frac{\partial T}{\partial w_b} + \frac{\partial u_b}{\partial w_b}\frac{\partial T}{\partial u_b} = \frac{\partial T}{\partial w_b} - \frac{w_b}{u_b}\frac{\partial T}{\partial u_b} \qquad (2\text{-}49)
\]

where the terms on the right are expanded using Equations 2-36 through 2-38, and Equation 2-36 becomes

\[
\left.\frac{\partial T}{\partial u_b}\right|_{|V_b|} = 0 \qquad (2\text{-}50)
\]

The remaining component of the measurement model is the expression for the N×N pseudo measurement noise covariance matrix, R̃, from Equations 2-19 and 2-20, repeated below:

\[
\tilde{R}_k = D_k R_k D_k^T \qquad (2\text{-}19\ \text{repeated})
\]

\[
D_k \triangleq \frac{\partial (Ce)}{\partial [\mu_1, \nu_1, \mu_1', \nu_1', \ldots, \mu_N, \nu_N, \mu_N', \nu_N']} \qquad (2\text{-}20\ \text{repeated})
\]

It is assumed that the value of the 4N×4N covariance matrix, R, for the error of the feature point measurements is known or can be approximated. The N×4N matrix, D, can be rewritten in the block-diagonal form

\[
D = \begin{bmatrix} \dfrac{\partial (C_1 e)}{\partial(\mu_1, \nu_1, \mu_1', \nu_1')} & & 0 \\ & \ddots & \\ 0 & & \dfrac{\partial (C_N e)}{\partial(\mu_N, \nu_N, \mu_N', \nu_N')} \end{bmatrix} \qquad (2\text{-}51)
\]

where each 1×4 component, corresponding to a different feature point, is

\[
\frac{\partial (C_i e)}{\partial(\mu_i, \nu_i, \mu_i', \nu_i')} = \left[ (E x_i')_1,\; (E x_i')_2,\; (E^T x_i)_1,\; (E^T x_i)_2 \right] \qquad (2\text{-}52)
\]

The assumption will be made that the measurement errors for each feature point location have the same variance, σ², and are uncorrelated with each other, which implies that

\[
R = \sigma^2 I_{4N\times 4N} \qquad (2\text{-}53)
\]

R̃
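The rows of D in Equation 2-52 follow directly from differentiating the bilinear form x_i^T E x_i', so the pseudo-measurement covariance can be assembled feature point by feature point. A sketch under the equal-variance, uncorrelated-error assumption (the function name is illustrative):

```python
import numpy as np

def pseudo_noise_cov(E, pts, pts_next, sigma):
    """Diagonal pseudo-measurement noise covariance R~ = sigma^2 D D^T
    (Equations 2-52 through 2-54). pts and pts_next hold the normalized
    coordinates [mu, nu, 1] of each feature point in the two frames."""
    diag = []
    for x, xp in zip(pts, pts_next):
        a = E @ xp           # d(constraint)/d[mu, nu]   : first two entries
        b = E.T @ x          # d(constraint)/d[mu', nu'] : first two entries
        d_row = np.array([a[0], a[1], b[0], b[1]])   # Equation 2-52
        diag.append(sigma**2 * float(d_row @ d_row))
    return np.diag(diag)
```

The block-diagonal structure of D is what makes R̃ diagonal: errors at one feature point never couple into another point's constraint row.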
can now be expressed as the diagonal matrix

\[
\tilde{R} = DRD^T = \sigma^2 DD^T = \sigma^2\, \mathrm{diag}\!\left( \left\| \frac{\partial (C_i e)}{\partial(\mu_i, \nu_i, \mu_i', \nu_i')} \right\|^2 \right)_{i=1,\ldots,N} \qquad (2\text{-}54)
\]

At this point it might be useful to review the assumptions, approximations, and simplifications that have been made in this section:

* The linear and angular velocities are constant during each measurement interval.
* The camera is mounted rigidly to the UAV at the center of mass.
* The camera frame of reference differs from that of the aircraft frame only by a rotation of θ_c about the number two axis, the camera elevation angle.
* In obtaining the measurement matrix, H, the partial derivatives of the essential matrix, E, with respect to the state variables, x, are approximated by using only the first two terms of the given series.
* The component of UAV velocity in the i_b direction, u_b, is positive. (The UAV is moving forward.)
* All of the focal plane measurement errors are characterized by the same variance, and they are all uncorrelated with each other.

It is felt that none of these seriously detracts from the overall objectives and results of this effort.

2.5 Estimation Algorithm Summary

This section provides an outline of how the concepts and expressions described in this chapter are assembled to form the main loop of the recursive state estimator.

1. Propagate the state estimate from the last update using (2-9). The dynamic model, f(x,u), has yet to be defined.
2. Compute the state transition matrix using (2-12).
3. Propagate the state error covariance using (2-11).
4. Compute the essential matrix with (2-3) and (2-22 through 2-32).
5. Acquire the focal plane measurements from the vision processing module.
6. Form the predicted pseudo measurement using (2-5 and 2-6).
7. Compute the measurement matrix with (2-18) and (2-35 through 2-50).
8. Compute the measurement noise covariance with (2-52 through 2-54).
9. Compute the Kalman gain with (2-13).
10. Update the state using (2-15).
11. Update the state error covariance using (2-16).

Figure 2-1.
The perspective projection camera model.

Figure 2-2. Geometry of the coplanarity constraint.

Figure 2-3. Measurement model geometry.

CHAPTER 3
APPLICATION WITH STATISTICAL DYNAMIC MODEL

This chapter describes a numerical simulation of the estimator using a simple statistical dynamic model instead of the more complex UAV dynamics. The aim of this simulation is to reproduce results similar to Soatto et al. [8, 12] and to gain basic insight into the suitability and usefulness of the coplanarity constraint as an IEKF measurement. For this simulation the camera simply moves around a circular path at constant speed with the lens pointed toward the center, observing a cloud of feature points located inside the circle. The camera rotates at a constant rate so as to point in the same direction relative to the trajectory. The aim is slightly offset from the center in order to provide a velocity along two of the camera axes. This arrangement is depicted in Figure 3-1. A description of the simulation, to include the dynamic model, the focal plane, the feature point generation method, and the vision processing output, is given. Results from the filter simulation along the given trajectory are then examined.

3.1 Simulation Description

3.1.1 Statistical Model

The state vector is defined as in Equation 2-34 from the last chapter:

\[
x_k \triangleq [u, v, w, p, q, r]_k^T \qquad (3\text{-}1)
\]

The subscripts on the components have been omitted; they will refer to body axes unless stated otherwise. The form of the dynamic model was given in the previous chapter in Equation 2-7 as x_k = f_{k−1}(x_{k−1}, u_{k−1}) + w_{k−1}. The statistical model used here and in Soatto et al. [12] is a first-order random walk:

\[
x_k = x_{k-1} + w_{k-1}, \qquad w_k \sim N(0, Q_k) \qquad (3\text{-}2)
\]

The model assumes a random increment is added to each component of the state at each step. This model is typically used when little is known about the dynamics of the physical system.
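The random-walk model makes the time-update step trivial: the estimate is held and the covariance inflates by Q each step. A minimal sketch follows; the diagonal values echo the tuning quoted later in section 3.2 and are otherwise illustrative.

```python
import numpy as np

def propagate_random_walk(x, P, Q):
    """Random-walk time update (Equations 3-3 and 3-4): the state estimate
    is carried forward unchanged; the error covariance grows by Q."""
    return x.copy(), P + Q

x = np.zeros(6)
P0 = np.diag([0.0, 100.0, 100.0, 0.01, 0.01, 0.01])
Q = np.diag([0.0, 100.0, 100.0, 0.1, 0.1, 0.1])
P = P0.copy()
for _ in range(3):
    x, P = propagate_random_walk(x, P, Q)
```

Because the model carries no dynamics, all corrective information must come from the measurement updates, which is what makes the filter's noise sensitivity visible in this chapter.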
Referring to Equation 2-9, the state propagation is simply

\[
\hat{x}_k^- = \hat{x}_{k-1}^+ \qquad (3\text{-}3)
\]

and the state error covariance propagation from Equation 2-11 is

\[
P_k^- = P_{k-1}^+ + Q_{k-1} \qquad (3\text{-}4)
\]

The value of the process noise covariance, Q, is used as a tuning factor in the Kalman filter implementation to compensate for system modeling uncertainties and nonlinearities as well as disturbances.

3.1.2 Simulation of Focal Plane and Vision Processing Output

The measurement is the location of the projection of each feature point on the focal plane, x_i = [μ_i, ν_i, 1]^T, as given in the last chapter and illustrated in Figure 3-2. The fixed position of each feature point in the environment is known, as well as the position and orientation of the camera along the actual or "truth" trajectory at any time. To simulate a particular feature point measurement, the position vector, X, from the camera to the feature point is obtained in earth-fixed coordinates, then transformed to body coordinates using the appropriate rotation matrix transformations. An additional rotation transformation is done to go from body axes to camera axes. In this simulation θ_c, the camera elevation angle, is 90 degrees, so that the camera is looking in the direction of the velocity component u. This position vector in the camera frame is then normalized to the third component, the k_c component, which also normalizes it to the focal length, f, as discussed in chapter 2. This produces the desired parameters μ and ν for each feature point. It is noted here that the estimator has no knowledge of the camera location and orientation or the location of the feature points in space. Measurement noise is simulated by adding a random white Gaussian component to each μ and ν from a random number generator. The focal plane is chosen to be 320 × 240 pixels (4:3 aspect ratio), a value that is typical throughout the literature for readily available hardware. Pixel density is specified in order to relate measurement noise to a physical characteristic.
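The focal-plane simulation just described can be sketched as follows. The function name and argument conventions are illustrative assumptions; sigma is the noise standard deviation in focal-length-normalized units (one pixel width is roughly .0036 for the 60-degree, 320-pixel field of view used later in section 3.2).

```python
import numpy as np

def simulate_measurement(X_world, cam_pos, R_world_to_cam, sigma, rng):
    """Project a feature point onto the focal plane: rotate the camera-to-point
    vector into camera axes, normalize by the k_c (third) component, and add
    white Gaussian noise to mu and nu (section 3.1.2)."""
    X_c = R_world_to_cam @ (X_world - cam_pos)
    x = X_c / X_c[2]                     # [mu, nu, 1], normalized to focal length
    mu = x[0] + rng.normal(0.0, sigma)
    nu = x[1] + rng.normal(0.0, sigma)
    return np.array([mu, nu, 1.0])
```

The estimator never sees X_world or the camera pose; only the noisy (μ, ν) pairs are passed to the filter.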
The standard deviation of the noise is defined for this work in terms of pixel widths and is a key parameter that can be varied; it is specified for each simulation run.

3.2 Simulation Results

The IEKF and measurement simulation algorithms as described previously were implemented in Matlab®. The truth trajectory for this simulation (computed a priori) was a 10-second-duration arc. The state vector for the trajectory was constant at x = [−12.48, 124.38, 0, 0, 0, 0.1]^T with units of feet/second and radians/second. This combination of velocity and angular rate results in the camera following a circular path of radius 1250 feet with the camera always pointed 0.1 radians to the left of the center of the circle; see again Figure 3-1. (Since u was negative, the signs on Equation 2-47 were reversed.) A cloud of 20 randomly placed feature points was located at the center of the circle, which provided the measurements for the estimator. A field of view of 60 degrees horizontal was chosen for the camera. The 4:3 aspect ratio resulted in a vertical field of view of approximately 46.8 degrees. The measurement noise value input to the IEKF via the variance matrix R in Equation 2-53 was the equivalent of one pixel standard deviation for the 60-degree field of view, or σ = .0036. The update rate chosen for the filter was 20 Hertz (Hz). Based on preliminary runs using trial and error, a value of Q = diag[0 100 100 .1 .1 .1] was chosen for the process noise covariance and P = diag[0 100 100 .01 .01 .01] was chosen to initialize the state estimation error covariance.

3.2.1 Results for No Measurement Noise

An initial run was made with no measurement noise; that is, although the filter ran assuming noise equivalent to σ = .0036, no noise was actually being added to the simulated focal plane measurements. Figure 3-3 depicts the location of the feature points on the focal plane each time a measurement was taken during the 10-second run.
This shows how the feature points move on the focal plane. The estimated states in the filter were intentionally initialized with an error to allow observation of the convergence of the filter. The initial values were x̂ = [−12.48, 110, 5, .1, .1, 0]^T, which equates to initial errors of 14.38 and 5 ft/sec for v and w, respectively, and .1 rad/sec for p, q, and r. The results are shown in Figure 3-4. Initializing the state u with no error in effect scales the other velocities. This is due to several factors: scale factor ambiguity, the choice of v and w as the independent velocity variables, and the additional constraint that the magnitude of velocity is constant over the update, all discussed in the last chapter. In addition, with this simple model, there is no interdependency among any of the state variables. The measurements shape the direction of velocity through the coplanarity constraint, while the u component sets the magnitude, since it is not altered in either the state propagation or the update. Referring to Figure 3-4, it can be seen that the angular rate estimates converge rapidly, within less than one second, to their true values. The velocity component estimate w corrects almost as quickly to its true value of zero. The v component of velocity is much more sluggish but appears to be slowly converging.

3.2.2 Results in the Presence of Measurement Noise

For the next run everything remained the same except that the equivalent of .03 pixel width standard deviation of noise was added to the measurement. The results are shown in Figure 3-5. The noise is evident in the plots of the state estimates; however, the values do converge to the truth values as before. Another run was made with the measurement noise increased to the equivalent of .14 pixel standard deviation. Results are shown in Figure 3-6. Initial convergence for the angular rates is still good; however, v appears to diverge from the beginning. After about four seconds, divergence can also be seen in r.
A final run was made with initially no measurement noise; then at six seconds, noise equivalent to .47 pixel standard deviation was abruptly added. The results for this are shown in Figure 3-7. As expected from the first run, the filter starts out well, but when the noise is added at the six-second point, the filter fails. A depiction of the focal plane is shown in Figure 3-8 to indicate in a qualitative fashion the magnitude of the measurement noise. The point at which the noise begins on the feature point traces can be observed if closely compared to Figure 3-3. Intuitively, this amount of noise would not be unreasonable, particularly on a 320×240 pixel focal plane.

3.3 Sensitivity of Algorithm to Measurement Noise

The failure of the filter with a reasonable amount of noise added to the focal plane positions was unexpected, as this had not been demonstrated in either of the sets of simulation examples from Soatto et al. [8, 12]. However, reference [7], the companion paper to reference [8], briefly discusses that SFM becomes extremely sensitive to noise with short baselines. Ma et al. [22] say that "one needs to exercise caution to make sure that there is sufficient parallax for the algorithm to be well conditioned" in their chapter Reconstruction from Two Calibrated Views, which deals with using the coplanarity constraint. Baseline and parallax refer to how much translation of the camera there is between frames and how much feature point movement on the focal plane there is due to this translation. However, references [12] and [22] both indicate that when translation is almost zero, it is the direction of velocity that becomes ill-conditioned; the angular rates can still be computed. This seems to be apparent in Figure 3-6, at least during the initial part of the run. Kehoe et al. had similar findings in their work [14, 15]. Increasing the baseline or parallax would obviously be desirable for state estimation. Baseline can be extended in several ways.
An obvious way is to increase the time between measurements. The disadvantages of this are fewer measurement updates and some degradation due to the assumption that linear and angular velocities are constant over the update interval. Also, a longer interval between feature point correlations on the focal plane from frame to frame (correspondence) makes it more difficult for the feature point tracking algorithm [7]. Another avenue is to pick feature points that move more on the focal plane in response to the anticipated direction of translation. For example, for motion in the direction that the camera is looking, feature points on the periphery would have more movement than those in the center. A disadvantage is that, in this case, these feature points are in view for a much shorter period of time. Orienting the focal plane parallel (or more nearly parallel) to the direction of motion, where feasible, should also improve the baseline. To observe the effect of using a longer baseline, an additional run was made with the update rate changed from 20 Hz to 4 Hz. The propagated state and error covariance were still computed at 20 Hz. Noise equivalent to .47 pixel standard deviation was applied to the measurements from the beginning. The results are shown in Figure 3-9. The filter no longer fails and appears to converge for all states except v, which may be slowly converging. The r estimate has large variations but appears to be centered around the correct value.

3.4 Observations from Application with Statistical Model

In this chapter, implementation of the IEKF with the coplanarity constraint has been demonstrated, basically paralleling the work of Soatto et al. [8, 12]. However, the extreme sensitivity of the algorithm to noise, particularly for the velocity estimates, was not apparent in their examples. The simulations here have shown a lack of robustness in the presence of noise that may limit the usefulness of this algorithm as applied to the state estimation task in UAVs.
The problem appears to be related to baseline length for the measurements. Gurfil and Rotstein [16], referred to in chapter 1, mentioned similar problems in the presence of noise in their work using the subspace constraint with aircraft dynamics as the model. In the following chapters the method will be applied with a specific model that includes the dynamics of the UAV.

Figure 3-1. Trajectory for statistical dynamic model numerical simulation.

Figure 3-2. Focal plane measurement.

Figure 3-3. Feature point movement on the focal plane for initial run with no measurement noise.

Figure 3-4. Estimation results for initial run with no measurement noise.

Figure 3-5. Estimation results with .03 pixel measurement noise.

Figure 3-6. Estimation results with .14 pixel measurement noise.

Figure 3-7. Estimation results with .47 pixel measurement noise added at 6 seconds.

Figure 3-8. Focal plane for results with measurement noise added at 6 seconds.
Figure 3-9. Estimation results with extended baseline, .47 pixel measurement noise.

CHAPTER 4
APPLICATION WITH A UAV DYNAMIC MODEL

In this chapter a more complex dynamic model, that of a typical fixed-wing model aircraft, is implemented in the estimator. It is shown that the additional information provided by the dynamic model allows more states to be estimated than the five from previous efforts that were demonstrated in the last chapter. Numerical results along several trajectories are examined. A modification to the IEKF is implemented to overcome deficiencies evident in the presence of measurement noise.

4.1 Dynamic Model Description

The UAV simulated is an eight-foot-wingspan-class radio controlled model aircraft pictured in Figure 4-1. The aircraft has four control inputs: ailerons (δ_a), rudder (δ_r), elevator (δ_e), and throttle (δ_t). A high fidelity 12-state nonlinear model of the aircraft is given in reference [23], which was derived from the general nonlinear equations of motion for a rigid body. Mass properties were measured on the actual model and aerodynamic coefficients were measured in a wind tunnel. This high fidelity model was coded and used as the truth model. A brief description of this model is given in Appendix A. For the filter model, it was desired to use something simpler than the truth model and representative of what might reasonably be expected to be obtainable for a vehicle in the field. The model chosen was the "perturbed state model," which is a linearization of the truth model about a nominal steady state or "trimmed" flight condition.
In this technique the deviations of the original state variables and control inputs from the trimmed conditions become the new perturbation variables and controls, such that

\[
x = x_{ss} + \Delta x \qquad (4\text{-}1)
\]

Here x is the familiar state vector, x_ss is comprised of the constant steady state values, and Δx is the perturbation state vector. The control inputs, u, are handled in a similar fashion. This method is quite frequently used in aircraft control system development [24]. (It is envisioned that, in practice, this model could also come from system identification analysis of flight test data for the desired type of UAV, flown remotely by hand at the trimmed condition, since no high fidelity model is likely to exist.) Regardless of the type and fidelity of the model used to propagate the state, recall from the previous chapters that a linear representation is required to propagate the state error covariance. The IEKF state vector was expanded to include two additional states: the aircraft pitch attitude (θ) and roll attitude (φ). These are two of the three usual Euler angles that describe the orientation of the aircraft body axes with respect to the local level inertial frame, with ψ, θ, φ being the ordered rotations about the 3, 2, 1 axes, respectively. This is similar to the rotations described in Equation 2-28. The justification for adding these states will be discussed later. The resulting IEKF state vector is now given as

\[
x \triangleq [u, v, w, p, q, r, \theta, \phi]^T \qquad (4\text{-}2)
\]

and the control vector is defined as

\[
u \triangleq [\delta_a, \delta_r, \delta_e, \delta_t]^T \qquad (4\text{-}3)
\]

The symbol u is used to denote both the scalar component of velocity along the body i axis and the vector of controls to the UAV, to conform to popular convention; the distinction can be made by context. For this study, the case of no wind is assumed. The trimmed flight condition used is straight and level at a velocity of 73.3 ft/s (50 mph) at an altitude of 7500 ft, standard day.
For this condition the steady state values computed from the truth model are x_ss = [72.75, 0, 9.19, 0, 0, 0, .126, 0]^T, where units are ft/s, rad/s, and rad as appropriate, and u_ss = [0, 0, 2.82, 1.11]^T, where units are degrees of deflection and horsepower. The equations of motion from Appendix A were linearized about this condition using the methods given in Roskam [24]. This continuous representation was then converted to discrete form by using well-known techniques from state-space linear systems theory as given in, for example, Brogan [25]. The interval chosen for the discrete form was .05 seconds, as this is the minimum filter measurement interval expected to be used. (This linearization and discretization process could also have been done using numerical linearization routines from Matlab® toolboxes.) The resulting linear discrete perturbation model is given below:

\[
\Delta x_{k+1} = A \Delta x_k + B \Delta u_k \qquad (4\text{-}4)
\]

where the perturbation state vector is

\[
\Delta x \triangleq [\Delta u, \Delta v, \Delta w, \Delta p, \Delta q, \Delta r, \Delta\theta, \Delta\phi]^T \qquad (4\text{-}5)
\]

the perturbation control vector is

\[
\Delta u \triangleq [\Delta\delta_a, \Delta\delta_r, \Delta\delta_e, \Delta\delta_t]^T \qquad (4\text{-}6)
\]

the system or state transition matrix is

\[
A = \begin{bmatrix}
.988 & 0 & .0309 & 0 & .371 & 0 & 1.591 & 0 \\
0 & .954 & 0 & .419 & 0 & 3.48 & 0 & 1.58 \\
.0122 & 0 & .791 & 0 & 2.71 & 0 & .170 & 0 \\
0 & .0213 & 0 & .638 & 0 & .0957 & 0 & .0187 \\
.000644 & 0 & .000664 & 0 & .829 & 0 & .000458 & 0 \\
0 & .00802 & 0 & .0119 & 0 & .976 & 0 & .00647 \\
.0000166 & 0 & .0000179 & 0 & .0456 & 0 & 1 & 0 \\
0 & .000550 & 0 & .0340 & 0 & .00839 & 0 & 1
\end{bmatrix} \qquad (4\text{-}7)
\]

and the control matrix is

\[
B = \begin{bmatrix}
0 & 0 & .00289 & .2782 \\
.0999 & .0236 & 0 & 0 \\
0 & 0 & .0334 & .0297 \\
.412 & .0197 & 0 & 0 \\
0 & 0 & .00926 & .00008 \\
.00217 & .00485 & 0 & 0 \\
0 & 0 & .000239 & .00000136 \\
.0110 & .000513 & 0 & 0
\end{bmatrix} \qquad (4\text{-}8)
\]

Those readers familiar with conventional aircraft dynamics will recognize that the resulting equations decouple into two independent sets of dynamics, known in the literature as the "longitudinal" set and the "lateral-directional" set [25]. The linearized equations of motion about this condition produce the conventional aircraft dynamic modes.
The longitudinal set is comprised of two oscillatory modes, the "phugoid" and the "short period," governing motion in the vertical plane and about the pitch axis. The lateral-directional set consists of an oscillatory mode, the "Dutch roll," and two aperiodic modes, "roll" and "spiral," concerning motion in the horizontal plane and about the roll and yaw axes. In the dynamics for this particular flight condition, the spiral mode has an unstable pole at .076 (about 9 seconds time to double amplitude), indicating the aircraft will tend to roll off into a tightening spiral if perturbed from trim with no correcting control inputs. For state propagation in the IEKF, the linear representation of Equation 4-4 was augmented as follows:

Δx_{k+1} = A Δx_k + B Δu_k + n(Δx_k) Δt     (4-9)

where n(Δx) is the vector of kinematic higher-order terms:

  Δu:  ΔrΔv − ΔqΔw
  Δv:  ΔpΔw − ΔrΔu
  Δw:  ΔqΔu − ΔpΔv + g(cos Δθ − 1)
  Δp:  ΔqΔr(Iyy − Izz)/Ixx
  Δq:  ΔpΔr(Izz − Ixx)/Iyy
  Δr:  ΔpΔq(Ixx − Iyy)/Izz
  Δθ:  Δq(cos Δφ − 1) − ΔrΔφ
  Δφ:  higher-order term (see Appendix B)

and Ixx, Iyy, Izz = 1.7, 6.8, 9.3 slug-ft² are the vehicle moments of inertia about the roll, pitch, and yaw axes, respectively, g is the acceleration of gravity, and Δt is the integration interval, .05 seconds in this case. These kinematic higher-order nonlinear terms increase the accuracy of the IEKF model as it strays from its steady-state condition and yet require very little knowledge of the specific vehicle. The derivation is in Appendix B. The following summarizes the steps in the IEKF used to propagate the aircraft states and error covariance from k−1 to k:

* Obtain Δx_{k−1} by subtracting x_ss from the previous updated estimate of x_{k−1} (Equation 4-1).
* Obtain Δu_{k−1} by subtracting u_ss from the previous value of u_{k−1} (Equation 4-1).
* Compute the propagated value of Δx_k using Equation 4-9.
* Obtain the propagated value of x_k by adding Δx_k and x_ss.
* Compute the propagated value of the error covariance using Equation 2-11, which becomes P̄_k = A P_{k−1} Aᵀ + Q, where Q, the process noise matrix, is a constant for this application.
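The propagation steps listed above can be sketched as follows (a minimal illustration with hypothetical names; the toy matrices stand in for the A, B, and Q of this chapter, and the higher-order terms of Equation 4-9 are stubbed out):

```python
# Sketch of one IEKF propagation step (hypothetical helper, not the
# dissertation's code): perturbation form, nonlinear correction, and
# covariance propagation P = A P A^T + Q.
import numpy as np

def propagate(x_prev, u_prev, P_prev, x_ss, u_ss, A, B, Q, f_hot, dt):
    dx = x_prev - x_ss                          # perturbation state (Eq. 4-1)
    du = u_prev - u_ss                          # perturbation control
    dx_new = A @ dx + B @ du + f_hot(dx) * dt   # Eq. 4-9
    x_prop = x_ss + dx_new                      # back to the full state
    P_prop = A @ P_prev @ A.T + Q               # Eq. 2-11
    return x_prop, P_prop

# Toy two-state example with the higher-order terms omitted:
x_ss = np.array([1.0, 2.0]); u_ss = np.array([0.0])
A = np.eye(2); B = np.zeros((2, 1)); Q = 0.1 * np.eye(2)
f_hot = lambda dx: np.zeros_like(dx)            # stub for n(dx)
x_prop, P_prop = propagate(np.array([2.0, 3.0]), np.array([0.0]),
                           np.eye(2), x_ss, u_ss, A, B, Q, f_hot, 0.05)
```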
It is assumed that the control input vector, u, which in general is not constant, is known to the filter with a high degree of accuracy. This is a reasonable assumption since these commands are either sent to the aircraft or generated on board by an autopilot. Therefore, control uncertainty does not enter into the error statistics computation. Also, since the trimmed values of the states are known constants, it is only the uncertainty in the perturbed states that affects the error covariance. Recall from chapter two that Q is used to account for modeling errors and uncertainties as well as process noise and in practice is a tuning factor in the filter.

4.2 Observability Considerations

In the last chapter it was demonstrated that the IEKF with the coplanarity constraint was able to estimate, at least to some degree, five parameters related to camera motion (the angular rates and the direction of velocity) using a very simple system model that provided almost no information about the dynamics of the motion. The additional information provided in the more complex UAV dynamic model allows the estimation of additional states. Computing and examining the observability matrix from linear systems theory (see reference [25]) gives insight into this. For a linear system given by:

x_{k+1} = A x_k + B u_k
z_{k+1} = H x_{k+1}     (4-10)

where z is the measurement and H is the measurement matrix, the observability matrix is given by:

O ≜ [ H
      HA
      HA²
      ⋮
      HA^{n−1} ]     (4-11)

where n is the number of states. The system is fully observable if O has full rank, n.
For a cursory look at observability using the UAV dynamic model given in the last section, assume that the five states v, w, p, q, r can be measured and that the measurement equation is

z_{k+1} = H x_{k+1} = [ 0 1 0 0 0 0 0 0
                        0 0 1 0 0 0 0 0
                        0 0 0 1 0 0 0 0
                        0 0 0 0 1 0 0 0
                        0 0 0 0 0 1 0 0 ] x_{k+1}     (4-12)

Although the real, time-varying measurement matrix, expressed in Equations 2-18, 2-5, and 2-35 through 2-50, is much more complex, this simplification is useful for a preliminary assessment of observability based on the results of the last chapter. The measurement matrix H from Equation 4-12 can be used with the UAV system matrix A from Equation 4-7 to compute the observability matrix O using Equation 4-11. When this is done, O has rank eight, indicating that the system is completely observable. In fact, if the measurement is reduced to just the angular velocities, i.e.,

z_{k+1} = H x_{k+1} = [ 0 0 0 1 0 0 0 0
                        0 0 0 0 1 0 0 0
                        0 0 0 0 0 1 0 0 ] x_{k+1}     (4-13)

the resulting observability matrix still has rank eight, indicating that all eight states remain observable. A similar exercise can be done with the system matrix for the statistical model used in the last chapter, augmented with the two additional Euler angle states. In this case the system is described by Equation 4-14. (Note that some information has actually been added by using the kinematic relationship between the angular rates and the Euler angles for small angles over the .05 sec interval.)

x_{k+1} = A x_k = [ 1   0   0   0    0   0   0   0
                    0   1   0   0    0   0   0   0
                    0   0   1   0    0   0   0   0
                    0   0   0   1    0   0   0   0
                    0   0   0   0    1   0   0   0
                    0   0   0   0    0   1   0   0
                    0   0   0   0   .05  0   1   0
                    0   0   0  .05   0   0   0   1 ] x_k     (4-14)

When the observability matrix is computed using this system matrix A and the measurement matrix H from Equation 4-12, the rank is five, indicating incomplete observability. This section has shown that the information provided by the more complex dynamic UAV model should result in increased observability of the aircraft states.

4.3 Simulation Description

The IEKF simulation described in the last chapter was modified for the UAV dynamic model.
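The rank test of Equation 4-11 is straightforward to reproduce numerically. The sketch below builds the kinematic model of Equation 4-14 and the measurement pattern of Equation 4-12 and recovers the rank of five reported above; substituting the full UAV A matrix of Equation 4-7 would, per the text, give rank eight:

```python
# Sketch of the observability-matrix rank check (Eq. 4-11), numpy only.
import numpy as np

def observability_matrix(A, H):
    """Stack H, HA, HA^2, ..., HA^(n-1) as in Eq. 4-11."""
    n = A.shape[0]
    return np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(n)])

# Statistical model augmented with the Euler angle states (Eq. 4-14):
A = np.eye(8)
A[6, 4] = 0.05   # theta accumulates dt * q
A[7, 3] = 0.05   # phi accumulates dt * p

# Measurement of v, w, p, q, r (pattern of Eq. 4-12):
H = np.zeros((5, 8))
for row, col in enumerate([1, 2, 3, 4, 5]):
    H[row, col] = 1.0

O = observability_matrix(A, H)
rank = np.linalg.matrix_rank(O)   # 5: incomplete observability, as in the text
```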
The major modifications were to the system model in the filter used to propagate the state estimate and error covariance, and the addition of the new states to the filter, as discussed in the previous sections. The measurement matrix, H_k, was augmented with zero columns to accommodate the new states. A provision was also made to add new feature points along the way at random locations in the field of view to replace those that dropped from view. The simulation runs were started with 15 to 20 feature points, with new ones added whenever the number dropped below ten. The field of view was unchanged at 60 degrees horizontal and 47 degrees vertical, 320 x 240 pixels, and the filter measurement noise covariance value for the Kalman gain computation was also left unchanged at the equivalent of one pixel standard deviation, σ = .0036. The camera elevation angle was set at 60 degrees (30 degrees down from the longitudinal axis of the aircraft) so that the camera was looking "ahead and slightly down". The update rate for the filter remained at 20 Hertz (Hz). Based on preliminary runs using trial and error, a value of Q = diag[0 0 0 .49 .0025 .49 0 0] was chosen for the process noise covariance and P = diag[100 1 1 .01 .01 .01 .01 .01] was chosen to initialize the state estimation error covariance. Three 10-second truth trajectories were used to assess filter performance. These trajectories were generated using the truth simulation (Appendix A) driven by a simple autopilot to achieve the desired flight paths. The data needed from these truth simulations were the true values for the states to be estimated, the true aircraft position and orientation values for feature point measurement generation, and the aircraft control inputs. The first trajectory was simply straight and level at the trim condition with the trimmed controls.
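As a side note, the quoted one-pixel noise value σ = .0036 is consistent with the stated camera geometry under one plausible reading (an interpretation, not stated in the text): a 60-degree horizontal field of view normalized to unit focal length spans 2·tan(30°) across 320 pixels, so one pixel corresponds to tan(30°)/160 in focal-plane units:

```python
# Back-of-the-envelope check of the one-pixel noise standard deviation.
# Assumption: focal-plane coordinates are normalized to unit focal length.
import math

half_width = math.tan(math.radians(30.0))  # focal-plane half-width, 60 deg FOV
sigma = half_width / (320 / 2)             # one pixel in focal-plane units
print(round(sigma, 4))                     # ~0.0036
```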
The second was a vertical S maneuver which started at the trimmed flight condition with an initial pitch-down command from the elevator (δe), then a pitch-up at 2.5 seconds, followed by reversals at 2.5-second intervals. The third was a horizontal S maneuver initiated from the trim flight condition which used elevator (δe), rudder (δr), and aileron (δa) inputs to command horizontal turns in alternate directions at 2.5-second intervals. The throttle control (δT) remained constant at the trim setting for all trajectories. The truth values for each of these three trajectories are included in the plots for the next section.

4.4 Results With No Measurement Noise

Runs with the filter simulation were made using the three trajectories with no noise added to the focal plane measurements (although the filter still assumed that there was noise equivalent to σ = .0036). In each case the estimated states were intentionally misinitialized at a value different from trim to aid in observing filter convergence. Filter initialization values were x̂₀ = [67.75, .5, 9.69, 0, 0, 0, .126, 0]ᵀ for all three cases, which equates to errors of 5, .5, and .5 ft/s for u, v, and w, respectively. These are within the one-sigma bounds for the initial state estimate error covariance, P, given earlier. Results of the runs are shown in the figures at the end of the chapter. For each state estimated, three values are shown: the actual or truth value, the estimated value from the filter, and a propagated value. This propagated value is computed external to the filter, open loop, using the same dynamic model that is in the filter, starting with the same initial conditions and applying the same true control history. This is what the IEKF would produce with no measurements. The purpose of this is to help assess how much the measurements are contributing to filter performance. Figure 4-2 shows the results for the straight and level trajectory.
The estimated states appear to be converging and provide a better response than the propagated states in all cases except perhaps w. The estimates for the angular velocities appear to be particularly "tight". Also, the propagated value for the roll angle φ is diverging as a result of the unstable spiral mode; however, the measurements keep the estimated value in check. Figure 4-3 shows the results for the straight and level trajectory in a different way. Here just the errors (value − truth) are plotted. Also included are the filter estimates for the one-sigma error bounds (square root of the covariance) for each of the estimated states. In this case these uncertainties are overly pessimistic because the expected measurement noise is not present. Figures 4-4 and 4-5 show the results for the vertical S trajectory with no measurement noise, and Figures 4-6 and 4-7 show results for the horizontal S trajectory. The updated estimated states appear to generally be more accurate than the propagated states, although for some of the velocities in the horizontal S trajectory the propagated states are actually better for a portion of the trajectory. In all cases the angular rate estimates appear to benefit significantly from the measurements and the divergence in the roll angle estimate is overcome.

4.5 Results With Measurement Noise

The three runs from the last section were repeated with Gaussian measurement noise added to the feature point measurement locations on the focal plane using the same procedure as with the statistical model simulations in chapter three. The equivalent of one pixel standard deviation (σ = .0036) was used for the noise, which is the same value used in the IEKF gain calculation. The estimated states were initialized with the same errors as in the previous section. The three sets of error plots are shown in Figures 4-8 through 4-10.
The results are disappointing, although not entirely unexpected given the results for the statistical model with measurement noise shown in the last chapter. The estimates for the velocities and Euler angles develop extreme bias-like errors that are well outside the expected bounds indicated by the predicted error covariances and substantially larger than for the propagated states. The angular rate estimates appear to be somewhat better. Although they are very noisy, the errors are approximately contained within the predicted one-sigma error bounds and the mean error is close to zero. In an attempt to ensure that the single set of filter initialization errors chosen was not dominating the filter performance, a Monte Carlo ensemble of six runs was made for the straight and level case. Random draws for initialization errors for each of the eight states were made using a standard deviation corresponding to the initialization covariance values in P. The results of this set of runs are shown in Figure 4-11 and indicate similar results regardless of the filter initialization errors. As discussed in the last chapter for the statistical model, these problems seem to be a result of ill-conditioning due to the short measurement baseline (lack of parallax). It was demonstrated that lengthening the baseline by increasing the measurement interval would mitigate the problem. To investigate whether that also applies with the UAV dynamic model filter, the straight and level case was run with the measurement interval increased from .05 sec to .2 sec. The results are shown in Figure 4-12. With the longer baseline the state estimates do not diverge as they did with the shorter baseline. However, the estimates are generally inferior to the propagated state values and the usefulness of the estimates for vehicle control is doubtful. It is possible that some retuning of the filter might improve the performance, but a more robust remedy for this problem is desired.
To this end, some possible weaknesses in the algorithm are examined. There are several factors in the implementation of the IEKF into the true EKF framework that "stretch" the basic assumptions made in deriving the original Kalman filter. The EKF formulation assumes the measurement noise to be additive, but this is not really the case for the IEKF, as indicated in the following equations from chapter 2.

Conventional EKF:  z_k = h_k(x_k) + v_k     (2-8 repeated)

IEKF:  z̃_k = C(u, v, u′, v′, x_k, e_k)     (2-33 repeated)

In the IEKF the noisy measurements are mixed in a nonlinear fashion with the estimated states to form the pseudo measurement. This could lead to higher-order terms that are not negligible or zero mean as assumed for the EKF. A second factor is that the noise in the focal plane measurements is not truly "white," or uncorrelated over time, as stipulated for the EKF. The "current" feature point locations on the focal plane that are part of the current measurement, along with their errors, will also be part of the next measurement as the "last" locations. Soatto et al. address this in reference [12] and conclude that, for their application, the small amount of correlation over just two steps has no discernible effect. In most successful Kalman filter applications, factors such as these are absorbed into the process noise as part of the tuning process. For the IEKF implementation being studied in this effort, the ill-conditioning experienced, particularly with the shorter baselines, appears to accentuate these factors and leads to the type of errors observed when a reasonable amount of measurement noise is present. In the following sections, this ill-conditioning and its effects on filter performance are explored in more detail and a robust solution is implemented.

4.6 Degree of Observability

In practice it has been noted that low observability of all or some states of a system leads to high gains in a Kalman filter, which is essentially ill-conditioning [26].
In section 4.2 it was shown that the eight states for the case at hand were observable in the absolute sense, but "how observable," or the degree of observability, was not addressed.

4.6.1 Modal Observability

A measure for the degree of modal observability is given in references [26] and [27] which can be extended to the discrete case. For the linear system given earlier in this chapter:

x_{k+1} = A x_k + B u_k
z_{k+1} = H x_{k+1}     (4-10 repeated)

If the system has distinct eigenvalues then it can be decomposed into its modal form with a similarity transformation:

x_k = M q_k
q_{k+1} = M⁻¹AM q_k + M⁻¹B u_k     (4-15)
z_{k+1} = H_{k+1} M q_{k+1}

where M ≜ [m₁, m₂, …, m_n] is the matrix of the right eigenvectors of A and M⁻¹AM is a diagonal matrix with the eigenvalues on the diagonal. If H_{k+1} is defined by rows as H_{k+1} = [h_{1,k+1}, h_{2,k+1}, …, h_{i,k+1}]ᵀ, then a measure of the degree of observability between measurement i and mode j is:

cos θ_{ij,k+1} = |h_{i,k+1} · m_j| / (‖h_{i,k+1}‖₂ ‖m_j‖₂)     (4-16)

Here cos θ is a measure of the orthogonality between h_i and m_j. A value approaching zero means little observability and a value approaching one means high observability. The value of this measure was computed for a run with the estimation filter on the straight and level trajectory with no measurement noise. The results are shown in Figure 4-13 for a typical feature point. The larger values vary with different feature points, but the two lower values are very nearly the same for all feature points observed. The legend delineates the values with respect to the modes described earlier in the section on the UAV model: short period, phugoid, roll, Dutch roll, and spiral. For the oscillatory modes, the value shown was computed for a single component of the complex conjugate pair. The measure indicates low observability for both longitudinal modes, the short period and the phugoid. This is an indicator that the filter can develop high gains and is therefore susceptible to conditioning problems [26].
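The measure of Equation 4-16 can be sketched directly (numpy only; the toy diagonal system is illustrative, with modes equal to the basis vectors so the expected cosines are obvious):

```python
# Sketch of the modal observability measure (Eq. 4-16): the cosine of the
# angle between a measurement-matrix row h_i and a right eigenvector m_j.
import numpy as np

def modal_observability(A, H):
    """Return a matrix of cos(theta_ij) values, one per (measurement, mode)."""
    _, M = np.linalg.eig(A)             # columns of M are right eigenvectors
    out = np.zeros((H.shape[0], A.shape[0]))
    for i, h in enumerate(H):
        for j in range(A.shape[0]):
            m = M[:, j]
            out[i, j] = abs(np.vdot(h, m)) / (np.linalg.norm(h) * np.linalg.norm(m))
    return out

# Toy check: with diagonal A the modes are the basis vectors, so measuring
# state 0 gives full observability of one mode and none of the other.
A = np.diag([0.9, 0.5])
H = np.array([[1.0, 0.0]])
C = modal_observability(A, H)
```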
4.6.2 The Measurement Matrix

Examining the behavior of the values in the measurement matrix during a run provides insight into the relative observability of the particular states and how that is related to measurement noise. Recall from Equation 2-18 that the elements of the measurement matrix, H_k, reflect how the pseudo measurements change with respect to each state variable:

H_k = ∂C/∂x̂_k     (2-18 repeated)

Figure 4-14 depicts the actual values of the measurement matrix during an estimation filter run for the straight and level trajectory with no measurement noise. Different feature points will have different values that change as the feature points move on the focal plane. Depicted are the values for a single representative feature point that form a single row of H_k. The legend delineates which state each value corresponds to. The partial derivative terms with respect to u, θ, and φ are always zero, as explained in chapter two and earlier in this chapter. In the lower plot the vertical scale has been expanded. The units for the angular rates are "per rad/sec" and for the velocities are "per ft/sec". Note that the values for the angular rates are much larger than for the velocities. This indicates that the pseudo measurement, the coplanarity constraint, is much more a function of angular rates than of velocity, at least for these circumstances. Also note that the curves are smooth. It was demonstrated earlier that the filter performance was good for this case. Figure 4-15 shows the measurement matrix values obtained for the same feature point when the equivalent of one pixel measurement noise is added. As demonstrated earlier, the filter did not perform well. For this run the filter was modified so that the state updates were not added to the state estimates. Also, the initialization errors were removed.
This resulted in the estimated states matching the true states; thus the measurement matrix was calculated using error-free state estimates, so any errors are due solely to the measurement noise. Note that the curves now appear noisy, apparently due to the measurement noise. The noise has only a minor effect on the relatively large values for the angular rates, which are still very close, percentage-wise, to those of Figure 4-14. The values for the velocities, however, are extremely erratic and bear little resemblance to those of Figure 4-14. (A smoothed mean value may approach the curve in Figure 4-14, but even that is doubtful.) The poor performance of the filter in the presence of measurement noise seems to be caused by two factors working together. First, the measurement matrix is a function of the noisy measurements as well as the states. This is not the case in the standard EKF, as was pointed out in the last section. Second, the pseudo measurement has a relatively weak dependence on the velocities (the direction of velocity, actually). This allows the measurement matrix and the computed gains related to the velocities to be corrupted by the measurement noise to the extent that the resulting velocity estimates are worthless. The errors also propagate through to the estimated Euler angles. The angular rate estimates seem to be more robust (see Figure 4-11) due to a healthy dependence of the pseudo measurement on the angular rates. Another way of looking at this is that the angular rate estimates are better conditioned due to their greater observability, making them more resistant to the measurement errors. The velocity estimates are poorly conditioned due to weak observability, making them more vulnerable to the measurement errors. This ill-conditioning is worse with shorter baselines (due to slower UAV speed or a higher measurement rate, for example).
4.6.3 Modification of the IEKF Measurement Matrix

Several factors suggest a possible solution to the problems experienced by the IEKF in the presence of measurement noise.

* The noise appears to significantly corrupt the part of the measurement matrix relating the pseudo measurement to the velocities (more properly, the direction of velocity).
* These components of the measurement matrix, the partial derivatives with respect to v and w, are very small values.
* The UAV model is completely observable just from knowledge of the angular rates (reference section 4.2).
* The updated angular rate states appear to remain fairly well behaved in the presence of measurement noise.

The solution suggested is to neglect the components of the measurement matrix relating to velocity altogether, that is, set these values to zero in H_k.

4.7 Results With the Modified IEKF With Measurement Noise

The modification to the IEKF described in the last section was made: the two columns in H_k relating the pseudo measurement to v and w were set to zero. Runs on the three trajectories were repeated with the same conditions as in section 4.5. The results are shown in Figures 4-16, 4-17, and 4-18. A comparison with Figures 4-8, 4-9, and 4-10 shows that the estimated values from the modified IEKF are clearly better. The errors are smaller and appear to be bounded for the most part by the filter-predicted covariances. In some cases the estimated states have slightly more error than the propagated states at certain points in the trajectory. The angular rates are especially noisy, but the averages of the estimated values appear to track better than the propagated values, particularly during the maneuvering profiles. Further filtering of these state estimates might be beneficial prior to use for autopilot feedback. A series of six runs was made along each of the three trajectories, varying the initial errors in a Monte Carlo fashion as was done previously for the straight and level case in section 4.5.
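The modification itself is a one-line operation on the measurement matrix; a sketch, assuming the state ordering [u, v, w, p, q, r, θ, φ] so that v and w are columns 1 and 2 (zero-based):

```python
# Sketch of the measurement-matrix modification: zero the columns of H_k
# that relate the pseudo measurement to the body velocities v and w.
import numpy as np

def modify_H(H_k, zero_cols=(1, 2)):
    """Return a copy of H_k with the velocity-related columns set to zero."""
    H_mod = H_k.copy()
    H_mod[:, list(zero_cols)] = 0.0
    return H_mod

H_k = np.arange(8.0).reshape(1, 8)   # one feature point -> one row (toy values)
H_mod = modify_H(H_k)
```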
The results are shown in Figures 4-19, 4-20, and 4-21. These results also show much better performance after the modification to the IEKF. The angular rates are noisy but all the states seem to converge. Some momentary deviations are noted during maneuvers, especially on the horizontal S, which involves control inputs on all three aircraft axes. In this chapter, the IEKF using the coplanarity constraint has been applied to a specific set of UAV dynamics. It was shown that the extra information provided by the UAV model allowed the estimation of additional states over that of the simple statistical model used in previous work in this area. Ill-conditioning problems in the velocity updates were evident in the presence of measurement noise. These were overcome by setting the small, noisy velocity partial derivative terms in the measurement matrix to zero. Performance using this modified IEKF was demonstrated on three different trajectories.

Figure 4-1. Eight foot wingspan class aircraft simulated for this effort [23].

Figure 4-2. Straight and level trajectory simulation, no measurement noise.

Figure 4-3. Errors from straight and level trajectory simulation, no measurement noise.
Figure 4-4. Vertical S trajectory simulation, no measurement noise.

Figure 4-5. Errors from vertical S trajectory simulation, no measurement noise.

Figure 4-6. Horizontal S trajectory simulation, no measurement noise.

Figure 4-7. Errors from horizontal S trajectory simulation, no measurement noise.

Figure 4-8. Errors from straight and level trajectory, one pixel measurement noise.

Figure 4-9. Errors from vertical S trajectory, one pixel measurement noise.

Figure 4-10. Errors from horizontal S trajectory, one pixel measurement noise.

Figure 4-11. Monte Carlo runs, straight and level trajectory with measurement noise.

Figure 4-13. Modal observability for straight and level trajectory, no measurement noise.
Figure 4-12. Straight and level trajectory errors with measurement noise, increased measurement interval.

Figure 4-14. Measurement matrix values for straight and level, no noise.

Figure 4-15. Measurement matrix values for straight and level, one pixel measurement noise.

Figure 4-16. Errors for modified IEKF, straight and level trajectory with measurement noise.

Figure 4-17. Errors for modified IEKF, vertical S trajectory with measurement noise.

Figure 4-18. Errors for modified IEKF, horizontal S trajectory with measurement noise.

Figure 4-19. Modified IEKF Monte Carlo, straight and level with measurement noise.
Figure 4-20. Modified IEKF Monte Carlo, vertical S with measurement noise.

Figure 4-21. Modified IEKF Monte Carlo, horizontal S with measurement noise.

CHAPTER 5
ROBUSTNESS AND SENSITIVITY ASSESSMENT

For practical use, the modified IEKF developed in the last chapter must be tolerant of inaccuracies in the system model used in the filter. These inaccuracies are unavoidable and come from simplifications in the governing equations and from uncertainties in parameters such as UAV aerodynamics, mass properties, and trim conditions. Some of these parameters may vary from flight to flight for a given set of hardware. In this chapter the robustness of the modified IEKF filter developed in the last chapter is assessed using a Monte Carlo analysis. It is also desirable to gain some insight into how other parameters that are controllable by the designer affect performance: items such as camera depression angle, camera field of view, and feature point selection. Filter performance sensitivity to some of these parameters is examined using the same Monte Carlo framework. Finally, the effect of varying amounts of measurement noise on filter performance is investigated.

5.1 Robustness to Modeling Uncertainties

Robustness of the IEKF to uncertainties in the system model was assessed using Monte Carlo runs on the same truth trajectories used in the last chapter. An attempt was made to choose the Monte Carlo variables and their bounds in a realistic fashion. (These are constants in the system model but are changed for each individual run using draws from a random number generator.)
Five groups of parameters were chosen to vary.

* Steady-state or trimmed flight condition. The UAV might be trimmed at a condition that is not exactly what is in the filter model. The values given in chapter four were used as the mean, with standard deviations of 5 ft/s for u_ss, 1 ft/s for v_ss and w_ss, and .02 radians for θ_ss and φ_ss. The steady-state angular rates were not varied, as it was assumed the UAV would always be trimmed to fly in a straight path.
* Steady-state or trimmed controls. The trimmed flight control settings might not be precisely known or might be offset in the UAV controls. The values given in chapter four were used as the mean, with standard deviations of .1 deg for deflections and .1 horsepower for throttle.
* Aerodynamic parameters. The aerodynamics of the system model are contained in the A and B matrices given in Equations 4-7 and 4-8. Aerodynamic coefficients are present in the terms in the upper left 6x6 partition of the A matrix and the upper left 6x3 partition of the B matrix. (The other terms in these matrices have to do with kinematics, gravity, and thrust.) This is shown in Roskam [24]. The aerodynamic uncertainties were modeled by varying these parameters individually with a five percent standard deviation around the nominal. This was done with the continuous form of the matrices, and then the effect over .05 seconds was added to the discrete matrices in Equations 4-7 and 4-8. This is an approximation since not all of the terms varied are strictly "only aerodynamic," but their major effects are.
* Mass properties. There are likely to be uncertainties in the UAV mass and moments of inertia. These uncertainties were modeled similarly to the aerodynamic uncertainties by varying the A and B matrices.
The first three rows of these matrices come from "acceleration equals force divided by mass" equations, so these terms were varied together, corresponding to an approximate five percent standard deviation variation in mass about the nominal value. Again, this was done with the continuous form of the matrices, and the effect over .05 seconds was then added to the values in Equations 4-7 and 4-8. The moment equations were handled in a similar fashion: the fourth row was varied for a five percent standard deviation in roll moment of inertia, the fifth row for pitch inertia, and the sixth row for yaw inertia. The higher order terms in Equation 4-9 were not varied from the nominal moment of inertia values.

* Longitudinal center of mass location. There is likely to be uncertainty in the center of mass location for the UAV, and it may also vary if components are shifted from flight to flight. An uncertainty of .1 foot standard deviation along the longitudinal axis of the vehicle from the nominal location was allowed, again through the A and B matrices. The pitch and yaw moment equations, rows five and six, were changed together by adjusting each moment component by an amount determined from the change in center of mass location and the appropriate term in the force equations. For example, the pitching moment due to w term, A(5,3), was adjusted using the normal force due to w term, A(3,3), through a lever arm of the change in center of mass, rescaled by mass divided by pitch moment of inertia. Again, this was done using the continuous equations first.

In all, 39 Monte Carlo parameters were chosen. This list is certainly not all-inclusive; for instance, camera inaccuracies and atmospheric turbulence are not included. However, this assortment should allow enough perturbation from the baseline conditions to assess robustness.
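The continuous-then-discrete perturbation described for the aerodynamic group can be sketched as below. This is an assumed first-order mapping (the perturbation of the continuous matrix times the .05 second step is added to the nominal discrete matrix); the dissertation does not spell out its exact discretization, and the 9-state/4-control dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.05  # discrete filter step, seconds

def perturb_discrete(Ad, Bd, Ac, Bc, rel_sigma=0.05):
    """Perturb each term in the aerodynamic partitions (upper-left 6x6 of
    the continuous A, upper-left 6x3 of the continuous B) by a 5% 1-sigma
    multiplicative draw, then add the first-order effect over one dt step
    to the nominal discrete matrices.  The mapping dAd ~ dAc*dt is an
    assumed simplification of the procedure described in the text."""
    dAc = Ac[:6, :6] * rng.normal(0.0, rel_sigma, size=(6, 6))
    dBc = Bc[:6, :3] * rng.normal(0.0, rel_sigma, size=(6, 3))
    Ad_p, Bd_p = Ad.copy(), Bd.copy()
    Ad_p[:6, :6] += dAc * dt
    Bd_p[:6, :3] += dBc * dt
    return Ad_p, Bd_p
```

Note that the kinematic, gravity, and thrust terms outside those partitions are left untouched, mirroring the restriction stated in the text.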
To provide insight into the effects of the different types of uncertainties, seven cases were run against the three trajectories for a total of 21 sets. Each set consisted of 50 runs. The cases are described below.

* Case 1. None of the model parameters were varied. The initial state estimates were varied and measurement noise was applied as in the Monte Carlo runs in chapter 4.

* Case 2. Same as case 1 plus the steady-state flight condition Monte Carlo parameters.

* Case 3. Same as case 1 plus the steady-state control Monte Carlo parameters.

* Case 4. Same as case 1 plus the aerodynamic Monte Carlo parameters.

* Case 5. Same as case 1 plus the mass properties Monte Carlo parameters.

* Case 6. Same as case 1 plus the longitudinal center of mass Monte Carlo parameter.

* Case 7. Case 1 plus all 39 Monte Carlo parameters at once.

The modified IEKF was used with the same setup as described in section 4.3 with regard to field of view, camera depression angle, process and measurement noise parameters, focal plane, measurement interval, and feature points. Actual measurement noise was one pixel standard deviation, σ = .0036, for all runs.

A suitable metric was needed to characterize each Monte Carlo set. In Figure 5-1 the estimation errors for a set of six runs are shown for case 1 on the straight and level trajectory to demonstrate the process. The error plots are informative for six runs; however, when 50 runs are made, the overlaid plots would be useless. For the 50-run sets, or ensembles, the mean error and its standard deviation about the mean are computed at each .05 second interval. Plots of these statistics are shown for case 1 on all three trajectories in Figures 5-2, 5-3, and 5-4. Mean values close to zero and small standard deviations are desirable. There is some trending in the mean during the maneuvers on the vertical and horizontal trajectories but no divergence.
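The per-interval ensemble statistics described above reduce to a mean and a sample standard deviation taken across the run axis; a minimal sketch (the 50 x 200 x 8 shape is illustrative for 50 runs, 10 s at .05 s steps, and 8 estimated states):

```python
import numpy as np

def ensemble_stats(errors):
    """errors: array of shape (n_runs, n_steps, n_states) holding the
    estimation error of every run at every .05 s sample.  Returns the
    ensemble mean error and the standard deviation about that mean at
    each time step (sample std, ddof=1, since 50 runs is a sample)."""
    mean = errors.mean(axis=0)
    std = errors.std(axis=0, ddof=1)
    return mean, std
```

The summary metric used later in the chapter then just averages these two arrays over the final samples of the run.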
The metric chosen to characterize each set was the average of the error means and standard deviations over the last five seconds. This was to allow the filter a chance to converge from the erroneous initial conditions and also to average out the effects of the trajectory maneuvers (since the maneuvers were not random). Results for the 21 sets are tabulated in Tables 5-1, 5-2, and 5-3. To make the results more readable, bar graphs of the results are plotted in Figures 5-5, 5-6, and 5-7. Note that the absolute values of the means, rather than the actual values from the tables, are plotted. An additional finding is that none of the 1050 runs diverged, that is, had errors increasing without bound.

Overall, the results indicate that the modified filter is robust to model inaccuracies. The angular rates are very insensitive to model variations because of the high observability of the angular rates in the measurement matrix, as discussed in the last chapter. Deviations in the trim conditions, case 2 for the state variables and case 3 for controls, seemed to have more effect than the errors in aerodynamics, mass properties, and center of mass location, cases 4, 5, and 6. The states that seem the most vulnerable are u and θ, which are longitudinal states. Recall from section 4.6.1 that these modes were predicted to have low observability. Notice from the tabulated means the correlation in signs: when the u error is positive, the θ error is negative. This is inherent in the dynamics of the model, since the UAV tends to slow down as pitch increases and speed up as pitch decreases. The results for case 7 indicate good robustness to all the error sources applied at once.

5.2 Sensitivity to Design Parameters

Certain parameters in the implementation of the estimation scheme are chosen by the designer. Sensitivity of the filter performance to some of these parameters was assessed.
The parameters examined were camera depression angle, camera field of view, filter measurement interval, and number and location of feature points.

5.2.1 Camera Depression Angle

Recall that the camera angle is referenced to the UAV vertical body axis, i.e., θc = 0 when aligned with the body z axis (approximately down when the UAV is at trim). Three camera angles were chosen for evaluation: θc = 0 degrees, θc = 60 degrees (the baseline configuration), and θc = 82.8 degrees, which at trim conditions correspond to 82.8, 22.8, and 0 degrees down from the horizon, respectively. To demonstrate the effect this has on feature point movement, three focal plane histories with the different camera angles are shown in Figure 5-8 for the straight and level trajectory with no measurement noise.

Fifty-run Monte Carlo sets were performed for each of the three camera angles on each of the three trajectories for a total of nine sets. The only Monte Carlo variables used were the initial state variables and the measurement noise, which corresponds to case 1 in the last section. The same metrics were computed as in the last section. The results are presented in similar fashion in Table 5-4 and Figures 5-9, 5-10, and 5-11.

The angular rates show a consistent trend for all three trajectories. The estimation for yaw rate, r, is worst when the camera is looking down and best with the camera on the horizon. Estimation for roll rate, p, is best when the camera is looking down. Both roll and pitch rate estimation, p and q, are a little worse at the intermediate camera angle. Side velocity, v, follows the same trend as r. The other variables do not seem to have a consistent trend throughout all three trajectories. None of the state estimates diverged during any of the 450 runs. Overall, the camera depression angle appears to have a definite but not overpowering effect on filter estimation performance for this application.
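The conversion between the body-referenced camera angle and the quoted look-down angles can be verified with a one-line calculation. The trim angle of attack of 7.2 degrees is an assumption inferred from the quoted 82.8/22.8/0 values (the text later describes it only as "approximately seven degrees"):

```python
ALPHA_TRIM_DEG = 7.2  # assumed trim angle of attack implied by the quoted angles

def depression_below_horizon(theta_c_deg, alpha_deg=ALPHA_TRIM_DEG):
    """Camera angle theta_c is measured from the body z (down) axis, so at
    trim the look direction sits (90 - theta_c - alpha) degrees below the
    horizon."""
    return 90.0 - theta_c_deg - alpha_deg

# The three evaluated angles reproduce the quoted 82.8, 22.8, and 0 degrees:
angles = [depression_below_horizon(t) for t in (0.0, 60.0, 82.8)]
```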
5.2.2 Camera Field of View

The focal plane specifics remain the same: 320 x 240 pixels with a 4:3 aspect ratio. Recall that the baseline field of view is 60 degrees horizontal by approximately 47 degrees vertical. The design parameter varied here is field of view while using the same focal plane, which is equivalent to changing the focal length of the camera lens. When the field of view is narrowed with the same focal plane, an increase in resolution is gained, which should translate into more accurate feature point coordinates. For the focal plane simulation algorithm used in this effort, the measurements are normalized to the focal length, as explained in chapters two and three. The increased resolution is realized by reducing the measurement noise standard deviation to keep it at the equivalent of one pixel. The opposite is done when the field of view is widened.

For this assessment, three fields of view were chosen: 30 degrees, 60 degrees (the baseline), and 90 degrees horizontal. The measurement noise applied had standard deviations of .001675, .0036, and .00625, respectively, to keep the noise at the equivalent of one pixel. The filter-assumed value for measurement noise remained unchanged. As in the previous section, 50-run Monte Carlo sets were done for each field of view on each of the three trajectories for a total of nine sets. Monte Carlo variables remained the same, and θc was set back to the baseline value of 60 degrees. The results are presented in previous fashion in Table 5-5 and Figures 5-12, 5-13, and 5-14. Consistent trends noted over the three trajectories are that the pitch and yaw rate errors appear smaller for the narrower fields of view, but the roll angle errors are smaller at the wider fields of view. Overall, the results show surprisingly little variation in performance as the field of view varies.

5.2.3 Measurement Interval

The effect of increasing the measurement interval was investigated.
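Before moving on, the one-pixel noise equivalents quoted in section 5.2.2 can be checked directly from the normalization: with coordinates normalized to the focal length, the 320-pixel focal plane spans 2 tan(fov/2), so one pixel corresponds to that span divided by 320. This is a consistency check under that assumed pinhole geometry, not code from the dissertation:

```python
import math

N_H_PIXELS = 320  # horizontal focal plane resolution

def one_pixel_sigma(fov_h_deg):
    """Normalized-coordinate standard deviation equivalent to one pixel:
    the focal plane spans 2*tan(fov/2) in units of focal length, divided
    across 320 pixels."""
    return 2.0 * math.tan(math.radians(fov_h_deg) / 2.0) / N_H_PIXELS

# Reproduces the quoted .001675, .0036, and .00625 (to rounding):
sigmas = [one_pixel_sigma(f) for f in (30.0, 60.0, 90.0)]
```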
This was explored in chapters three and four with the unmodified filter as a way of increasing feature point parallax to help with ill-conditioning. Three measurement intervals were assessed: the baseline value of .05 seconds, .25 seconds, and .5 seconds. The Monte Carlo variables and filter configuration remained the same, with the field of view at 60 degrees. Three sets of 50 Monte Carlo runs were done on each of the trajectories for a total of nine sets.

To see the effect of increased sample interval, a time history of statistics from the .5 second measurement interval set of runs on the straight and level trajectory is shown in Figure 5-15. The "sawtooth" effect on the one-sigma bounds for the angular rates is typical behavior as the uncertainty grows between updates. This cyclic behavior propagates through to v and w. The overall results for the nine sets are presented in previous fashion in Table 5-6 and Figures 5-16, 5-17, and 5-18.

The results for the straight and level and vertical S cases are unexpected at first glance. The overall performance of the filter, as indicated by the error standard deviations, seems to improve with less frequent updates for most states, which is counterintuitive: more measurements mean more information, which should result in better estimates. However, for the horizontal S case, the performance shows an overall deterioration with increased update interval, as would be expected. These results are not fully understood, but a plausible explanation has to do with filter tuning, which, as explained in chapter three, is the selection of the process noise covariance matrix, Q. Q is what drives the estimated filter uncertainty, P, through the system model (Equation 2-11) to offset modeling uncertainties and nonlinearities as well as disturbances. The Kalman gain matrix, K, is determined based on P and the measurement noise (Equations 2-13 and 2-21).
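The gain computation just referenced follows the standard Kalman form; a minimal sketch (generic EKF gain under the usual linear-Gaussian assumptions, not the modified IEKF's exact iterated equations):

```python
import numpy as np

def kalman_gain(P, H, R):
    """Standard Kalman gain K = P H' (H P H' + R)^-1.  Large predicted
    uncertainty P relative to the assumed measurement noise R yields
    large gains and aggressive updates; the converse yields small,
    conservative updates."""
    S = H @ P @ H.T + R              # innovation covariance
    return P @ H.T @ np.linalg.inv(S)
```

For a scalar example, P = 2 and R = 2 with H = 1 give a gain of 0.5, i.e., the update splits the difference between prediction and measurement.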
The gain can be thought of as a weighting on the measurement residual based on the estimated uncertainty in the current estimate as compared to the assumed noise in the pseudo-measurement. A high uncertainty in the estimate and a relatively low measurement noise lead to larger gains and updates, while the converse leads to smaller gains and updates. For K to be optimal, accurate values of process and measurement noise must be used. Q and the feature point measurement noise, R, were constant for these sets. It can be argued that the "real" uncertainty is highest for the horizontal S profile, since the UAV is maneuvering the most and is therefore furthest from the trim conditions on which the model is based. If the filter is tuned for the maneuvering case, then it may be mistuned for the nonmaneuvering case, resulting in gains that are proper for the maneuvering case but too high for the nonmaneuvering case. This might explain why the results make sense for the horizontal S case and not for the other two. Also note that for the single state variable θ, the trends are correct for all three trajectories (that is, the estimate gets worse as the measurement interval increases). Recall that this is the variable that most reflects the one unstable mode in the process model. It requires updates to prevent divergence and might not be as sensitive to the higher gains in the nonmaneuvering case.

5.2.4 Number of Feature Points

More feature points result in more measurements at each update, which should, as a general rule, increase filter performance. Non-filter-based methods employing the coplanarity constraint require a minimum of eight points or five points [5, 12, 19] to obtain a solution. As mentioned in chapter one, an advantage of this filter-based method is that it provides estimates that degrade gracefully when feature point measurements are reduced or even lost completely for a time.
Soatto reported in his work that filter performance tended to "saturate", that is, reach a point of limited improvement, after about 20 feature points [12]. The effect of varying the number of feature points used for the measurements was investigated. Runs were made using five, ten, twenty, and forty feature points. The Monte Carlo variables and filter configuration remained the same. The 60 degree field of view and .05 second measurement interval from the baseline configuration were used. Each run was started with the desired number of feature points. As feature points dropped from view, new ones were generated at random positions. At some points in the runs, the number of feature points dipped below the desired value momentarily, because only one new feature point was allowed to be added at each measurement. This occurred more for the maneuvering profiles at the higher numbers of feature points, which tended to drop points more rapidly. Four sets of 50 Monte Carlo runs were done on each of the trajectories for a total of 12 sets.

The results are presented in previous fashion in Table 5-7 and Figures 5-19, 5-20, and 5-21. The performance of the filter appears to improve with the increase in feature points, at least for the lateral-directional states, v, p, r, and φ. The accuracy of the longitudinal state estimates does not seem to be affected as much. The saturation in performance mentioned earlier does not seem to occur; that is, performance improvement does not appear to be leveling off with 40 feature points. None of the estimates diverged during any of the 600 runs.

5.2.5 Feature Point Location

Insight into the observability of p, q, and r with respect to feature point location can be obtained by examining the values in the measurement matrix H for different feature point locations. Figure 5-22 shows feature point traces on the focal plane for several feature points for a filter run on the straight and level trajectory with no measurement noise.
The triangle at the top of the focal plane depiction in Figure 5-22 is the point from which all the feature points appear to emanate, or move away from; it is constant during this run since the vehicle is in a trimmed, straight, steady-state flight condition. This point is known as the focus of expansion (FOE) in the optic flow literature [14, 15]. The FOE also depicts the direction of translational motion of the vehicle. The position of the feature point with respect to the FOE is what appears to be relevant. Also shown in Figure 5-22 is a plot of the changing values in H that relate the pseudo-measurements from a specific feature point to the state variables p, q, and r. (Recall that the components of H are the partial derivatives of the coplanarity constraint for each feature point with respect to the state variables, and that these are set to zero for all states except p, q, and r.) The five numbered segments on the plot correspond to the five numbered feature point traces on the focal plane depiction. A larger magnitude of h indicates a greater observability of the particular state variable for that feature point location. Figure 5-22 seems to indicate the following:

* q (pitch rate) observability increases as the magnitude of the horizontal distance, ν, from the FOE increases. Segments 2, 4, and 5 have smaller absolute values of h, and the corresponding feature points are closer horizontally to the FOE than those for 1 and 3.

* r (yaw rate) observability increases as the magnitude of the vertical distance, μ, from the FOE increases. Segments 3 and 5 have smaller absolute values of h, and the corresponding feature points are closer vertically to the FOE than those for 1, 2, and 4.

* p (roll rate) observability increases as the overall distance from the FOE increases. Segments 4 and 5 have smaller absolute values of h, and the corresponding feature points are closer to the FOE than those for 1, 2, and 3.
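The FOE itself has a simple form for pure translation: it is the projection of the camera-frame velocity direction onto the focal plane. A minimal sketch, assuming normalized coordinates (ν horizontal, μ vertical) with the optical axis as z; the axis naming is an assumption, and the dissertation's camera convention may differ:

```python
def focus_of_expansion(v_cam):
    """Project the camera-frame translational velocity (vx, vy, vz) onto
    the normalized focal plane: for pure translation every feature point
    streams away from (vx/vz, vy/vz)."""
    vx, vy, vz = v_cam
    if abs(vz) < 1e-12:
        raise ValueError("velocity parallel to focal plane: FOE at infinity")
    return vx / vz, vy / vz
```

During a trimmed, straight run this point is fixed, which is why the FOE in Figure 5-22 does not move; during the maneuvering trajectories it wanders, which complicates the location assessment discussed below.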
It should be noted that for the run shown, the standard setup was used, so the FOE was not aligned exactly with the body axis system used to define the angular rates. However, it is only off in pitch by the angle of attack of the vehicle, approximately seven degrees, which should not affect the validity of the above subjective observations.

An attempt was made to assess how feature point location on the focal plane affects filter performance. It seems reasonable to assume that filter performance would be related to feature point location in accordance with the preceding discussion on observability. To investigate this, Monte Carlo runs were made with restrictions on the location of feature points on the focal plane. Four configurations were chosen, as illustrated in Figure 5-23: the full 60 degree field of view, a vertical strip of width .2 centered on the FOE, a horizontal strip from the top of the field of view at μ = .433 down to μ = .233 (including the FOE at μ = .42), and a restricted box around the FOE with μ between .433 and .1 and ν within +/- .2. The Monte Carlo variables and filter configuration remained the same as in the previous section, and runs were made using all three trajectories. The runs were started with approximately 10-15 feature points. As feature points dropped from view, new ones were generated at random positions in the selected area of the focal plane to try to maintain 10 points in view. At some points in the runs, the number dipped below 10 during the maneuvering profiles, particularly with the restricted fields of view. Four sets of 50 Monte Carlo runs were done on each of the trajectories for a total of 12 sets.

The results are presented in previous fashion in Table 5-8 and Figures 5-24, 5-25, and 5-26. The results are mixed. For roll rate, p, performance is worse on all trajectories for feature points restricted to close to the FOE. Roll angle, φ, is also worse for this case. This is what was predicted in the earlier discussion.
For pitch rate, q, the poorest performance was expected for the vertical strip case. This was true on the horizontal S trajectory but not on the other two trajectories. For yaw rate, r, performance actually seemed better on two of the three trajectories for the horizontal strip case, which should have been the worst. Filter performance for the other state variables did not show any discernible trends with regard to feature point location. One possible problem in this assessment is that the FOE moves around on the focal plane for the maneuvering trajectories, which could not be taken into account.

5.3 Effect of Measurement Noise

The effect of measurement noise, that is, the error in the feature point locations on the focal plane provided to the filter, was investigated by varying the standard deviation of the random noise added to the computed feature point locations in the Monte Carlo simulations. For all the previous runs in this chapter, the standard deviation used has been the equivalent of one pixel. For these runs, standard deviations equivalent to .25, .5, 1, and 1.5 pixels were used. Otherwise the baseline parameters were used, i.e., the full focal plane at 60 degrees field of view with a measurement interval of .05 seconds, θc of 60 degrees, and approximately 10 feature points. The value of measurement noise used by the filter for computing the Kalman gains was left at the equivalent of one pixel for all runs. This simulates the case where the filter assumes a value for measurement noise but does not really "know" what the noise is. Four sets of 50 Monte Carlo runs were made for the four values of measurement noise on each of the three trajectories for a total of 12 sets.

The results are presented in previous fashion in Table 5-9 and Figures 5-27, 5-28, and 5-29. The results are generally as expected: the filter performance gets progressively worse as the noise level is increased. This is true for p, q, r, v, and φ on all three trajectories.
The states u, w, and θ do not seem to be as affected by the noise, perhaps because these states are not as observable and the noise tends to be filtered out. This is similar to the results noted for the sensitivity to the number of feature points. None of the estimates diverged during any of the 600 runs.

5.4 Overall Comments on Robustness and Sensitivity

The filter design seems to be robust and very tolerant of errors in the UAV process model. Monte Carlo results indicated that the filter was tolerant of modeling errors due to misinitialization, measurement noise, incorrect trimmed conditions and controls, aerodynamic parameters, and mass properties. No filter divergences were encountered during the 1050 runs. The filter seemed most sensitive to errors in the trimmed states and controls.

The effects on filter performance of varying several parameters of the filter and focal plane implementation were assessed. Parameters examined were camera depression angle, field of view, measurement interval, and number and location of feature points. The Monte Carlo results indicated less than expected sensitivity to these factors. The number of feature points (the more, the better) showed the most consistent results. Also, u, w, and θ seemed to be least affected during all these assessments. Again, no filter divergences were noted. Increasing the measurement noise level had the expected detrimental effect on filter performance.

Overall, the filter showed a great degree of robustness to errors and inaccuracies in the system model and variations in filter implementation. However, the effects of the various changes in filter design parameters were difficult to differentiate, at least by the Monte Carlo methods and metrics used in this chapter. This could be due in part to the stable nature of the particular UAV model for which the filter was implemented.
Another factor could be that the large amount of noise that feeds through to the angular rate estimates masks some of the trends that might be present.

Figure 5-1. Six run Monte Carlo set errors, case 1, straight and level trajectory.

Figure 5-2. Case 1 straight and level trajectory, ensemble mean error and bounds.

Figure 5-4. Case 1 horizontal S trajectory, ensemble mean error and bounds.

Table 5-2. Vertical S trajectory Monte Carlo sets.
Table 5-1. Straight and level trajectory Monte Carlo sets.

Table 5-3. Horizontal S trajectory Monte Carlo sets.

Figure 5-5. Straight and level trajectory, case statistics.

Figure 5-6. Vertical S trajectory, case statistics.

Figure 5-7. Horizontal S trajectory, case statistics.

Table 5-4. Camera angle Monte Carlo sets.

Figure 5-8. Focal plane history for straight and level trajectory with camera looking down, intermediate baseline position, and level.

Figure 5-9. Straight and level trajectory, camera angle statistics.

Figure 5-10. Vertical S trajectory, camera angle statistics.