
A Synthesized Directly Georeferenced Remote Sensing Technique for Small Unmanned Aerial Vehicles

Permanent Link: http://ufdc.ufl.edu/UFE0025070/00001

Material Information

Title: A Synthesized Directly Georeferenced Remote Sensing Technique for Small Unmanned Aerial Vehicles
Physical Description: 1 online resource (154 p.)
Language: english
Creator: Perry, John
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: direct, georeferencing, mosaicing, navigation, photogrammetry, remote, sensing, uas, uav
Forest Resources and Conservation -- Dissertations, Academic -- UF
Genre: Forest Resources and Conservation thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Directly georeferenced remote sensing is an emerging technique for rapidly gathering accurate and repeatable observations of the landscape. It refers to the ability to independently reconstruct the geometry of the remotely sensed data, allowing the data to be transformed into both maps and three dimensional representations. While direct georeferencing is becoming an established technique on both satellites and manned-aircraft platforms, the implementation on small, hand-launchable unmanned aerial vehicle platforms has been limited due to significant challenges in system integration and sensor accuracy imposed by the platform. A synthesized approach to directly georeferenced remote sensing on a small unmanned aerial vehicle is presented. An order-of-magnitude improvement in the mosaicing seam error over a naïve direct georeferencing approach is demonstrated using field data. Development of the technique is presented in a comprehensive and systematic examination of each essential element of directly georeferenced remote sensing. The use of empirical models for each element of the approach from raw data to finished product allows for a consistent and reliable evaluation of its accuracy. The geometric model of image formation including camera distortion is developed, and the spatiotemporal integration with the navigation system to accomplish direct georeferencing is formulated and examined in detail. Payload calibration procedures are developed and analyzed, together with a description of the Lightning Lab Calibration Site, a permanent installation for the calibration and evaluation of UAV-based payloads. The development of the physical payload and operational considerations such as flight planning and required overlap are evaluated. A novel sensor synchronization architecture is described and an error model for synchronization developed and analyzed. A synthesized processing algorithm is developed along with a survey of alternative approaches and potential improvements. The implementation of the algorithm is described and the accuracy of the algorithm is assessed using field data. Limitations of the technique are identified and examined throughout, with recommendations for future developments.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by John Perry.
Thesis: Thesis (M.S.)--University of Florida, 2009.
Local: Adviser: Mohamed, Ahmed Hassan.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0025070:00001

Full Text

A SYNTHESIZED DIRECTLY GEOREFERENCED REMOTE SENSING TECHNIQUE FOR SMALL UNMANNED AERIAL VEHICLES

By

JOHN HENDRIX PERRY

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2009

2009 John Hendrix Perry

To Billy Perry

ACKNOWLEDGMENTS

Thank you, Mom and Dad, for lugging me to the library every week during my more precocious years. Thank you, Fran and Doctor D., for encouraging me during the years I was less so. Kelly, thank you for your support and for always being there; I couldn't have done this without you.

A great deal of recognition is owed to all of the researchers who have contributed to the UF UAV project; it couldn't be done without the strong cooperation of an interdisciplinary team. Scott, thanks for getting me up and running on UAV research. Adam, thank you for your effort and for getting me involved early on. To the late Michael Morton: you will be missed and were an inspiration to us all. Matt, you are a tremendous asset to this program and were my right-hand man through this past year. Josh, thanks for the effort and huge contribution to the work on synchronization. Brandon, thank you for the hours of layup to get the fleet of UAVs in the air; I look forward to working with you more. Thank you to the faculty, especially Dr. Percival and Dr. Ifju, who have kept this project going and given us all the chance to lose some sleep on a worthwhile research effort. Thank you also to our collaborators in the Army Corps of Engineers, who have been solid supporters of our work.

Being a part of the Geomatics Program at the University of Florida has been an excellent experience. Thank you, Dr. Mohamed, for giving me the challenge to explore new fields and develop my math chops. Dr. Dewitt, you have had a huge impact on my work, and not just because of your advice to join the program. Thanks especially to Ben for being a knowledgeable colleague willing to exchange new ideas. Finally, thank you to Damon and Shane for being willing to get out and do some surveying at the Lightning Lab; good luck to both of you in your careers.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT

CHAPTER
1 INTRODUCTION
   The NOVA II Project
   Technique Motivation
   Direct Georeferencing
   Previous Approaches
2 GEOMETRIC MODEL
   Coordinate Systems
   Camera Geometry
   Interior Orientation Parameters
   Exterior Orientation Parameters
   Scene Geometry
3 PAYLOAD DESCRIPTION
   Design Goals
   Imaging System
   Navigation System
   Sensor Synchronization
   Payload Control and Data Management
      Hardware
      Software
4 PAYLOAD CALIBRATION AND EVALUATION
   Camera Calibration
   Boresight and Leverarm Calibration
   Navigation Accuracy
   Synchronization Accuracy
   Image Overlap and Flight Planning
5 PROCESSING ALGORITHM
   Overview of the Algorithm
   Preprocessing
   Interest Point Generation
   Initial Approximation
   Semi-Automatic Point Matching
   Bundle Adjustment
   Surface Generation
   Mosaicing
   Software Implementation
6 ASSESSMENT AND CONCLUSIONS
   Assessment
   Conclusions
   Recommendations

APPENDIX
A EXAMPLE NOVA II DATA
B LIGHTNING LAB CALIBRATION SITE
C BUNDLE ADJUSTMENT DEVELOPMENT

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

4-1  SCBA adjusted IOPs for the Lightning Lab camera calibration data set
4-2  SCBA adjusted EOPs for the Lightning Lab camera calibration data set
4-3  Boresight and leverarm calibration results
4-4  Navigation system benchmarking results
4-5  The calculated RMS error in the navigation parameters due to synchronization error and aperture uncertainty for the given dynamics
4-6  Ground coverage and overlap parameters for typical NOVA II flight lines
6-1  Corrections to the exterior orientation parameters obtained by the bundle adjustment of the assessed data set
A-1  Description of data packets generated by the NOVA II payload
B-1  Lightning Lab Calibration Site permanent reference monument coordinates
B-2  Lightning Lab Calibration Site aerial target coordinates

LIST OF FIGURES

1-1  Preparing the NOVA II for launch
2-1  The (a) image coordinate system, (b) navigation coordinate system, and (c) local-level coordinate system illustrated
2-2  Illustration of four types of distortions caused by miscalibration of the interior orientation parameters
3-1  The effect of throttling up the NOVA II electric motor on the payload magnetometer at full throttle (70 amp draw)
3-2  Architecture of the NOVA II payload
3-3  Diagram of the Burredo's operation
3-4  Comparison of the Burredo's size through iterations of development
3-5  Preparing to load the payload into the NOVA II, arranged beside the plane in their approximate locations inside the airframe
3-6  Close-ups of the sensor package (left) and controller package (right)
4-1  Lightning Lab Calibration Site calibration data set images 770, 796, 812, 894, 919, and 1004
4-2  Comparison of the SPAN and MTi-G attitude parameters over time for the static (left) and dynamic (right) portions of the navigation benchmarking experiment
4-3  Error profiles of MTi-G position and orientation parameters during the static (left) and dynamic (right) portions of the navigation benchmarking experiment
4-4  Distribution of the pseudorandom triggers as a fraction of the UTC second during the synchronization accuracy assessment
4-5  Evident linear trend in the error of the uncorrected synchronization data
4-6  The correlation of the synchronization error to the fraction of the UTC second at which it occurred, plotted against possible correction values
4-7  The value of the synchronization error over time in hundreds of nanoseconds
4-8  Normal probability plot of the synchronization error showing low kurtosis
4-9  Drift of the Burredo's crystal oscillator over time
4-10 Typical NOVA II dynamics while flying straight and level
4-11 Flight planning guide for determining flying height and flight line spacing for a desired overlap
4-12 Typical flight line configuration for the NOVA II, demonstrating the 'dipole' pattern
4-13 Example of the image overlap configuration for a typical NOVA II flight
4-14 Sequential image airbase for a typical NOVA II flight showing variations due to flight direction relative to the prevailing wind
4-15 Typical horizontal deviation of the NOVA II from an ideal flight line
4-16 Typical vertical deviation of the NOVA II from an ideal flight line
4-17 Typical orientation deviation of the NOVA II from an ideal flight line
4-18 Coverage map for a two-day data collection mission conducted on behalf of the Army Corps of Engineers
5-1  Example of the Log-Polar Transformation; the origin of the transform is the center of the source image. Source: [Lena]
5-2  Comparison of IDW (left) and Kriging (right) interpolation techniques demonstrating the bullseye effect of the IDW
5-3  Comparison of resampling techniques at 200% enlargement. Source: [Lena]
5-4  Example of correct tie point matches between source (left) and target (right) images, matched template in green and search space in red
5-5  Example of incorrect tie point matches due to poor initial approximation of the image boundary, matched template in green and search space in red
6-1  The distribution of tie points for the first (left) and second (right) iteration of the assessed data set
6-2  Distribution of mosaic seam error measurements and their values
6-3  Comparison of the mosaic (left) to the unadjusted direct projection of one of the composing images (right) showing the magnitude of the adjustment
6-4  Detail of typical mosaicing seam errors at the farthest distance from nadir (left) and due to vertical relief in the image (right)
6-5  Demonstration of detail with observations of (a) a low-flying bird, (b) wading birds, (c) a basking alligator, and (d) swimming alligators
A-1  Example snippet of the raw NOVA II log file
A-2  Example snippet of the generated NOVA II geotags file
B-1  Physical design of the Lightning Lab ground control point aerial targets
B-2  Aerial view of the Lightning Lab Calibration Site targets indicating the target distribution
B-3  Aerial view of the Lightning Lab Calibration Site targets captured by NOVA II with a zoomed inset of an individual target
B-4  NGS control monument KINGSLEY (left) and GPS setup (right)

LIST OF ABBREVIATIONS

AT      Aerotriangulation
BA      Bundle Adjustment
BLC     Boresight and Leverarm Calibration
CEP     Circular Error Probable
cm      Centimeter
COTS    Consumer Off-The-Shelf
DAQ     Digital Acquisition Card
DCM     Direction Cosine Matrix
DEM     Digital Elevation Model
DG      Direct Georeferencing
DGRS    Directly Georeferenced Remote Sensing
DLT     Direct Linear Transformation
DMS     Degrees-Minutes-Seconds
DSM     Digital Surface Model
ECEF    Earth-Centered Earth-Fixed
EOP     Exterior Orientation Parameters
FFT     Fast Fourier Transform
GCP     Ground Control Points
GCS     Ground Control Station
GPS     Global Positioning System
HCD     Harris Corner Detector
HSV     Hue-Saturation-Value
Hz      Hertz
IDW     Inverse Distance Weighted
IMU     Inertial Measurement Unit
INS     Inertial Navigation System
IOP     Interior Orientation Parameters
KML     Keyhole Markup Language
LiDAR   Light Detection and Ranging
LPT     Log-Polar Transformation
LSM     Least-Squares Matching
m       Meters
mm      Millimeters
m/s     Meters per Second
ms      Milliseconds
µs      Microseconds
MEMS    Micro-Electro-Mechanical Systems
NCC     Normalized Cross-Correlation
NGS     National Geodetic Survey
ns      Nanosecond
OPK     Omega-Phi-Kappa
PPS     Pulse Per Second
PRM     Permanent Reference Monument
RANSAC  Random Sample Consensus
RGB     Red-Green-Blue
RMS     Root Mean Squared
RPY     Roll-Pitch-Yaw
RTK     Real-Time Kinematic
SATA    Serial Advanced Technology Attachment
SCBA    Self-Calibrating Bundle Adjustment
SLR     Single-Lens Reflex
UAV     Unmanned Aerial Vehicle
UTC     Coordinated Universal Time
UTM     Universal Transverse Mercator
V       Volt
VGA     Video Graphics Array
ZUPT    Zero-velocity Update

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

A SYNTHESIZED DIRECTLY GEOREFERENCED REMOTE SENSING TECHNIQUE FOR SMALL UNMANNED AERIAL VEHICLES

By

John Hendrix Perry

August 2009

Chair: Ahmed Mohamed
Major: Forest Resources and Conservation

Directly georeferenced remote sensing is an emerging technique for rapidly gathering accurate and repeatable observations of the landscape. It refers to the ability to independently reconstruct the geometry of the remotely sensed data, allowing the data to be transformed into both maps and three-dimensional representations. While direct georeferencing is becoming an established technique on both satellite and manned-aircraft platforms, the implementation on small, hand-launchable unmanned aerial vehicle platforms has been limited due to significant challenges in system integration and sensor accuracy imposed by the platform.

A synthesized approach to directly georeferenced remote sensing on a small unmanned aerial vehicle is presented. An order-of-magnitude improvement in the mosaicing seam error over a naïve direct georeferencing approach is demonstrated using field data. Development of the technique is presented in a comprehensive and systematic examination of each essential element of directly georeferenced remote sensing. The use of empirical models for each element of the approach, from raw data to finished product, allows for a consistent and reliable evaluation of its accuracy. The geometric model of image formation, including camera distortion, is developed, and the spatiotemporal integration with the navigation system to accomplish direct georeferencing is formulated and examined in detail. Payload calibration procedures are developed and analyzed, together with a description of the Lightning Lab Calibration Site, a permanent installation for the calibration and evaluation of UAV-based payloads. The development of the physical payload and operational considerations such as flight planning and required overlap are evaluated. A novel sensor synchronization architecture is described, and an error model for synchronization is developed and analyzed. A synthesized processing algorithm is developed along with a survey of alternative approaches and potential improvements. The implementation of the algorithm is described and the accuracy of the algorithm is assessed using field data. Limitations of the technique are identified and examined throughout, with recommendations for future developments.

CHAPTER 1
INTRODUCTION

The NOVA II Project

The NOVA II is the most recent evolution of a decade-long interdisciplinary project at the University of Florida to develop a lightweight unmanned aerial vehicle for remote sensing applications [Jones, 2003]. With each successive generation of UAV development, the project has improved the performance and capability of the platform, with great strides being made in flight duration, airframe ruggedness, flight stability, operational flexibility, and payload capacity [Bowman, 2008]. The recent success of the program in transferring the NOVA II platform to operational use with the Army Corps of Engineers is a testament to the maturation of the platform [Burgess et al., 2009]. This transition has signaled that the capability of the small UAV platform for remote sensing in the civilian sector is moving from possible to practical. The NOVA II is pictured in Figure 1-1.

The basic motivation for developing the NOVA II is the need for georeferenced aerial imagery. The ubiquitous adoption of geographic information systems such as Google Earth and Microsoft Virtual Earth has only increased the need for spatially registered imagery in an immense variety of applications. Current methods for collecting and updating this imagery rely on manned aircraft and satellite-based remote sensing platforms. Limitations of these platforms include high cost and difficulty of deployment, limited resolution due to flying height restrictions, and risk to human life. In contrast, the small UAV paradigm offers the potential for: 1) low cost and rapid deployment, 2) high resolution, and 3) low risk to operators [Bowman, 2008]. The applications often cited for this UAV-based imagery include infrastructure and environmental monitoring, disaster response and recovery, and surveying and land use planning [Cox et al., 2004; Jones et al., 2006].

Georeferencing is a critical component of these applications, allowing both localization of features in the image as well as repeatable observations of the scene over time [Bowman, 2008].

Development of the NOVA II has been geared toward specific applications that are of interest to program collaborators. These missions reflect the particular advantages of the small UAV platform. In cooperation with the Army Corps of Engineers, the use of the NOVA II for monitoring invasive aquatic plant species has been highlighted [Bowman et al., 2008]. This application emphasizes the remote operation capabilities of the UAV, as well as the ability to frequently deploy the system for repeatable observations in change detection applications. A long-time goal of the program has been adapting small UAVs for wildlife population research, for which the platform is uniquely suitable [Wilkinson, 2007; Jones, 2003]. For this application, the very high image resolution for species and nest identification, the dangerously (for manned aircraft) low flying heights, quiet operation, and spatial and temporal repeatability all demonstrate the high potential of the small UAV platform.

Accompanying the development of an operational small UAV platform, significant progress has been made in the development of a suitable remote sensing payload. Although the UAV imposes weight and size constraints on the payload, the use of innovative sensor fusion methods allows for maximizing its operational envelope. To pursue this goal, a major redesign of the payload architecture was undertaken in the fall of 2008. A detailed examination of this hardware design and its architectural elements is presented in Chapter 3. The end result of this redesign was a flexible payload architecture capable of delivering the required geospatial data in a robust operational package.

Following the successful implementation of both the new airframe and payload, which together form the NOVA II, the project began deploying the UAV on an operational basis for a variety of data collection missions. This stage of development had long been anticipated, and had been partially achieved in the previous generation, the NOVA I [Bowman, 2008a]. However, it presented the new research challenge of developing a robust method for turning the raw data into a deliverable remote sensing product. For each previous generation of the project, methods had been proposed or developed for processing the data delivered by the system [Bowman, 2008; Wilkinson, 2007]; however, the previous approaches were incompatible or inadequate for exploiting the new suite of sensors available on the NOVA II.

Technique Motivation

The development of the payload, calibration, and processing techniques stood to take advantage of the two principal sensors on the payload: a high-resolution imaging system and a consumer-grade INS/GPS. Together, these two sensors can theoretically deliver georeferenced aerial imagery over terrain with no a priori terrain model or control through the use of a directly georeferenced remote sensing system. The approaches presented in this thesis are intended to fulfill this possibility by fusing all available sensor information to produce a georeferenced and rectified mosaic of the scene.

The emphasis on synthesis in describing the proposed technique is purposeful. A great number of methods have been developed for each element of the proposed technique, drawing primarily from the fields of robotics and photogrammetry. The development of the technique was thus guided by the investigation and selection of a suitable approach for each element of the algorithm, with the resulting synthesis of the individual elements producing a novel method for approaching the DGRS paradigm.

Importantly, it should be noted that the scope of the proposed technique is limited to processing data derived from the operational envelope of the data collection platform.

Limiting the scope of the technique has the dual impact of optimizing the solvability and effectiveness of the algorithm for this specific platform, but also of limiting the robustness of the technique in more general applications. However, a more general approach is certainly desirable. To this end, the constraints and assumptions of the proposed algorithm are described to the greatest extent possible throughout the thesis, and improvements and alternatives are proposed where possible. As will be demonstrated, it is ultimately the observability of the desired information and the availability of initial approximations which constrain the generality of the algorithm.

Direct Georeferencing

Direct georeferencing is clearly a central theme here, and the concept itself deserves some elucidation. A definition of direct georeferencing is not well established in the general photogrammetric community, perhaps because of the variety of algorithms and systems, including the one described in this thesis, which exploit it (or at least the term). A precise definition might be: Direct georeferencing is the act of determining the attitude and position of a remote sensing system at the moment of exposure. This definition is by no means simplistic, and implies the key concept that direct georeferencing is necessarily an exercise in sensor integration, wherein the remote sensing system is integrated with a navigation system both spatially and temporally. However, by merely defining direct georeferencing in terms of systems integration, it belies the rich potential of the technique. Determining the position and attitude of the imaging sensor does little without exploiting the data synthesis that is made possible by direct georeferencing. A refined definition of direct georeferencing is therefore proposed: Direct georeferencing is the process of independently reconstructing the geometry of remotely sensed data by integration of remote sensing and navigation systems. What this definition loses in conciseness is more than made up for by the explicit description of the purpose of direct georeferencing. It is put forth that direct georeferencing is a process composed of several essential elements.

It is true that this process is in large part a synthesis of well-formed methods. Much of the methodology is shared with traditional photogrammetry and computer vision, and is built upon the rudiments of implementing and integrating remote sensing and navigation systems. However, direct georeferencing offers novel challenges and unique advantages through sensor fusion. Under this proposed definition, direct georeferencing gains a unique identity among the other methods of reconstructing the geometry of remotely sensed data. Beside the aforementioned aspects of integrating the sensor systems, direct georeferencing is further defined by the independent reconstruction of the geometry. That is, the process of direct georeferencing proceeds with nothing more than the parameters and data associated with its sensor suite. No ground control is needed; no defined scene composition, or a priori terrain model. This definition is perhaps too narrow, since there are processing techniques which invoke the term direct georeferencing but in fact exploit extensive a priori information. To this end, it can be stated up front that a direct georeferencing technique, as defined, is not universally valid. Being directly georeferenced is not a sufficient property of remotely sensed data for the scene geometry to be independently reconstructed.

Previous Approaches

As mentioned, the proposed direct georeferencing technique is synthesized from a number of different scientific fields, and a practical review of the full scope of previous methods from which it derives is not feasible here. To the extent possible, previous approaches to individual elements of the technique are described as the individual elements are discussed. For the purposes of overview, however, the proposed technique is a direct georeferencing technique, and should thus be compared to other direct georeferencing approaches.

Direct georeferencing as a photogrammetric technique is relatively new, coinciding with the availability of the Global Positioning System (GPS) and inertial navigation systems (INS) capable of providing sufficiently accurate position and orientation parameters. From the first indications of its advent in the 1980s to the development of practical system implementations in the 1990s, there has been a continual expansion in the recognition of its potential in the photogrammetry community [Mohamed and Price, 2002]. This gave rise to a point of contention within the discipline as to whether direct georeferencing makes methods which require ground control points or other a priori information for a photogrammetric solution obsolete [Cramer, 1999; Greening et al., 2000]. For the proposed technique, as with previous direct georeferencing techniques, it is premature to envision it as a sufficiently robust and accurate method to replace traditional photogrammetric methods for high-accuracy remote sensing applications. However, the operational envelope of the NOVA II leaves little choice in the matter; it is designed for, and must operate in, environments where ground control is not practically available.

Fundamentally, direct georeferencing techniques use position and attitude measurements from an INS/GPS to provide estimates for the exterior orientation parameters of the imaging system [Cramer and Stallman, 2000]. In comparison, traditional photogrammetric techniques recover these parameters by use of space resection by collinearity in the case of a single image, and aerotriangulation in the case where a strip of overlapping images is available [Wolf and Dewitt, 2000]. Essentially, the navigation solution is used as the geometric control in the photogrammetric solution rather than ground control points. Once the exterior orientation parameters of the image are recovered, the normal photogrammetric process of generating rectified aerial images is continued.

That is, either an extrinsic scene model (e.g., a Digital Elevation Model or LiDAR data) or an intrinsic scene model generated using photogrammetry is used as a surface onto which the aerial images are projected and resampled [Mohamed and Price, 2002; Jung and Lacroix, 2003]. Direct georeferencing techniques using LiDAR have been fully implemented on a UAV, but due to the size of the NOVA II this was not applicable [Nagai et al., 2009].

A significant difference between previous direct georeferencing methods and the proposed technique arises from the difference in the accuracy of the INS/GPS measurements. In typical direct georeferencing scenarios, the IMU has an accuracy of 1°/hr or better [Mostafa and Schwarz, 2000]. Due to the size and weight limitations of NOVA II payloads, current commercially available technology is not capable of delivering this level of accuracy on the NOVA II platform. To compensate, sensor fusion is used to improve the solution with additional geometric information derived from the photogrammetric observations. This does not mean that ground control or surface models are required; the photogrammetric observations may be limited to the intrinsic geometry of image formation. This approach has been termed Integrated Sensor Orientation in the photogrammetric community [Khoshelham, 2009], although the usage is not well defined and is also used to refer to combined direct georeferencing and aerotriangulation [Heipke et al., 2002]. Optimally reconstructing the image and scene in a simultaneous adjustment is a well-developed technique regardless of the use of control points [Triggs et al., 1999] and is nearly universal in traditional aerotriangulation [Wolf and Dewitt, 2000].

In the NOVA I, direct georeferencing was implemented by projecting each captured image to a flat surface. Multiple-image geometry was not considered, and mosaicing was not implemented. The horizontal accuracy attained using this method was reported to be 67.6 m RMS [Bowman, 2008a]. This method of using a naïve projection model to a flat surface has been used widely for rudimentary DGRS payloads on a number of UAV systems [Taylor and Anderson, 2008; Xiang and Tian, 2007].

This method compares poorly to photogrammetrically adjusted direct georeferencing solutions found by a fully calibrated metric camera with high-accuracy navigation systems on traditional aerial platforms, which have been shown to attain horizontal accuracies on the order of centimeters [Cramer and Stallman, 2000]. The proposed technique is intended to improve upon the results of the NOVA I, with the goal of approaching the accuracy of existing larger systems.

Figure 1-1. Preparing the NOVA II for launch

CHAPTER 2
GEOMETRIC MODEL

Coordinate Systems

To begin the development, several Cartesian coordinate frames must be introduced. Traditional photogrammetry usually employs two coordinate systems, one in the image space and the other in object space. Direct georeferencing requires a third coordinate system for the navigation system, referred to in this paper as the navigation coordinate system.

The object space coordinate system is usually defined in the local-level frame, a right-handed coordinate system with an origin at some position defined in a geodetic coordinate system such as graticular latitude and longitude or ECEF. The X and Y axes, along the East and North directions respectively, form a plane tangent to the surface of an earth ellipsoid. The Z axis is normal to the ellipsoid, completing the right-handed system as illustrated in Figure 2-1(c). Although the following mathematical development is made in a local-level orthogonal system, it is common to use a conformal mapping projection such as the Universal Transverse Mercator for the object space coordinate system in direct georeferencing, to reduce the computation cost of generating output maps [Legat, 2006]. However, this introduces error into the solution due to the curvature of the earth and the vertical relief of the terrain [Ressl, 2002]. A more rigorous solution is found by reconstructing the geometry in a local-level coordinate system defined at the center of the scene, and subsequently transforming the object space coordinate system to a mapping projection using geodetic methods. For a detailed examination of the problems associated with using a conformal mapping projection in direct georeferencing, the reader is referred to [Skaloud and Legat, 2007; Legat, 2006; and Ressl, 2002]. In general, these effects are minimal over small land areas when the terrain is essentially at sea level.

The image coordinate system is a right-handed coordinate system defined with an origin at the perspective center of the camera. The X and Y axes define a plane parallel to the imaging sensor. For the purposes of this research, the X and Y axes correspond to the rightward and upward directions, respectively, when holding the camera as if you were taking a normal picture. The Z axis of the image coordinate system completes the right-handed coordinate system and extends positively in the direction of the photographer. The intersection of the Z axis of the imaging coordinate system (the optical axis) with the plane of the image sensor is the principal point. To avoid the complication of dealing with an inverted image, the image sensor can be virtually located in front of the perspective center, as shown in Figure 2-1(a). We can thus define the position vector of the perspective center of the camera in the object space, [X_L, Y_L, Z_L], and a position vector for any feature in the object space, [X, Y, Z], corresponding to a feature in the image space, [x, y, z].

The navigation coordinate system is defined by the body frame composed of the ideal sensing axes of the dual triad of accelerometers and gyroscopes in the inertial navigation system. To avoid confusion in nomenclature, it should be made clear that in this thesis the navigation coordinate system is equivalent to the body coordinate system when defined relative to the local-level frame, as is commonly found in navigation literature. This system is idealized in the sense that the navigation sensors will not be perfectly aligned in space and will not be truly orthogonal [Schwarz and Wei, 2000]. Its origin is defined at the convergence of these axes and is illustrated in Figure 2-1(b). The origin is denoted in the object space by [X_n, Y_n, Z_n].

The orientations of the image and navigation coordinate systems with respect to the object coordinate system can be defined by equivalent Euler angles, direction cosine matrices, or quaternions [Kuipers, 1999].

The rotations are usually expressed using Euler angles and computed using a DCM. Two conventions are commonly used: 1) omega-phi-kappa (ω, φ, κ) for the image coordinate system, and 2) roll-pitch-yaw (r, p, y) for the navigation coordinate system. These conventions differ by the direction, order, and choice of axes about which the sequential Euler angles are applied. In this thesis the latter is used only when explicitly noted, and in all cases the OPK convention is used for photogrammetric implementation. We thus denote the rotation from the object coordinate system to the image coordinate system by M = M(ω, φ, κ) and the rotation from the navigation coordinate system to the object coordinate system by R = R(r, p, y). The equivalent DCM rotations are given by Equations 2-1 and 2-2.

M = \begin{bmatrix} \cos\varphi\cos\kappa & \sin\omega\sin\varphi\cos\kappa + \cos\omega\sin\kappa & -\cos\omega\sin\varphi\cos\kappa + \sin\omega\sin\kappa \\ -\cos\varphi\sin\kappa & -\sin\omega\sin\varphi\sin\kappa + \cos\omega\cos\kappa & \cos\omega\sin\varphi\sin\kappa + \sin\omega\cos\kappa \\ \sin\varphi & -\sin\omega\cos\varphi & \cos\omega\cos\varphi \end{bmatrix}   (2-1)

R = \begin{bmatrix} \cos p\cos y & \sin r\sin p\cos y - \cos r\sin y & \cos r\sin p\cos y + \sin r\sin y \\ \cos p\sin y & \sin r\sin p\sin y + \cos r\cos y & \cos r\sin p\sin y - \sin r\cos y \\ -\sin p & \sin r\cos p & \cos r\cos p \end{bmatrix}   (2-2)
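As a concrete illustration of Equations 2-1 and 2-2, the following sketch builds both direction cosine matrices with NumPy. The function names and the sample angles are illustrative assumptions, not part of the NOVA II software; a quick orthonormality check is included because transcription errors in these matrices are easy to make and hard to spot.

```python
import numpy as np

def dcm_opk(omega, phi, kappa):
    """Object-to-image rotation M from omega-phi-kappa (radians), per Eq. 2-1."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [ cp * ck,  so * sp * ck + co * sk, -co * sp * ck + so * sk],
        [-cp * sk, -so * sp * sk + co * ck,  co * sp * sk + so * ck],
        [ sp,      -so * cp,                 co * cp               ]])

def dcm_rpy(roll, pitch, yaw):
    """Navigation-to-local-level rotation R from roll-pitch-yaw (radians), per Eq. 2-2."""
    sr, cr = np.sin(roll), np.cos(roll)
    sp, cp = np.sin(pitch), np.cos(pitch)
    sy, cy = np.sin(yaw), np.cos(yaw)
    return np.array([
        [cp * cy, sr * sp * cy - cr * sy, cr * sp * cy + sr * sy],
        [cp * sy, sr * sp * sy + cr * cy, cr * sp * sy - sr * cy],
        [-sp,     sr * cp,                cr * cp               ]])

if __name__ == "__main__":
    M = dcm_opk(np.radians(2.0), np.radians(-1.5), np.radians(45.0))
    # Any proper rotation matrix is orthonormal with determinant +1.
    assert np.allclose(M @ M.T, np.eye(3))
    assert np.isclose(np.linalg.det(M), 1.0)
```

Because each matrix is orthonormal, its inverse is simply its transpose, a property used repeatedly in the developments that follow.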

Some geodetic transformations are of practical interest in direct georeferencing. In particular, the navigation coordinate system origin provided by a GPS unit will usually be given in WGS84 latitude, longitude, and height above the ellipsoid. To convert this value to the object space system defined by a local-level frame, it is necessary to first convert the latitude (φ), longitude (λ), and height (h) to ECEF coordinates using Equation 2-3, which uses the WGS84 ellipsoid parameters [Wolf and Dewitt, 2000]. Using the same formula, the ECEF coordinates of the object coordinate system origin should also be known.

X = (N + h)\cos\phi\cos\lambda
Y = (N + h)\cos\phi\sin\lambda
Z = \left(\tfrac{b^2}{a^2}N + h\right)\sin\phi
N = \frac{a}{\sqrt{1 - e^2\sin^2\phi}}, \qquad e^2 = \frac{a^2 - b^2}{a^2}
a = 6378137.000000\ \text{m}, \qquad b = 6356752.314245\ \text{m}, \qquad 1/f = 298.257223563   (2-3)

A vector between the origin of the object space and the navigation coordinate system can then be calculated in the ECEF coordinate system and, by rigid body transformation, converted to the local-level coordinate system as given in Equation 2-4.

\begin{bmatrix}\Delta X\\ \Delta Y\\ \Delta Z\end{bmatrix} = \begin{bmatrix}X\\ Y\\ Z\end{bmatrix}_{ECEF} - \begin{bmatrix}X_o\\ Y_o\\ Z_o\end{bmatrix}_{ECEF}, \qquad \begin{bmatrix}X\\ Y\\ Z\end{bmatrix}_{local} = \begin{bmatrix}-\sin\lambda_o & \cos\lambda_o & 0\\ -\sin\phi_o\cos\lambda_o & -\sin\phi_o\sin\lambda_o & \cos\phi_o\\ \cos\phi_o\cos\lambda_o & \cos\phi_o\sin\lambda_o & \sin\phi_o\end{bmatrix}\begin{bmatrix}\Delta X\\ \Delta Y\\ \Delta Z\end{bmatrix}   (2-4)

Camera Geometry

The geometry of camera image formation is described by a perspective projective transformation [Wolf and Dewitt, 2000]. The development proceeds by first giving a transformation, up to a scaling factor k, between the image and object space, as in Equation 2-5.

\begin{bmatrix}x\\ y\\ z\end{bmatrix} = k\,M\begin{bmatrix}X - X_L\\ Y - Y_L\\ Z - Z_L\end{bmatrix}   (2-5)

Applying M to the image coordinate system, we introduce a temporary coordinate system whose origin is coincident with the image coordinate system but whose axes are parallel with the local-level system. The image vector expressed in this temporary system, [x_t, y_t, z_t], is related to the image coordinates by Equation 2-6.

x = m_{11}x_t + m_{12}y_t + m_{13}z_t
y = m_{21}x_t + m_{22}y_t + m_{23}z_t
z = m_{31}x_t + m_{32}y_t + m_{33}z_t, \qquad M = \begin{bmatrix}m_{11} & m_{12} & m_{13}\\ m_{21} & m_{22} & m_{23}\\ m_{31} & m_{32} & m_{33}\end{bmatrix}   (2-6)

The collinearity condition states that the perspective center and the corresponding image space and object space coordinates are collinear. This is an ideal condition produced by light rays propagating in straight lines through a uniform medium. The collinearity condition thus produces a relationship of similar triangles (Equation 2-7), and by substitution in Equation 2-6 we deduce Equation 2-8.

\frac{x_t}{X - X_L} = \frac{y_t}{Y - Y_L} = \frac{z_t}{Z - Z_L} = k   (2-7)

x = k\left[m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)\right]
y = k\left[m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)\right]
z = k\left[m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)\right]   (2-8)

Dividing x and y by z, and noting that z = -f, we arrive at the ideal collinearity condition, given by Equation 2-9.

x = -f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}
y = -f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}   (2-9)

The collinearity equations in their common form are arranged to solve for the image coordinates. This may be thought of as the Project Up development of the collinearity equations, since the arrangement makes it straightforward to solve for the perspective projection of the object space coordinates onto the image space. However, it is usually the case that the image coordinate of some object of interest has been identified, and the local-level coordinates of the imaged feature are unknown. To handle this case, we solve the Project Down collinearity equations for the object space coordinates X and Y, given by Equation 2-10.

X = X_L + (Z - Z_L)\,\frac{m_{11}x + m_{21}y - m_{31}f}{m_{13}x + m_{23}y - m_{33}f}
Y = Y_L + (Z - Z_L)\,\frac{m_{12}x + m_{22}y - m_{32}f}{m_{13}x + m_{23}y - m_{33}f}   (2-10)

Equivalent to the collinearity equations, the Project Down system of equations is not directly solvable because it lacks one degree of freedom under usual circumstances. That is, for given interior and exterior orientation parameters and an image coordinate pair, there remains an ambiguity in solving for the object space coordinates: a value of Z must be supplied before X and Y can be recovered.

The ideal collinearity equations provide the fundamental physical model for the geometric reconstruction of a scene from image coordinates. However, this model must be extended with additional parameters in order to model the real-world properties of a camera system [Fraser, 1997].
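The sketch below expresses the Project Up and Project Down relationships (Equations 2-9 and 2-10) in NumPy. It is a minimal illustration of the ideal model only: no interior orientation corrections are applied, the coordinate values in the round-trip check are invented, and the Project Down call assumes the elevation Z of the imaged point is known, which is exactly the Z ambiguity discussed above.

```python
import numpy as np

def project_up(obj_pt, cam_pos, M, f):
    """Ideal collinearity (Eq. 2-9): object point -> image coordinates (x, y)."""
    d = M @ (np.asarray(obj_pt, float) - np.asarray(cam_pos, float))
    return (-f * d[0] / d[2], -f * d[1] / d[2])

def project_down(img_pt, cam_pos, M, f, Z):
    """Inverse collinearity (Eq. 2-10): image point -> (X, Y) at a known elevation Z."""
    x, y = img_pt
    XL, YL, ZL = cam_pos
    u = M.T @ np.array([x, y, -f])      # image ray expressed in the object space
    s = (Z - ZL) / u[2]                 # scale that carries the ray to elevation Z
    return (XL + s * u[0], YL + s * u[1])

if __name__ == "__main__":
    # Round trip with a nadir-looking camera 300 m above a flat scene (made-up numbers).
    M, cam, f = np.eye(3), (368000.0, 3280000.0, 300.0), 0.035
    xy = project_up((368050.0, 3280020.0, 0.0), cam, M, f)
    print(project_down(xy, cam, M, f, Z=0.0))   # ~ (368050.0, 3280020.0)
```

The round trip recovers the original ground point only because the correct Z was supplied; with an erroneous Z the recovered horizontal position slides along the image ray, which is the geometric source of the seam errors examined later.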

Before proceeding, an important comment is to be made on alternative developments of camera geometry and image formation. This issue is at the crux of the proposed direct georeferencing technique, since it is central to the rigorous approach to direct georeferencing. It is well known that alternative models for camera geometry can be employed and have some desirable properties [Hartley and Mundy, 1994]. Most notably, the use of a projective model or a direct linear transformation, as opposed to the perspective projective geometry developed in this thesis, is commonly employed as an alternative. However, such models ultimately are not an empirical approach to direct georeferencing [Horn, 1999].

Interior Orientation Parameters

Interior orientation parameters are properties of the optical and physical configuration of the camera lens and sensor. The interior orientation parameters of a camera provide a corrective model to adjust for deviations of the internal camera geometry from the ideal collinearity model. Regardless of the model used, the sum effect of the interior orientation parameters is to produce a corrected image coordinate for use in the collinearity equations [Fraser, 1997]. The interior orientation parameters can be further categorized into intrinsic and distortion parameters. The distortion parameters refer to IOPs which apply a correction that is dependent on the value of the image coordinate under consideration, whereas the intrinsic parameters are constant for all image coordinates. Although there are alternative methods for modeling the IOPs [Ebner, 1976], the interior orientation model described here relies on a set of well-described distortions developed in [Brown, 1971].

The calibrated focal length, or principal distance, is an intrinsic parameter and is typically the most significant because it acts as a scale factor in the collinearity equations (Equation 2-9). The calibrated focal length is defined as the true distance from the rear nodal point of the lens to the principal point. The principal point offsets are intrinsic parameters and are a measurement of the physical offset between the geometric center of the sensor and the optical axis of the lens. The principal point offsets are denoted by x_0 and y_0. The presence of principal point offsets results in a linear shift of the image coordinate system, as illustrated in Figure 2-2. Principal point offsets are present in most imaging systems, and along with focal length are the most commonly calibrated interior orientation parameters.

Radial distortion is caused by variations in the effective refractive index of the lens along a radial axis. This distortion is common and is usually the largest source of error among the image-coordinate-dependent errors for non-metric cameras [Wackrow, 2008]. The distortion is dependent not only on the physical aberrations in the lens, but also on the focusing distance and object distance, although these secondary effects are typically minor. Radial distortion is typically modeled as a Seidel series, an odd-ordered polynomial series, as given in Equation 2-11. Typically, lenses are modeled with a Seidel series truncated at the second or third term [Fraser, 1997].

\delta x_r = \bar{x}\,\frac{\delta r}{r}, \qquad \delta y_r = \bar{y}\,\frac{\delta r}{r}
\delta r = k_1 r^3 + k_2 r^5 + k_3 r^7 + \cdots
r^2 = \bar{x}^2 + \bar{y}^2, \qquad \bar{x} = x - x_0, \qquad \bar{y} = y - y_0   (2-11)

Tangential distortion, alternately called radial asymmetric or decentering distortion, is typically caused by misalignment of the optical components of the lens and is given in Equation 2-12 [Kraus, 2007]. The magnitude of this error is usually smaller than radial distortion for high-quality lenses, but may be significant for lower-quality lenses not designed for metric imaging [Wackrow, 2008].

\delta x_t = t_1(r^2 + 2\bar{x}^2) + 2t_2\bar{x}\bar{y}
\delta y_t = t_2(r^2 + 2\bar{y}^2) + 2t_1\bar{x}\bar{y}   (2-12)


Affinity and shear distortion models the effect of a digital sensing array whose elements have non-uniform scales along each axis and/or are non-orthogonal, as given by Equation 2-13. This error is less common for modern digital cameras, but can be accounted for with the correction formula

\[ \delta x_{aff} = a_1\,x + a_2\,y, \qquad \delta y_{aff} = 0 \tag{2-13} \]

As shown in Equation 2-14, both the intrinsic and distortion parameters can be combined into a single perturbation model which is added as a correction to Equation 2-9. The distorting effect of miscalibrated interior orientation parameters is illustrated in Figure 2-2.

\[ x_a + \delta x = x_0 - f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad y_a + \delta y = y_0 - f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)} \]
\[ \delta x = x\,(k_1 r^2 + k_2 r^4 + k_3 r^6) + t_1\,(r^2 + 2x^2) + 2\,t_2\,x\,y + a_1\,x + a_2\,y \]
\[ \delta y = y\,(k_1 r^2 + k_2 r^4 + k_3 r^6) + 2\,t_1\,x\,y + t_2\,(r^2 + 2y^2) \tag{2-14} \]

Exterior Orientation Parameters

We have now developed a sufficient mathematical foundation to illustrate the use of direct georeferencing. Recall that the physical geometry of camera image formation is described by twelve parameters for the ideal camera in Equation 2-9, with additional interior orientation parameters for a more realistic case as in Equation 2-14. The IOPs can be determined by calibration and may be considered knowns. The parameters of the ideal collinearity equations consist of six exterior orientation parameters [X_L, Y_L, Z_L, ω, φ, κ] which relate an image point [x_a, y_a] to a corresponding object point [X, Y, Z] by a perspective projective transformation. The power of direct georeferencing is in providing direct measurements of the six exterior orientation parameters.


It should now be clear why direct georeferencing necessarily provides both position and orientation data. Using direct georeferencing, the collinearity equations lack only a single unknown for reconstructing the geometry of each element of an image formed by the camera. The ability to treat the EOPs as directly observed values greatly reduces the complexity of the problem, not only from a mathematical standpoint but also for practical purposes. Without directly observed EOPs, the absolute orientation between the image and object space must be provided entirely by image observations of a priori ground control points. Furthermore, the relative orientation between images can be determined only up to a scale factor without ground control points. This requires not only the availability of observable ground control for a solution, but also that the ground control be arranged to provide a sufficiently robust geometry for the photogrammetric solution.

Since it is not feasible to have the navigation coordinate system and image coordinate system physically coincident, it is necessary to apply a boresight and leverarm correction to the navigation parameters in order to use them as EOPs. It is essential that the boresight and leverarm parameters remain constant during flight, a requirement that is satisfied by rigidly mounting the navigation and imaging systems relative to each other. This is known as a strapdown sensor configuration. In this configuration, the boresight and leverarm correction amounts to a rigid body transformation in the object space. The three parameters of the leverarm correction, [a_x, a_y, a_z], are the translation from the origin of the navigation coordinate system to the origin of the image coordinate system, defined in the navigation coordinate system. Similarly, the boresight correction requires three Euler angles, [ω_B, φ_B, κ_B], which orient the navigation coordinate system to the image coordinate system. The boresight and leverarm calibration parameters are applied to the navigation parameters to obtain the EOPs using Equation 2-15. The optimal estimation of the boresight and leverarm equations is discussed in Chapter 4.

\[ \begin{bmatrix} X_L \\ Y_L \\ Z_L \end{bmatrix} = \begin{bmatrix} X_{nav} \\ Y_{nav} \\ Z_{nav} \end{bmatrix} + M_{nav}^{T} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}, \qquad M = M_B\,M_{nav} \tag{2-15} \]
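As a sketch of how Equation 2-15 is applied in practice, the Python function below turns one navigation solution into a set of EOPs given assumed boresight and leverarm values; the names and numbers are illustrative and do not reflect the actual NOVA II calibration.

```python
import numpy as np

def navigation_to_eop(r_nav, M_nav, leverarm, M_boresight):
    """Apply the leverarm and boresight corrections (Equation 2-15).

    r_nav       : navigation-system position in the object (mapping) frame
    M_nav       : rotation from the object frame into the navigation frame
    leverarm    : offset from the navigation origin to the camera perspective
                  centre, expressed in the navigation frame
    M_boresight : rotation from the navigation frame into the image frame
    Returns the exposure-station position and the image orientation matrix.
    """
    r_camera = np.asarray(r_nav) + M_nav.T @ np.asarray(leverarm)
    M_image = M_boresight @ M_nav
    return r_camera, M_image

# With zero boresight and a camera mounted 10 cm ahead of the IMU (illustrative):
r, M = navigation_to_eop((368200.0, 3280150.0, 200.0), np.eye(3), (0.10, 0.0, 0.0), np.eye(3))
print(r, M)
```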


Scene Geometry

Reconstruction of the scene geometry is principally accomplished by solving for the object space coordinates of image features. As noted in the previous section, the collinearity equations for an image feature lack a single degree of freedom given the exterior and interior orientation parameters and image coordinate observations. An additional known is required to resolve the ambiguity, referred to from here on as the Z ambiguity. It is so named because the ambiguity lies largely along the optical axis (the Z axis) of the image space. The solution set will lie along the three dimensional line defined by the collinearity equations, parameterized by a scalar λ in Equation 2-16.

\[ X = X_L + \lambda\,(m_{11}\,x_a + m_{21}\,y_a - m_{31}\,f) \]
\[ Y = Y_L + \lambda\,(m_{12}\,x_a + m_{22}\,y_a - m_{32}\,f) \]
\[ Z = Z_L + \lambda\,(m_{13}\,x_a + m_{23}\,y_a - m_{33}\,f) \tag{2-16} \]

The additional observations required to resolve the ambiguity may take the form of tie points and control points. A tie point is a feature that is observed in two or more images. Each tie point has three unknowns which define a position vector in the object space, but each image in which the tie point appears gives two knowns corresponding to the pair of image coordinates. Thus, a tie point that appears in two images will add one degree of freedom to the system of equations, and a tie point that appears in three images will add three degrees of freedom to the system.
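This ray intersection can be sketched numerically. The following Python example, with made-up exposure stations and ray directions, finds the least-squares intersection of the collinearity rays (Equation 2-16) from two images observing the same tie point; the function and values are illustrative, not part of the thesis software.

```python
import numpy as np

def intersect_rays(stations, directions):
    """Least-squares intersection of collinearity rays from several images.

    stations   : exposure-station positions (X_L, Y_L, Z_L)
    directions : ray directions in object space, e.g. M.T @ [x_a, y_a, -f]
    Returns the object-space point closest to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, d in zip(stations, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ np.asarray(s, dtype=float)
    return np.linalg.solve(A, b)

# Two rays converging near (10, 20, 5) from stations 100 m up (made-up numbers).
p = intersect_rays(
    stations=[(0.0, 0.0, 100.0), (30.0, 0.0, 100.0)],
    directions=[(10.0, 20.0, -95.0), (-20.0, 20.0, -95.0)],
)
print(p)  # ~ [10, 20, 5]
```

With noise-free rays the recovered point is exact; with real observations the rays pass near, not through, a common point, which is why the solution is posed in a least-squares sense.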


Resolving the Z ambiguity using tie points is equivalent in a physical sense to finding the intersection of the lines formed by the collinearity condition of each image observation. Tie points are necessarily limited to image regions which have stereoscopic coverage, i.e. the object space feature is visible from at least two independent camera exposures. A control point may or may not be a tie point, but has at least one known position parameter in the object coordinate system. For example, a horizontal control point may have a known easting and northing coordinate, or a vertical control point may have a known height. In traditional photogrammetry, the control point has a central role in the photogrammetric solution because it establishes the absolute orientation. Of course, the goal in the development of a DGRS is to obviate their use entirely, as was previously discussed. Therefore, a detailed discussion and development of control points will not be given. In general, however, control points should always be used when available because they provide a valuable constraint to the collinearity equations.

One of the most widely used techniques for resolving the Z ambiguity for DGRS is to use an extrinsic or assumed model of the physical geometry of the scene. In these scenarios the collinearity ambiguity is effectively nonexistent, since all three object space parameters are observed and the photogrammetric solution is reduced to merely finding the corresponding image features for each point of interest on the scene model. The use of extrinsic scene models suffers from a serious limitation that is not initially obvious. The direct resolution of the Z ambiguity using extrinsic scene models does not equate to the use of control points or tie points, in the sense that it does not enforce the geometric consistency or accuracy of the physical image formation. In other words, the correspondence between a particular position on a DEM and a position in the image, as defined by the collinearity equations, does not equate to the empirical correspondence of the actual feature and the image feature. If the EOPs provided by direct


36 georeferencing are in error, then the reliance on the direct solution of the collinearity equations will result in a n incorrect correspondence between the scene and image features. Therefore, the use of scene models should be restricted to cases where the relationship between the camera and scene models is established empirically by enforcing the image formation geometry or where the EOPs provided by direct georeferencing are of sufficiently high a ccuracy The technique presented in this thesis does away with the requirement for a priori control points or scene models and is similar to classic softcopy photogrammetry techniques for DEM generation [Wolf and Dewitt, 2000]. The modern use of photogrammetrically generated DEMs has been demonstrated for both highaccuracy direct georeferencing [ Yastikli and Jacobsen, 2002] and on larger UAV platforms using ground control points [Eisenbeiss and Zhang, 2006]. The technique relies solely on the photogrammetric reconstruction of a tie point network to reconstruct the scene geometry and thereby generate a surface for resolving the collinearity ambiguity in situ. This method is limited to resolving the collinearity ambiguity for features which can be o bserved in two or more images and the ambiguity for all other features must be interpolated from the observed features In the sense that this interpolation results in a DEM it is similar in outcome to the extrinsic scene model but with the fundamental d ifference that it is empirically related to the camera geometry.


Figure 2-1. The (a) image coordinate system, (b) navigation coordinate system, and (c) local level coordinate system illustrated


Figure 2-2. Illustration of four types of distortions caused by miscalibration of the interior orientation parameters (panels: principal point, radial, tangential, and affine)


CHAPTER 3
PAYLOAD DESCRIPTION

Design Goals

Directly georeferencing airborne data is an exercise in systems integration, and the physical and operational limitations imposed by a small unmanned aerial vehicle constrain the payload [Hruska et al., 2005]. These constraints limit the sensor suite, which tends to increase both in size and in integration complexity in direct proportion to the sensor quality and accuracy. Thus, the selection of sensors is a matter of compromise guided by strict design limitations. The design constraints were established by experience gained in previous generations of the NOVA platform and the goal of improving operational and production efficiency, listed here:

One kilogram (1 kg) total weight. The target weight was a component of the airframe design specifications, both for flight duration and hand-launch capability.

Modular consumer off-the-shelf hardware. Minimizing cost, ease of replacement, and development time were of primary concern. The ongoing development of the system is a strong deterrent to custom-designed solutions which require specialized knowledge for modification or upgrade.

Remote (wireless) and direct (cabled) interfaces. It is an operational necessity to have remote access to the payload, both for sensor control and for error detection. The large datasets that are generated, however, preclude data transfer over the wireless interface. A direct cabling interface is necessary for both post-mission data transfer and payload troubleshooting.

Unified data bus. The tight confines of the airframe as well as the need to minimize electromagnetic interference suggest the need for simple cabling and interconnects.

Single data storage device. The complexity of the device as well as post-mission data transfer is greatly simplified by having a single repository for all data.

Imaging System

The primary imaging sensor on the NOVA II is an Olympus Inc. E-420 digital single lens reflex camera; see [Olympus E 420] for specifications. This digital camera was selected primarily for its compact size and robust feature set, including a software interface that allows


40 complete control of the camera settings, exposure triggering, and cap tured image transfer to the host A fixed focal length 25mm pancake lens was used due to its light weight and compact size. Important considerations in moving to a digital SLR over the admittedly lighter and smaller consumer point and shoot -camera s common on most small UAV platforms was the vastly improved op tical performance of the available lenses and the low -noise medium -format imaging sensor. This move was foreseen and advised in the analysis of the NOVA I payload [ Bowman, 2008]. The overall success of the platform as a remote sensing tool is as reliant on the quality and resolution of the final imagery as much as the accuracy of the mapping. During a typical mission the camera is operated at the fastest possible exposure speed to minimize the effect of motion blurring. As a result, aperture priority mode is usually set to an exposure duration of 1/2000th of a second or better and an ISO sensitivity of 100. The Olympus automatic exposure metering program set to shutter priority, has in all experience set the aperture to 2.8 with these parameters under normal daylight conditions No noticeable motion blurring is seen at these settings, and increasing the exposure duration to 1/4000th does not result in marked improvement in sharpness and has less contrast in lower light conditions Navigation System The navigation sensors are of paramount importance to the mapping accuracy of the direct georeferencing solution. As discussed in Chapter 2, the navigation sensors provide the EOPs and hence the absolute orientation of the photogrammetric solution Th e two canonical components of the navigation system are the IMU and GPS sensors, which are further integrated using a state estimation filter [Chatfield, 1997] Additional sensors such as magnetometers and air pressure sensors are often used to augment the navigation performance. Size, weight, and cost restrictions severely limit the selection of the navigation sensors In general, the selection for small UAV platforms is limited to the two lowestaccuracy classes of


these devices: a MEMS-based IMU and a code-solution single-frequency GPS. As a result, the primary factors in selecting the INS/GPS were the relative accuracy, robust design, and ease of integration. The Xsens Inc. MTi-G was chosen for implementation. Key features of the MTi-G include a built-in Kalman filter for a real-time orientation parameter solution, a raw data output mode for post-processing refinement, a high update rate, and a simple serial data communication protocol [Xsens MTi G]. Importantly, an interface is also provided for device synchronization based on the GPS 1PPS signal. A benchmarking experiment and an analysis of the accuracy of this sensor are discussed in Chapter 6.

A brief discussion of inertial navigation and integration with GPS using a Kalman filter is in order due to its importance in direct georeferencing. A canonical INS design consists of a triad of accelerometers axially coincident with a triad of gyroscopes, with the sensing axes of the triads arranged orthogonally. This allows for the measurement of the linear acceleration and angular rate along the axes of the navigation coordinate system relative to an inertial frame of reference. Considering motion within the inertial frame, the position and orientation of a rigid body can be entirely determined from the initial state and the observations of motion by applying Newton's laws of motion. This is summarized by the mechanization of the equations of motion in the inertial frame of reference given in Equation 3-1. Implementation of the laws of motion is complicated by the effects of a rotating frame of reference (earth), which introduces the Coriolis effect, as well as by the definition of the local level frame.

\[ \dot{\mathbf{r}}^{i} = \mathbf{v}^{i}, \qquad \dot{\mathbf{v}}^{i} = \mathbf{C}_{b}^{i}\,\mathbf{f}^{b} + \mathbf{g}^{i}, \qquad \dot{\mathbf{C}}_{b}^{i} = \mathbf{C}_{b}^{i}\,\boldsymbol{\Omega}_{ib}^{b} \tag{3-1} \]

where \(\mathbf{r}^{i}\) and \(\mathbf{v}^{i}\) are the position and velocity resolved in the inertial frame, \(\mathbf{C}_{b}^{i}\) is the rotation from the sensor (body) frame to the inertial frame, \(\mathbf{f}^{b}\) is the specific force sensed by the accelerometer triad, \(\mathbf{g}^{i}\) is the gravitational acceleration, and \(\boldsymbol{\Omega}_{ib}^{b}\) is the skew-symmetric matrix of the angular rates sensed by the gyroscope triad.
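The consequence of integrating Equation 3-1 with imperfect sensors can be seen in a one-axis toy example; the bias value and time span below are invented purely for illustration and are not MTi-G specifications. It foreshadows the error-growth behaviour discussed below.

```python
import numpy as np

# Dead reckoning with a small constant accelerometer bias: the position error
# grows quadratically with time (0.5 * b * t^2), which is why unaided MEMS
# inertial navigation diverges within seconds.
dt = 0.01                                  # 100 Hz IMU
t = np.arange(0.0, 60.0, dt)               # one minute of flight
bias = 0.01                                # 0.01 m/s^2 accelerometer bias (~1 mg)
accel_measured = np.zeros_like(t) + bias   # true acceleration is zero

velocity = np.cumsum(accel_measured) * dt
position_error = np.cumsum(velocity) * dt
print(position_error[-1])                  # ~18 m after 60 s, from the bias alone
```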


By simple analogy, consider if a ship's navigator were to know that they were at the equator and oriented due north. By dropping a weighted rope in the water and measuring the angle of the rope from vertical, the navigator can determine their average velocity. After a day's sailing, the navigator would simply multiply the measured velocity by the time travelled, giving the distance travelled and their new position. Much like the analogy, the dead reckoning method of inertial navigation is suspect because of the large influence of biased measurement errors, compounded in the case of inertial navigation by double integration of the measured acceleration and single integration of the angular rate. This is where the importance of integration with GPS comes into play. GPS provides both a position and velocity update which can be used to bound the growth of error in the INS solution. This is particularly effective due to the complementary characteristics of the two systems: GPS error is dominated by short-term white noise, whereas the INS error is dominated by long-term drift. On the other hand, GPS provides relatively low frequency updates, whereas the INS is typically operated at 100 Hz or more. There are a variety of performance gains from integrating the two systems, but perhaps the most important is the ability to make the bias errors of the INS observable, allowing them to be compensated for. Thus, the typical state vector has fifteen parameters: nine for position, velocity, and orientation, with six additional parameters for the bias of each inertial sensor. These states are then optimally estimated by employing a Kalman filter, which is an optimal estimator for linear dynamic systems. Integration of INS/GPS is usually accomplished with an Extended Kalman Filter (EKF).


The EKF allows the standard Kalman filter to be extended to nonlinear systems by linearization of the dynamics about the current state. The Kalman filter operates in two stages, a prediction and an update. In very brief summary, the Kalman filter prediction stage propagates the current state through the system dynamics, providing a prediction of what the next measurement should be if the current states are correct. A state covariance matrix is also propagated through the dynamics. In the update stage, an innovation is calculated as the difference between the prediction and the new measurement. The Kalman filter gain is then calculated, which weights the innovation based on the covariance matrix of the states. Conceptually, this means that if the filter has converged and the states have a low covariance, the filter will trust the current estimates and not give the update much weight. Thus, the Kalman filter effectively acts as an adaptive low-pass filter, allowing the filter to reject noisy measurements [Simon, 2006]. If the filter has not converged and the current states have a high covariance, the filter will be more optimistic about the new measurements. Either way, the weighted update is applied to generate the filter states for the next iteration, and a new estimate for the state covariance matrix is calculated based on how well the prediction performed.
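The predict/update behaviour described above can be demonstrated with a scalar filter. This is a deliberately minimal sketch of the general mechanism, not the MTi-G's actual fifteen-state filter, and all numbers are invented.

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter (random-walk state).

    x, P : current state estimate and its variance
    z    : new measurement; Q, R : process and measurement noise variances
    """
    # Prediction: propagate the state and grow its uncertainty.
    x_pred = x
    P_pred = P + Q
    # Update: the gain weights the innovation by the relative confidence.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# A converged filter (small P) largely ignores a noisy measurement.
print(kalman_step(10.0, 0.01, z=14.0, Q=0.001, R=1.0))   # state barely moves
# An unconverged filter (large P) trusts the measurement instead.
print(kalman_step(10.0, 100.0, z=14.0, Q=0.001, R=1.0))  # state jumps toward 14
```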


There are a few notes to be had on INS/GPS and the optimality of the Kalman filtering. The first is that the filter must be initialized with unbiased states, since the Kalman filter has infinite memory and will propagate an initial bias through the entire solution. Additionally, the covariance matrices for the system and measurements must also be accurate. These conditions have to be met for the Kalman filter to be an optimal estimator. In practice, however, the relatively poor quality of consumer-grade INS/GPS means that the initialization of the navigation system is less critical. Initialization for roll, pitch, position, and velocity is straightforward using GPS and motionless observation of the gravity vector. However, the heading cannot be initialized in an INS/GPS by direct measurement unless the system is in motion or the INS is accurate enough to find the heading by measuring the earth's rotation of 15°/hour, a process known as gyrocompassing. The MTi-G is augmented with a magnetometer which, by observing Earth's magnetic field, is capable of providing heading observations and smoothing the angular observations. However, a series of in situ experiments with the NOVA II revealed that the high electrical currents drawn by the electric motor induced magnetic fields that corrupted the MTi-G magnetometer measurements. An example of this effect is shown in Figure 3-1. Because of the error introduced by the electromagnetic interference, the magnetometer was disabled. Thus, the heading must be initialized by GPS updates alone. For this reason, it is critical that the NOVA II be in motion for some time to allow for a good estimation of the heading before data collection begins [Xsens MTi G].

Sensor Synchronization

Synchronization is a challenging aspect of sensor integration when dealing with COTS devices. Since the INS/GPS sensors are already integrated within the MTi-G, they were not of concern; however, the synchronization of the imaging system with the navigation system was an open problem. A thorough search of the marketplace did not reveal any COTS solution, and therefore the requirement for a completely modular off-the-shelf payload was not met. The solution was to develop a custom synchronization system, which was dubbed the Burredo. The synchronization architecture was developed by the author, and the circuit and microcontroller program was designed and implemented by Joshua Childs. Although the Burredo was designed specifically for the synchronization of a dSLR camera with an INS/GPS, its basic architecture allows for the synchronization of a wide range of sensors with minor modifications to the signal conditioning circuitry to handle the voltage level.


Cameras that employ SLR technology typically include a standard hotshoe connector used to mount and synchronize flash devices. The flash synchronization signal present at the hotshoe is a ready-made interface for timing the exposure. A variety of flash signal modes are available, but the most common in modern xenon-type flash modules is the X-sync signal. The X-sync signal is designed to trigger the flash at the moment of peak exposure, i.e., at the moment the shutter is fully open [Olympus E 420]. Given the fast shutter speed relative to the dynamics of the UAV, the raw X-sync signal was used, although it does not precisely signal the central moment of exposure. Although adequate for our purposes, a platform with higher dynamics should include a correction factor for the time between when the shutter is fully open and half the exposure duration. This detail is discussed more thoroughly in Chapter 4.

An important characteristic of the described synchronization architecture is its use of a feedback mechanism. A feedback mechanism such as the flash X-sync signal guarantees that the timing signal is deterministically related to the actual time of exposure. The master timing signal is provided by the MTi-G SyncOut pulse, which is synchronized (in the simultaneous sense) to the sampling rate of the navigation sensors. The internal clock of the MTi-G is calibrated to the GPS 1PPS signal. The MTi-G can be configured to output the solved navigation parameters at a variety of rates, such as 10, 50, and 100 Hz, corresponding to frequency divisions of the GPS 1PPS signal. Choosing the sampling rate of the INS as the master signal was a matter of convenience in terms of hardware interface and has a very practical benefit. Each navigation packet received is accompanied by a Burredo timing packet. Additionally, each camera exposure that occurs within that epoch is reported as a fraction of the epoch. As a result, the synchronization data reported for the camera exposure is a proper fraction that can be directly used to interpolate the needed direct georeferencing parameters in adjacent navigation packets.
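That interpolation step can be sketched as follows. The packet structure and field names here are invented for illustration, and simple linear interpolation is assumed, which is reasonable only when the platform dynamics are smooth between navigation packets.

```python
import numpy as np

def interpolate_eop(nav_prev, nav_next, fraction):
    """Interpolate navigation parameters at an exposure reported as a
    fraction of the epoch between two navigation packets.

    nav_prev, nav_next : dicts with 'position' (m) and 'attitude' (deg)
    fraction           : exposure time as a proper fraction of the epoch [0, 1)
    """
    def lerp(a, b):
        return (1.0 - fraction) * np.asarray(a) + fraction * np.asarray(b)

    return {
        "position": lerp(nav_prev["position"], nav_next["position"]),
        # Linear interpolation of Euler angles is adequate for small
        # inter-packet rotations; wrap-around near +/-180 deg needs care.
        "attitude": lerp(nav_prev["attitude"], nav_next["attitude"]),
    }

packet_a = {"position": (368200.0, 3280150.0, 200.0), "attitude": (1.0, -2.0, 45.0)}
packet_b = {"position": (368202.1, 3280150.4, 200.1), "attitude": (1.4, -1.8, 45.6)}
print(interpolate_eop(packet_a, packet_b, fraction=0.37))
```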


The Burredo utilizes an Atmel, Inc. Atmega 644p processor to synchronize the devices and generate the timing packet. This processor series was chosen due to affordability and ease of development. The X-sync signal is conditioned using a high precision operational amplifier. The programming loop describing the operation of the timing mechanism is illustrated in Figure 3-3. The master timing signal is at TTL voltage level. The Burredo outputs data in a fixed-width data packet over a TTL serial line. An FT232RL chip was used to convert this signal to the USB protocol, allowing for integration with the existing data bus. To achieve its accuracy, the Burredo utilizes the Atmega's 16-bit input capture register. This allows the slave event time to be recorded asynchronously of the normal program scheduling. The resolution of this timer is 67 ns, dependent on the microcontroller's clock frequency, which is implemented at 14.7456 MHz. To further reduce the weight of the overall payload, a second FT232RL chip and a line driver were added to reduce cabling and packaging requirements. This allows the computer to communicate with the MTi-G using USB and simplifies wiring by having the master timing signal present on the same board, saving weight and space.

Payload Control and Data Management

Hardware

The control and data management requirements of the NOVA II payload strongly suggested the use of a Microsoft Windows compatible x86-based computer. Such a computer meets many of the payload requirements: readily available components such as human interface devices and storage devices; and standardized data buses such as FireWire, USB, SATA II, and RS-232.


Perhaps most importantly, there is a very large market for devices with drivers that are plug-and-play compatible. The payload control and data management system was built around a VIA Technologies, Inc. Pico-ITX form factor motherboard, the EPIA P700-10L [VIA EPIA 2009]. The data storage device selected was a solid state hard drive connected by SATA, optimal for UAV operations because it has no delicate moving parts. Appropriate interface connectors were wired for the power supply, power switch, power-on indicator light, USB 2.0, RS-232, and VGA. The power supply itself was a Dimension Engineering, Inc. DC-DC regulator, model DE-SWADJ3, that down-regulated the onboard lithium polymer battery voltage from a nominal 18.5 V to the required 12 V level. Early in the NOVA II testing phase, it was apparent that the onboard computer was a major source of electromagnetic interference which debilitated the radio communications downlink. As a result, it was necessary to shield the computer within a Faraday cage. An 18 gauge steel mesh was used to construct a structurally sound enclosure with satisfactory electromagnetic shielding properties. All devices, cabling, and interconnects with high-frequency signals were carefully shielded and properly grounded based on early testing.

Software

The payload control and data management software, titled the NOVA II Payload Manager, was written by the author in Microsoft C#.NET and runs on the Microsoft Windows XP operating system. The software also makes use of proprietary dynamic link libraries provided by Olympus USA, Inc. as an interface to the camera. The software follows a fully object-oriented paradigm and makes extensive use of interrupt-driven events to minimize device querying overhead. The NOVA II Payload Manager software has the following objectives:


Reliably start up and begin payload management automatically

Simultaneously expose local GUI and remote interfaces to control the payload

Configure settings, trigger exposures, and store images from the camera

Configure and log data from the IMU/GPS

Log data from the synchronization device

Integrate data streams into efficient disk-write operations

The basic structure of the program consists of a Windows Form object, which instantiates a single instance of a NovaOrchestrator object, which in turn creates instances of the Logger, Procerus, Olympus, MTiG, and Burredo classes. These objects control the data logger, the GCS downlink for remote commands, the camera, the INS/GPS navigation device, and the synchronization device, respectively. Two critical aspects of the software architecture are the logging subsystem and the event-driven data collection model. First, the logging subsystem is built around a thread-safe singleton class. This allows all of the device handlers to have equal access to the same logging system, allowing efficient write operations to the hard drive. This, together with the event-driven device handlers, allows the incoming data to be recorded precisely in the sequence received. This system is necessary for the correct implementation of the synchronization scheme. In addition to these architectural concerns, there were issues associated with allowing simultaneous command of the payload from both remotely transmitted commands and automated routines. This functionality is provided by implementing command marshal methods with state flags in the NovaOrchestrator object. Although an interface between the payload computer and the Procerus Technologies, Inc. Kestrel autopilot system, which provides the wireless communication facilities, was not implemented, the software was written to allow future integration once this communication link is established.


Figure 3-1. The effect of throttling up the NOVA II electric motor on the payload magnetometer at full throttle (70 amp draw)

Figure 3-2. Architecture of the NOVA II Payload


Figure 3-3. Diagram of the Burredo's operation (flowchart boxes: Master Signal Triggers Interrupt; Clock Register Recorded (T_epoch); Clock Register Reset to Zero; Burredo Data Packet Sent for Previous Epoch; Slave Signal Triggers Interrupt; Clock Register Recorded (T_event))

Figure 3-4. Comparison of the Burredo's size through iterations of development


Figure 3-5. Preparing to load the payload into the NOVA II, arranged beside the plane in their approximate locations inside the airframe

Figure 3-6. Close-ups of the sensor package (left) and controller package (right)


52 CHAPTER 4 PAYLOAD CALIBRATION AND EVALUATION Camera Calibration A c amera calibration was performed using data collected at the Lightning Lab Calibration S ite (see Appendix B). This calibration is marked by being an in -situ calibration as opposed to a bench calibration as performed on previous generations of the NOVA platform ; see [Bowman, 2008] for example The literature strongly suggest s the use of in-situ camera calibrati on, particularly for non -metric cameras [Habib et al., 2005; Honkaavara et al., 2003]. In situ allows the calibration to include actual geometry, temperature pressure and other conditions found during normal operation al missions The camera is also focus ed at infinity during normal data collection ; a condition which is not practically achieved when calibrating the camera using close range techniques. The calibration data presented was collected on February 2, 2009 at midday and a t a flying height of 200 m eters with an average groundspeed of 21 m/s The camera was configured for shutter priority with exposure duration of 1/2000th and with all i mages taken at a 2.8 F -stop. The field calibration technique as performed in this section is admittedly inconvenient compared to close range calibration approaches. Considering that the UAV is designed to operate in remote areas without ground control, it is counter -intuitive to expect there to be a calibration site available nearby Thus, a primary concern was to determine the frequency with which the camera calibration must be performed. If it is found that the calibration parameters change significantly from one mission to the next, then the proposed in-situ calibration would not be practical and less accurate but more convenient close range techniques would be preferable. Varying results as to the stability of internal orientation parameters for non -metric digital cameras have been obtained in the literature, and it is unclear whether this is due to met hodology


53 or the camera employed. In Habib et. al. [2005] the is sue of calibration methodology is addressed directly, and a conservative approach for estimating IOP similarity was develop that considered the effect of IOP stability on direct georeferencin g witho ut any post -mission adjustment Using this method, the interior orientation parameters were found to exhibit significant variability. However, a more liberal stability assessment that accounted for using a bundle adjustment with ground control on th e same data as the conservative approach showed that in fact the interior orientation parameters did not vary significantly. Using a similar approach as the more liberal stability assessment in Habib, et. al. [2005] no significant variability was found for a lower -grade consumer camera over the course of a year [ Wackrow 2008]. If the parameters are unstable, an alternative approach would be to conduct a self calibrating bundle adjustment for each flight using the photogrammetric data generate d by the prop osed direct georeferencing technique Given that such flight data would not include ground control points and would rely on a low accuracy navigation system for control, it is unclear whether the signal -to -n oise ratio would obliterate the sensitive interio r orientation parameters. In addition, i t has been well -demonstrated that the interior orientation parameters are strongly correlated with various exterior orientation parameters using normal adjustment geometries [Wolf and Dewitt, 2000]. The effect of abs orbing biases in the direct georeferencing parameters and perhaps even noise for higher order distortion parameters on the overall adjustment accuracy is not clear for the technique presented in this thesis. In calibrating a high accuracy direct georeferen cing payload, it was shown that using the inplane polynomial parameters developed by Ebner [ 1976] reduced the mixing of the exterior and interior orientation parameters [Cramer, 2002].


54 The results of the camera calibration are presented in Table 4 1 The self -calibrated bundle adjustment was performed using SCBUN [ Wolf and Dewitt 2000]. The IOPs described in Chapter 2, excluding a ffine distortion parameters were modeled. The standard deviation of unit weight of the solution was 1.06. The images used and their adjusted exterior orientation parameters are given in Table 4 2 Interestingly the principal point offset parameters are not statistically significant at the 95% confidence level On the other hand, the first -order radial and tangential distortion parameters are both significant, the former strongly so. Such a result is not unexpected for a non -metric camera lens. Boresight and Leverarm Calibration Along with synchronization, the boresight and leverarm calibration has been noted as one of the fundame ntal problems in DGRS [ Skaloud, 1999]. While it is at least feasibl e to directly measure the lever arm on a fixed platform to a suitable degree of accuracy, analytical methods are almost certainly required to obtain the boresight correction for non -metric c ameras [Cramer, 2002]. In the NOVA I approach, the boresight parameter was recovered by mounting the system to a platform with a known orientation and solving for the camera orientation by using a computer vision algorithm [Bowman 2008]. The drawback of t his approach is that it is not particularly rigorous and a more robust analytical approach is possible. Unlike the IOPs the boresight and leverarm corrections are unlikely to be significantly variable due to the rigidity of the strapdown payload configura tion in comparison to the accuracy of the implemented INS/GPS. Using high accuracy platforms some variability the boresight and leverarm parameters h ave been observed but it is well below the expected precision of the NOVA II [Honkavaar a, 2003; Cramer, 2002]. Unlike nearly every other aspect of direct georeferencing, the boresight and leverarm calibration explicitly relies on the availability of ground control points, a motivating factor in establishing the Lightning Lab Calibration Site (see


Appendix B). Given that an aerotriangulation calculation must be performed anyway, it has been noted that the simultaneous bundle adjustment can be extended to include the boresight and leverarm parameters in a simultaneous solution. Caution is advised, however, given the strong correlation of the boresight and leverarm parameters with the IOPs when using the self-calibrating bundle adjustment [Cramer, 2002]. The technique presented here for boresight and leverarm calibration is independent of the bundle adjustment to avoid mixing of parameters. Thus, it is assumed that a calibration data set is obtained which has a suitable number of corresponding navigation observations and EOPs recovered by calibrated aerotriangulation. All that remains is to apply an optimal estimation to Equation 2-15, parameterized by the linear offset and Euler angles, reformulated here as observation equations in Equation 4-1, where r_L and M are the exposure-station position and orientation recovered by aerotriangulation and r_nav and M_nav are the position and orientation reported by the navigation system.

\[ M_B = M\,M_{nav}^{-1}, \qquad \mathbf{a} = M_{nav}\,(\mathbf{r}_L - \mathbf{r}_{nav}) \tag{4-1} \]

A least squares solution to the leverarm observation in Equation 4-1 is linear and reduces to the weighted mean of the observations, given in Equation 4-2. The weighting factor in Equation 4-2 requires the application of the law of propagation of variances to rotation matrices, developed later in this section.

\[ \hat{\mathbf{a}} = \Big(\sum_i W_i\Big)^{-1} \Big[\sum_i W_i\,\mathbf{a}_i\Big] \tag{4-2} \]
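A direct implementation of Equation 4-2 is only a few lines. The observation values and equal weights below are placeholders used to show that the estimator reduces to an ordinary mean when all observations are weighted identically; they are not calibration results from the thesis.

```python
import numpy as np

def leverarm_weighted_mean(observations, weights):
    """Weighted least-squares estimate of the leverarm (Equation 4-2).

    observations : per-exposure leverarm vectors a_i = M_nav,i (r_L,i - r_nav,i)
    weights      : 3x3 weight matrices W_i, one per observation
    """
    N = np.zeros((3, 3))
    u = np.zeros(3)
    for a, W in zip(observations, weights):
        N += W
        u += W @ np.asarray(a, dtype=float)
    return np.linalg.solve(N, u)

# Equal weights reduce to the ordinary mean of the observations (metres).
obs = [(0.05, -0.01, 0.12), (0.06, -0.02, 0.10), (0.04, 0.00, 0.11)]
print(leverarm_weighted_mean(obs, [np.eye(3)] * 3))  # ~ (0.05, -0.01, 0.11)
```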


It is important to note that the DCM representation of M_B has intrinsic orthonormality constraints due to the trigonometric relationship of its elements to the Euler angles. It is therefore necessary to parameterize the least squares solution with respect to these Euler angles, enforcing the orthonormality conditions. The alternative of parameterizing with respect to the elements of the rotation matrix will generally not produce a proper rotation matrix in the presence of noise. Linearization with respect to the Euler angles produces nine redundant equations, as given in reduced form in Equation 4-3. The linearized least squares formulation requires iterative solution of the weighted normal equations given in Equation 4-5 and requires initial approximations.

\[ \left.\frac{\partial M_B}{\partial \omega_B}\right|_{0} d\omega_B + \left.\frac{\partial M_B}{\partial \phi_B}\right|_{0} d\phi_B + \left.\frac{\partial M_B}{\partial \kappa_B}\right|_{0} d\kappa_B = M\,M_{nav}^{-1} - \left.M_B\right|_{0} + V_{3\times 3} \tag{4-3} \]


In Equation 4-4, M = [m_ij] is the image orientation recovered by aerotriangulation, M_nav = [n_ij] is the orientation reported by the navigation system, the correction vector collects the differential changes to the three boresight Euler angles, and the design matrix is formed from the partial derivatives of the nine elements of M_B with respect to ω_B, φ_B, and κ_B, evaluated at the current approximations. M_B itself has the standard Euler-angle form

\[ M_B = \begin{bmatrix} \cos\phi_B\cos\kappa_B & \sin\omega_B\sin\phi_B\cos\kappa_B + \cos\omega_B\sin\kappa_B & -\cos\omega_B\sin\phi_B\cos\kappa_B + \sin\omega_B\sin\kappa_B \\ -\cos\phi_B\sin\kappa_B & -\sin\omega_B\sin\phi_B\sin\kappa_B + \cos\omega_B\cos\kappa_B & \cos\omega_B\sin\phi_B\sin\kappa_B + \sin\omega_B\cos\kappa_B \\ \sin\phi_B & -\sin\omega_B\cos\phi_B & \cos\omega_B\cos\phi_B \end{bmatrix} \tag{4-4} \]

Collecting the nine linearized observation equations for each exposure into a design matrix A, a misclosure vector L formed from the elements of M M_nav^{-1} − M_B|_0, and a weight matrix W, the corrections to the boresight angles are obtained by iterating the weighted normal equations

\[ \Delta = \left(A^{T} W A\right)^{-1} A^{T} W\,L \tag{4-5} \]

For both the boresight and leverarm calibration, it is necessary to properly weight the observations with the corresponding weight matrices to produce an optimal estimation of the parameters. Unfortunately, the nonlinearity of the propagation of error from Euler angle parameters through the DCM formulation results in very dense weight matrix calculations. The calculation of these matrices is best handled through computerized symbolic algebra programs.


Therefore, the process to construct them is merely described here and not illustrated. From the general law of propagation of variances we can write a general covariance matrix, and from it the corresponding weight matrix, as given by Equation 4-6.

\[ \Sigma_{yy} = J\,\Sigma_{xx}\,J^{T}, \qquad J = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_m} \\ \vdots & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_m} \end{bmatrix}, \qquad \Sigma_{xx} = \begin{bmatrix} \sigma_{x_1}^{2} & \cdots & \sigma_{x_1 x_m} \\ \vdots & & \vdots \\ \sigma_{x_m x_1} & \cdots & \sigma_{x_m}^{2} \end{bmatrix}, \qquad W = \sigma_{0}^{2}\,\Sigma_{yy}^{-1} \tag{4-6} \]

To construct the boresight weight matrix, the Jacobian of the boresight equation in Equation 4-1 must be evaluated with respect to the Euler angles of M and M_nav for each observation. The weight matrix is then constructed using Equation 4-6. Similarly, the Jacobian of the leverarm equation must be evaluated with respect to r_L, r_nav, and the navigation Euler angles, and the weight matrix constructed. For the boresight observations, the a posteriori statistics of the AT bundle adjustment should provide the covariance estimate for the Euler angles of M. Equivalent statistics should be available for the navigation parameters from the Kalman filter of the navigation system. If these are not available for the navigation system, it may be possible to obtain estimates by benchmarking the navigation system against a more accurate one. Particularly in the case of rotations, off-diagonal covariance terms can be significant and should not be assumed to be zero.
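Where a symbolic derivation is inconvenient, the same propagation can be approximated with a numerically differenced Jacobian. The sketch below is a generic illustration of Equation 4-6, with an arbitrary example function and 0.5-degree uncertainties; it is not the weighting actually used in the thesis.

```python
import numpy as np

def propagate_covariance(f, x0, cov_x, eps=1e-6):
    """Law of propagation of variances (Equation 4-6) using a forward-difference Jacobian.

    f     : function mapping the m input parameters to the derived quantities
    x0    : nominal parameter values; cov_x : m x m input covariance matrix
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(f(x0))
    J = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x0 + dx)) - y0) / eps
    return J @ cov_x @ J.T

# Example: variance of the rotation-matrix element m11 = cos(phi) cos(kappa)
# given uncorrelated 0.5-degree uncertainties in phi and kappa.
f = lambda p: np.cos(p[0]) * np.cos(p[1])
cov = np.diag([np.radians(0.5) ** 2] * 2)
print(propagate_covariance(f, [np.radians(-1.5), np.radians(45.0)], cov))
```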


59 A boresight calibration was conducted using the method described above using the assessment data described in more detail in Chapter 7 It should be noted that because this boresight calibration was conducted using data that did not have ground control, the boresight calibration may be tainted by biases present in the bundle adjustment Although the above weighting scheme was employed, a priori estimates of the accuracy were no t available and so an equal weighting scheme was used However, even in this state, it serves to demonstrate a proof of -concept. Because the navigation system is installed on the aircraft within 2 cm of the imaging system, the leverarm calibration can be s olved with sufficient accuracy by direct measurement given the relatively p oor precision of the navigation system. The results of the example boresight calibration are presented in Table 4 3 Navigation Accuracy The accuracy of the navigation system is a critical factor in the success of a directly georeferenced remote sensing system. The values generated by the navigation system, particularly the GPS positioning, serve as the control for the photogrammetric solution. By fusing the navigation parameters with the photogrammetric observations, the accuracy of the INS orientation solution becomes perhaps less critical, but of sufficient accuracy to obtain good initial approximations for the bundle adjustment. To this end, the accuracy of the MTi G was analy zed by benchmarking it against a tactical -grade Novatel Inc. HG1700 1/hr navigation system integrated with an OEMV4 differential GPS receiver. This benchmarking exercise produce d an unexpectedly poor and therefore much more interesting result. These results generated should not be considered representative of the performance of the navigation system since the error profile observed has never been repeated and the root cause s are unclear. This experiment was simultaneously conducted with the test of an other geodetic grade dual -frequency DGPS to evaluate its performance under GPS dropout


60 conditions due to forest canopy cover. It is therefore exceedingly probable that the MTi -G was experiencing significant multipath error or GPS dropout. The published ac curacy of the MTi G is given as less than 0.5 degrees in tilt and 1.0 degree in heading for static observations and 1.0 degree RMS orientation error under dynamic s, with a positional CEP of 2.5 m [Xsens MTI G] These specifications are given with the appr opriate caveat that the accuracy results are dependent on the Kalman filter scenario selected. As described in Chapter 3, there are augmenting sensors such as a magnetometer and barometer, as well as non -holonomic constraints which can be included in the n avigation filter to improve the performance. In the interest of reflecting the performance attainable by the NOVA II, all augmenting sensors and constraints were disabled for this accuracy assessment A length initialization period of both static and dynam ic observations to allow the filter to stabilize was conducted directly before the start of the data collection during a ZUPT of the HG1700 over a known control point. The results of the both the static and dynamic portion s of the experiment are given in Table 4 4 The data collection transitioned directly from the static portion to the dynamic portion without a gap. The results of the static portion we re not unexpected; the roll and pitch error are somewhat high but not outside a reasonable range given the lack of augmentation The GPS positions were also acceptable. The heading error, however, was unexpectedly large and drifts during the static test as expected when heading updates are only available by GPS observations. It is unclear whether this initial large error was due to poor initialization, perhaps due to the aforementioned sky obstructions, or whether this is within the range of the typical hea ding error when using GPS -only updates. A heading error of similar magnitude had been previously noted in the DGRS data sets over known control points, so it is not out of the question.


61 The most surprising result from this data set was the extremely poor p erformance of the GPS position solution during the dynamic portion of the test. The roll and pitch estimates perform ed admirably converging on the HG 1700 solution within about a 40 s. The roll error was slightly higher than the pitch dynamics, likely due to larger dynamics about that axis because it was a ground -based vehicle. The GPS solution, however, was terrible, with rapidly diverging error nearly reaching 50 m on a single axis The most reasonable explanation for this behavior was that the GPS lost lock in the transition from the static to the dynamic portions of the data collection Coinciding with this peak in positioning error, the vehicle ma de an abrupt turn which result ed in a nearly instantaneous jump in the heading error must likely caused b y the lag of GPS updates. However, the GPS error beg an a steep decline, probably as it regained lock given the reduced canopy coverage over the road. This accuracy assessment was revealing for a number of reasons. The first was that the MTi G performs no better than other low accuracy INS systems when the GPS updates are poor or unavailable; the divergence in position was rapid. Fortunately, GPS dropout conditions are not expect on the NOVA II because the GPS was mounted on the airframe with a clear view of the sky, and the roll and pitch-limiting autopilot prevent serious horizon occlusion by the airframe. Both the pitch and roll parameters performed well, reliably giving estimates within 2 degrees RMS even under the se worst -case conditions. T he heading was not as reliable as expected, but this may be explained by the possible GPS reception problems Undoubtedly, better GPS reception and stable dynamics would improve the quality of all results Finally, the vertical error was decent and was lar gely composed of bias error that slowly drifted. Synchronization Accuracy Recalling that sensor integration for DGRS requires establishing both the spatial and temporal relationship between the sensors, and having established the accuracy of the spatial


62 component by boresight and leverarm calibration, it remains that the temporal relationship between the sensor observations be considered. The importance of synchronization in direct georeferencing has been well established [Skaloud and Legat 2007], and a number of system integration techniques have been developed. The NOVA I employed a rudiment synchronization scheme which relie d on the time of arrival of navigation data packets at the host and camera trigger commands reported to have an accuracy of 87 ms [Bowman 2008]. An identical implementation using the same hardware showed that the synchronization error was as much as 333ms [Chao et al., 2008]. This method highlights the need for deterministic measures of the time of sensor sampling. Relying on data packet s introduces a stochastic component due to both the processing and transmission time of the packet in the INS, as well as a stochastic element from using software -based synchronization on a nonreal time operating system. Particularly for COTS consum er cameras, features such as white balancing and autofocus can produce stochastic and even indeterminant delays between trigger and exposure. Without a feedback mechanism, this error cant be compensated for. A more rigorous approach commonly implemented i s the use of a COTS DAQ or similar expansion card in the host computer, which allows deterministic timing of the signals. However, a DAQ card is impractical on a small UAV platform due to its size and lack of an available interface with the host computer. Nonetheless, using this approach has been shown to obtain accuracies ranging from 0.4 ms [Li et al., 2006] to 5 s [ Toth et al., 2008]. Time synchronization problems can be described in terms of synchronizing an arbitrary slave timing signal to a periodic master timing signal. This relationship arises from the idea that synchronization of any pair of sensors necessitates the establishment of a temporal reference, which f or convenience is typically chosen as the periodic signal. Often, the choice of master


63 timing signal is evident from the nature and temporal stability of the sensors. It is intuitive in the case of a DGRS that the master signal be based on the GPS 1PPS sig nal and that the slave signal is the time of exposure of the imaging system, since GPS time is referenced to the standard UTC system. This system conveniently provides the time of exposure in a standard temporal reference system. However, it should be note d that this terminology is purely one of convenience, and that a system where GPS observations are instead synchronized relative to the exposures of a camera is equally valid, if not as sensible. Moreover, the master signal need not be purely periodic; given any two occurrences of a master timing signal one can define an epoch of time, and the occurrence of a slave signal be synchronized in terms of a proportion of the master -defined epoch. Each timing signal is generated by an event, e.g. the exposure of a camera sensor or validation of a GPS message, and is subject to an aperture error. The aperture error is defined as the error in synchronization introduced by characterizing an event of finite duration with a timing impulse [Bendat, 2000] A pertinent example of aperture error is in characterizing the time of exposure of an imaging system, which may on a typical imaging system range from 1/100 to 1/8000th of a second. This begs the question, should the instantaneous timing signal for the exposure event oc cur at the beginning, end, or at some proportion of the total time which the shutter is open? This question leads to observations regarding the necessary accuracy required of a synchronization system. Aperture error is closely related to synchronization er ror due to the dynamics of the system being observed. It is evident that if neither the camera nor the scene is in motion, then whether one defines the moment of exposure at the beginning, middle, or end of the time the shutter is open is irrelevant. Expou nding on the purpose of direct georeferencing, the goal is to provide the


64 parameters of a geometric model which produced the observed image. Thus, the dynamics of the system of interest is the change in the observed image with respect to a change in positi on and orientation parameters. In the case of DGRS, this relationship is defined by the perspective projective geometry of a camera, and a full derivati on is beyond the scope of this thesis The key understanding, however, is that the accuracy of the syste m is necessarily limited by the aperture error of the sensors. To clarify, consider a camera mounted vertically with a shutter speed of 0.25 ms and a nominal focal length of 5000 pixels flying 25 m/s straight and level at 100 meters above ground. In this s cenario, the camera will move a linear distance of 6.25 mm while the shutter is open. When motionless, one pixel at the center of the image covers a ground distance of 2 cm in the direction of travel. Due to the dynamics over the exposure duration, the linear distance that the center pixel actually covers is 26.25 mm. Such an image in practice shows no visible image blurring, which indicates that the error due to dynamics is negligible for our purposes The selection of the parameters in this scenario is no t by accident; the shutter speed of the camera is selected based on experiments to eliminate blurring due to what is effectively the aperture error from typical small UAV dynamics. Having made the somewhat weak assumption that dynamics are negligible for an appropriate shutter speed, one can deduce that the accuracy of the synchronization must be within the magnitude of the exposure duration. A stronger assumption that the dynamics are linear over the duration of exposure implies that synchronization error is minimized by taking the average parameter value over the period of exposure, equivalent to defining the synchronization signal at the center of the duration of exposure. Hence, there is a need for the bias correction between the X -sync signal and the c entral moment of exposure on higher dynamics platforms. A


65 more general approach would be to take the time -weighted average of the parameters over the exposure duration, a case which is only practical (and it s not) where the frequency of the navigation par ameter data is much greater than the shutter speed. In the preceding discussion of synchronization accuracy, the emphasis was on understanding when the synchronization signal is generated in relation to each event of finite duration, and thereby minimizing the aperture error. This is significantly different from considering synchronization error from the standpoint of the accuracy of measuring the relative timing of the master and slave signals. As will be seen in the subsequent analysis, this is because th e precision of measuring the relative timing of two signals is orders of magnitude better than the time scale of the aperture error on a typical DGRS. The error due to synchronization is thus diminishingly small relative to the other errors inherent in the described DGRS. However, the analysis of synchronization error is relevant to other types of sensors and payload configurations particularly laser ranging and in these it is possible that such synchronization errors will become significant. The synchronization accuracy of the Burredo was evaluated by comparing its performance to a Trimble AcuTime Gold GPS receiver. The AcuTime outputs the GPS 1PPS signal with an accuracy of 15 ns and has two event inputs which provide an event timestamp in UTC time with a resolution of 488 ns [Acutime, 2007]. In the experiment, the GPS 1PPS signal from the AcuTime was the master signal for the Burredo, and a microcontroller -based circuit generated a pseudorandom trigger signal, for both AcuTime ev ent input and the Burredo slave input. The signal propagation delay due to cable length introduced an additional parameter, into the experimental setup, and was calculated to be 120 ns for the AcuTime cable length of 100 feet All other propa gation delays were negl igible due to short cable length s The


66 primary data set was collected over a period of about 6 hours, with a trigger event occurring on average every 2.5 s. Several smaller confirmation data sets were also collected, all showing simi lar results. Each timing epoch was defined as the time between the beginning of the UTC second, 0, and the end of the second, Thus, each event measurement can be defined as a fraction of the UTC second. The measurements were modeled using Eq uation 4 7 and Equation 4 8 for the AcuTime and Burredo, respectively. The AcuTime measurement was taken as the truth value to benchmark the Burredo, thus the observation equation was defined by Equation 4 9 = = + 0 0 (4 7 ) = = 0+ + 0+ (4 8 ) = =2 0 + (4 9 ) Note t he Burredo measures all of these values with respect to inte ger clock cycles, with a clock rate of approximately 1 4.7456 MHz. Both measurements are effectively ratios of equivalent units, so all units cancel. B ecause we have defined the epoch as equal to the UTC second, the measurements are effectively in fractions of a second, however the experiment would be equally valid for an arbitrary master signal. Due to the design of this experiment, it was necessary to apply experimental bias corrections. The most significant adjustment is due to quantization error, which was si gnificant for the Trimble device The AcuTime resolution is specified at 488 nanoseconds, and due to the documented sequence of events this error always results in a late measurement. A timestamp generated by the Trimble device indicates an event takes place within the previous 488 ns [Acutime, 2007]. Assuming a uniform distribution of event times, this indicates the mean


67 timestamp is generated 244 ns after the actual event Not by accident t he pseudorandom triggers generated for the experiment resulted in a n approximately uniform distribution of measurement values over the fraction of the 1 PPS epoch, as shown in Figure 4 4 Unexpectedly, stron g correlation between the fraction of the epoch and the magnitude of the error emerged from the data, as shown in Figure 4 5 Upon further investigation, it was found that the error correlation is explained by a constant bias in the measurement of 0 in t he Burredo caused by the finite time required to retrieve and store the value of the clock register before resetting it at the beginning of each epoch. This error does not occur during the event capture since it is handle d by hardware. The reset time, den oted modifies the Burredos measurement equation as given in Equation 4 10. Since the Burredo operates on integer clock cycles, this value was found by minimizing the correlation and round ing to the nearest integer value, so that Treset= 11 as shown in Figure 4 6 = 0+ + + 0+ + (4 10) The synchronization error in the Burredo (Figure 4 7) corrected for the experimental biases and was found to be non-normal (Figure 4 8) rejecting the null hypothesis that the distribution is normal at the 9 5% confidence level by the Lilliefors test. This result was expected, since a digital system will not be susceptible to large outliers. In Figure 4 9 the oscillator frequency drift for the Burredo is clearly visible. Both low and high rates of change of the oscillator frequency are observed. Additionally, t he synchronization error was found to not be correlated to variations in the crystal frequency After removing all experimental biases previously mentioned, the resulting mean error between the Trimble and Burredo measurements was 208.9 ns. The bias corresponds to approximately three clock cycles of the Burredo. The actual source of this bias error is unclear,

The actual source of this bias error is unclear; it may be due to a number of factors, including the internal architecture of the Burredo's microprocessor timing facilities or the timing signal line drivers. Without experimental verification of the source of this bias, whether intrinsic to the device or due to the experimental setup, the error cannot be excluded from a conservative error analysis and is therefore attributed to the Burredo, resulting in an experimental RMS error of 256.4 nanoseconds, a significant improvement over any DGRS synchronization system found in the literature. The measurements of both the Burredo and AcuTime are subject to quantization error. The standard deviation of the error introduced by quantization (Equation 4-11) was used to calculate the Burredo and AcuTime quantization error. Applying the special law of propagation of variances for two independently measured quantities, the calculated standard deviation of the synchronization error due to quantization is given by Equation 4-12:

    sigma_q = q / sqrt(12)                                               (4-11)

    sigma_Burredo = 19.6 ns,  sigma_AcuTime = 140.9 ns

    sigma_sync = sqrt(sigma_Burredo^2 + sigma_AcuTime^2) = 142.2 ns      (4-12)

where q is the resolution of the measurement. This calculated value agrees closely with the observed standard deviation of 148.7 ns, indicating that the majority of the random error in the Burredo is accounted for by quantization error. The value calculated by the propagation of variances does not take into account the uncertainties associated with the 1 PPS measurement or stochastic electromagnetic effects within the physical system, all of which contribute to the greater uncertainty observed in the data.
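The quantization budget in Equations 4-11 and 4-12 can be checked with a few lines of arithmetic; the variable names below are for illustration only:

    import math

    burredo_cycle_ns = 1e9 / 14.7456e6      # one Burredo clock cycle, about 67.8 ns
    acutime_res_ns = 488.0                  # specified AcuTime timestamp resolution

    # Standard deviation of a uniformly distributed quantization error (Equation 4-11)
    sigma_burredo = burredo_cycle_ns / math.sqrt(12.0)   # ~19.6 ns
    sigma_acutime = acutime_res_ns / math.sqrt(12.0)     # ~140.9 ns

    # Special law of propagation of variances for independent measurements (Equation 4-12)
    sigma_sync = math.hypot(sigma_burredo, sigma_acutime)
    print(f"{sigma_burredo:.1f} {sigma_acutime:.1f} {sigma_sync:.1f}")   # 19.6 140.9 142.2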

The presence of these additional uncertainties was confirmed by an F-test of the ratio of calculated to observed variances, which rejected the null hypothesis that the variances are statistically equal at the 95% confidence level.

In order to evaluate the impact of errors due to synchronization, the flight dynamics must also be characterized. The navigation parameters from a typical NOVA II flight line were selected, as shown in Figure 4-10. The flight from which the data were extracted was flown on March 4, 2009 at 9:45 AM; the temperature was 63°F and winds were out of the north at approximately 3.5 m/s. The flying height during the data sample shown was a commanded 150 meters above ground level. From these parameters, the flight dynamics were calculated and the results are given in Table 4-5, together with the accompanying RMS errors due to synchronization error and aperture uncertainty. The synchronization error was calculated using the RMS error of the Burredo, and the aperture uncertainty was calculated using the typical exposure duration of 1/2000th of a second, assuming linear dynamics over the exposure period; it indicates the RMS change in the navigation parameters over the duration of the exposure.

Image Overlap and Flight Planning

The amount of overlap between adjacent images is an important consideration in the ability to reconstruct scene geometry and improve the EOP estimates. A number of factors influence image overlap, including camera parameters such as focal length, sensor size, and rate of capture, as well as flight parameters such as target flying height, airspeed, and flight line spacing. A thorough understanding of both the camera parameters and the flight parameters allows for a reliable flight planning procedure that provides predictable overlap in the image data set. Because a UAV must fly relatively low to the ground, both to meet federal aviation requirements and to meet the more practical concern of obtaining good resolution, flight planning is strongly constrained. The paramount goal of flight planning is to ensure
that there is sufficient image overlap for the desired processing output while maximizing the resolution and efficiency of data capture. Three types of processing output are considered here.

Single Image. No image overlap is required, but direct georeferencing accuracy is limited by the quality of the navigation system alone and cannot take advantage of the technique proposed in this thesis. With single images as the desired output format, high-resolution imagery is the primary goal.

2D Mosaic. Appropriate when gathering data over flat terrain with few vertical objects, where continuous mosaic coverage of the target area is required. Sufficient image overlap is needed to generate a robust network of tie points, a requirement that competes with the desired high resolution of the imagery.

3D Scene Reconstruction. For a three-dimensional reconstruction of the scene, all objects in the scene must have full stereoscopic coverage, meaning that all ground coverage must be imaged at least twice.

Of the camera parameters mentioned above, the capture rate is perhaps the most significant. Measurements showed that the maximum sustained image capture rate was 0.4 frames/s, or about one frame every 2.5 s, similar to the NOVA I capture rate of 0.42 frames/s [Bowman, 2008a]. There is an implicit tradeoff in image overlap between capture rate, flying height, ground speed, and focal length. Among these variables, the capture rate is nearly constant and the ground speed is constrained to a limited range by aerodynamic constraints, leaving focal length and flying height as free variables. Furthermore, given that a minimum image overlap is required for mosaic and 3D outputs, the necessity to fly higher or to use a wider lens (shorter focal length) in order to ensure proper overlap leads to a direct tradeoff with the ground resolution of the camera. It can be shown using Equations 2-13 through 2-16 that for a given overlap, capture rate, and ground speed, the ground resolution will be the same regardless of the lens (focal length):
    o = 1 - B / G = 1 - (v_g f) / (r H d)                        (4-13)

    where  G = H d / f  and  B = v_g / r                         (4-14)

    H / f = B / (d (1 - o))                                      (4-15)

    G = (H / f) d = B / (1 - o)                                  (4-16)

where o is the forward overlap (endlap) expressed as a fraction, B is the airbase (the ground distance between sequential exposure stations), v_g is the ground speed, r is the capture rate, G is the ground coverage along the flight direction, H is the flying height above ground, f is the focal length, and d is the sensor dimension in the flight direction. Equation 4-15 illustrates that the ratio of flying height to focal length is a constant scale factor for a given sensor size, airbase, and required overlap. Since all terms on the right side of Equation 4-16 are constant, the resulting ground coverage along the flight direction is constant for a given overlap, sensor size, and airbase, regardless of focal length and the associated flying height. The number of pixels per ground area, a rough measure of resolution, will therefore remain constant regardless of focal length selection. Equations 4-15 and 4-16 indicate that increasing the focal length to improve resolution requires an increase in flying height to maintain image overlap, effectively canceling the improvement in ground resolution.
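The invariance argument is easy to verify numerically. The sketch below assumes a hypothetical along-track sensor dimension and capture interval; the values are placeholders, not the NOVA II payload specification:

    def flying_height(focal_mm, sensor_mm, airbase_m, endlap):
        """Flying height needed to hold a given forward overlap (Equation 4-15)."""
        return focal_mm * airbase_m / (sensor_mm * (1.0 - endlap))

    def ground_coverage(focal_mm, sensor_mm, height_m):
        """Ground coverage along the flight direction (Equation 4-16)."""
        return height_m * sensor_mm / focal_mm

    airbase = 19.5 * 2.5          # groundspeed (m/s) times time between exposures (s)
    sensor = 5.7                  # along-track sensor dimension in mm (placeholder)
    for focal in (6.0, 12.0, 24.0):
        h = flying_height(focal, sensor, airbase, endlap=0.45)
        print(focal, round(h, 1), round(ground_coverage(focal, sensor, h), 1))
    # The ground coverage, and therefore the pixels per ground area, is the same for every lens.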

This leads to the conclusion that the selection of an appropriate lens should depend on factors other than focal length, namely resolving power, lens distortion, and the need to reduce flying height to minimize atmospheric effects on resolution and accuracy. The primary purpose of the technique proposed in this thesis is to generate 2D mosaics; however, the technique may also be applied to 3D outputs. Although the flight parameters can be calculated for a desired image overlap using Equation 4-13, the flight planning process is complicated by the fact that Equation 4-13 is only valid for vertical photographs oriented in the direction of flight. Because the autopilot flight control system has a limited ability to maintain a constant course, and the winds aloft can induce changes in the orientation and ground speed of the aircraft, the planned overlap rarely corresponds to the true image overlap. Therefore, a conservative approach is required when estimating flight parameters. A flight planning table was developed for the NOVA II to aid in selecting the proper flying height and flight line spacing given the expected groundspeed, and is shown in Figure 4-11. The orientation of the camera with respect to the flight direction has a significant impact on the resulting image overlap. Sideslip, also called crabbing, occurs when the direction of travel is not parallel to the longitudinal axis of the aircraft, causing the camera to be rotated relative to the direction of travel. Sideslip is most commonly caused when the prevailing wind is not parallel to the direction of travel; thus, the direction of the planned flight lines should be oriented into the prevailing wind to minimize crabbing. Unfortunately, flight paths parallel to the wind direction also maximize the effect of wind variation on the ground speed of the aircraft, as the plane must maintain a constant airspeed with respect to the moving air mass to remain airborne. This relationship is given by Equation 4-18.
    v_ground = v_air + v_wind                                    (4-18)

From Equation 4-13, the airbase is determined by the ground speed when the capture rate of the camera is constant, so the overlap will be greater when flying upwind and will decrease when flying downwind. As a result, flight planning must always use a conservative estimate of the target groundspeed, equal to the required airspeed to maintain flight plus the wind speed. This procedure results in upwind flight lines having considerably better overlap than their downwind counterparts, which allows the operator to target upwind flight lines over the areas of greatest interest. A final flight planning consideration for the NOVA II platform concerns the turning radius of the aircraft: sufficient space must be provided at the ends of flight lines to allow the aircraft to change direction and regain straight and level flight before entering the target area. By incorporating the turning radius with the other flight planning considerations mentioned earlier, the 'dipole' flight pattern was developed by Matthew Burgess and found to be effective (Figure 4-12). The dipole flight pattern has the advantage of concentrating the highest image overlap in the center of the region of interest, and all but two pairs of flight lines have a flight direction common to the adjacent flight line. The sample data set used to evaluate the overlap is shown in Figure 4-13. As can be seen, the stereoscopic coverage is nearly (but not consistently) continuous, which precludes the data set from being used for full 3D reconstruction outputs. This flight plan did, however, achieve respectable 2D mosaic results, as demonstrated in Chapter 6. The analysis of image overlap was performed using the output generated by the full technique described in this thesis. Only upwind flight lines were evaluated, resulting in a predicted groundspeed of 19.5 m/s.
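A minimal sketch of the resulting endlap asymmetry, assuming an illustrative cruise airspeed and the along-track ground coverage from Table 4-6 (neither value is asserted as the mission configuration):

    def endlap(groundspeed, capture_rate, coverage_along_track):
        """Forward overlap from Equation 4-13, with the airbase B = v_g / r."""
        airbase = groundspeed / capture_rate
        return 1.0 - airbase / coverage_along_track

    airspeed, wind = 16.0, 3.5      # m/s (the airspeed is a placeholder value)
    coverage = 87.0                 # along-track ground coverage in meters
    print(endlap(airspeed - wind, 0.4, coverage))   # upwind:   about 0.64
    print(endlap(airspeed + wind, 0.4, coverage))   # downwind: about 0.44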

The flight line analysis was conducted using a flight line which composed part of the sample data set and was found to be consistent with other flight line samples. The overlap values observed in the data set agreed with the calculated values presented in the NOVA II flight planning guide (Figure 4-11): using Figure 4-11 with a commanded altitude of 150 m, a predicted value of 45% overlap in the direction of travel (forward lap, or endlap) and 30% overlap between flight lines (sidelap) was expected. The actual measured image overlap values are shown in Table 4-6. As expected, the upwind and downwind velocities were significantly different and corresponded to the variations in the airbase between sequential images shown in Figure 4-14, verifying the need to use downwind groundspeed estimates in the flight planning process. Analysis of the flight lines showed that the NOVA II was able to maintain remarkably straight and level flight, with a mean deviation from a straight flight line of 1.2 m and a standard deviation of 0.92 m, shown in Figure 4-15. The ideal straight flight line was calculated by taking a general least squares linear regression of the horizontal position data, allowing for error in both the easting and northing directions; the general least squares approach produces a more realistic error assessment by calculating residuals normal to the fitted line. The standard deviation in height from the ideal flight line was found to be 0.6 m (Figure 4-16). The true flying height of the aircraft is a critical part of the flight planning and overlap analysis. The accuracy of the flying height is determined by the autopilot navigation system, not the payload navigation system. Unfortunately, the error between the commanded and actual height cannot be empirically evaluated in situ because the data could not be benchmarked on the current platform.
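One way to reproduce the deviation statistics is an orthogonal line fit of the horizontal fixes; the SVD-based implementation below is an assumption, chosen because it is equivalent to a general least squares fit with residuals measured normal to the line for this two-variable case:

    import numpy as np

    def flight_line_deviation(easting, northing):
        """Fit a line to the horizontal positions treating easting and northing as
        equally error-prone, and return the signed perpendicular distance of each
        fix from that line."""
        pts = np.column_stack([easting, northing])
        centered = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[1]        # vt[0] is the track direction, vt[1] its normal

    # deviations = flight_line_deviation(e, n)
    # np.mean(np.abs(deviations)), np.std(deviations)   # cf. 1.2 m and 0.92 m above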

An independent evaluation of the performance of the system at ground level, conducted by the author on the NOVA I platform, showed that the system has a height accuracy of approximately 2 m [Perry et al., 2008]. This value is somewhat better than the typical code-solution GPS height error, probably because the autopilot aids the altitude solution with a barometer calibrated at ground level before flight. The success of orienting the flight lines parallel to the prevailing wind direction is evident in the angle of sideslip shown in Figure 4-17. The sideslip was estimated by using the previously described least-squares fitted line as the true flight direction and comparing it to the heading provided by the payload navigation system. It should be noted that this estimate suggests a much higher in situ heading accuracy than the one found in the Navigation Accuracy section of this chapter. The mean angle of sideslip was 1.7 degrees with a standard deviation of 1.2 degrees. The mean magnitude of the deviation from vertical (tilt) was 7.8 degrees with a standard deviation of 1.5 degrees, also shown in Figure 4-17. Given that a strong bias is evident in the tilt angle, it may indicate the need to more precisely orient the camera relative to the level orientation of the plane during flight. It should also be noted that the standard deviation in the roll direction, 1.9 degrees, was significantly higher than in the pitch direction, 1.1 degrees. This is most likely attributable to the plane being better able to maintain level flight (which induces pitch variation) than to maintain direction (which induces roll variation), suggesting that flight line spacing estimates should be even more conservative.

Table 4-1. SCBA adjusted IOPs including distortion parameters for the Lightning Lab camera calibration data set (* denotes pixel units)

    Parameter             Type         Value           Std. Dev.      t-statistic
    Focal length*                      5308.1342       8.3492         2.1720
    Principal point x0*                0.1034          3.1247         0.0331
    Principal point y0*                1.5013          3.0906         0.4857
    K1                    Radial       +5.6493e-009    3.9899e-010    14.1591
    K2                    Radial       +1.5363e-016    1.8984e-016    0.8092
    K3                    Radial       3.5228e-023     2.7153e-023    1.2974
    P1                    Tangential   1.3741e-007     5.0827e-008    2.7035
    P2                    Tangential   +7.0757e-008    4.2883e-008    1.6500

Table 4-2. SCBA adjusted EOPs for the Lightning Lab camera calibration data set

    Image   Omega (deg)   Phi (deg)   Kappa (deg)   Easting (m)   Northing (m)   Height (m)
    770     +5.11         +7.14       3.53          400393.48     3312756.85     235.87
    796     +1.80         +4.12       0.47          400427.16     3312748.56     239.23
    812     17.70         22.02       217.74        400344.32     3312801.44     237.61
    894     +10.61        +11.32      71.40         400426.18     3312747.42     229.37
    919     9.41          +5.44       +45.13        400423.19     3312778.38     231.22
    937     +10.81        +4.82       68.89         400432.49     3312720.75     232.68
    1004    12.25         +6.00       +51.21        400447.67     3312831.82     231.49

Table 4-3. Boresight and leverarm calibration results

                            Omega (deg)   Phi (deg)   Kappa (deg)
    Boresight angles        4.9639        0.4139      0.9933
    Mean residual           0.0098        0.0150      0.0415
    Std. dev. residual      1.8615        1.5022      1.9312

Table 4-4. Navigation system benchmarking results

                         Roll (deg)   Pitch (deg)   Heading (deg)   Easting (m)   Northing (m)   Height (m)
    Dynamic RMS error    1.7969       1.1286        16.2244         11.5007       26.0204        6.9016
    Static RMS error     1.4167       1.1267        8.3706          4.9467        5.5153         4.2197

Table 4-5. Calculated RMS error in the navigation parameters due to synchronization error and aperture uncertainty for the given flight dynamics

                       Roll (deg)    Pitch (deg)    Yaw (deg)     Horizontal (m)   Vertical (m)
    Dynamics           3.22 deg/s    7.19 deg/s     2.53 deg/s    18.61 m/s        0.79 m/s
    Sync error         8.26e-7       18.46e-7       6.50e-7       4.77e-6          2.03e-7
    Aperture error     1.6e-3        3.6e-3         1.3e-3        9.3e-3           4.0e-4
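The synchronization and aperture rows of Table 4-5 can be approximated by multiplying each parameter rate by the timing uncertainty and the exposure duration, respectively; this simple product model is an assumption made for illustration:

    sync_rms_s = 256.4e-9        # Burredo synchronization RMS error, seconds
    exposure_s = 1.0 / 2000.0    # typical exposure duration, seconds

    dynamics = {"roll deg/s": 3.22, "pitch deg/s": 7.19, "yaw deg/s": 2.53,
                "horizontal m/s": 18.61, "vertical m/s": 0.79}

    for name, rate in dynamics.items():
        sync_err = rate * sync_rms_s       # parameter change over the timing uncertainty
        aperture_err = rate * exposure_s   # parameter change over the exposure
        print(f"{name:15s} {sync_err:.2e} {aperture_err:.2e}")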

Table 4-6. Ground coverage and overlap parameters for typical NOVA II flight lines

                  Width (m)   Height (m)   Forward (m)   Side (m)   % Endlap   % Sidelap
    Average       116.8       87.1         44.9          41.2       52%        35%
    Std. dev.     1.5         0.9          8.7           10.3

Figure 4-1. Lightning Lab Calibration Site calibration data set, images 770, 796, 812, 894, 919, 937, and 1004

Figure 4-2. Comparison of the SPAN and MTi-G attitude parameters over time for the static (left) and dynamic (right) portions of the navigation benchmarking experiment (panels: roll, pitch, and heading in degrees versus seconds; Novatel vs. Xsens)

Figure 4-3. Error profiles of MTi-G position and orientation parameters during the static (left) and dynamic (right) portions of the navigation benchmarking experiment (panels: easting, northing, and height error in meters, and roll, pitch, and heading error in degrees, versus seconds)

Figure 4-4. Distribution of the pseudorandom triggers as a fraction of the UTC second during the synchronization accuracy assessment

Figure 4-5. Evident linear trend in the error of the uncorrected synchronization data, attributed to the reset bias T_reset

Figure 4-6. The correlation of the synchronization error to the fraction of the UTC second at which it occurred, plotted against possible values of T_reset

Figure 4-7. The value of the synchronization error over time, in hundreds of nanoseconds

Figure 4-8. Normal probability plot of the synchronization error showing low kurtosis

Figure 4-9. Drift of the Burredo's crystal oscillator over time

Figure 4-10. Typical NOVA II dynamics while flying straight and level (roll, pitch, and yaw rates in deg/s and velocities in m/s versus seconds)

Figure 4-11. Flight planning guide for determining flying height and flight line spacing for a desired overlap. The guide is a lookup table relating groundspeed (17 to 30 m/s) and altitude (25 to 400 m) to the predicted endlap, with corresponding flight line spacings for 80%, 60%, 40%, and 20% sidelap, and bands marking the minimum overlap for 3D coverage, 2D mosaic, and individual-image products. Instructions: 1. Calculate groundspeed = programmed airspeed + observed wind. 2. Select altitude based on desired endlap using the right side of the table. 3. Select flight line spacing on the left side of the table based on the desired sidelap percentage.

Figure 4-12. Typical flight line configuration for the NOVA II, demonstrating the 'dipole' pattern

Figure 4-13. Example of the image overlap configuration for a typical NOVA II flight

Figure 4-14. Sequential image airbase for a typical NOVA II flight showing variations due to flight direction relative to the prevailing wind

Figure 4-15. Typical horizontal deviation of the NOVA II from an ideal flight line

Figure 4-16. Typical vertical deviation of the NOVA II from an ideal flight line

Figure 4-17. Typical orientation deviation of the NOVA II from an ideal flight line (tilt and crab angles in degrees per exposure, with pitch, roll, and heading deviations)

Figure 4-18. Coverage map for a two-day data collection mission conducted on behalf of the Army Corps of Engineers

CHAPTER 5
PROCESSING ALGORITHM

Overview of the Algorithm

The goal of the algorithm was first and foremost to produce accurate mapping products from the directly georeferenced remote sensing payload of the NOVA II. The algorithm should use only inputs generated by the DGRS payload, with no a priori information needed. Due to the large volume of data generated by the platform, it was desirable, and perhaps necessary, to automate the process to the greatest extent possible. Due to the complexity of the problem, it was necessary to make certain assumptions about the geometry of the scene; these assumptions come with a tradeoff in the robustness of the algorithm. The algorithm was designed, where possible, to be computationally efficient because of the extremely large data sets generated by high-resolution DGRS [Grenzdörffer, 2004]. Finally, the mapping output of the algorithm had to be in a format readable by standard GIS software. These underlying goals were met in the development of the algorithm.

The output of the NOVA II platform, an example of which is provided in Appendix A, provides a self-contained data set suitable for input to the algorithm. The flight log files must be parsed to produce the geotag and navigation files prior to starting the algorithm. The data set is contained in a single folder, the path of which is provided to the program. The folder contains all the images from the flight, as well as the unadjusted geotag file, which provides the directly georeferenced parameters for each image. In addition to the flight data, calibration data such as the focal length of the camera and the boresight and leverarm calibration are required. Upon being provided these initial parameters, the program parses the geotag file and indexes the folder contents to create an internal representation of the data set. The algorithm implements the following steps, each of which will be examined in detail:

Preprocessing. The data set is constructed using the geotag file and the calibration parameters, and the images are pyramided and radiometrically modified for later use.

Interest Point Generation. Each image is analyzed for features with good potential for matching, generating a potential tie point list for each image.

Initial Approximation. Approximations of the object space coordinates of each potential tie point and of each image's extents are generated.

Semi-Automatic Point Matching. Potential tie points are matched based on a search space defined by the initial approximation.

Bundle Adjustment. Matched tie points are collated and a bundle adjustment of all available geometric information is performed.

Surface Generation. A surface is generated based on the object space coordinates of the adjusted tie points and used for mosaicing.

Mosaicing. The scene extents are recalculated based on the adjusted exterior orientation parameters, the space is segmented into subscenes, the aerial images are resampled using the collinearity equations, and a georeferenced output mosaic is generated.

Preprocessing

To begin, the exterior orientation parameters provided by direct georeferencing must be brought into the object coordinate system. The navigation coordinates are converted to the appropriate UTM coordinate system and the leverarm calibration is applied using Equation 2-15. Recall from the discussion in Chapter 2 that using UTM coordinates as object space coordinates for photogrammetric processing simplifies the final process of generating the georeferenced mosaic, at the cost of introducing some error. It is then necessary to filter the data set for valid images, which are selected by taking the highest altitude in the navigation parameters and keeping all images above a threshold altitude set at a fixed offset below the maximum height. This allows images taken during takeoff and landing, which are unlikely to contain sufficient image overlap to be useful, to be excluded. The assumption behind this filtering process is that the plane's maximum flying height was close to the flight line flying height.
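A minimal sketch of the altitude filter, assuming the geotags have already been parsed into a list of records with a height field; the 30 m margin is an illustrative value, not the threshold used by the program:

    def select_valid_images(geotags, altitude_margin=30.0):
        """Keep only exposures near the flight line altitude, excluding takeoff
        and landing frames (threshold = maximum height minus a fixed offset)."""
        threshold = max(tag["height"] for tag in geotags) - altitude_margin
        return [tag for tag in geotags if tag["height"] >= threshold]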

An initial approximation for the ground height is also calculated, by taking the minimum altitude in the navigation data set. This assumption may be violated if the plane was launched from, and landed on, a platform significantly higher than the ground height, such as when operating from a levee while mapping nearby wetland vegetation. On an operational basis, it is usually not inconvenient to transport the plane to the average terrain height before launching. An improvement on this initial approximation would be to use a cluster detection algorithm and calculate the mean of the lowest cluster if it is not significantly different from the smallest value, ensuring that an outlier in the navigation-provided height is not used.

A resolution pyramid is generated for each image for use in the tie point matching step, an integral requirement for many tie point matching schemes [Zitová and Flusser, 2003] and a practical method of improving the computational efficiency of initial tie point matching. Each pyramid level is convolved with a 5 x 5 Gaussian kernel and then subsampled by a factor of 2 to reach the next level. The number of pyramid levels generated is determined automatically by the size of the source image, which is downsampled until the final pyramid level is less than 500 pixels in both width and height. This requires four pyramid levels, including the original, to reach a resolution of 456 x 342 pixels for the default NOVA II payload image size of 3648 x 2736 pixels. The image pyramid is converted to grayscale to facilitate intensity-based tie point matching; the conversion from the three-channel RGB color space to grayscale uses Equation 5-1 [OpenCV, 2001]. The histogram of the grayscale image is then equalized using Equation 5-2 in order to maximize contrast and normalize the brightness of the grayscale image. The function maps each occurrence of an original gray level, g, to an equalized gray level, g'.
An equalized histogram indicates that the probability distribution function of the grayscale values is uniform, although the ideal is not attainable due to discretization.

    Y = 0.299 R + 0.587 G + 0.114 B                              (5-1)

    g' = T(g) = (L - 1) * sum_{i=0..g} h(i) / (M N)              (5-2)

where R, G, and B are the color channel intensities, Y is the grayscale intensity, h(i) is the histogram count of gray level i, M N is the number of pixels in the image, and L is the number of gray levels.

The preceding radiometric preprocessing steps deal exclusively with preparing the imagery for the subsequent algorithmic processes; in fact, they decrease the available information in the image. It is also of interest in radiometric preprocessing to improve the source images in order to increase the quality of the output mosaics. These techniques must generally be approached cautiously, since radiometric modification can negatively affect remote sensing techniques if the information content of the imagery is reduced [Schowengerdt, 2007]. The most commonly desired improvement is to minimize the apparent discontinuities around mosaic seams, and classic approaches deal with this problem as an integral part of the mosaic process rather than in preprocessing [Kraus, 2007]. A more robust method is to balance the illuminance of the imagery both within each image and between adjacent images [Palubinskas et al., 2003]. By removing the effects of the geometry and intensity variations of the lighting source, the output mosaic is made more visually appealing through reduced apparent discontinuities between images. This global, rather than local, approach to color balancing is desirable but comes with considerable computational cost.
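The preprocessing chain described above can be sketched with OpenCV; the ordering here (grayscale and equalization before pyramid construction) and the function names are illustrative assumptions rather than the program's implementation:

    import cv2

    def preprocess(image_path, min_dim=500):
        """Grayscale, equalized resolution pyramid for tie point matching:
        RGB to gray (Equation 5-1), histogram equalization (Equation 5-2), then
        repeated 5 x 5 Gaussian blur and subsample-by-2 until both dimensions
        fall below min_dim."""
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        level = cv2.equalizeHist(gray)
        pyramid = [level]
        while level.shape[0] >= min_dim or level.shape[1] >= min_dim:
            level = cv2.pyrDown(level)     # Gaussian convolution plus subsampling
            pyramid.append(level)
        return pyramid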

Interest Point Generation

As described in Chapter 2, tie points are features in the scene that can be identified in more than one image. The accuracy of the geometry is dependent on the ability to precisely measure the location of the feature in the image. It is essential, then, that tie points be selected that can be uniquely localized in the image, particularly in the context of automated image matching [Zitová and Flusser, 2003]. However, localization is complementary to repeatability, the ability to detect a point between multiple views [Schmid et al., 2000]: as localization accuracy increases, the repeatability rate will decrease. An assessment of the corner detection algorithms in the literature found that the Harris Corner Detector (HCD) performs the best among popular algorithms for both repeatability and localization of matched points between images subject to rotation, scaling, and contrast change [Schmid et al., 2000]. The HCD was therefore implemented in the algorithm to select potential tie points at the desired pyramid level. The HCD searches for points where a Gaussian-weighted kernel about the point has a high local autocorrelation [Harris and Stephens, 1988]. A shifted window is approximated by the first-order linearization given in Equation 5-3 and used to calculate the autocorrelation function about the point of interest in Equation 5-4. The Harris Corner Detector is then given in Equation 5-5 by the derived autocorrelation matrix, excluding the pixel shift terms, which act as a constant scale. Corner points are points which have large local variations in intensity, quantified by the partial derivatives in the HCD matrix. Thus, the HCD matrix would be expected to have two large eigenvalues at points of interest. If only one eigenvalue of the HCD is large, then an edge has been detected; if both eigenvalues are small, then the region has little intensity variation and should be rejected. For computational efficiency, the eigenvalues of the HCD are approximated by Equation 5-6.
    I(x + dx, y + dy) ≈ I(x, y) + I_x(x, y) dx + I_y(x, y) dy                             (5-3)

    E(x, y) = sum_w [I(x_i + dx, y_i + dy) - I(x_i, y_i)]^2 ≈ [dx dy] M(x, y) [dx dy]^T    (5-4)

    M(x, y) = sum_w w(x_i, y_i) [ I_x^2, I_x I_y ; I_x I_y, I_y^2 ]                        (5-5)

    R(x, y) = |M(x, y)| - k tr^2(M(x, y))                                                  (5-6)

where I_x and I_y are the partial derivatives of the image intensity, w is the Gaussian weighting window, and k is an empirical constant. The HCD assigns an interest factor to every pixel in the image based on the magnitude of the eigenvalues of M. The algorithm then selects the points which have the highest interest factor above a quality threshold and are separated by a minimum distance, up to a selected maximum number of points. The minimum separation distance ensures that the points are well distributed across the image. The maximum number of points will not be returned if there are not enough points above the quality threshold, which is defined as a percentage of the maximum interest factor.
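The selection rule maps directly onto OpenCV's Harris-based feature selection; the parameter values below are illustrative, not the thesis defaults:

    import cv2

    def interest_points(gray, max_points=500, quality=0.01, min_distance=25):
        """Select potential tie points: Harris responses above 'quality' (a fraction
        of the maximum interest factor), separated by at least min_distance pixels,
        up to the max_points strongest candidates."""
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                          qualityLevel=quality,
                                          minDistance=min_distance,
                                          useHarrisDetector=True, k=0.04)
        return corners.reshape(-1, 2) if corners is not None else []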

Initial Approximation

The initial approximation process is the core result of the data synthesis arising from DGRS and significantly affects the essential tasks of tie point matching and the bundle adjustment. Unfortunately, it is accompanied by the need to make strong assumptions about the scene, which can greatly reduce the robustness of the method. The initial approximation refers to the need to provide an estimated object coordinate for each potential tie point. As discussed in detail in Chapter 2, direct georeferencing only allows the collinearity equations to be solved up to a scale factor: with known exterior and interior orientation parameters, the object space coordinate is only known to lie on a line coincident with the perspective center and the image point. This result is certainly a vast improvement over traditional techniques, where both the exterior orientation parameters and the Z ambiguity must be solved simultaneously. However, it remains that estimation of the tie point object space coordinate depends on an estimate of where it lies along that line. The current operational paradigm for the NOVA II is flying straight and level flight lines over relatively smooth terrain; in fact, the majority of past and planned missions occur over wetlands where differences in elevation are, for all practical purposes, non-existent. These conditions give sufficient motivation to make a constant-surface assumption about the scene. This assumption allows the object space Z-coordinate to be assigned based on the ground level determined in the preprocessing stage. Having resolved the Z ambiguity, it is a trivial matter to calculate the object coordinates for the potential tie points using the Project Down formulation in Equation 2-10. Having calculated an approximate object space coordinate, it is an obvious result that it can be used as the initial approximation in the linearized bundle adjustment. Real gains, however, are also to be had in the tie point matching algorithm. Considering that a typical flight produces about one thousand images, it would clearly be computationally burdensome to the point of impracticality to attempt to match thousands of tie points if each tie point could exist anywhere in any of the images. Thus, the algorithm exploits the initial approximations provided by direct georeferencing to vastly narrow the search space of the tie point matching algorithm. Narrowing the search space is accomplished by projecting the extents of each image down to the constant surface, generating a set of 2D four-vertex polygons.
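A generic sketch of the projection onto the constant surface, usable both for the tie point approximations and for the image footprints; this is a standard collinearity "project down" under the stated flat-scene assumption, not a verbatim transcription of Equation 2-10, and the sign conventions are assumptions:

    import numpy as np

    def project_down(image_xy, perspective_center, rotation_cam_to_obj, focal_px, ground_z):
        """Intersect the ray through an image point with the surface Z = ground_z.
        image_xy is the image coordinate relative to the principal point (pixels),
        rotation_cam_to_obj rotates camera-frame vectors into the object frame, and
        perspective_center is the exposure station in object coordinates."""
        ray_cam = np.array([image_xy[0], image_xy[1], -focal_px], dtype=float)
        ray_obj = rotation_cam_to_obj @ ray_cam
        scale = (ground_z - perspective_center[2]) / ray_obj[2]
        return perspective_center + scale * ray_obj      # [X, Y, ground_z]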

Each interest point object coordinate is then tested to determine whether it falls within each image polygon. This returns a list of tie points potentially located within each image, narrowing the search space for each potential tie point to a subset of the images. Furthermore, by using the Project Up collinearity equations, Equation 2-9, an approximate image coordinate can be generated for each potential tie point in each of the overlapping images. In order to account for the error due to both the inaccuracy of the constant-surface assumption and the inaccuracy of the exterior orientation parameters provided by direct georeferencing, it is necessary to define a tie point matching search space about the approximate image coordinates. An empirical formulation for determining the extent of the search space is impractical due to the non-linearity of the sequence of projections, so in the implementation the search space is tuned experimentally based on typical performance. An analysis of the image coordinate approximation is given in Chapter 6. A perhaps more serious consideration is that poor approximations also affect the image boundaries: direct projection of the image extents to the assumed surface may exclude points which are within the uncertainty bounds of the search space but fall outside of the polygon, because no valid image coordinate approximation exists for them. A great deal of improvement is possible in both the object space approximations and the corresponding image coordinate approximations. Chief among these is to utilize an iterative approach, performing a lean bundle adjustment and surface approximation on the first iteration to improve the accuracy of subsequent repetitions of the initial approximation and tie point matching steps. Another improvement would be to better handle the boundary conditions, using a fuzzy boundary for the image extents to include nearby points that may fall within the search space. Following each tie point matching procedure, it is also possible to refine the object
coordinate of the tie point by calculating the average of the object coordinates obtained from the projection of each image observation. Epipolar geometry can also be employed to improve the search space of the tie point matching algorithm by utilizing direct georeferencing [Kraus, 2007]. Briefly, two epipoles are formed for each pair of images by the projection of the perspective center of each image onto the other's image plane using the collinearity equations. An arbitrary surface is used to project the feature of interest from one image to the other. The coplanarity condition then states that the corresponding image feature in the second image will lie along the line formed by the reprojected image coordinate and the epipole [Kraus, 2007]. This approach has the advantage of not requiring object coordinates for the tie point, transferring the Z ambiguity from the object space to the corresponding image space. Combining the coplanarity condition with the described algorithm would allow the search space to be constrained about the epipolar line, or the intersection of lines if multiple epipole correspondences are calculated, further narrowing the search space.

Semi-Automatic Point Matching

Tie point matching has been an intense area of research in both the photogrammetric and computer vision communities, with a multitude of continually evolving applications and approaches. At the risk of producing an echo, I quote A. W. Gruen, a leading figure in the photogrammetric literature, paraphrasing a comprehensive review by A. Rosenfeld, of equal stature in the computer vision community: these new methods do not solve the problems addressed before. Several pertinent image matching techniques will be discussed here, with the caveat that the method implemented proved sufficient but is entirely open to improvement. Image matching may generally be divided into two approaches: feature-based matching and template-based (intensity-based) matching [Zitová and Flusser, 2003]. The former attempts to extract structural information about the scene, such as points, lines, or regions, and to then match
corresponding structures. Feature-based techniques typically require expert systems and are usually limited to well-defined image constraints; moreover, they are the primary subject of the above quote. On the other hand, template-based matching is the more direct approach, exploiting the intensity values of the images themselves to seek a match. Template matching can be thought of as image-to-image matching, where some source template (usually a subset of an image) is matched to a corresponding target (also usually a subsample of a larger image). The sizes of the template and target are selected such that distortions due to perspective changes are minimized and, at worst, can be well modeled by an affine transformation. The rudimentary exercise of template matching is to maximize the similarity between two templates or, conversely, to minimize the dissimilarity or error. A common and well-known approach to template matching in the spatial domain is the normalized cross-correlation (NCC), given in Equation 5-8:

    rho = (1 / (n - 1)) * sum_{x,y} [ (f(x, y) - mean_f)(g(x, y) - mean_g) ] / (s_f s_g)    (5-8)

where f and g are the template and target intensity values, mean_f, mean_g, s_f, and s_g are their means and standard deviations, and n is the number of pixels compared. The cross-correlation of the intensity values is normalized by the mean and standard deviation in order to account for variations in image brightness and contrast. The NCC has the advantage of being simple and relatively fast; a matching algorithm is implemented by sliding the source template over the target search space to find the image coordinate where the NCC is maximized. The normalized cross-correlation is a translation-invariant matching technique, but it suffers from the inability to account for geometric and radiometric distortion found in typical images.
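The sliding-window NCC search corresponds directly to OpenCV's normalized template matching; this sketch is illustrative and assumes the template and search window are grayscale arrays:

    import cv2

    def match_template(source, target):
        """Slide the source template over the target search window and return the
        location and value of the maximum normalized cross-correlation (Equation 5-8)."""
        scores = cv2.matchTemplate(target, source, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        return max_loc, max_val      # (x, y) of the best match and its correlation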

It is possible to augment the NCC technique with spatial-domain transformations that allow rotation- and scale-invariant matching. The log-polar transformation (LPT), Equation 5-9, is a method for introducing scale and rotation invariance, used both in combination with NCC and with frequency-domain matching [Wolberg and Zokai, 2000; Reddy and Chatterji, 1996]. Interestingly, the LPT is modeled after human foveal vision; an example is shown in Figure 5-1. The transformation is not translation invariant, so the choice of the origin of the transformation matters.

    theta = atan((y - y_c) / (x - x_c)),   rho = log sqrt((x - x_c)^2 + (y - y_c)^2)      (5-9)

where (x_c, y_c) is the origin of the transformation. The LPT is rotation and scale invariant: a rotation in the Euclidean coordinate system corresponds to a linear shift in the theta-direction of the log-polar coordinate system, and, noting that log(a r) = log a + log r, a scale factor a in the Euclidean coordinate system corresponds to a linear shift in the rho-direction of the log-polar system. The LPT has the additional advantage of effectively weighting the image matching algorithm at the focus of the transform. Translation, rotation, and scale invariant template matching can be performed by combining NCC and the LPT [Wolberg and Zokai, 2000]. Matching proceeds by first performing the LPT on the source template. An LPT is then performed with the origin at each pixel in the target template, and normalized cross-correlation is performed across each of the LPT-transformed target templates. The x- and y-coordinates of the maximum NCC within an LPT-transformed target correspond to the scale and rotation parameters, respectively, and the translation parameters are given by the origin of the LPT-transformed target with the maximum NCC among all the transformed targets.
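A sketch of the log-polar resampling step, using OpenCV's warpPolar (available in OpenCV 3.4 and later); the patch size handling and interpolation choice are assumptions for illustration:

    import cv2

    def log_polar(patch, center=None):
        """Resample a patch into log-polar space (Equation 5-9): a rotation of the
        original patch becomes a shift along the angular axis and a scale change
        becomes a shift along the log-radius axis."""
        h, w = patch.shape[:2]
        if center is None:
            center = (w / 2.0, h / 2.0)
        max_radius = min(center)
        return cv2.warpPolar(patch, (w, h), center, max_radius,
                             cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)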

Another affine-invariant technique worth mentioning brings us to the frequency-domain matching methods. The phase correlation of the 2D Fourier transform provides the translation parameters. An LPT can then be applied to the magnitude component of the Fourier transform, followed by a second Fourier transform, for which phase correlation then solves for the rotation and scale factors [Capodiferro, 1987]. This series of transformations is known as the Fourier-Mellin transform. The technique can also be applied to directly produce an affine-invariant transform if approximations of the affine parameters are not required. Although the previous methods have been shown to be robust for affine-invariant matching, their statistical measures of similarity are limited to normalized cross-correlation and are not well suited for sub-pixel matching. The parameters obtained from these methods may be refined by employing the least-squares matching (LSM) technique given by Equation 5-10 [Gruen, 1985]. The LSM algorithm has the advantage of being robust to radiometric distortion and can provide accurate sub-pixel matching [ibid.], and the a posteriori adjustment statistics provide a meaningful measure of the accuracy of the match. Due to the nonlinearity of the model, initial approximations are required, so the model is generally not implemented independently of the aforementioned methods; the initial approximations must typically provide a match within a few pixels of the true value for the LSM to converge.

    g(x, y) = h_0 + h_1 f(x', y')
    x' = a_0 + a_1 x + a_2 y
    y' = b_0 + b_1 x + b_2 y                                     (5-10)

where h_0 and h_1 are the radiometric shift and scale parameters and a_0, a_1, a_2, b_0, b_1, and b_2 are the parameters of the affine transformation between the template f and the target g.
The matching method implemented in the proposed direct georeferencing algorithm is the combined NCC/LPT algorithm without a least-squares match. This decision was made because the initial results achieved by the algorithm were adequate, and because sub-pixel matching on the full-resolution images was unnecessary for the desired accuracy and likely to be faulty, due to perspective changes over irregular ground cover, without more robust blunder detection techniques. An assessment of the tie point matching performance is presented in the next chapter. The final set of matched tie points is presented to the operator, allowing the user to reject blundered matches; as a result, the process is semi-automatic. No facilities are implemented for the manual addition or editing of tie points; the user is simply presented with the option to accept or reject the automated matches, allowing the manual process to be completed relatively quickly. This is possible because of the very high redundancy already available in the automated algorithm: the loss of a percentage of blundered points has little impact. Implementation of additional statistical blunder detection algorithms may allow the process to become fully automatic. In summary, two observations are to be made on the tie point matching algorithm. The first is that, despite the perspective projective nature of image formation, image matching techniques typically model only affine transformations. This is due to the assumption that tie points sweep a small area within the projection, and for smooth object space features the affine transformation well approximates the projective transformation over this area. This assumption leads to the second observation: object space features that are irregular, such as coarse vegetation, tend to produce poor tie point matches. Given the high resolution of the imagery produced by the NOVA II, it can be advantageous to perform the matching on downsampled imagery as a tradeoff of
localization to improve repeatability. Sub-pixel LSM on the downsampled images may compensate for some of the loss in localization. In an iterative approach to this algorithm, it is advantageous to consider dense tie point generation techniques in preparation for robust surface generation, once the image geometry has been adequately reconstructed and the search space is greatly reduced. Classic approaches to the problem usually involve resampling the images so that the epipolar lines lie along the rows of each image, allowing rapid densification using NCC. A variety of methods are available in the modern literature with varying claims of success, ranging from computer vision techniques employing video imagery [Jung and Lacroix, 2003] to robust diffusion techniques [Strecha et al., 2003].

Bundle Adjustment

The simultaneous bundle adjustment is a technique for the optimal estimation of the camera and scene geometry parameters. It is formulated as a linearized least-squares implementation of the collinearity equations, and it may be extended to include the interior orientation parameters, in which case it is referred to as the self-calibrating bundle adjustment (SCBA). The bundle adjustment is jointly optimal in the sense that it serves to optimally estimate both camera and scene parameters by applying a geometrically consistent model of the image formation [Wolf and Dewitt, 2000]. In a general sense, this is what makes the bundle adjustment a robust estimator, since the physical behavior of light imposes a strict model of the position and orientation of a camera that will produce a given image of a scene. The optimality of the estimation comes from the use of least squares. The least-squares technique is optimal for linear systems because it minimizes the quadratic error of the parameters in fitting the model, and it can be extended by linearization of the model to optimally solve nonlinear systems given sufficiently accurate initial approximations. The term bundle refers to the
bundles of light rays that form the camera images, and it denotes the ability of the technique to optimally solve for bundles arising from more than one camera. A special consideration must be made in the application of the bundle adjustment to direct georeferencing. In traditional photogrammetry, control points in the object space serve as the primary control for the absolute orientation; in direct georeferencing, the only measure of absolute orientation is the set of parameters provided by the navigation system. Considering that the bundle adjustment solves for the best relative geometry of the scene, the absolute accuracy of the final adjustment is in question. It can be supposed that the bundle adjustment will tend to produce parameters which form an internally consistent geometry for the images, due to the strength and redundancy of the image coordinate observations, and which are therefore accurate in the relative sense. The absolute accuracy of the object space coordinates, that is, the accuracy of the scene and image features with respect to the local-level coordinate system, would however act only as a weak constraint on the solution, given the relatively poor accuracy of the direct georeferencing parameters. This is best illustrated with respect to the vertical position of the scene geometry. Given that the altitude provided by the navigation system is likely biased (see Chapter 4), it is possible that the bundle adjustment will produce an internally consistent scene geometry which is in absolute error by an amount equal to the bias of the altitude. This is not to say that the solution will not be an optimal estimator of the absolute position; in fact, if all observations are properly weighted and the direct georeferencing parameters are normally distributed about the true values, then it should indeed be optimal. However, the possibility of biased measurements at least forewarns of the risk of poor solutions in the absolute sense. This consideration is less important in the case of two-dimensional mapping, since the reasonably accurate horizontal navigation parameters and
well-known focal length will tend to produce a strong estimate of the object space scale and orientation. The accuracy of a self-calibrating bundle adjustment in solving for the interior orientation parameters has not been evaluated for the direct georeferencing technique presented in this thesis, and its use is cautioned against. It is not initially apparent that the addition of interior orientation parameters would lead to over-parameterization of the problem since, as was noted earlier, the degrees of freedom in the typical problem addressed by this thesis are on the order of thousands; how could the addition of just a few unknown parameters affect such a problem? The answer is that the interior orientation parameters are highly correlated with other parameters in the solution [Cramer, 2002]. For example, the focal length is highly correlated with the Z parameter of the exterior orientation, the flying height. The effect of this correlation on the absolute mapping accuracy of NOVA II data sets has not been empirically verified. Two additional factors would discourage the use of the self-calibrating bundle adjustment for the NOVA II platform. The first is the relatively high noise of the measured parameters, arising both from low-quality navigation systems and from automated tie point matching; low signal-to-noise ratios may cause the SCBA to model noise as higher-order distortion parameters. The second factor is the low solution inertia caused by poor approximations of the object space coordinates based on a constant-surface assumption: by introducing additional parameters, the number of critical points in the solution space is increased, and with poor initial approximations the possibility of convergence on a suboptimal solution is thereby also increased. Although not incalculable, the size of the matrices generated by the bundle adjustment for these data sets is a concern for computation speed [Triggs et al., 1999], keeping in mind, of
course, that it is an iterative algorithm requiring a number of passes to converge on a solution. However, several factors improve the situation. First, the sparsity of the normal matrices, as well as the ability to compute them directly using partitioned matrices, lends itself to a significant amount of algorithmic and computational optimization; the sparsity of the matrix also allows efficient use of memory during computation. In addition, the bundle adjustment normal equations form symmetric positive definite matrices for a well-formed data set, allowing efficient factorization methods to be employed in solving the bundle adjustment [Golub, 1996]. A development of both the standard and self-calibrating bundle adjustment is given in Appendix C. The solution of the bundle adjustment proceeds by iteration, solving for the update vector of the normal equations developed in Appendix C and updating the initial approximations at each step. This is the Gauss-Newton method for the solution of non-linear systems. It should be noted that the Gauss-Newton method suffers from the possibility that the solution will not converge because a particular update vector overshoots and produces a less accurate solution. If this occurs, it is possible to apply a damping factor to the normal equations; this technique is widely implemented as the Levenberg-Marquardt (LM) algorithm, Equation 5-11:

    (N + lambda diag(N)) D = t                                   (5-11)

where N is the normal equation matrix, t is the constant vector of the normal equations, D is the update vector, and lambda is the damping parameter. There are several treatments in the literature of the selection of the damping parameter [Roweis, 2009]. At each iteration, the sum of the squared residuals is calculated and compared to that of the previous iteration. If the error is smaller, the assumption is made that the solution is improving, and the update from the current iteration is accepted. Accompanying this assumption is the insight that, since the iterations are converging on the correct solution, it is best to speed up the gradient descent rate to get to the solution more
quickly; to do this, the damping parameter is decreased by some factor. In the converse situation, where the solution worsens with an iteration, the solution should be slowed down so that it more closely follows the gradient; to do this, the damping parameter is increased by a factor, decreasing the gradient descent rate. Understanding the impact of the LM algorithm is eased by considering that as lambda approaches zero, the solution becomes the same as the Gauss-Newton method. Alternatively, if the Levenberg formulation in Equation 5-12 is used, then as lambda grows large the update becomes much smaller and points in the direction of the gradient of the objective function.

    (N + lambda I) D = t                                         (5-12)

The extension of Levenberg's formulation by Marquardt replaces the identity matrix with the diagonal of the Hessian, which allows the algorithm to effectively weight the update in the direction of the smaller gradients. This improves the ability of the algorithm to escape suboptimal solutions and also increases the rate of convergence [Roweis, 2009]. An important note on the theory of the LM algorithm is that it is purely a heuristic method and does not ensure optimality in the selection of the damping parameter; thus, the choice of the factor by which to increase or decrease the damping parameter is a matter of practice rather than theory.
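A compact sketch of the damping schedule described above; the initial damping value, the factor of ten, and the callback interface are illustrative choices rather than prescriptions:

    import numpy as np

    def lm_step(normal_matrix, rhs, lam, use_marquardt=True):
        """Solve the damped normal equations for one update (Equations 5-11 and 5-12)."""
        damping = np.diag(np.diag(normal_matrix)) if use_marquardt else np.eye(len(rhs))
        return np.linalg.solve(normal_matrix + lam * damping, rhs)

    def adjust(params, normal_fn, lam=1e-3, factor=10.0, iterations=50):
        """normal_fn(params) must return (N, t, sum_of_squared_residuals). Accept an
        update and relax the damping when the residuals drop; otherwise reject it
        and tighten the damping so the step follows the gradient more closely."""
        n, t, ss = normal_fn(params)
        for _ in range(iterations):
            trial = params + lm_step(n, t, lam)
            n_new, t_new, ss_new = normal_fn(trial)
            if ss_new < ss:
                params, n, t, ss, lam = trial, n_new, t_new, ss_new, lam / factor
            else:
                lam *= factor
        return params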

Surface Generation

The scene modeling step uses the adjusted tie point object space coordinates to produce a surface model of the scene for the entire area covered by the mosaic. The algorithm developed for the NOVA II uses the constant-surface assumption, so this step is simplified to merely calculating the average height of all tie points in the solution. This assumption is suitable for mapping areas with very little terrain height variation and low vegetation cover relative to the flying height. However, great potential lies in extending the algorithm to handle 3D scene modeling. The scene modeling process is most often implemented as digital surface model generation; if a bare-earth model is developed to remove the relief due to vegetation and anthropogenic features, the process is called digital terrain modeling. The automatic generation of a terrain model of comparable quality to a LiDAR-generated model has been demonstrated on a low-altitude mini-UAV platform [Eisenbeiss and Zhang, 2006]. Albeit somewhat larger than the NOVA II and using ground control for the bundle adjustment, this serves as a proof of concept using similar equipment, generating a DEM consisting of 4.7 million tie points at a resolution of 10 cm; the reported mean difference between the LiDAR and photogrammetrically generated scene models was less than one centimeter, with a standard deviation of 6 cm. Moreover, direct georeferencing on larger platforms is used in commercial operations for DEM generation, indicating that the direct georeferencing accuracy will be the only limiting factor, if it is one at all [Yastikli and Jacobsen, 2002]. The scene modeling procedure depends essentially on the output of the tie point procedure: whether using bundle-adjusted tie points or a post-adjustment densification of the tie points, the generation of a DEM is essentially a matter of interpolating the irregular observations of the object space coordinates. Interpolation is required to provide an elevation value for each groundel in the output. The discussion therefore proceeds with two relevant methods of surface interpolation: inverse distance weighted interpolation and Kriging interpolation. The inverse distance weighting (IDW) method, Equation 5-13, is a straightforward approach to surface interpolation: for every unknown point of interest on the surface, the elevation is calculated by taking a weighted sum of the elevations of nearby points.
The weighting function allows control over the smoothness of the interpolation. The primary drawback of the IDW is its tendency to produce bullseye artifacts, as in Figure 5-2, since the method has no mechanism for accounting for trends in the elevation, such as a ridge line. As a result, a sharp linear ridge in the scene will appear as a linear series of peaks in the interpolated surface.

$$Z_0 = \frac{\sum_{i=1}^{n} w_i\,Z_i}{\sum_{i=1}^{n} w_i}, \qquad w_i = \frac{1}{d_i^{2} + s^{2}} \qquad (5\text{-}13)$$

A smoothing parameter, $s$, operates such that values $0 < s < 1$ will decrease the relative weight of nearby points and thereby reduce the bullseye effect. Decreasing the number of points, $n$, used for the interpolation allows increasing localization of the interpolation, from all tie points to some subset of points; the tie points are selected by a threshold Euclidean distance from the target point. The advantage of the IDW is its simplicity, which allows the algorithm to be easily processed automatically. A minimal sketch of the method follows.
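The sketch below is a direct, illustrative translation of Equation 5-13; it uses all tie points rather than a distance-thresholded subset, which is a simplification of the procedure described above.

```python
import numpy as np

def idw(x0, y0, tie_x, tie_y, tie_z, s=0.5):
    """Inverse distance weighted elevation at (x0, y0), following Equation 5-13.

    tie_x, tie_y, tie_z : arrays of tie point object space coordinates
    s                   : smoothing parameter (0 < s < 1 reduces the bullseye effect)
    """
    d2 = (tie_x - x0) ** 2 + (tie_y - y0) ** 2      # squared Euclidean distances
    w = 1.0 / (d2 + s ** 2)                         # weights fall off with distance
    return np.sum(w * tie_z) / np.sum(w)
```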

A more advanced DEM generation method is the Kriging interpolation technique. There are several forms of Kriging; the one described here is ordinary Kriging, which assumes there is some constant but unknown mean elevation value and a known spatial dependence (covariance) based on an experimental variogram. It is a least squares estimator which attempts to find an unknown elevation as the weighted linear sum of observed elevation values, where the weights are chosen such that the prediction error for observed values is minimized. Kriging relies upon the assumption that the elevation is a dependent variable of the spatial position, a reasonable assumption for most types of natural terrain. The spatial dependence of the elevation is quantified by a stochastic model given by the experimental variogram, which must be modeled from the observed tie points. The Kriging estimate is given by Equation 5-14, where the weight $\lambda_i$ for each observation is given by the least squares solution of Equation 5-15.

$$Z_0 = \sum_{i=1}^{n} \lambda_i\,Z_i, \qquad \sum_{i=1}^{n} \lambda_i = 1 \qquad (5\text{-}14)$$

$$\begin{bmatrix} \gamma(p_1,p_1) & \cdots & \gamma(p_1,p_n) & 1 \\ \vdots & \ddots & \vdots & \vdots \\ \gamma(p_n,p_1) & \cdots & \gamma(p_n,p_n) & 1 \\ 1 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_n \\ \mu \end{bmatrix} = \begin{bmatrix} \gamma(p_1,p_0) \\ \vdots \\ \gamma(p_n,p_0) \\ 1 \end{bmatrix} \qquad (5\text{-}15)$$

The variogram function, to emphasize again, is modeled from observed values. This is necessary because the variogram is discontinuous for a discrete set of observed values, and must itself be interpolated. The experimental variogram maps the covariance of a data set as a function of the separation $h$ between two points, given by Equation 5-16.

$$\gamma(h) = \frac{1}{2}\,E\!\left[\left(Z(p) - Z(p+h)\right)^{2}\right] \qquad (5\text{-}16)$$

A mathematical model is necessary to fit the variogram, usually either a linear, exponential, or spherical model, as given in Equation 5-17 [Cressie, 1993]. A sketch of the resulting interpolation is given below.

$$\gamma_{lin}(h) = \begin{cases} 0 & h = 0 \\ c_0 + b\,h & h > 0 \end{cases} \qquad
\gamma_{sph}(h) = \begin{cases} 0 & h = 0 \\ c_0 + c\left(\dfrac{3h}{2a} - \dfrac{h^{3}}{2a^{3}}\right) & 0 < h \le a \\ c_0 + c & h > a \end{cases} \qquad
\gamma_{exp}(h) = \begin{cases} 0 & h = 0 \\ c_0 + c\left(1 - \exp\!\left(-\dfrac{h}{r}\right)\right) & h > 0 \end{cases} \qquad (5\text{-}17)$$
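Given a fitted variogram model, the ordinary Kriging weights of Equation 5-15 follow from a small linear system. The sketch below is illustrative only; it assumes the spherical model of Equation 5-17 with the nugget, sill, and range parameters (described in the text that follows) already estimated.

```python
import numpy as np

def spherical_variogram(h, nugget, partial_sill, a):
    """Spherical variogram model from Equation 5-17 (range a, sill = nugget + partial_sill)."""
    h = np.asarray(h, dtype=float)
    g = nugget + partial_sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    g[h > a] = nugget + partial_sill        # flat at the sill beyond the range
    g[h == 0.0] = 0.0                       # zero semivariance at zero separation
    return g

def ordinary_kriging(p0, pts, z, variogram):
    """Estimate the elevation at p0 from tie points pts (n x 2) with elevations z."""
    n = len(z)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)   # pairwise distances
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                                                   # Lagrange multiplier block
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - np.asarray(p0), axis=1))
    lam = np.linalg.solve(A, b)                                     # Equation 5-15
    return np.dot(lam[:n], z)                                       # Equation 5-14
```

A caller would bind the fitted parameters first, e.g. `variogram = lambda h: spherical_variogram(h, c0, c, a)`.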

For each model, the variable $c_0$ is called the nugget; it is the apparent y-intercept of the experimental variogram and is usually caused by noisy measurements [Cressie, 1993]. The linear model is completed by $b$, the slope of the line. The spherical and exponential models include the variables $a$ and $r$, respectively, which are the range parameters, equal to the distance beyond which the terrain is no longer spatially dependent. The spherical and exponential models also include a dependent parameter known as the sill, equal to $c_0 + c$ for the respective models, which is the value the variogram converges on as distance increases. In order for the Kriging to be fully automated, the estimation of these model parameters must also be automated; this is accomplished by a least squares approach to the model fitting [Cressie, 1985].

Mosaicing

Three dimensional models are not generally needed for aerial remote sensing applications. The preferred method of using this type of imagery is to reproject the image onto a two dimensional surface. If this projection is orthogonal to the mapping plane, an orthophoto is produced. The mathematical formulation of the orthogonal projection, given in Equation 5-18, is straightforward: applying the orthogonal projection to the object space coordinates, the output simply corresponds to the object space coordinates with an arbitrary (or zero) elevation.

$$\begin{bmatrix} X \\ Y \\ 0 \end{bmatrix} = \mathbf{P} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}, \qquad \mathbf{P} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \qquad (5\text{-}18)$$

An orthophoto constructed from an ideal scene reconstruction will remove all perspective geometry, and the scene will be viewed from a uniformly infinite vertical perspective. This has the effect of removing all relief displacement from the image; for example, the vertical sides of a building will not be visible. Because of this, any area of the image which is occluded from the perspective view will result in gaps in the output orthophoto.

Orthophotos produced using the regularly spaced pixels common to digital image formats have the extremely useful property of allowing each pixel to have a uniform scale. Mosaicing is an extension of the orthophoto production process, where multiple images are combined into a single orthogonal projection to produce a composite map. This final step in the process is perhaps the most important, where we arrive at the georeferenced image output. As discussed, the orthogonal projection of images produces a two dimensional map where each pixel covers an approximately equal area on the ground and where the image rows and columns correspond to the east and north directions. The approximation is due to the error associated with representing the ellipsoidal surface of the earth using a rectangular grid.

Although other approaches are possible, the technique for mosaicing presented here is designed to exploit the type of imagery collected by the UAV, which tends to have high amounts of overlap and therefore redundancy, and to minimize the effect of the constant terrain assumption. Because a constant terrain assumption is used, true orthophotos are not produced unless the terrain is actually flat; an accurate three dimensional surface model, as described in the previous section, is required for true orthophotos. Rather, the correct term for the process used here is rectification: the effect of image tilt is removed from the image, but any vertical relief remains uncorrected.

Since the majority, if not all, areas of a NOVA II mapping area have stereoscopic coverage, there is the necessary step of choosing the best source image from which to select each groundel's radiometric information. There are three considerations. Radiometric quality is highest along the optical axis, due to consistent illuminance, the shortest path travelled, and minimal color distortion due to lens aberrations. Geometric quality is highest at the nadir, where the projection error due to incorrect image parameters is minimized. Vertical relief error is minimized by selecting from the image with the nearest nadir, best approximating the orthogonal projection.

Because a constant terrain surface assumption is used, perhaps the most difficult consideration in usual orthophoto production can be neglected: where a particular area is occluded in an image due to terrain relief, it is normally necessary to use an adjacent image in which the occluded area is visible. Under the constant surface assumption, terrain relief error is instead transferred directly to the output image, but is assumed to be small. The mosaic process proceeds by calculating the constant surface intercept of the nadir of each image; then, for each pixel in the output mosaic, the image with the nearest Euclidean distance to the nadir and which contains the target pixel is selected. The object space position of the output pixel is then projected up to the image using the collinearity equations to find the corresponding pixel and color values, as sketched below.
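The nearest-nadir selection just described amounts to a few lines. The sketch below is illustrative rather than the pyGeomatics implementation: `project_up` stands in for the collinearity-based Project Up step (Equation 2-10) and `boundaries` is an assumed footprint test, both provided by the caller.

```python
import numpy as np

def mosaic_pixel(ground_xy, nadirs, images, boundaries, project_up):
    """Select the colour for one groundel of the output mosaic.

    ground_xy  : (X, Y) object space position of the output pixel
    nadirs     : (n, 2) array of constant-surface nadir intercepts, one per image
    images     : list of image arrays
    boundaries : list of callables; boundaries[i](X, Y) is True if the groundel
                 falls inside image i's projected footprint
    project_up : callable mapping (image index, X, Y) to (row, col) pixel coordinates
    """
    # Order candidate images by the Euclidean distance of their nadir to the groundel.
    order = np.argsort(np.linalg.norm(nadirs - np.asarray(ground_xy), axis=1))
    for i in order:
        if boundaries[i](*ground_xy):
            row, col = project_up(i, *ground_xy)
            return images[i][int(round(row)), int(round(col))]   # nearest-neighbour resampling
    return None   # groundel not covered by any image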

In the implemented algorithm, a nearest-neighbor approach is used: whichever pixel contains the object space point is selected. The nearest-neighbor approach is both the most computationally efficient and the most suitable for remote sensing classification, since it does not modify the original color information. However, given that it is rare for an object space coordinate to correspond precisely to a single pixel, a more visually appealing result may be obtained by using interpolation techniques about adjacent pixels (Figure 5-3). Two notable interpolation techniques for the resampling process are the linear and cubic interpolation techniques. More advanced techniques, such as those described in the scene modeling step, are impractical for the computationally intensive process of resampling, and these basic techniques are more than adequate. Linear interpolation is well known and will not be repeated here; it has the advantage of being more computationally efficient than the other common technique, cubic spline interpolation, but is disadvantaged because it acts as a low-pass filter, reducing high-frequency detail in the output image [Parker et al., 1983]. Cubic spline interpolation, given in Equation 5-19, employs an approximation of the sinc function as a weighting function for the interpolation. This is important because convolution of the sinc function over a discrete data set that was sampled to meet the Nyquist criterion will exactly reconstruct the original signal.

$$w(x) = \begin{cases} (a+2)\,|x|^{3} - (a+3)\,|x|^{2} + 1 & 0 \le |x| \le 1 \\ a\,|x|^{3} - 5a\,|x|^{2} + 8a\,|x| - 4a & 1 < |x| < 2 \\ 0 & |x| \ge 2 \end{cases} \qquad (5\text{-}19)$$

The selection of the free parameter $a$ for the cubic spline function is limited to the range of $-0.5$ to $-1$ for a good approximation of the sinc function. Typically, the value is selected to be $-0.5$, which will exactly reproduce quadratic source signals [Wolf and Dewitt, 2000]. Alternatively, values approaching $-1.0$ can be used, which progressively amplify frequencies approaching the stopband and result in more high-frequency information content passing through [Parker et al., 1983]. A sketch of this kernel follows.
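The weighting function of Equation 5-19 can be written directly as a small helper; the one-dimensional resampler below is an illustrative sketch (the separable two-dimensional case applies the same kernel along rows and columns).

```python
import math

def cubic_kernel(x, a=-0.5):
    """Cubic convolution weight for a sample offset x (Equation 5-19)."""
    x = abs(x)
    if x <= 1.0:
        return (a + 2.0) * x ** 3 - (a + 3.0) * x ** 2 + 1.0
    if x < 2.0:
        return a * x ** 3 - 5.0 * a * x ** 2 + 8.0 * a * x - 4.0 * a
    return 0.0

def resample_1d(samples, t, a=-0.5):
    """Interpolate a 1-D signal at fractional position t using the 4-tap kernel.

    Assumes 1 <= t <= len(samples) - 3 so that all four neighbours exist.
    """
    i = int(math.floor(t))
    return sum(samples[i + k] * cubic_kernel(t - (i + k), a) for k in range(-1, 3))
```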

Software Implementation

The algorithm was implemented in Python version 2.6 [Van Rossum]. Python was chosen for the implementation due to its rich, open-source libraries for numerical computation, image processing, and parallelization. The software was executed from a plain text script file, which allows direct adjustment of the algorithm parameters as well as access to individual processing steps. The algorithm was organized within the pyGeomatics Python package. The remainder of this section describes the complete algorithmic flow for a single iteration of the algorithm.

The process was initialized with the directory path of the data and the calibrated focal length. The data log file was processed, during which the image geotags were interpolated from the associated navigation and synchronization packets. If a manually selected subset of the images was desired, it could be specified in a list; otherwise, the valid images were automatically selected using the criteria specified in the preprocessing section, and the constant surface height was calculated. The boresight and leverarm calibration were applied to the geotags, and the geodetic latitude and longitude values were converted to the appropriate UTM coordinate system using the pyproj library, which exposes Python bindings for the PROJ.4 library [Evenden]. Two variables are returned, containing an ordered list of the image direct georeferencing parameters and a dictionary of the image file names.

The radiometric preprocessing proceeds by downsampling, grayscaling, and equalizing the histogram for each pyramid level. This process was completed using functions available in the OpenCV library [OpenCV, 2001]. The generated images are saved to a new subdirectory in the source data directory. An optional step at this point was to directly project the images at a specified resolution using the uncorrected exterior orientation parameters. Otherwise, the algorithm proceeds with the automated tie point generation.

The tie point generation algorithm implements the HCD available in the OpenCV library. An edge mask was applied to the image to ensure sufficient space around the potential tie points for the desired template window. The number of unique tie points per image was specified, usually a value between 5 and 15, and a minimum separation distance between tie points was also provided as a percentage of the image width. The HCD sensitivity parameter and the kernel size were also required: a kernel size of 11 and the default sensitivity of 0.04 were found to be effective, and a relatively low quality threshold, 0.2, was found to be sufficient to reject low-quality features such as open areas of water. A sketch of this step follows.
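The sketch below illustrates the tie point candidate generation with the parameter values listed above. It uses the modern OpenCV Python bindings (cv2), which differ from the interface available at the time of implementation, and the edge margin value is an assumed placeholder for the template-window clearance.

```python
import cv2
import numpy as np

def generate_tie_points(gray, n_points=10, min_sep_frac=0.1,
                        kernel_size=11, sensitivity=0.04, quality=0.2,
                        edge_margin=32):
    """Harris-based tie point candidates for one preprocessed (grayscale) image."""
    # goodFeaturesToTrack with useHarrisDetector=True wraps the Harris corner detector.
    min_sep = int(min_sep_frac * gray.shape[1])
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=n_points,
                                      qualityLevel=quality, minDistance=min_sep,
                                      blockSize=kernel_size,
                                      useHarrisDetector=True, k=sensitivity)
    if corners is None:
        return np.empty((0, 2))
    corners = corners.reshape(-1, 2)
    # Edge mask: reject candidates too close to the border for the template window.
    h, w = gray.shape
    keep = ((corners[:, 0] > edge_margin) & (corners[:, 0] < w - edge_margin) &
            (corners[:, 1] > edge_margin) & (corners[:, 1] < h - edge_margin))
    return corners[keep]
```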

The initial approximation of the object space coordinates of the potential tie points was then calculated. The tie point generation process returns two more variables: an unordered nested list of image coordinate observations and an ordered list containing the object space initial approximations. Each image coordinate observation was associated with the image index and tie point index from their respective ordered lists.

The next procedure was to generate the initial approximations of the object space boundaries of each image in preparation for the tie point matching step, returning an ordered nested list of the image boundaries. For each tie point, a list of overlapping images was generated by evaluating whether the object space approximation of the tie point fell within the polygon defined by the approximated image boundaries. The nested list of overlapping images for each tie point was then returned.

The tie point matching algorithm allowed the image pyramid level to be used for matching to be specified, as well as the template window size and the search window size. A minimum value for the normalized cross-correlation can be provided, but was not used due to the subsequent manual filtering step. The tie point matching algorithm proceeded by finding the maximum NCC between the log-polar transformed template and target windows within the search space. The center of the search space was determined by the Project Up Equation 2-10 using the initial approximations generated during the tie point generation step. The best match found was then presented to the user to be either accepted or rejected. Each successfully matched tie point was added to the list of image coordinate observations, which was returned at the conclusion of this procedure. Optionally, at this point the matched tie points can be manually reviewed a second time using the full-color source images, and poor matches rejected (Figures 5-4 and 5-5). A sketch of the matching step follows.
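For illustration, the sketch below performs the NCC search around a predicted center. For brevity it correlates the raw template rather than the log-polar transformed windows used in the actual algorithm, and the search half-size is an assumed value; the predicted center is taken to come from Project Up.

```python
import cv2

def match_tie_point(template, target_image, predicted_xy, search_half=80):
    """Locate a template in the target image around a predicted position.

    template     : grayscale window extracted around the tie point in the source image
    target_image : grayscale target image
    predicted_xy : (x, y) search-space center predicted by Project Up
    search_half  : half-size of the search window in pixels
    """
    x, y = int(predicted_xy[0]), int(predicted_xy[1])
    x0, x1 = max(0, x - search_half), min(target_image.shape[1], x + search_half)
    y0, y1 = max(0, y - search_half), min(target_image.shape[0], y + search_half)
    search = target_image[y0:y1, x0:x1]

    # Normalized cross-correlation over the search window.
    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)

    th, tw = template.shape
    match_xy = (x0 + max_loc[0] + tw // 2, y0 + max_loc[1] + th // 2)
    return match_xy, max_val   # best match position and its NCC score
```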

Because not all of the potential tie points will ultimately be matched, it was necessary to then calculate the observational redundancy of each tie point. Tie points which did not have at least two observations were then scrubbed from the list of tie points, along with the associated image coordinate observations. This completes the generation of all necessary data for the bundle adjustment process. At this point, the generated data set was exported to a text file for inspection or for future reuse, with facilities provided for reloading previously generated data sets. A formatted data file for the bundle adjustment was also generated using specified observation weights and saved to the source directory.

The bundle adjustment algorithm was a self-contained module within the pyGeomatics package. A function was called to load the desired data set from a file, and a number of internal data structures are generated. For each iteration of the bundle adjustment solution, the normal equations were generated as described in the bundle adjustment section. The NumPy and SciPy libraries were used for optimized numerical computation and for sparse-matrix storage of the normal equations, and the optimized linear algebra solution algorithm was provided by the PySparse library [Geus and Arbenz, 2003]. It is again noted that the current implementation was restricted to the regular bundle adjustment and did not include the interior orientation parameters or the boresight and leverarm parameters in the adjustment. The bundle adjustment was optionally solved using either a standard least squares approach or the Levenberg-Marquardt technique; the standard deviation of unit weight was calculated and output at each iteration. The direct solution was usually used and provides satisfactory convergence. The adjusted exterior orientation parameters and tie point object space coordinates were saved to an output text file, and the variables containing the image exterior orientation parameters, tie point object space coordinates, and image observations were updated with the adjusted values. The structure of the normal equation solution is sketched below.
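The sketch below shows the general shape of this step using SciPy's sparse facilities in place of PySparse; the assembly of the individual observation blocks and weights is assumed to happen elsewhere, and the names are illustrative.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def solve_normal_equations(blocks, rhs, n_unknowns):
    """Assemble and solve N * delta = K for one bundle adjustment iteration.

    blocks     : iterable of (row_offset, col_offset, dense_block) contributions to N
    rhs        : dense right-hand side vector K
    n_unknowns : total number of unknowns (6 per image plus 3 per tie point)
    """
    rows, cols, vals = [], [], []
    for r0, c0, block in blocks:
        br, bc = block.shape
        rr, cc = np.meshgrid(np.arange(br) + r0, np.arange(bc) + c0, indexing="ij")
        rows.append(rr.ravel())
        cols.append(cc.ravel())
        vals.append(block.ravel())
    # Sparse storage keeps the bordered block structure of the normal matrix tractable.
    N = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n_unknowns, n_unknowns)).tocsr()
    return spsolve(N, rhs)   # corrections to the EOPs and tie point coordinates
```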

It is at this stage of the algorithm that the surface generation technique should be implemented in future development. As discussed previously, this step was simplified by the constant surface assumption to simply taking the average tie point height. In preparation for the mosaicing process, the object space image boundaries were recalculated and updated using the new surface and the adjusted EOPs.

The final step in the algorithm was the mosaicing process. The algorithm allows the user to specify the extents of the generated mosaic, or to automatically calculate the full extents using the image boundaries. The mosaic was generated as a set of subset images, or tiles, each with a specified number of pixels, in order to reduce individual file sizes. The resolution of the output image can either be specified or automatically calculated as the minimum size of a projected groundel for the data set. As described, the source of each output pixel was selected by finding the image with the nearest nadir. The algorithm does not currently implement pixel interpolation in the resampling procedure. The mosaic tiles were generated along with an associated world file containing the image georeferencing parameters using the ESRI specification [World File, 2008]; a sketch of this small writer follows. In addition, a routine was run to automatically parse the world files of the directly projected or mosaic tile images and convert them to the KML format compatible with Google Earth [KML, 2009].
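The world file written for each tile is a six-line text file following the ESRI specification cited above. The sketch below is a minimal writer, assuming north-up tiles with square groundels (zero rotation terms); the file name and example coordinates are illustrative.

```python
def write_world_file(path, groundel_size, upper_left_easting, upper_left_northing):
    """Write an ESRI world file for a north-up mosaic tile.

    The six lines are: x pixel size, row rotation, column rotation,
    negative y pixel size, and the center coordinates of the upper-left pixel.
    """
    lines = [
        groundel_size,          # pixel size in the x direction (m)
        0.0,                    # rotation term (zero for north-up tiles)
        0.0,                    # rotation term (zero for north-up tiles)
        -groundel_size,         # pixel size in the y direction (negative: rows run south)
        upper_left_easting,     # x coordinate of the center of the upper-left pixel
        upper_left_northing,    # y coordinate of the center of the upper-left pixel
    ]
    with open(path, "w") as f:
        f.write("\n".join("%.6f" % v for v in lines) + "\n")

# Example: a 2.5 cm tile whose upper-left pixel center is at an illustrative UTM coordinate.
# write_world_file("tile_0_0.jgw", 0.025, 400333.125, 3312789.125)
```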

Figure 5-1. Example of the log-polar transformation; the origin of the transform is the center of the source image. Source: [Lena]

Figure 5-2. Comparison of IDW (left) and Kriging (right) interpolation techniques, demonstrating the bullseye effect of the IDW

Figure 5-3. Comparison of resampling techniques at 200% enlargement: original, none, linear, and cubic spline (a = -0.5). Source: [Lena]

Figure 5-4. Example of correct tie point matches between source (left) and target (right) images; matched template in green and search space in red

Figure 5-5. Example of an incorrect tie point match due to poor initial approximation of the image boundary; matched template in green and search space in red

CHAPTER 6
ASSESSMENT AND CONCLUSIONS

Assessment

The goal of the assessment was to evaluate the relative accuracy of the output mosaic (Figure 6-3) by measuring the image-to-image agreement of the locations of features within the data set. This assessment was limited by the fact that ground truthing values were not available. However, the assessment reflects both the actual operating conditions that the NOVA II platform is designed for and evaluates the output from the standpoint of its intended use. The calibrated focal length obtained in the camera calibration described in Chapter 4 was used, and the boresight and leverarm correction was estimated from the physical mounting. The lack of a refined calibration for the full interior orientation parameters and the boresight and leverarm parameters suggests that these results are conservative estimates of the overall accuracy of the system, and should be considered a rudimentary measure of the system's potential.

The data set used for this adjustment was selected randomly from a typical NOVA II mission. The average flying height above the terrain was about 160 m (commanded flying height of 150 m), and the average flight line spacing was approximately 72 m with a target airspeed of 23 m/s and a prevailing wind speed of approximately 3.5 m/s. The data was collected under clear sky conditions at approximately 11:00 AM on April 23, 2009, for a total flight duration of approximately 35 minutes. Twenty-seven images were selected from the upwind portion of the dipole flight pattern. The output mosaic was generated for an area of 40,000 sq. m at a resolution of 2.5 cm, centered within the coverage area. The algorithm described in the software implementation section of Chapter 5 was employed directly, with no manual intervention in the process except during the specified manual matched tie point filtering step. The data set was processed twice, the first time using a smaller set of tie points to generate a somewhat improved estimate of the EOPs so that the subsequent processing step had improved initial approximations.

For both bundle adjustments, the original EOPs provided by direct georeferencing were used as the initial approximations; only the tie point observations varied between adjustments.

A standard methodology was developed for evaluating the relative accuracy of the mosaic. For each image, the perimeter of the image was traversed and a matchable feature was selected approximately every 912 pixels (1/3 of the image height and 1/4 of the width). This evaluation was completed using both the adjusted and unadjusted EOPs. The error distance was measured to the same feature in each immediately adjacent image. This process was completed using individual image projections to evaluate the accuracy over the full extent of each image. It was expected that the error would be greater as distance from the nadir increased; because the mosaicing process compensates for this effect, it was of interest to compare the edge accuracy to the mosaic seam accuracy. To this end, the mosaic itself was analyzed by traversing each mosaic seam and selecting a matchable feature every 10 m. Because of errors in projection and terrain relief, some features along the seams were immeasurable due to occlusion, and these seam locations were excluded from the analysis.

The first iteration of the algorithm projected the images to a constant surface using the EOPs provided directly by direct georeferencing. The unadjusted edge error had a mean of 9.8 m and a standard deviation of 4.3 m. The tie point matching algorithm was performed on the twice-downsampled images and resulted in 112 successful tie point matches, 26 (19%) potential tie point matches that fell outside of the target image due to the initial approximation error, and 13 (9%) tie point matches which fell within the target image but were incorrectly matched.

The overall success rate of the tie point matching algorithm for the first iteration was 72%. The bundle adjustment was then completed using the first-iteration tie points.

The second iteration of the algorithm used the solved EOPs of the first bundle adjustment as the initial approximations for the tie point matching algorithm, operating on the once-downsampled images. This resulted in a total of 295 successful tie point matches. The number of potential tie points which fell outside of the target image was 28 (7%) and the number of incorrectly matched points was 42 (12%), for an overall success rate of 81%. The increased success rate was clearly due to better initial approximations of the search space and image extents. It should be noted that for both the first and second iterations, no tie point matches failed due to an insufficient search space; the manual selection of the search space based on a priori estimates of the navigation accuracy proved to be an effective method. The increase in the number of mismatched points is likely attributable to the move from twice- to once-downsampled images, which necessarily decreased repeatability, as discussed in Chapter 5. The distributions of tie points from the first and second iterations are compared in Figure 6-1.

The results of the second bundle adjustment are shown in Table 6-1. The edge accuracy of the adjusted images was found to have a mean error of 0.50 m with a standard deviation of 0.31 m, nearly an order of magnitude of improvement over naïve direct georeferencing. No significant correlation was found between the number of tie points in an image and the image edge accuracy. The mosaicing seam accuracy assessment gave a mean error of 0.37 m with a standard deviation of 0.21 m, an improvement over the edge accuracy, as was expected. The distribution of the mosaicing seam errors is shown in Figure 6-2.

Conclusions

The research and development presented in this thesis has demonstrated the feasibility, and perhaps even the practicality, of developing a true directly georeferenced remote sensing system on a small UAV platform. The most significant contribution of this thesis is the systematic development of a technique in which each element of the direct georeferencing process is based on an empirical model. The result is a system which produces mapping products with a predictable error budget based on the contribution of quantifiable errors from the individual elements of the technique. Given this advance, it is now possible to systematically review and improve individual components of the technique until the desired mapping accuracy is reached, within the physical limitations of the platform and sensing hardware.

In the development of an empirical approach to directly georeferenced remote sensing on small UAVs, a strong contrast has been drawn with alternative approaches. Surveying the literature on aerial mapping techniques for small UAVs, there has not been a shortage of attempts. However, this approach does not suffer the shortcomings of incorrect camera models [Taylor, 2008; Xiang, 2007], it does not rely on unrealistic image warping methods [Majumdar, 2004; Ilstrup, 2008], and it does not require preexisting imagery [Zhu, 2005]. The technique does not require ground control points or extrinsic scene models, neither of which is currently possible within the scope of the platform or mission profiles [Ladd, 2006; Grenzdörffer, 2008; Nagai, 2009; Eisenbeiss, 2006]. The synthesis of this technique overcame other limitations of directly georeferenced remote sensing implementation on a small, inexpensive UAV, as discussed in Hruska [2005]. Employing and integrating a suite of consumer off-the-shelf sensors proved to be a practical and cost-effective approach, requiring only a single custom synchronization component.

Interestingly, the Burredo proved to be the most accurate synchronization system among the published approaches in direct georeferencing, despite its low cost and simple design. Utilizing standard computer hardware and interfaces allowed for rapid development of a flexible architecture open to modification with additional sensors or data collection features. Finally, the processing algorithm was written with, and utilizes only, open-source tools, helping to ensure compatibility with future development.

Importantly, the synthesis of existing techniques also allowed for concise treatment of the most significant problems identified in previous direct georeferencing approaches [Skaloud, 1999]. Despite the greatly reduced accuracy and quality of the sensor suite on the NOVA II platform when compared to surveying-grade, manned-aircraft systems, many elements of existing direct georeferencing techniques had parallels on the UAV platform. This was particularly evident in the calibration of the payload, where the methods for camera and boresight/leverarm calibration proved feasible. To this end, a permanent installation for future calibration and accuracy analysis was established at the University of Florida Lightning Lab (Appendix B).

In terms of final mosaic accuracy, the processing algorithm is arguably the most important element of the new technique. The algorithm, although synthesized from existing methods, made use of a sensor fusion approach allowing for a robust and efficient processing method. In particular, the application of the bundle adjustment to the directly georeferenced observations served to increase the mosaicing accuracy by better than an order of magnitude. Despite this, the algorithm remains open to significant improvement, particularly in terms of algorithmic efficiency. The sheer amount of data collected in a single flight proved to be more than could be reasonably processed using the technique and algorithm on an operational basis.

The mosaic generated in the accuracy assessment, consisting of only 27 images, took approximately 6 hours to process on a modern laptop computer.

The resolution of the system is obviously of great importance to the overall goal of the NOVA II project. Figure 6-5 provides some examples of the quality of imagery obtained from the NOVA II platform. Groundel sizes of 2.5 cm or less are achievable on this platform while still maintaining sufficient overlap for 2D mosaicing. A baseline assessment for the quality of the imagery required for the operational scope of the NOVA II, discussed among those involved in the project, was the reliable identification of small targets. The reliable identification of various ecological targets proved to be well within the capability of the system, and future improvements to the optical sensors will augment this capability.

Recommendations

It is not entirely predictable whether the most significant gain in accuracy will be achieved by improving the sensor accuracy, the calibration quality, or the robustness of the processing algorithm. Initial emphasis should be placed on calibration quality, since it requires minimal resources for maximum return. A full calibration of both the camera and the boresight and leverarm at the Lightning Lab site should be completed and analyzed for improvements. With respect to the calibration procedure itself, the most feasible modification would be to integrate both the camera and boresight calibration into a single procedure, allowing for a more efficient calibration process in the future. Beyond this, a radically different approach to calibration would be to integrate the calibration procedure into the data collection process itself by employing a self-calibrating bundle adjustment on the data without the use of ground control points. Such an approach is unproven, but is certainly worthy of investigation. However, as discussed in Chapter 4, the effect of cross-parameter correlation in both of the proposed alternative procedures is not clear and should be evaluated.

Improvement of the sensor suite is feasible, but is strongly constrained by the limitations of the small UAV platform. Two areas of improvement that will have a significant impact on the accuracy are foreseeable in the near future. The first is improving the accuracy of the GPS system. In particular, moving to differential (ground-station based) and dual-frequency GPS should result in an order of magnitude improvement in the absolute position error of the navigation system [Misra et al., 1999]. Given that the GPS serves as the primary control in the algorithm's adjustment procedure, this is expected to have the largest impact on the final accuracy among sensor improvements. Currently, the primary deterrent to installation of geodetic-grade GPS on the NOVA II is the weight of the antenna, which is relatively bulky. The second improvement would be to install a single-beam EDM device on the aircraft. Such a device would depart from the passive sensor approach currently employed on the NOVA II platform and would add very useful data. This approach has been successfully employed on small UAV platforms for Z-ambiguity estimation and serves to add an empirical measurement of the actual image-to-object space correspondence [Zhu]. This addition would allow significantly more robust initial approximation procedures by providing a stronger estimate of the Z ambiguity, even with only a single ranging observation per image.

The processing algorithm certainly leaves room for improvement, as discussed throughout its development. First and foremost, the efficiency of the algorithm must be improved. This is largely possible through the development of parallelized algorithms for the tie point matching and mosaic generation procedures. Both are embarrassingly parallel problems, which allow for efficient distribution among an arbitrary number of processors and could be scaled to as large a system as needed; a sketch follows.
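Because the per-tie-point work is independent, it maps directly onto a process pool. The sketch below is illustrative; `match_one` is an assumed top-level function performing the matching for a single tie point, and the same pattern would apply to mosaic tile generation.

```python
from multiprocessing import Pool

def match_all(tie_points, match_one, processes=4):
    """Distribute tie point matching across worker processes.

    tie_points : list of per-tie-point work items (templates, predicted positions, ...)
    match_one  : top-level function performing the matching for a single tie point
    """
    pool = Pool(processes=processes)
    try:
        # Each tie point is matched independently, so the work divides evenly.
        return pool.map(match_one, tie_points)
    finally:
        pool.close()
        pool.join()
```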

In addition, the costly calculation of the bundle adjustment normal equations can be parallelized; although it performs adequately in the current version, it will quickly decrease in performance as the size of the processed data set increases, due to its $O(n^{2})$ complexity, i.e., the number of required computations increases as the square of the number of individual elements. The solution of the bundle adjustment is processed using optimized sparse, symmetric matrix solvers, which are widely available in open-source tools, and is not expected to be a significant problem.

At the cost of increased computational complexity, the tie point algorithm should be improved in two ways. First, the accuracy of the algorithm should be improved by employing a more robust sub-pixel least squares matching algorithm on the full-resolution imagery, perhaps even including radiometric information from multiple color bands for improved performance. A significant outcome of such a procedure would be a more robust estimate of the quality of the match, leading to the second improvement in the algorithm: by employing statistical measures of the match quality, deviation from the epipolar geometry, and locally modeled terrain variations, a robust outlier rejection algorithm such as RANSAC [Fischler, 1981] could be employed to eliminate the need for a human operator.

Although the goal of this directly georeferenced remote sensing technique is the production of two-dimensional mapping products, vertical relief error due to projection is transferred directly to the output mosaic. The algorithm development in this thesis and its application are greatly aided by the nearly constant terrain of South Florida; however, expanding the operational scope of the platform to other terrains necessitates improvements. The ability to produce a strong 3D scene reconstruction will largely depend on the densification of the tie points post-adjustment. Given a proper densification procedure and an appropriate DEM generation technique, a transition to true orthophoto production, and even the generation of color 3D models, should be achievable with moderate effort.

Table 6-1. Corrections to the exterior orientation parameters obtained by the bundle adjustment of the assessed data set

Image     Omega (deg)  Phi (deg)  Kappa (deg)  Easting (m)  Northing (m)  Height (m)
Image82   1.36         5.60       0.14         2.30         2.05          1.57
Image83   4.43         4.81       0.83         0.95         1.22          1.41
Image84   1.86         4.80       0.21         0.28         3.16          0.79
Image85   0.29         3.30       0.06         0.50         4.02          1.01
Image439  4.52         6.33       2.27         1.21         0.88          2.87
Image440  3.51         3.34       2.29         4.50         0.28          2.82
Image441  5.40         3.42       1.77         3.62         0.95          2.07
Image442  2.75         1.08       2.80         2.02         0.96          2.93
Image443  2.05         5.23       2.73         1.91         5.70          3.11
Image530  4.40         3.84       2.74         4.69         1.38          2.57
Image531  3.40         2.56       2.89         0.29         1.72          2.58
Image532  4.19         3.85       2.68         0.99         3.10          2.16
Image533  6.28         1.31       2.82         1.85         0.28          2.01
Image534  6.47         1.98       3.32         3.61         0.29          1.71
Image535  4.95         4.55       2.73         2.39         1.68          2.86
Image536  0.71         5.10       2.56         4.34         1.02          2.28
Image537  3.23         4.05       2.95         4.93         0.72          2.19
Image538  0.10         3.44       2.97         2.24         3.87          1.34
Image624  2.62         4.15       2.42         1.37         2.56          3.55
Image625  2.70         6.10       2.46         4.26         2.59          3.91
Image626  2.86         3.58       2.37         2.95         2.24          3.16
Image627  2.53         2.64       2.07         1.16         3.68          2.82
Image628  3.01         2.84       1.76         0.55         2.56          2.46
Image629  6.64         1.26       2.09         1.41         1.87          1.73
Image630  6.50         1.64       2.55         2.63         4.47          1.63
Image719  3.82         1.80       2.00         1.48         3.06          2.76
Image720  2.79         4.98       2.22         1.49         3.64          4.10
Image721  2.66         4.99       2.08         0.79         3.45          3.86

Figure 6-1. The distribution of tie points for the first (left) and second (right) iteration of the assessed data set

Figure 6-2. Distribution of mosaic seam error measurements and their values

Figure 6-3. Comparison of the mosaic (left) to the unadjusted direct projection of one of the composing images (right), showing the magnitude of the adjustment

Figure 6-4. Detail of typical mosaicing seam errors at the farthest distance from nadir (left) and due to vertical relief in the image (right)

Figure 6-5. Demonstration of detail, with observations of (A) a low-flying bird, (B) wading birds, (C) a basking alligator, and (D) swimming alligators

APPENDIX A
EXAMPLE NOVA II DATA

Each NOVA II flight automatically generates a folder labeled with the time and date. The folder contains two elements: the image files and a log file. The format of the image files is by default .jpg, although the image format is selectable. The log file is generated in real time and combines the information from all payload sensors (except imagery) into a single ASCII file. The log is formatted with each line corresponding to a data or status packet, prefixed by a three-letter code indicating the source of the packet. The packets are described in Table A-1.

The log file is parsed and the packets processed to produce two output files. The most critical file is the geotags file, which provides the direct georeferencing parameters associated with each image. The parameters are calculated by interpolation of the navigation packets using the Burredo synchronization packets associated with each image exposure; a sketch of this parsing and interpolation step follows Table A-1. A navigation file is also output, which simply extracts the trajectory information from the navigation packets.

Table A-1. Description of data packets generated by the NOVA II payload

#nav  Primary data packet from the MTi-G. The columns contain, in order: accelerometer X, Y, Z (m/s2); gyroscope X, Y, Z (deg/s); magnetometer X, Y, Z (normalized unit vector); roll, pitch, heading (deg); latitude, longitude, height (deg, deg, m); velocity X, Y, Z (m/s); status byte; sample counter.

#bur  Primary data packet from the Burredo. Contains columns: total clock ticks per epoch; tick count at time of event (0 if no event).

#cam  Camera status packet. The following status messages are available: connect (camera is detected and driver is successfully loaded); disconnect (camera disconnect detected); triggered (command to take exposure issued); captured (image has been successfully captured and transmitted to host).

#img  Occurs after a #cam captured packet, and indicates the received image's filename.

#bat  Camera battery voltage indicator; decays from 100 to 0 in four decrements.
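The sketch below illustrates how a geotag could be derived from the packet codes of Table A-1. The exact field positions, the sub-epoch fraction, and the angle handling are simplified assumptions rather than the pyGeomatics parser; in particular, angle wrap-around is ignored here.

```python
def parse_log(path):
    """Collect #nav samples and exposure events from a NOVA II style log file."""
    nav, events = [], []
    for line in open(path):
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "#nav":
            nav.append([float(v) for v in fields[1:]])   # one trajectory sample
        elif fields[0] == "#bur" and float(fields[2]) != 0.0:
            events.append(len(nav) - 1)                  # exposure during this epoch
    return nav, events

def interpolate_geotag(nav, event_index, frac):
    """Linearly interpolate the navigation solution at an exposure.

    event_index : index of the navigation sample preceding the exposure
    frac        : sub-epoch fraction derived from the Burredo tick counts
    """
    a, b = nav[event_index], nav[event_index + 1]
    return [x + frac * (y - x) for x, y in zip(a, b)]
```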

Figure A-1. Example snippet of the raw NOVA II log file

Figure A-2. Example snippet of the generated NOVA II geotags file

APPENDIX B
LIGHTNING LAB CALIBRATION SITE

A regularly spaced grid of ground control points was needed for the calibration and accuracy analysis of the NOVA II payload. A suitable site was selected at the University of Florida Lightning Research Lab in Starke, FL. A model runway used for lightning research was conveniently located adjacent to the selected site, allowing for routine takeoff and landing of the UAV.

The GCP grid consisted of raised aerial targets mounted on threaded iron rods driven into the ground and spaced approximately 20 meters apart. The aerial targets were constructed as shown in Figure B-1: rigid 2 ft by 2 ft boards with 18 in. diameter high-contrast black and white semicircles. In addition to the 32 aerial targets, five permanent reference monuments (iron rods) were driven into the ground so that the aerial targets could be conveniently resurveyed at a later date.

A precision geodetic survey of the five permanent reference monuments was conducted from nearby NGS control using Trimble 5800 GPS equipment; the closest NGS control monuments were KINGSLEY and KINGSLEY RESET. Once the permanent reference points were established, an RTK GPS survey of the 32 aerial targets was completed using Leica System 1200 GPS equipment. Due to post-survey data loss, two of the permanent reference monuments were excluded from the network adjustment because of insufficient GPS observations. The coordinates of the permanent reference monuments are given in Table B-1, and the adjusted WGS84 UTM Zone 17N coordinate values of the aerial targets are provided in Table B-2. The relative standard deviation of error for the aerial targets was 0.76 cm using a posteriori adjustment statistics, and the absolute accuracy was determined to be 2.02 cm.

Table B-1. Lightning Lab Calibration Site permanent reference monument WGS84 coordinates

Control  Latitude (DMS)     Longitude (DMS)    Height (m)
B        29°56'30.74647"    82°01'52.51608"    44.791
C        29°56'29.64313"    82°01'55.29755"    43.509
E        29°56'28.61991"    82°01'52.57836"    43.824

Table B-2. Lightning Lab Calibration Site aerial target WGS84 UTM Zone 17N coordinates

Point ID  Easting (m)  Northing (m)  Height (m)
1         400333.13    3312789.13    43.61
2         400350.91    3312789.28    43.63
3         400372.44    3312789.14    44.08
4         400392.53    3312788.79    44.09
5         400412.11    3312788.70    43.78
6         400431.80    3312788.62    44.21
7         400452.09    3312788.41    44.48
8         400471.77    3312788.33    44.98
9         400471.19    3312770.83    45.14
10        400452.19    3312770.65    44.83
11        400431.66    3312770.37    44.19
12        400412.17    3312770.26    43.65
13        400392.24    3312770.02    43.43
14        400372.14    3312769.77    43.27
15        400350.86    3312769.57    43.11
16        400332.47    3312769.29    43.23
17        400331.88    3312749.58    42.93
18        400350.77    3312749.71    42.85
19        400371.77    3312750.19    43.29
20        400391.87    3312750.65    43.69
21        400412.12    3312751.06    44.10
22        400431.41    3312751.59    44.73
23        400452.41    3312751.98    45.04
24        400471.36    3312752.31    44.56
25        400471.47    3312729.85    44.03
26        400451.50    3312729.73    44.05
27        400431.17    3312729.63    43.96
28        400412.13    3312729.66    43.97
29        400391.50    3312729.59    43.79
30        400371.40    3312729.46    43.68
31        400350.71    3312729.51    43.22
32        400331.30    3312729.45    43.15

Figure B-1. Physical design of the Lightning Lab ground control point aerial targets

Figure B-2. Aerial view of the Lightning Lab Calibration Site targets indicating the target distribution

Figure B-3. Aerial view of the Lightning Lab Calibration Site targets captured by the NOVA II, with a zoomed inset of an individual target

Figure B-4. NGS control monument KINGSLEY (left) and GPS setup (right)

APPENDIX C
BUNDLE ADJUSTMENT DEVELOPMENT

The derivation of the bundle adjustment is given here for convenience. The development is then extended to include the interior orientation parameters discussed in Chapter 2. In order to optimally solve the collinearity equations using least squares, they must be linearized. The nonlinearity of the collinearity equations is evident first in their rational form, but the complexity of their linearization is due more to the trigonometric terms in the DCM representation of $\mathbf{M}$. Note that the linearization is parameterized with respect to the Euler angle representation of $\mathbf{M}$, which allows the orthonormality constraints of a valid rotation matrix to be implicitly enforced. The linearized observation equations, weight matrices, and the direct construction of the normal matrices are developed in Equations C-1 through C-11, with a brief description of each, and reflect the development in Wolf and Dewitt [2000]. Rearranging Equation 2-9 gives Equation C-1.

$$x = F = -f\,\frac{r}{q}, \qquad y = G = -f\,\frac{s}{q}$$
$$q = m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)$$
$$r = m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)$$
$$s = m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)$$
$$\mathbf{M} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \qquad (\text{C-1})$$

First-order Taylor series linearization with respect to the image and scene geometry parameters results in Equation C-2.

$$F_0 + \frac{\partial F}{\partial \omega}d\omega + \frac{\partial F}{\partial \varphi}d\varphi + \frac{\partial F}{\partial \kappa}d\kappa + \frac{\partial F}{\partial X_L}dX_L + \frac{\partial F}{\partial Y_L}dY_L + \frac{\partial F}{\partial Z_L}dZ_L + \frac{\partial F}{\partial X}dX + \frac{\partial F}{\partial Y}dY + \frac{\partial F}{\partial Z}dZ = x + v_x$$
$$G_0 + \frac{\partial G}{\partial \omega}d\omega + \frac{\partial G}{\partial \varphi}d\varphi + \frac{\partial G}{\partial \kappa}d\kappa + \frac{\partial G}{\partial X_L}dX_L + \frac{\partial G}{\partial Y_L}dY_L + \frac{\partial G}{\partial Z_L}dZ_L + \frac{\partial G}{\partial X}dX + \frac{\partial G}{\partial Y}dY + \frac{\partial G}{\partial Z}dZ = y + v_y \qquad (\text{C-2})$$

Expanding and simplifying Equation C-2 gives the image observation equations, Equation C-3.

$$b_{11}\,d\omega + b_{12}\,d\varphi + b_{13}\,d\kappa - b_{14}\,dX_L - b_{15}\,dY_L - b_{16}\,dZ_L + b_{14}\,dX + b_{15}\,dY + b_{16}\,dZ = J + v_x$$
$$b_{21}\,d\omega + b_{22}\,d\varphi + b_{23}\,d\kappa - b_{24}\,dX_L - b_{25}\,dY_L - b_{26}\,dZ_L + b_{24}\,dX + b_{25}\,dY + b_{26}\,dZ = K + v_y$$

where, with $\Delta X = X - X_L$, $\Delta Y = Y - Y_L$, and $\Delta Z = Z - Z_L$,

$$b_{11} = \frac{f}{q^{2}}\left[r\,(-m_{33}\,\Delta Y + m_{32}\,\Delta Z) - q\,(-m_{13}\,\Delta Y + m_{12}\,\Delta Z)\right]$$
$$b_{12} = \frac{f}{q^{2}}\left[r\,(\cos\varphi\,\Delta X + \sin\omega\sin\varphi\,\Delta Y - \cos\omega\sin\varphi\,\Delta Z) - q\,(-\sin\varphi\cos\kappa\,\Delta X + \sin\omega\cos\varphi\cos\kappa\,\Delta Y - \cos\omega\cos\varphi\cos\kappa\,\Delta Z)\right]$$
$$b_{13} = -\frac{f}{q}\,(m_{21}\,\Delta X + m_{22}\,\Delta Y + m_{23}\,\Delta Z)$$
$$b_{14} = \frac{f}{q^{2}}\,(r\,m_{31} - q\,m_{11}), \qquad b_{15} = \frac{f}{q^{2}}\,(r\,m_{32} - q\,m_{12}), \qquad b_{16} = \frac{f}{q^{2}}\,(r\,m_{33} - q\,m_{13})$$
$$J = x + f\,\frac{r}{q}$$
$$b_{21} = \frac{f}{q^{2}}\left[s\,(-m_{33}\,\Delta Y + m_{32}\,\Delta Z) - q\,(-m_{23}\,\Delta Y + m_{22}\,\Delta Z)\right]$$
$$b_{22} = \frac{f}{q^{2}}\left[s\,(\cos\varphi\,\Delta X + \sin\omega\sin\varphi\,\Delta Y - \cos\omega\sin\varphi\,\Delta Z) - q\,(\sin\varphi\sin\kappa\,\Delta X - \sin\omega\cos\varphi\sin\kappa\,\Delta Y + \cos\omega\cos\varphi\sin\kappa\,\Delta Z)\right]$$
$$b_{23} = \frac{f}{q}\,(m_{11}\,\Delta X + m_{12}\,\Delta Y + m_{13}\,\Delta Z)$$
$$b_{24} = \frac{f}{q^{2}}\,(s\,m_{31} - q\,m_{21}), \qquad b_{25} = \frac{f}{q^{2}}\,(s\,m_{32} - q\,m_{22}), \qquad b_{26} = \frac{f}{q^{2}}\,(s\,m_{33} - q\,m_{23})$$
$$K = y + f\,\frac{s}{q} \qquad (\text{C-3})$$

The normal equation is given by Equation C-4.

$$\mathbf{N}\,\hat{\Delta} = \mathbf{K} \qquad (\text{C-4})$$

The unknown state vector is given by Equation C-5.

$$\hat{\Delta} = \begin{bmatrix} \dot{\Delta}_1^{T} & \dot{\Delta}_2^{T} & \cdots & \dot{\Delta}_m^{T} & \ddot{\Delta}_1^{T} & \ddot{\Delta}_2^{T} & \cdots & \ddot{\Delta}_n^{T} \end{bmatrix}^{T}$$
$$\dot{\Delta}_i = \begin{bmatrix} d\omega_i & d\varphi_i & d\kappa_i & dX_{L_i} & dY_{L_i} & dZ_{L_i} \end{bmatrix}^{T}, \qquad \ddot{\Delta}_j = \begin{bmatrix} dX_j & dY_j & dZ_j \end{bmatrix}^{T} \qquad (\text{C-5})$$

The matrix form of Equation C-3 is given by Equation C-6.

$$\dot{\mathbf{B}}_{ij}\,\dot{\Delta}_i + \ddot{\mathbf{B}}_{ij}\,\ddot{\Delta}_j = \boldsymbol{\varepsilon}_{ij} + \mathbf{V}_{ij}$$
$$\dot{\mathbf{B}}_{ij} = \begin{bmatrix} b_{11_{ij}} & b_{12_{ij}} & b_{13_{ij}} & -b_{14_{ij}} & -b_{15_{ij}} & -b_{16_{ij}} \\ b_{21_{ij}} & b_{22_{ij}} & b_{23_{ij}} & -b_{24_{ij}} & -b_{25_{ij}} & -b_{26_{ij}} \end{bmatrix}, \qquad \ddot{\mathbf{B}}_{ij} = \begin{bmatrix} b_{14_{ij}} & b_{15_{ij}} & b_{16_{ij}} \\ b_{24_{ij}} & b_{25_{ij}} & b_{26_{ij}} \end{bmatrix}$$
$$\boldsymbol{\varepsilon}_{ij} = \begin{bmatrix} J_{ij} \\ K_{ij} \end{bmatrix}, \qquad \mathbf{V}_{ij} = \begin{bmatrix} v_{x_{ij}} \\ v_{y_{ij}} \end{bmatrix} \qquad (\text{C-6})$$

Noting that the superscript 0 indicates a calculated value and the superscript 00 indicates a measurement, the direct georeferencing observations of the exterior orientation parameters are given in linearized form by Equation C-7.

$$\omega^{0} + d\omega = \omega^{00} + v_\omega, \qquad \varphi^{0} + d\varphi = \varphi^{00} + v_\varphi, \qquad \kappa^{0} + d\kappa = \kappa^{00} + v_\kappa$$

$$X_L^{0} + dX_L = X_L^{00} + v_{X_L}, \qquad Y_L^{0} + dY_L = Y_L^{00} + v_{Y_L}, \qquad Z_L^{0} + dZ_L = Z_L^{00} + v_{Z_L}$$
$$\dot{\Delta}_i = \dot{\mathbf{C}}_i + \dot{\mathbf{V}}_i$$
$$\dot{\mathbf{C}}_i = \begin{bmatrix} \omega^{00} - \omega^{0} & \varphi^{00} - \varphi^{0} & \kappa^{00} - \kappa^{0} & X_L^{00} - X_L^{0} & Y_L^{00} - Y_L^{0} & Z_L^{00} - Z_L^{0} \end{bmatrix}^{T}$$
$$\dot{\mathbf{V}}_i = \begin{bmatrix} v_\omega & v_\varphi & v_\kappa & v_{X_L} & v_{Y_L} & v_{Z_L} \end{bmatrix}^{T} \qquad (\text{C-7})$$

Similarly, observations for ground control points, if used, are given by Equation C-8.

$$X^{0} + dX_j = X^{00} + v_X, \qquad Y^{0} + dY_j = Y^{00} + v_Y, \qquad Z^{0} + dZ_j = Z^{00} + v_Z$$
$$\ddot{\Delta}_j = \ddot{\mathbf{C}}_j + \ddot{\mathbf{V}}_j$$
$$\ddot{\mathbf{C}}_j = \begin{bmatrix} X^{00} - X^{0} \\ Y^{00} - Y^{0} \\ Z^{00} - Z^{0} \end{bmatrix}, \qquad \ddot{\mathbf{V}}_j = \begin{bmatrix} v_X \\ v_Y \\ v_Z \end{bmatrix} \qquad (\text{C-8})$$

For all observations, appropriate weight matrices must be used. Note that all weight matrices are scaled by the reference variance $\sigma_0^{2}$, usually chosen as 1.0, and are given by Equation C-9.

$$\mathbf{W}_{ij} = \sigma_0^{2} \begin{bmatrix} \sigma_x^{2} & \\ & \sigma_y^{2} \end{bmatrix}^{-1}, \qquad
\dot{\mathbf{W}}_i = \sigma_0^{2}\,\mathrm{diag}\!\left(\sigma_\omega^{2},\ \sigma_\varphi^{2},\ \sigma_\kappa^{2},\ \sigma_{X_L}^{2},\ \sigma_{Y_L}^{2},\ \sigma_{Z_L}^{2}\right)^{-1}$$

$$\ddot{\mathbf{W}}_j = \sigma_0^{2}\,\mathrm{diag}\!\left(\sigma_X^{2},\ \sigma_Y^{2},\ \sigma_Z^{2}\right)^{-1} \qquad (\text{C-9})$$

Using Equations C-6 through C-9, the normal equations can be composed directly by Equations C-10 and C-11.

$$\mathbf{N} = \begin{bmatrix}
\dot{\mathbf{N}}_1 + \dot{\mathbf{W}}_1 & \mathbf{0} & \cdots & \mathbf{0} & \bar{\mathbf{N}}_{11} & \bar{\mathbf{N}}_{12} & \cdots & \bar{\mathbf{N}}_{1n} \\
\mathbf{0} & \dot{\mathbf{N}}_2 + \dot{\mathbf{W}}_2 & \cdots & \mathbf{0} & \bar{\mathbf{N}}_{21} & \bar{\mathbf{N}}_{22} & \cdots & \bar{\mathbf{N}}_{2n} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
\mathbf{0} & \mathbf{0} & \cdots & \dot{\mathbf{N}}_m + \dot{\mathbf{W}}_m & \bar{\mathbf{N}}_{m1} & \bar{\mathbf{N}}_{m2} & \cdots & \bar{\mathbf{N}}_{mn} \\
\bar{\mathbf{N}}_{11}^{T} & \bar{\mathbf{N}}_{21}^{T} & \cdots & \bar{\mathbf{N}}_{m1}^{T} & \ddot{\mathbf{N}}_1 + \ddot{\mathbf{W}}_1 & \mathbf{0} & \cdots & \mathbf{0} \\
\bar{\mathbf{N}}_{12}^{T} & \bar{\mathbf{N}}_{22}^{T} & \cdots & \bar{\mathbf{N}}_{m2}^{T} & \mathbf{0} & \ddot{\mathbf{N}}_2 + \ddot{\mathbf{W}}_2 & \cdots & \mathbf{0} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\
\bar{\mathbf{N}}_{1n}^{T} & \bar{\mathbf{N}}_{2n}^{T} & \cdots & \bar{\mathbf{N}}_{mn}^{T} & \mathbf{0} & \mathbf{0} & \cdots & \ddot{\mathbf{N}}_n + \ddot{\mathbf{W}}_n
\end{bmatrix}$$

$$\dot{\mathbf{N}}_i = \sum_{j=1}^{n} \dot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\dot{\mathbf{B}}_{ij}, \qquad
\bar{\mathbf{N}}_{ij} = \dot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\ddot{\mathbf{B}}_{ij}, \qquad
\ddot{\mathbf{N}}_j = \sum_{i=1}^{m} \ddot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\ddot{\mathbf{B}}_{ij} \qquad (\text{C-10})$$

$$\mathbf{K} = \begin{bmatrix}
\dot{\mathbf{K}}_1 + \dot{\mathbf{W}}_1\dot{\mathbf{C}}_1 \\
\dot{\mathbf{K}}_2 + \dot{\mathbf{W}}_2\dot{\mathbf{C}}_2 \\
\vdots \\
\dot{\mathbf{K}}_m + \dot{\mathbf{W}}_m\dot{\mathbf{C}}_m \\
\ddot{\mathbf{K}}_1 + \ddot{\mathbf{W}}_1\ddot{\mathbf{C}}_1 \\
\ddot{\mathbf{K}}_2 + \ddot{\mathbf{W}}_2\ddot{\mathbf{C}}_2 \\
\vdots \\
\ddot{\mathbf{K}}_n + \ddot{\mathbf{W}}_n\ddot{\mathbf{C}}_n
\end{bmatrix}$$

$$\dot{\mathbf{K}}_i = \sum_{j=1}^{n} \dot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\boldsymbol{\varepsilon}_{ij}, \qquad \ddot{\mathbf{K}}_j = \sum_{i=1}^{m} \ddot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\boldsymbol{\varepsilon}_{ij} \qquad (\text{C-11})$$

Recalling that the interior orientation parameters can be specified in the collinearity equations, it is obvious that they can be included in the simultaneous bundle adjustment solution. The assumption is made for this development that all data is collected using the same camera at approximately the same time, so that the interior orientation parameters are constant over all images for a particular bundle adjustment. Thus, we proceed by linearizing the additional interior orientation parameters from Equation 2-14.

$$x = F = x_0 - f\,\frac{r}{q} + \Delta x, \qquad y = G = y_0 - f\,\frac{s}{q} + \Delta y \qquad (\text{C-12})$$

$$F_0 + \frac{\partial F}{\partial x_0}dx_0 + \frac{\partial F}{\partial y_0}dy_0 + \frac{\partial F}{\partial f}df + \frac{\partial F}{\partial k_1}dk_1 + \frac{\partial F}{\partial k_2}dk_2 + \frac{\partial F}{\partial k_3}dk_3 + \frac{\partial F}{\partial p_1}dp_1 + \frac{\partial F}{\partial p_2}dp_2 + \frac{\partial F}{\partial b_1}db_1 + \frac{\partial F}{\partial b_2}db_2 + \frac{\partial F}{\partial \omega}d\omega + \cdots + \frac{\partial F}{\partial Z}dZ = x + v_x$$
$$G_0 + \frac{\partial G}{\partial x_0}dx_0 + \frac{\partial G}{\partial y_0}dy_0 + \frac{\partial G}{\partial f}df + \frac{\partial G}{\partial k_1}dk_1 + \cdots + \frac{\partial G}{\partial p_2}dp_2 + \frac{\partial G}{\partial \omega}d\omega + \cdots + \frac{\partial G}{\partial Z}dZ = y + v_y \qquad (\text{C-13})$$

Equation C-13 is simplified in Equation C-14, with the $b$ terms identical to their definitions in Equation C-3, and the matrix formulation is given in Equation C-15.

$$c_{11}\,dx_0 + c_{12}\,dy_0 + c_{13}\,df + c_{14}\,dk_1 + c_{15}\,dk_2 + c_{16}\,dk_3 + c_{17}\,dp_1 + c_{18}\,dp_2 + c_{19}\,db_1 + c_{1\,10}\,db_2 + b_{11}\,d\omega + b_{12}\,d\varphi + b_{13}\,d\kappa - b_{14}\,dX_L - b_{15}\,dY_L - b_{16}\,dZ_L + b_{14}\,dX + b_{15}\,dY + b_{16}\,dZ = J + v_x$$
$$c_{21}\,dx_0 + c_{22}\,dy_0 + c_{23}\,df + c_{24}\,dk_1 + c_{25}\,dk_2 + c_{26}\,dk_3 + c_{27}\,dp_1 + c_{28}\,dp_2 + b_{21}\,d\omega + b_{22}\,d\varphi + b_{23}\,d\kappa - b_{24}\,dX_L - b_{25}\,dY_L - b_{26}\,dZ_L + b_{24}\,dX + b_{25}\,dY + b_{26}\,dZ = K + v_y \qquad (\text{C-14})$$

$$\hat{\mathbf{B}}_{ij}\,\hat{\delta} + \dot{\mathbf{B}}_{ij}\,\dot{\Delta}_i + \ddot{\mathbf{B}}_{ij}\,\ddot{\Delta}_j = \boldsymbol{\varepsilon}_{ij} + \mathbf{V}_{ij}$$
$$\hat{\mathbf{B}}_{ij} = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} & c_{17} & c_{18} & c_{19} & c_{1\,10} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} & c_{27} & c_{28} & 0 & 0 \end{bmatrix}, \qquad
\hat{\delta} = \begin{bmatrix} dx_0 & dy_0 & df & dk_1 & dk_2 & dk_3 & dp_1 & dp_2 & db_1 & db_2 \end{bmatrix}^{T}$$

where, with $\bar{x} = x - x_0$, $\bar{y} = y - y_0$, and $r^{2} = \bar{x}^{2} + \bar{y}^{2}$, the distortion-related coefficients are

$$c_{14} = \bar{x}\,r^{2}, \quad c_{15} = \bar{x}\,r^{4}, \quad c_{16} = \bar{x}\,r^{6}, \quad c_{17} = r^{2} + 2\bar{x}^{2}, \quad c_{18} = 2\bar{x}\bar{y}, \quad c_{19} = \bar{x}, \quad c_{1\,10} = \bar{y}$$
$$c_{24} = \bar{y}\,r^{2}, \quad c_{25} = \bar{y}\,r^{4}, \quad c_{26} = \bar{y}\,r^{6}, \quad c_{27} = 2\bar{x}\bar{y}, \quad c_{28} = r^{2} + 2\bar{y}^{2} \qquad (\text{C-15})$$

In the case that one or more interior orientation parameters are known a priori, we must take into account direct observations of the interior orientation parameters, which is developed directly in matrix form by Equation C-16.

$$\hat{\delta} = \hat{\mathbf{C}} + \hat{\mathbf{V}}$$
$$\hat{\mathbf{C}} = \begin{bmatrix} x_0^{00} - x_0^{0} & y_0^{00} - y_0^{0} & f^{00} - f^{0} & k_1^{00} - k_1^{0} & k_2^{00} - k_2^{0} & k_3^{00} - k_3^{0} & p_1^{00} - p_1^{0} & p_2^{00} - p_2^{0} & b_1^{00} - b_1^{0} & b_2^{00} - b_2^{0} \end{bmatrix}^{T}$$
$$\hat{\mathbf{V}} = \begin{bmatrix} v_{x_0} & v_{y_0} & v_f & v_{k_1} & v_{k_2} & v_{k_3} & v_{p_1} & v_{p_2} & v_{b_1} & v_{b_2} \end{bmatrix}^{T}, \qquad
\hat{\mathbf{W}} = \sigma_0^{2}\,\hat{\boldsymbol{\Sigma}}^{-1} \qquad (\text{C-16})$$

The terms of the self-calibrating bundle adjustment normal equations can be composed directly by Equations C-17 and C-18. The normal matrix retains the structure of Equation C-10, bordered by an additional row and column of blocks for the interior orientation parameters:

$$\mathbf{N} = \begin{bmatrix}
\hat{\mathbf{N}} + \hat{\mathbf{W}} & \hat{\mathbf{N}}_{1} & \cdots & \hat{\mathbf{N}}_{m} & \bar{\hat{\mathbf{N}}}_{1} & \cdots & \bar{\hat{\mathbf{N}}}_{n} \\
\hat{\mathbf{N}}_{1}^{T} & \dot{\mathbf{N}}_1 + \dot{\mathbf{W}}_1 & \cdots & \mathbf{0} & \bar{\mathbf{N}}_{11} & \cdots & \bar{\mathbf{N}}_{1n} \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
\hat{\mathbf{N}}_{m}^{T} & \mathbf{0} & \cdots & \dot{\mathbf{N}}_m + \dot{\mathbf{W}}_m & \bar{\mathbf{N}}_{m1} & \cdots & \bar{\mathbf{N}}_{mn} \\
\bar{\hat{\mathbf{N}}}_{1}^{T} & \bar{\mathbf{N}}_{11}^{T} & \cdots & \bar{\mathbf{N}}_{m1}^{T} & \ddot{\mathbf{N}}_1 + \ddot{\mathbf{W}}_1 & \cdots & \mathbf{0} \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
\bar{\hat{\mathbf{N}}}_{n}^{T} & \bar{\mathbf{N}}_{1n}^{T} & \cdots & \bar{\mathbf{N}}_{mn}^{T} & \mathbf{0} & \cdots & \ddot{\mathbf{N}}_n + \ddot{\mathbf{W}}_n
\end{bmatrix}$$

$$\hat{\mathbf{N}} = \sum_{i=1}^{m}\sum_{j=1}^{n} \hat{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\hat{\mathbf{B}}_{ij}, \qquad
\hat{\mathbf{N}}_{i} = \sum_{j=1}^{n} \hat{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\dot{\mathbf{B}}_{ij}, \qquad
\bar{\hat{\mathbf{N}}}_{j} = \sum_{i=1}^{m} \hat{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\ddot{\mathbf{B}}_{ij} \qquad (\text{C-17})$$

$$\mathbf{K} = \begin{bmatrix}
\hat{\mathbf{K}} + \hat{\mathbf{W}}\hat{\mathbf{C}} \\
\dot{\mathbf{K}}_1 + \dot{\mathbf{W}}_1\dot{\mathbf{C}}_1 \\
\vdots \\
\dot{\mathbf{K}}_m + \dot{\mathbf{W}}_m\dot{\mathbf{C}}_m \\
\ddot{\mathbf{K}}_1 + \ddot{\mathbf{W}}_1\ddot{\mathbf{C}}_1 \\
\vdots \\
\ddot{\mathbf{K}}_n + \ddot{\mathbf{W}}_n\ddot{\mathbf{C}}_n
\end{bmatrix}, \qquad
\hat{\mathbf{K}} = \sum_{i=1}^{m}\sum_{j=1}^{n} \hat{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\boldsymbol{\varepsilon}_{ij}, \qquad
\dot{\mathbf{K}}_i = \sum_{j=1}^{n} \dot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\boldsymbol{\varepsilon}_{ij}, \qquad
\ddot{\mathbf{K}}_j = \sum_{i=1}^{m} \ddot{\mathbf{B}}_{ij}^{T}\,\mathbf{W}_{ij}\,\boldsymbol{\varepsilon}_{ij} \qquad (\text{C-18})$$

LIST OF REFERENCES

Acutime, 2007. Trimble AcuTime Gold GPS smart antenna user guide, Trimble Incorporated, 128 p.

Bendat, J. and A. Piersol, 2000. Random Data Analysis and Measurement Procedures, Wiley-Interscience, New York, NY, 594 p.

Bowman, S., 2008. Design and validation of an autonomous rapid mapping system using a small UAV, Master's Thesis, University of Florida, Gainesville, FL, 115 p.

Bowman, W.S., A.C. Watts, J.H. Perry, M. Morton, and P.G. Ifju, 2008. UAS as monitoring tools for assessing the Nation's levees and monitoring invasive plant populations, Presentation slides, Workshop held at the US Army Corps of Engineers Research and Development Center, March 19, Vicksburg, MS.

Brown, D.C., 1971. Close-range camera calibration, Photogrammetric Engineering, 37(8):855-866.

Burgess, M.A., P.G. Ifju, H.F. Percival, J.H. Perry, and S.E. Smith, 2009. Development of a survey-grade sUAS for natural resource assessment and monitoring, Florida Cooperative Fish and Wildlife Research Unit Annual Coordinating Committee Meeting, Slide presentation, May 23, Gainesville, FL.

Capodiferro, L., R. Cusani, G. Jacovitti, and M. Vascotto, 1987. A correlation-based technique for shift, scale, and rotation independent object identification, Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP '87, 12:221-224.

Chao, H., M. Baumann, A. Jensen, Y.Q. Chen, Y. Cao, W. Ren, and M. McKee, 2008. Band-reconfigurable multi-UAV-based cooperative remote sensing for real-time water management and distributed irrigation control, Proceedings of the 17th World Congress of the International Federation of Automatic Control, July 6-11, Seoul, Korea, 6 p.

Chatfield, A., 1997. Fundamentals of High Accuracy Inertial Navigation, American Institute of Aeronautics and Astronautics, Reston, VA, 339 p.

Cox, T., C. Nagy, A. Mark, M. Skoog, and I. Somers, 2004. Civil UAV Capability Assessment, NASA Draft Report, 103 p.

Cramer, M., 1999. Direct geocoding: is aerial triangulation obsolete?, Photogrammetric Week '99, Heidelberg, Germany, pp. 59-70.

Cramer, M. and D. Stallmann, 2002. System calibration for direct georeferencing, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(3A):79-84.

Cramer, M., D. Stallmann, and N. Haala, 2000. Direct georeferencing using GPS/inertial exterior orientations for photogrammetric applications, Proceedings of the ISPRS XIX Congress, Amsterdam, Netherlands, pp. 198-205.

Cressie, N., 1985. Fitting variogram models by weighted least squares, Mathematical Geology, 17(5):563-586.

Cressie, N., 1993. Statistics for Spatial Data, Wiley-Interscience, Hoboken, NJ, 928 p.

Ebner, H., 1976. Self-calibrating block adjustment, Congress of the International Society for Photogrammetry, Invited Paper of Commission III, Helsinki, Finland, 11 p.

Eisenbeiss, H. and L. Zhang, 2006. Comparison of DSMs generated from mini UAV imagery and terrestrial laser scanner in a cultural heritage application, ISPRS Commission V: Image Engineering and Vision Metrology Symposium, Sept. 25-27, Dresden, Germany, 36(5):90-96.

Eisenbeiss, H., 2004. A mini unmanned aerial vehicle (UAV): system overview and image acquisition, International Workshop on Processing and Visualization Using High-Resolution Imagery, Nov. 18-20, Pitsanulok, Thailand, 7 p.

Evenden, G. PROJ.4 Cartographic Projections Library, RemoteSensing.Org, retrieved June 2009: http://www.remotesensing.org/proj/

Finlayson, G.D., B. Schiele, and J. Crowley, 1998. Comprehensive colour image normalization, Computer Vision - ECCV '98, Springer, Berlin, pp. 475-490.

Fischler, M.A. and R.C. Bolles, 1981. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the Association for Computing Machinery, 24(6):381-395.

Fraser, C., 1997. Digital camera self-calibration, Journal of Photogrammetric Engineering and Remote Sensing, 52(3):149-159.

Geus, R. and P. Arbenz, 2003. PySparse and PyFemax: A Python framework for large scale sparse linear algebra, PyCon '03, March 13, Washington, DC, 10 p.

Greening, W.J., W. Schickler, and A. Thorpe, 2000. The proper use of directly observed orientation data: Aerial triangulation is not obsolete, ASPRS Annual Conference, May 22-26, Washington, DC, 8 p.

Grenzdörffer, G.J., A. Engel, and B. Teichert, 2008. The photogrammetric potential of low-cost UAVs in forestry and agriculture, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 31(B3):1207-1214.

Grenzdörffer, G.J., 2004. Digital low-cost remote sensing with PFIFF, the integrated digital remote sensing system, ISPRS Annual Conference, July 12-23, Istanbul, Turkey, pp. 235-239.

Greening, T., W. Schickler, and A. Thorpe, 2000. The proper use of directly observed orientation data: Aerial triangulation is not obsolete, ASPRS Annual Conference, May 20-24, Washington, DC, 12 p.

Golub, G. and C. Van Loan, 1996. Matrix Computations, 3rd Edition, Johns Hopkins University Press, Baltimore, MD, 694 p.

Gruen, A.W., 1985. Adaptive least squares correlation: A powerful image matching technique, South African Journal of Photogrammetry, Remote Sensing and Cartography, 14(3):175-187.

Habib, A., A. Pullivelli, E. Mitishita, M. Ghanma, and E. Kim, 2005. Stability analysis of low-cost digital cameras for aerial mapping using different georeferencing techniques, The Photogrammetric Record, 21(113):29-43.

Harris, C. and M. Stephens, 1988. A combined corner and edge detector, Proceedings of the Fourth Alvey Vision Conference, Manchester, U.K., pp. 147-151.

Hartley, R. and J. Mundy, 1994. The relationship between photogrammetry and computer vision, SPIE Proceedings, 1944:92-105.

Heipke, C., K. Jacobsen, and H. Wegmann (Eds.), 2002. Integrated Sensor Orientation: Test Report and Workshop Proceedings, OEEPE Official Publication 43, 297 p.

Honkavaara, E., R. Ilves, and J. Jaakkola, 2003. Practical results of GPS/IMU/camera system calibration, ISPRS International Workshop, Working Group I/5: Theory, Technology and Realities of Inertial/GPS Sensor Orientation, Sept. 22-23, Castelldefels, Spain, 10 p.

Horn, B.K., 1999. Projective geometry considered harmful, Manuscript, retrieved June 2009: http://www.ai.mit.edu/people/bkph/

Hruska, R., G. Lancaster, J. Harbour, and S. Cherry, 2005. Small UAV-acquired, high-resolution, georeferenced still imagery, Wildlife Society 12th Annual Conference, Sept. 25-29, Madison, WI, 13 p.

Ilstrup, D., M. Lizarraga, G. Elkaim, and J. Davis, 2008. Aerial photography using a Nokia N95, World Congress on Engineering and Computer Science, Oct. 22-24, San Francisco, CA, 7 p.

Jensen, A., M. Baumann, and Y. Chen, 2008. Low-cost multispectral aerial imaging using autonomous runway-free small flying wing vehicles, IEEE International Geoscience and Remote Sensing Symposium, July 7-11, Boston, MA, 4 p.

Jones, G.P., L.G. Pearlstine, and H.F. Percival, 2006. An assessment of small unmanned aerial vehicles for wildlife research, Wildlife Society Bulletin, 34(3):750-758.

Jones, P., 2003. The Feasibility of Using Small Unmanned Aerial Vehicles for Wildlife Research, Master's Thesis, University of Florida.

Jung, I. and S. Lacroix, 2003. High resolution terrain mapping using low altitude aerial stereo imagery, Proceedings of the Ninth IEEE International Conference on Computer Vision, Oct. 14-17, Nice, France, 6 p.

Khoshelham, K., 2009. Role of tie points in integrated sensor orientation for photogrammetric map compilation, Photogrammetric Engineering and Remote Sensing, 75(3):305-311.

Kraus, K., 2007. Photogrammetry, 2nd Edition, de Gruyter, New York, 459 p.

KML, 2009. KML Documentation Website, Google, Inc., retrieved June 2009: http://code.google.com/apis/kml/documentation/

Kuipers, J., 1999. Quaternions and Rotation Sequences, Princeton University Press, Princeton, NJ, 371 p.

Ladd, G., A. Nagchaudhuro, T. Earl, and M. Mitra, 2006. Rectification, georeferencing, and mosaicing of images acquired with remotely operated aerial platforms, ASPRS Annual Conference, May 1-5, Reno, NV, 10 p.

Lena. Lena, 512x512 pixels, USC-SIPI Image Database: http://sipi.usc.edu/database/

Legat, K., 2006. Approximate direct georeferencing in national coordinates, Journal of Photogrammetry and Remote Sensing, 60(4):239-255.

Li, B., C. Rizos, H.K. Lee, and H.K. Lee, 2006. A GPS-slaved time synchronization system for hybrid navigation, GPS Solutions, 10:207-217.

Majumdar, J., S. Vinay, and S. Selvi, 2004. Registration and mosaicing for images obtained from UAV, IEEE International Conference on Signal Processing and Communications, Dec. 11-14, Bangalore, India, pp. 198-203.

Misra, P., B. Burke, and M. Pratt, 1999. GPS performance in navigation, Proceedings of the IEEE, 87(1):65-85.

Mohamed, A. and R. Price, 2002. Near the speed of flight: Aerial mapping with GPS/INS direct georeferencing, GPS World, 13(3):40-46.

Mostafa, M.M.R. and K.P. Schwarz, 2000. A multi-sensor system for airborne image capture and georeferencing, Photogrammetric Engineering & Remote Sensing, 66(12):1417-1423.

Nagai, M., T. Chen, R. Shibasaki, H. Kumagai, and A. Ahmed, 2009. UAV-borne 3-D mapping system by multisensor integration, IEEE Transactions on Geoscience and Remote Sensing, 47(3):701-708.

Olympus E-420, 2008. Olympus E-420 Instruction Manual, Olympus USA, Inc., retrieved June 2009: http://www.olympusamerica.com/files/E420_Instruction_Manual_EN.pdf

PAGE 151

151 OpenCV, 2001. Open Source Computer Vision Library: Reference Manual, Intel Corporation, 443 p. Palubinskas, G., R. Muller, and P. Reinartz, 2003. Radiometric Normalization of Optical Remote Sensing Imagery, International Geoscience And Remote Sensing Symposium July 2125, Toulouse, France, 2:720 722. Parker, J.A., R.V. Kenyon, and D.E. Troxel, 1983. Comparison of Interpolating Methods for Image R esampling, IEEE Transactions on Medical Imaging, 2(1):31 39. Perry, J., A. Mohamed, A. Abd -Elrahman, W.S. Bowman, Y. Kaddoura, and A. Watts, 2008. Precision Directly Georeferenced Unmanned Aerial Remote Sensing System: Performance Evaluation. Proceedings of the ION National Technical Meeting, Jan. 24 26, San Diego, CA, 7 p. Reddy, B.S. and B.N. Chatterji, 1996. An FFT -Based Technique for Translation, Rotation, and Scale Invariant Image Registration, IEEE Transaction on Image Processing, 5(8):12661271. Res sl, C., 2002. The impact of conformal map projections on direct georeferencing, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 34(3A):283 288. Roweis, S. Levenberg -Marquardt Optimization. Informal Notes retrieve d June 2009, http://www.cs.toronto.edu/~roweis/notes/lm.pdf Schmid, C., R. Mohr, and C. Bauckhage, 2000. Evaluation of Interest Point Detectors, International Journal of Computer Vision, 37(2):151 172. Schwarz, K.P. and M. Wei, 2000. INS/GPS Integration for Geodetic Applications, Lecture Notes Department of Geomatics Engineering, University of Calgary, 117 p. Showengerdt, R., 2007. Remote Sensing: Models and Methods for Image Processing, 3rd Edit ion, Elsevier Academic Press New York, NY, 515 p. Simon, D 2006. Optimal State Estimation: Kalman, H, and Nonlinear Approaches Wiley Interscience, New York, NY, 526 p Skaloud, J. and K. Legat, 2007. Theory and reality of direct georeferencing in nat ional coordinates, Journal of Photogrammetry and Remote Sensing, 63(2):272282. Skaloud, J., 1999. Problems in Direct Georeferencing by INS/DPGS in the Airborne Environment, ISPRS Workshop on 'Direct versus indirect methods of sensor orientation', Commission III, Working Group III/1, Nov. 2526, Barcelona, Spain, 9 p. Strecha, C., T. Tuytelaars, and L. Van Gool, 2003. Dense Matching of Multiple Wide -baseline Views, Proceedings of the Ninth IEEE International Conference on Computer Vision, Oct. 1417, Nice, France, 8 p.

PAGE 152

152 Sullivan, D. and A. Brown, 2002. High Accuracy Autonomous Image Georeferencing Using a GPS/Inertial -Aided Digital Imaging System, Proceedings of the Institute of Navigation National Technical Meeting, San Diego, CA pp. 598603. Taylor C.N., and E.D. Andersen, 2008. An Automatic System for Creating Georeferenced Mosaics from MAV, IEEE/RSJ International Conference on Intelligent Robots and System s, Sept. 22 26, Nice, France, 6 p. Toth, C., S.W. Shin, D.A. Grejner Brzezinska and J.H. Kw on, 2008. On Accurate Time Synchronization of Multi -Sensor Mapping Systems, Journal of Applied Geodesy 2(3):159 166. Triggs, B., P. McLauchlan, R. Hartley, and A. Fitzgibbon, 1999. Bundle Adjustment: A Modern Synthesis. Vision Algorithms: Theory and Pract ice Springer Berlin, pp. 153177. Van Rossum G., et al. Python Language Website, retrieved June 2009: http://www.python.org/ VIA EPIA 2009. EPIA Specifications, VIA Technologies, Inc. retrieved June 2009: http:// via.com.tw/en/products/embedded/ProductDetailTabs.jsp?motherboard_id=690 Wackrow, R., 2008. Spatial Measurement with Consumer Grade Digital Cameras, PhD Thesis Loughborough University, 241 p. Watts, A., W.S. Bowman, A.H. Abd Elrahman, A. Moha med, B.E. Wilkinson, J. Perry, Y. Kaddoura and K. Lee, 2008. Unmanned Aircraft Systems (UASs) for Ecological Research and Natural -resource Monitoring, Journal of Ecological Restoration, 26(1):1314. Wilkinson, B.E., 2007. The Design of Georeferencing Techn iques for an Unmanned Autonomous Aerial Vehicle for Use with Wildlife Inventory Surveys: A Case Study of the National Bison Range, Montana. Master s Thesis, University of Florida 115 p Wolber, G. and S. Zokai, 2000. Robust Image Registration Using Log-Po lar Transform. Proceedings of the IEEE International Conference on Image Processing, Sept. 10 13, Vancouver, Canada, pp. 493 496. Wolf, P., and B. Dewitt 2000. Elements of Photogrammetry with Applications in GIS, 3rd Edition McGraw -Hill Boston, MA, 608 p. World File, 2008. FAQ: What is the format of the world file used for georeferencing images?, ESRI, Inc., Article ID: 17489, retrieved July 2009, http:/ /support.esri.com/index.cfm?fa=knowledgebase.techarticles.articleShow&d=17489 Wu, J., G. Zhou and Q. Li, 2006, Calibration of Small and Low Cost UAV Video System for Real Time Planimetric Mapping, International Geoscience and Remote Sensing Symposium July 31-Aug. 4, Denver, CO., 4 p.

PAGE 153

153 Xiang, H. and L. Tian, 2007. Autonomous Aerial Image Georeferencing for an UAV Based Data Collection Platform Using Integrated Navigation System. 2007 ASABE Annual International Meeting, June 17 20, Minneapolis, MN, 28 p. Xsen s MTi G, 2009. MTi G User Manual, Xsens, Inc. 78 p. Yastikli, N. and K. Jacobsen, 2002. Investigation of Direct Sensor Orientation for DEM Generation, Proceedings of ISPRS Commission I Symposium Denver, CO, 6 p. Zebedin, L., A. Klaus, B. Gruber Geymaye r, and K. Karner, 2006. Towards 3D map generation from digital aerial images. Journal of Photogrammetry and Remote Sensing 60(6):413427. Zhu, Z., E.M. Riseman, A.R. Hanson, and H. Schultz, 2005. An Efficient Method for Georeferenced Video Mosaicing for Environmental Monitoring, Machine Vision and Applications 16(4):203216. Zitov, B. and J. Flusser, 2003. Image Registration Methods: A Survey, Image and Vision Computing, 21(11):9771000.

PAGE 154

BIOGRAPHICAL SKETCH

John Hendrix Perry has always had a fascination with science and technology. Despite his recent attempts at specialization in the field of geomatics, he is a generalist with a broad background in programming, electronics, philosophy, music, and a variety of other muses. He is an avid reader and enjoys long walks in the park, as long as they are accompanied by a vigorous debate. He started working at the age of 14 washing dishes and has been employed at some job or other nearly continuously since. John was admitted to the University of Florida on a National Merit Scholarship and completed an undergraduate degree in geomatics in a more or less reasonable amount of time. He met a number of interesting people and worked on a number of interesting projects, including a stint with Engineers Without Borders, through which he travelled to the Republic of Macedonia, to the birthplace of Philip the Great, to map a landfill and sewer system. As part of this project, he received a University Scholars Program grant and participated in a Davis grant and an EPA P3 grant. He was a teaching assistant for two classes in the Geomatics Program. Upon graduation in 2007, he worked for the Geomatics Program full-time at a newly established campus in Plant City, FL, concurrent with his first year of graduate studies. Upon returning to Gainesville in 2008, he commenced work on the NOVA II project. In related research, he was presented with an Institute of Navigation Best Presentation award. Recently, he has been awarded a National Science Foundation Graduate Research Fellowship and a University of Florida Alumni Fellowship. He is the author or coauthor of eight academic works that have been presented or published, with three more in manuscript. His graduation with a Master of Science in forest resources and conservation, with a concentration in geomatics and a minor in mechanical and aerospace engineering, is expected forthwith. He will commence study for a PhD in the fall of 2009 at the University of Florida. Go Gators.