Citation
Unmanned Aerial Systems for High Accuracy Mapping

Material Information

Title:
Unmanned Aerial Systems for High Accuracy Mapping
Creator:
Thomas, Orrin H
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
2018
Language:
English
Physical Description:
1 online resource (175 p.)

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Forest Resources and Conservation
Committee Chair:
DEWITT,BON A
Committee Co-Chair:
WILKINSON,BENJAMIN E
Committee Members:
HOCHMAIR,HARTWIG H
GEZAN,SALVADOR
OLSON,TIMOTHY EDWARD

Subjects

Subjects / Keywords:
drone -- photogrammetry -- sfm -- uas
Forest Resources and Conservation -- Dissertations, Academic -- UF
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Forest Resources and Conservation thesis, Ph.D.

Notes

Abstract:
It was frequently assumed that mapping with Unmanned Aerial Systems (UAS) would be much less expensive than manned aerial mapping due to the dramatic difference in the cost of the hardware. To quantify this advantage, bids prepared by successful professional mapping companies using UAS or Small Manned Aerial Vehicles (SMAV) were compared to bids using Large Manned Aerial Vehicles (LMAV). Surprisingly, the UAS per-project savings are modest at best and insufficient to move the LMAV market. However, the interest is driving development of and investment in lower-cost sensors and platforms. In contrast, in a series of test projects, UAS were shown to compete effectively against terrestrial mapping methods (e.g. GNSS, total stations, and terrestrial laser scanners). The per-project savings in time and cost were around 50 percent, which is enough to be disruptive in the market. However, there were two hurdles to widespread UAS implementation. The first was unexpectedly low accuracy and inconsistency among UAS mapping accuracy reports. The second was the over-reliance on Imagery-Derived Point Clouds (IDPC). IDPC work well in open areas, but they frequently distort breaks in the terrain and miss above-ground features. Hence, mapping systems that rely on them only work reliably in areas that have been cleared of vegetation and have smooth terrain. This work demonstrates that, using specific hardware and methods, UAS mapping can be consistently accurate and that data collection from stereo models can be used to efficiently fill in features missing or distorted in IDPC. A series of UAS test flights were designed to help UAS operators choose hardware and design mapping flights. These flights tested the impact of two previously untreated mission parameters and three specific hardware configurations on Photo Triangulation (PT) accuracy. The mission parameters of interest were the flight speed and shutter speed, which directly affect the area that can be mapped in each flight. The first hardware comparison was between using a mechanical leaf shutter and a digital shutter. The second was a comparison between data constrained by high-end Post Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) data and data without it. The last comparison was between a DJI Zenmuse X4S camera and the Phantom 4 Pro camera. The competitiveness of UAS and SMAV collection platforms depends on increasing the reliability of automated processing. Two algorithms to improve the speed and reliability of the processing are presented. The first is General Incremental Pose Estimation (GIPE), the first general simultaneous solution to the incremental pose problem (IPP). The second is an extension to the popular RANSAC algorithm that detects when a solution is near-critical and reports all distinct, statistically similar solutions. Applying this algorithm to relative pose estimations from real mapping projects showed that previous relative pose solution algorithms would reject the solution representative of the true geometric camera motion in favor of an apparently superior numerical ghost solution in approximately 1 percent of the relative pose estimations tested. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2018.
Local:
Adviser: DEWITT,BON A.
Local:
Co-adviser: WILKINSON,BENJAMIN E.
Statement of Responsibility:
by Orrin H Thomas.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
LD1780 2018 ( lcc )

Full Text

PAGE 1

UNMANNED AERIAL SYSTEMS FOR HIGH ACCURACY MAPPING

By

ORRIN HOWARD THOMAS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2018

PAGE 2

© 2018 Orrin Howard Thomas

PAGE 3

To Edward R. Oshel and Donn A. Liddle, my colleagues at Johnson Space Center. Working with them was an incredible education.

PAGE 4

ACKNOWLEDGMENTS

I thank my wife Jennifer for patiently putting up with me doing all this homework. Without her support I never would have finished. I thank my colleagues at NASA Johnson Space Center and the USGS Astrogeology Group for presenting me with the challenges that motivated my research. I thank the late Charlie Smith, a friend and successful professional mapper. He asked practical questions and sought practical answers with unconquerable good humor, and he motivated the economic study presented in Chapter 1. I thank Cardinal Systems, LLC. They encouraged my continuing education, and I am fortunate to have such an employer. I thank Sean Denney, a fellow Geomatics PhD student. He helped me collect images and survey control points for testing the General Incremental Pose Estimation algorithm presented in Chapter 5. I thank Mckim & Creed for supporting the UAS accuracy testing presented in Chapters 2 and 3. I thank my committee for helping me get this work presentable.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS 4
LIST OF TABLES 8
LIST OF FIGURES 10
LIST OF ABBREVIATIONS 13

CHAPTER

1 OVERVIEW 17

2 THE IMPACT OF SMALL MANNED AND UNMANNED AERIAL VEHICLES IN THE AERIAL MAPPING MARKET 21
    Introduction 21
    The General Bid Model Fit for the LMAV Data Collection 22
    The Aerial Collection Platforms 25
    Bids for UAS Data Collection 26
        UAS Estimated Mapping Time 26
        UAS Overhead Costs 32
        UAS Variable Costs 35
        The General Bid Model Fit for UAS Data Collection 36
    Modeling Bids for Small Manned Aerial Vehicles 37
        Small MAV Estimated Mapping Time 37
        Small MAV Overhead Costs 39
        Small MAV Variable Costs 40
        The General Bid Model Fits for SMAV Data Collection 41
    Results 42
    Discussion of Platform Limitations 46
        LMAV Limitations 47
        DJI Limitations 47
        Nova Limitations 48
        Cessna Limitations 49
        Mosquito Limitations 49
    Discussion of Assumptions 50
        Assumption: All Jobs Collected Individually 50
        Assumption: Usage Rates 51
        Assumption: UAS Can Legally Fly These Jobs 51
        Assumption: Aerial Data Collection Savings Will Not Be Lost During Aerial Triangulation and Vector Extraction 52
        Assumption: There Will Be No Cost Difference in the Ground Control Collection 53

PAGE 6

    Assumption: Customers Will Be Equally Satisfied With Small Format Camera Derived Products 54
    Conclusion 54

3 THE IMPACT OF SMALL UNMANNED AERIAL VEHICLES IN THE TERRESTRIAL MAPPING MARKET 57
    Introduction 57
    Experiment Design 62
    Photo Triangulation Processing 69
    Results and Discussion 72
        The Effect of the Altitude on the Accuracy 72
        UAS Mapping Consistency 74
        The Effect of GCP Quantity and Distribution 75
        PT Procedures Tested on a Larger Block of Images 76
        UAS Stereo Compilation Accuracy of Features Missing From IDPC 76
    UAS Economic Viability 79
        Rooftop Survey for Solar Panel Installation 80
        260 Acre Golf Course 80
        Rowlett Church Test Site 80
    Conclusion 81
    Future Work 82

4 THE INFLUENCE OF MISSION PARAMETERS ON UNMANNED AERIAL SYSTEMS MAPPING ACCURACY 83
    Introduction 83
    Experiment Design 85
    Photo Triangulation Processing 89
    Results and Discussion 93
        Comparing North-South and East-West Flight Groups 93
        Comparing Nadir-Looking and Convergent Flight Groups 96
        Comparing the Zenmuse X4S and Phantom 4 Pro Cameras 97
        Comparing Blocks Constrained With PPK GNSS and Without It 97
        Influence of Airframe Speed on PT Accuracy 98
        Influence of Shutter Speed on PT Accuracy 99
        Influence of Shutter Type 101
    Conclusion 102

5 GENERAL INCREMENTAL POSE ESTIMATION (GIPE) 104
    Introduction 104
    Legacy Models 108
    General Incremental Pose Estimation Mathematical Model 110
    GIPE Initial Values 112
    Optimization of Initial Values 114
    Numerical Simulations 116

PAGE 7

    Simulation Results 118
    Real Data Tests 124
    Conclusion 128

6 RELATIVE POSE ESTIMATION CRITICAL VOLUMES 129
    Introduction 129
    Critical Volume Derivation 132
    Critical Volume Derivation 134
    Examples of Critical Volumes 139
    Other Implications of Regular Data 145
    Observed Prevalence of Ambiguous Relative Pose Solutions 146
    Detecting Near Criticality during Parameter Estimation 148
    Conclusion 154

7 THE EXPANDING NICHE OF UAS IN HIGH ACCURACY MAPPING 155

APPENDIX

A CONJUGATE POINT DATA FOR THE SHUTTLE NOSE IMAGES 157
B CONJUGATE POINT DATA FOR THE MARS PHOENIX LANDER IMAGES 161
C CONJUGATE POINT DATA FOR THE UAS STEREO PAIR 162

LIST OF REFERENCES 165
BIOGRAPHICAL SKETCH 175

PAGE 8

LIST OF TABLES

Table  Page

2-1  Professional bid data from LMAV Company 1. 23
2-2  Professional bid data from LMAV Company 2. 24
2-3  Professional bid data from LMAV Company 3. 24
2-4  Professional bid data from the DJI assuming a usage rate of 24 jobs per year. 37
2-5  Professional bid data from the Nova assuming a usage rate of 24 jobs per year. 37
2-6  Parameters of sensors considered for use in SMAV. These sensors have digital forward motion compensation. 38
2-7  The professional bid data for the Cessna assuming a usage rate of 24 jobs per year. 42
2-8  The professional bid data for the Mosquito assuming a usage rate of 24 jobs per year. 42
2-9  The bid model parameters and some data associated with the fit. 43
2-10 The number of photos collected by the different platforms. 52
3-1  Statistics from published UAS accuracy reports. 58
3-2  UAS flight settings. To test consistency, flights 1, 5-7 are identical except for the date and time. Flights 2-4 have different flying heights. 64
3-3  Photo triangulation and stereo check point residuals (m). 74
3-4  Photo triangulation and stereo check point residuals in GSD units. 74
3-5  Accuracy of above-ground test points in imagery derived point clouds (m). 77
3-6  Accuracy of test points measured in stereo models (m). 78
3-7  Accuracy of test points measured in stereo models (GSD). 79
3-8  Time and cost savings from using UAS stereo compilation compared to terrestrial surveying. 79
4-1  UAS flight settings. 90

PAGE 9

4-2  Accuracy statistics for flights that varied the airframe speed. Higher sharpness values indicate sharper images. 99
4-3  Accuracy statistics for flights that varied the shutter speed. Higher sharpness values indicate sharper images. Flight 15 was flown separately and thus did not have a comparable set of images for sharpness comparison. 100
5-1  Properties of IPP algorithms. 108
6-1  Two statistically acceptable solutions to the Space Shuttle nose cone relative pose. 140
6-2  Four statistically acceptable solutions to the overdetermined Phoenix lander relative orientation. 143
6-3  Statistically similar relative pose solutions for a pair of images captured from an unmanned aerial system. 144
6-4  Observed prevalence of critical relative pose problems among real data projects. 147
A-1  Distortion-corrected conjugate points for the Space Shuttle nose example in pixels. 157
B-1  Distortion-corrected conjugate points for the Mars Phoenix lander example given in millimeters. 161
C-1  Distortion-corrected conjugate points for the unmanned aerial system stereo pair example given in pixels. 162

PAGE 10

LIST OF FIGURES

Figure  Page

2-1  The flight plan for the DJI. 31
2-2  The DJI flight plan for the route survey. 31
2-3  The flight plan for the DJI at the quarry. 32
2-4  Regions where there is a single model that is significantly less expensive than all the others are shown. 44
2-5  How much more a method is predicted to cost than the least expensive method is shown in dollars. 45
2-6  How much more a method is predicted to cost than the least expensive method is shown in percent. 46
3-1  Reported UAS accuracy is plotted against ground control density. 60
3-2  Examples of features distorted in imagery derived point clouds. 62
3-3  The primary accuracy test site, which includes a church and the parking area around it. 63
3-4  Predicted average error ellipsoid volumes for 3D points reconstructed from stereo models plotted against model percent overlap. 67
3-5  The red crosses are observations of check point locations in an image from flight 5. 68
3-6  The white circles are example test point locations. These points represent features that are frequently missing or distorted in imagery derived point clouds. 68
3-7  Courtesy of Mckim & Creed. 69
3-8  A secondary, much larger test site flown with the same UAS hardware and processed with the same photo triangulation procedures. The accuracy results were remarkably similar to church Flight 4. 71
3-9  The relationship between vertical RMSE in cm and the altitude. 72
3-10 The relationship between vertical RMSE in GSD units and the altitude. 73
4-1  Red crosses indicate example check point locations. 86

PAGE 11

4-2  Mosaic of the test site in Dallas, TX. A five-acre lot with several buildings and a large parking area. 86
4-3  Courtesy of Mckim & Creed. 89
4-4  Accuracies of flight groups. 92
4-5  A raw Zenmuse X4S image (A) has obvious barrel distortion and vignetting. The recorded JPG image (B) does not have either. 95
4-6  The relationship between the flight speed and the accuracy. 98
4-7  The relationship between the shutter speed and the accuracy. 100
5-1  Graph of cylinder simulation GIPE position error as a function of the number of sequential pose estimations. The error corrects every 36 images as the circles of images close. 119
5-2  The results from the DMCII aerial strip with 60% overlap. Distributions of how many images could be linked before the position error exceeded the nominal image spacing. 120
5-3  The results from the Canon SL1 aerial strip with 90% overlap. The distributions of how many images could be linked before the position error exceeded the nominal image spacing. 121
5-4  Olympus E520 hallway simulation results, the distributions of how many images could be linked before the position error exceeded the nominal image spacing. 122
5-5  Olympus E520 panorama simulation results, the distributions of how many images could be linked before the position error exceeded the nominal image spacing. 122
5-6  Distributions of the first pose position estimation error for the DMCII aerial strip with 60% overlap. In this example, analysis of the first IPP position error would lead to similar conclusions as analysis of the error accumulation rate. 123
5-7  Distributions of the first pose position estimation error for the Canon SL1 aerial strip with 90% overlap. 124
5-8  Error accumulation in IPP by algorithm calculating initial values for a 233-image DMCII block. Linear unit is the nominal image spacing. 125
5-9  Error accumulation in IPP algorithms calculating initial values for a 58-image C6310 block. Linear unit is the nominal image spacing. 125

PAGE 12

5-10 Error accumulation in IPP algorithms calculating initial values for a 25-image Olympus E520 hallway series. Linear unit is the nominal image spacing. 126
5-11 Error accumulation in IPP algorithms calculating initial values for an 88-image Olympus E520 panorama. Linear unit is the radius of the circle of camera centers. 127
6-1  Space Shuttle nose stereo pair showing the 142 conjugate points. Two statistically acceptable relative orientation solutions are given in Table 6-1. 140
6-2  Each column contains, from top to bottom, the critical surface, the critical surface constrained by the field of view, the critical volume, and the critical volume constrained by the field of view. 141
6-3  A pair of image views under the Mars Phoenix lander. The 26 conjugate points are marked with white squares. Four statistically acceptable relative orientation solutions are given in Table 6-2. 142
6-4  A stereo pair captured by an unmanned aerial system with three statistically similar relative pose solutions given in Table 6-3. 143

PAGE 13

LIST OF ABBREVIATIONS

CMOS    Complementary metal oxide semiconductor
EW      East-West, as in a flight path that alternated between east and west directions.
GCP     Ground Control Point
GIPE    General Incremental Pose Estimation, an algorithm presented for general and simultaneous solutions of the IPP.
GNSS    Global Navigation Satellite System
GSD     Ground Sample Distance, the size of a pixel on the ground.
IDPC    Imagery Derived Point Cloud, a point cloud created by dense image matching.
IPP     Incremental Pose Problem, the operation of solving for the pose (position and orientation) of a frame image with respect to a body of at least two previously oriented images.
LMAV    Large Manned Aerial Vehicle
NS      North-South, as in a flight path that alternated between north and south directions.
PPK     Post Processed Kinematic, a method of processing GNSS data to get centimeter-level accuracy.
PPO     Principal point offset
PT      Photo Triangulation
RANSAC  RANdom SAmple Consensus, an algorithm for parameter estimation given input data corrupted by outliers.
RMSE    Square Root of the Mean Squared Error
SMAV    Small Manned Aerial Vehicle
UAS     Unmanned Aerial System

PAGE 14

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

UNMANNED AERIAL SYSTEMS FOR HIGH ACCURACY MAPPING

By

Orrin Howard Thomas

December 2018

Chair: Bon A. Dewitt
Cochair: Ben Wilkinson
Major: Forest Resources and Conservation

This work aims to demonstrate that it is technically feasible to use UAS for high accuracy mapping, and under what circumstances it is economically advantageous to do so. In addition, the accuracy impact of several UAS mission parameters and three hardware configurations are assessed to help UAS pilots engineer flights and select hardware. Finally, two new algorithms to improve the accuracy and reliability of 3D reconstruction from imagery are proposed.

It was frequently assumed that mapping with Unmanned Aerial Systems (UAS) would be much less expensive than manned aerial mapping due to the dramatic difference in the cost of the hardware. To quantify this advantage, bids prepared by successful professional mapping companies using UAS or Small Manned Aerial Vehicles (SMAV) were compared to bids using Large Manned Aerial Vehicles (LMAV). Surprisingly, the UAS per-project savings are modest compared to using LMAV and insufficient to move the LMAV market. However, the interest is driving development of and investment in lower-cost sensors and platforms. In contrast, in a series of test projects, UAS were shown to be economically advantageous compared to terrestrial mapping methods (e.g. GNSS, total stations, and

PAGE 15

terrestrial laser scanners). Using a hybrid approach that combined UAS and terrestrial mapping methods reduced costs by around 50%, which is enough to be disruptive in the market. However, there were two hurdles to widespread UAS implementation. The first was unexpectedly low accuracy and inconsistency among UAS mapping accuracy reports. The second was the over-reliance on Imagery Derived Point Clouds (IDPC). IDPC work well in open areas, but they frequently distort breaks in the terrain and miss above-ground features. Hence, mapping systems that rely on them only work reliably in areas that have been cleared of vegetation and have smooth terrain. This work demonstrates that, using specific hardware and methods, UAS mapping can be consistently accurate enough to be used in conjunction with terrestrial mapping methods, and that data collection from stereo models can be used to efficiently fill in features missing or distorted in IDPC.

A series of UAS test flights were designed to help UAS operators choose hardware and design mapping flights. These flights tested the impact of two previously untreated mission parameters and three specific hardware configurations on Photo Triangulation (PT) accuracy. The mission parameters of interest were the flight speed and shutter speed, which directly affect the area that can be mapped in each flight. The first hardware comparison was between using a mechanical leaf shutter and a digital shutter. The second was a comparison between data constrained by high-end Post Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) data and data without it. The last comparison was between a DJI Zenmuse X4S camera and the Phantom 4 Pro UAS camera.

PAGE 16

The competitiveness of UAS and SMAV collection platforms depends on increasing the reliability of automated processing. Two algorithms to improve the speed and reliability of the processing are presented. The first is General Incremental Pose Estimation (GIPE), the first general simultaneous solution for solving the incremental pose problem (IPP). The second is an extension to the popular RANSAC algorithm that detects when parameter estimation has multiple distinct statistically similar solutions. Applying this algorithm to relative pose estimations from real mapping projects showed that previous relative pose solution algorithms would reject the solution representative of the true geometric camera motion in favor of an apparently superior numerical ghost solution in approximately 1% of the relative pose estimations tested. Note that this is not a 1% error; it indicates that, using previous algorithms, a numerically valid solution with no direct relationship to the geometry of the camera motion would be the result in 1% of the cases tested.

PAGE 17

CHAPTER 1
OVERVIEW

This work aims to demonstrate that it is technically feasible to use UAS for high accuracy mapping, and under what circumstances it is economically advantageous to do so. In addition, the accuracy impact of several UAS mission parameters and three hardware configurations are assessed to help UAS pilots engineer flights and select hardware. Finally, two new algorithms to improve the accuracy and reliability of 3D reconstruction from imagery are proposed.

The hypothesis that aerial imagery currently being collected by LMAV could be more economically collected by UAS or SMAV is tested in Chapter 2. Specific professional bids to perform aerial data collection using large manned aerial vehicles (LMAV), small manned aerial vehicles (SMAV), and small unmanned aerial systems (UAS) were used to generate general bid models. From these bid models, it was determined which collection methods were competitive for jobs based on the project area, the distance from the airport or office, and the modeled bids from competing methods. This is followed by a discussion of each platform's limitations and versatility. Results indicate that UAS and SMAV can compete efficiently on aerial photogrammetric mapping up to at least 1000 acres when the equipment gets enough usage. However, the predicted savings are modest and therefore unlikely to be disruptive in the LMAV market.

Chapter 3 presents UAS as effective competitors with terrestrial mapping methods. This required testing three hypotheses to show technical and economic feasibility. First, it was necessary to show that UAS mapping with specific hardware can be consistently accurate enough for small-scale mapping. Second, that stereo models

PAGE 18

could be used to collect features missing or distorted in Imagery Derived Point Clouds (IDPC). These point clouds are remarkable, and industry has relied on them heavily to create final mapping deliverables. However, this practice has largely relegated UAS mapping to earthwork applications because IDPC distort discontinuities in the terrain and miss above-ground features. Thus, demonstrating reliable feature collection from stereo models allows UAS to be used in a much wider range of applications. Finally, it was necessary to show that utilizing UAS was economically advantageous.

Chapter 4 deals with the accuracy impact of two UAS mission parameters and three hardware configurations. The purpose of the research was to help UAS operators better design UAS mapping flights and make informed decisions when purchasing UAS hardware. The mission parameters of interest were the flight speed and camera shutter speed, which influence the per-flight area that can be mapped. It was hypothesized that slower airframe speeds and faster shutter speeds would result in more accurate mapping because the image motion blur would be reduced. Image overlap, ground control density, and GSD are not included because they have been treated by previous authors. The first hardware comparison was between a mechanical leaf shutter and a digital shutter. The mechanical shutter was expected to be more accurate because it was designed to eliminate rolling shutter distortion. The second was to quantify the advantage of PPK GNSS. The last was to test if the DJI Zenmuse X4S and Phantom 4 Pro UAS cameras were similarly suitable as metric sensors.

The success of UAS technology in mapping depends on reliable automation. During software development and data processing, two new algorithms to improve the speed and reliability of photo triangulation automation were developed. The first,

PAGE 19

General Incremental Pose Estimation (GIPE), is presented in Chapter 5. Previously published methods for calculating the pose of an image with respect to a body of at least two previously oriented images (the Incremental Pose Problem, or IPP) are ad hoc. All of them used combinations of relative orientations, trifocal tensors, intersections, and resections to solve the IPP. At least one of the following was true of all previous methods:

- Only subsets of the available data were used
- They had geometric weaknesses not intrinsic to the IPP
- They had artificially high minimum data requirements
- They were not simultaneous
- They unnecessarily required object space reconstruction

Herein is proposed an algorithm that has none of these weaknesses. It is the theoretical ideal for the incremental generation of initial values for multi-photo measurement systems.

RANSAC-SSS, the second algorithm proposed to improve the reliability of automation, is presented in Chapter 6. It is a modification to the popular RANSAC algorithm that detects the presence of multiple distinct statistically similar solutions to overdetermined systems. Applying this algorithm to relative pose estimations from mapping projects showed that previous relative pose solution algorithms would reject the solution representative of the true geometric camera motion in favor of an apparently superior numerical ghost solution in approximately 1% of cases.

Chapters 2, 3, and 4 address application questions regarding the current state of the art for UAS mapping. They are less technical, and hence appeal to a wider audience. However, their usefulness is temporally limited. Chapters 5 and 6 present

PAGE 20

new algorithms for improving the automation, and hence the economic competitiveness, of UAS, SMAV, and close-range mapping. They are more technical, but will be as relevant decades in the future as they are now. Stated differently, the first two parts are relevant to individuals making aerial mapping business and design decisions today. The last two parts are relevant to those designing the next generations of photogrammetric systems and software. The chapters are separate research projects that can be read independently, but all are related to photogrammetric mapping in general and more particularly to UAS and close-range applications.

PAGE 21

CHAPTER 2
THE IMPACT OF SMALL MANNED AND UNMANNED AERIAL VEHICLES IN THE AERIAL MAPPING MARKET

Introduction

The overarching goal of this investigation was to determine if aerial data collection companies can collect data more economically using small unmanned aerial systems (UAS) or small manned aerial vehicles (SMAV) instead of large manned aerial vehicles (LMAV). It is important to note that this investigation endeavors to compare bids, not necessarily costs. For LMAV, the modeling is a parameter fit to bids solicited from three professional aerial mappers with decades of experience. Bids for newer collection methods (e.g. various UAS configurations) and proposed collection methods (e.g. single-engine Cessna and small helicopters) were prepared based on estimated costs, usage rates, etc.

Requests for proposals were prepared for nine sites. All of them were to be mapped with no larger than a 0.3-foot GSD to collect 1.0-foot contours and vector features, produce orthophotos, and possibly do volume calculations. The preceding requirements were chosen because they are typical for LMAV operations. Hence, there is a market where UAS could gain a share. Also, it has been demonstrated that UAS currently in use can achieve the ASPRS accuracy specification for 1-foot contours [1]. The nine jobs were:

- A two-mile route survey along a major commercial street.
- A 22-acre parking lot near a professional sports stadium.
- A 533-acre L-shaped area of undeveloped land.
- A 615-acre square area of farmland.
- Five quarries, ranging from roughly 210 to 953 acres:
  o A rectangular area.

PAGE 22

  o A T-shaped area.
  o An oval area.
  o An irregularly shaped area.
  o A rectangular area.

The first four jobs are in central and northern Florida. The five quarries are in northern Virginia. Of these nine sites, seven were real jobs, and the remaining two (Palm Coast and Sarasota) were hypothetical but representative of realistic projects. Of the seven real bids, six were won by one of the professionals who prepared bids for this study. He won the quarry work three years consecutively.

When bids were requested from the professionals, they were given the option to create similar hypothetical sites to bid or provide similar bids they had already prepared. One prepared bids for all nine of these specific sites, and another prepared bids for the five quarries and then chose four of their own hypothetical sites to bid. The last prepared bids for all nine specific sites and then on nine additional sites closer to their base of operations, to provide bids for jobs in the same distance range as the other companies. Interviews with SUAS, SMAV, and LMAV pilots were used to estimate some of the costs and flying times. The rotary-wing UAS used for the bidding was also assembled and tested by the Mapping Resource Group (MRG), a mapping company owned by one of the authors.

The General Bid Model Fit for the LMAV Data Collection

There were many variables considered in determining the LMAV bids, including the distance from the airport, the area to be mapped, the number of lines, the number of exposures, and the proximity to controlled airspace. For simplicity, the general LMAV

PAGE 23

bids were modeled as functions of three parameters and two independent variables. The three parameters were: b0, a constant amount charged in each bid; b1, the charge per acre; and b2, the charge per one-way straight-line mile to the site from the airport. The two independent variables were: a, the acreage of the site; and d, the one-way straight-line distance from the airport in miles.

bid(a, d) = b0 + b1*a + b2*d    (2-1)

A different parameter fit was done for each of the three companies. These fits were calculated as ordinary least squares optimizations [2]. This modeled the bids as having errors that were independently drawn from a Gaussian population with a mean of zero and an unknown standard deviation. The procedure compensates for the unknown standard deviation by scaling the covariance matrix of the calculated parameters by the a posteriori standard error of unit weight. These scaled covariance matrices are used to make statistical comparisons among the models.

Table 2-1. Professional bid data from LMAV Company 1.
Name         Distance (mi)  Area (acres)  Pro Bid  Model Bid  Error  % Error
Palm Coast   20             78            $1530    $1500      $30    2.0%
Jax          77             22            $1700    $1856      $156   9.2%
Tomoka       15             533           $1700    $1992      $292   17.2%
Sarasota     164            615           $3220    $3188      $32    1.0%
Buchanan     180            210           $3070    $2836      $234   7.6%
Doswell      105            375           $2750    $2473      $277   10.0%
Rocky Point  250            228           $3110    $3374      $264   8.5%
Strasburg    57             953           $2780    $2791      $11    0.4%
Winchester   40             363           $2130    $1979      $151   7.1%

The model fit the bids for companies two and three well; they had maximum percent errors of 5.4% and 5.7%, respectively. The first company had a maximum percent error of 17.2%.

PAGE 24

Regardless, the bids from Company 1 were the highest and as a result have negligible influence on the analysis. The raw data are provided in Table 2-1, Table 2-2, and Table 2-3.

Table 2-2. Professional bid data from LMAV Company 2.
Name         Distance (mi)  Area (acres)  Pro Bid  Model Bid  Error  % Error
Cumberland   140            290           $2040    $2061      $21    1.0%
Goshen       153            290           $2070    $2124      $54    2.6%
Lynchburg    167            290           $2200    $2192      $9     0.4%
Richmond     130            290           $1930    $2012      $82    4.3%
Buchanan     180            210           $2200    $2202      $2     0.1%
Doswell      114            375           $2105    $1991      $114   5.4%
Rocky Point  170            228           $2215    $2165      $50    2.3%
Strasburg    57             953           $2087    $2097      $10    0.5%
Winchester   46             363           $1650    $1654      $4     0.2%

The model employed works well enough to be used to interpolate bids for other hypothetical sites. However, it is likely unwise to attempt much extrapolation. The distances from the airport ranged from near 0 to 200 miles. The areas to be mapped ranged from near 0 to 1000 acres. Trying to use these models to estimate bids for jobs outside of those parameter ranges is not advised.

Table 2-3. Professional bid data from LMAV Company 3.
Name            Distance (mi)  Area (acres)  Pro Bid  Model Bid  Error  % Error
Broadlands      250            77            $3000    $3096      $96    3.2%
Cambria         149            112           $2250    $2268      $18    0.8%
Clay City       152            1236          $2878    $2894      $16    0.6%
Columbus        8              521           $1380    $1305      $75    5.5%
Isle St George  125            485           $2306    $2266      $40    1.7%
Northridge      28             1182          $1728    $1826      $98    5.7%
Springfield     35             66            $1298    $1288      $10    0.8%
Starr           235            1221          $3638    $3582      $56    1.6%
Winchester      277            363           $3520    $3475      $45    1.3%
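For concreteness, the following is a minimal sketch, not the authors' code, of the ordinary least squares fit of Equation 2-1 applied to the Company 1 bids in Table 2-1; the variable names are illustrative, and the covariance scaling follows the a posteriori variance-of-unit-weight procedure described above.

# Minimal sketch (not the authors' code) of the Equation 2-1 fit,
# using the Company 1 bids from Table 2-1.
import numpy as np

# Independent variables: area (acres) and one-way straight-line distance (miles).
area = np.array([78, 22, 533, 615, 210, 375, 228, 953, 363], dtype=float)
dist = np.array([20, 77, 15, 164, 180, 105, 250, 57, 40], dtype=float)
bid = np.array([1530, 1700, 1700, 3220, 3070, 2750, 3110, 2780, 2130], dtype=float)

# Design matrix for bid = b0 + b1*area + b2*distance (Equation 2-1).
A = np.column_stack([np.ones_like(area), area, dist])
b, *_ = np.linalg.lstsq(A, bid, rcond=None)

# Scale the parameter covariance by the a posteriori variance of unit
# weight so the three company models can be compared statistically.
v = bid - A @ b                          # residuals
s0_sq = (v @ v) / (len(bid) - 3)         # a posteriori reference variance
cov = s0_sq * np.linalg.inv(A.T @ A)     # scaled parameter covariance

print("b0, b1 ($/acre), b2 ($/mile):", np.round(b, 2))
print("parameter standard errors:", np.round(np.sqrt(np.diag(cov)), 2))
print("max percent error: %.1f%%" % np.max(np.abs(v / bid) * 100))

Running the same fit with the Table 2-2 and Table 2-3 data reproduces the per-company comparisons reported in the text.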

PAGE 25

The Aerial Collection Platforms

The LMAV data collection platforms varied, but are generally twin-engine aircraft with 18- to 24-inch camera holes. The companies were instructed to bid the jobs as inexpensively as possible while meeting the design criteria. Two chose to use film (presumably because the film cameras were paid off years ago and had less overhead). These aerial cameras used nine-by-nine-inch pieces of film and nearly distortionless lenses. When scanned, the film images range from 250 to 400 megapixels (depending on the scanning resolution). The other company used a DMCII 140, a 140-megapixel camera, with a modern global navigation satellite system (GNSS) and inertial navigation system (INS) unit.

The representative for the rotary-wing class of UAS was a DJI S900 airframe (6 rotors) with a Pixhawk PX4 flight controller. The payload used was a Sony A7RII 42.4-megapixel camera equipped with a Nikkor 28mm lens on a three-axis stabilized mount (total payload weight 5.1 pounds). Hereafter this payload/platform combination is referred to as the DJI. The maximum unloaded ascent speed is 10 feet per second. The maximum flight speed is 60 feet per second. The whole system was constructed for about $10,000.

The representative of the fixed-wing class of UAS was the Nova F7200 AT with the Fusion MP22 sensor package and the Flare GS10 ground control station. It cruises at 52 to 65 feet per second. The craft turns around between lines in approximately 10 seconds. It climbs at about 33 feet per second and has a ninety-minute flight time (Chance, 2016). The camera is a Canon SL1 with a 5288 by 3506 pixel sensor [3].

The use of SMAV was originally motivated by the regulatory uncertainty that has plagued UAS. Two SMAV were selected for bidding. The first is an early-model

PAGE 26

Cessna 170 with an 8-inch camera hole. The second is a new Mosquito XE285 helicopter. The Cessna cruises at 104 MPH and has a range of 590 miles. The Mosquito cruises at 85 MPH and has a range of 280 to 340 miles (with the auxiliary fuel tank). The maximum hover-out-of-ground-effect elevation is 7500 feet. The climb rate is 20 feet per second [4]. It is worth noting that, according to the manufacturer, mounting a sensor package underneath the XE285 fuselage does not require an FAA Supplemental Type Certificate [5].

Bids for UAS Data Collection

Professional bids for rotary-wing UAS data collection were prepared by Charlie Smith, a professional mapper with 30 years of experience and owner of the Mapping Resource Group. The DJI was also assembled and tested by the Mapping Resource Group. Bids for the Nova were prepared similarly to those for the DJI, with the manufacturer providing estimates for the flight times and the number of photos [6].

UAS Estimated Mapping Time

To estimate how quickly the DJI can map, we must first estimate how fast it will fly. Neither the Sony A7RII nor the mount on the DJI has any forward motion compensation (FMC). The Sony does have image stabilization, but it is unknown to the authors how well this performs in flight or how it would affect the consistency of camera calibration. Hence, the flight speed and shutter speed were designed to limit motion blur. For both the LMAV and the SMAV, the altitude is designed to maximize the coverage in each image. In contrast, SUAS often collect at a smaller GSD than required due to the 400-foot altitude ceiling. Hence, the GSD is disconnected from the

PAGE 27

design requirements, and it is more useful to talk about motion blur in ground units than in pixel units. Consider that, for the specific jobs being bid, trying to limit motion blur to a tenth of a 0.064-foot GSD pixel is inefficient because it is trying to maintain the integrity of oversampled data. Instead, the maximum motion blur in ground units can be reasonably set to one tenth of the optimal GSD:

v_max * e = GSD_opt / 10    (2-2)

In Equation 2-2, v_max is the maximum speed of the SUAS and e is the exposure time of the camera. Rearranging Equation 2-2 to make v_max a function of GSD_opt and e gives:

v_max = GSD_opt / (10 * e)    (2-3)

Based on our experience with the Sony camera, the exposure time e can be set to 1/1250 of a second on most days. Thus, with the 0.3-foot design GSD, Equation 2-3 numerically simplifies to 37.5 feet per second (about 63% of the maximum speed). However, the flight plans were prepared assuming 25 feet per second because that was the maximum speed our pilot recommended. At the altitude ceiling, the GSD will be 0.064 feet, and each 7952 by 5304 pixel image would cover 509 by 339 feet.

The Nova camera is typically set to shutter priority at 1/4000 of a second. From Equation 2-2, the motion blur would be 0.013 to 0.016 feet flying between 52 and 65 feet per second, indicating that the Nova easily meets the design criteria at maximum cruising speed. From Nova documentation, the GSD at 400 feet would be 0.086 feet [3], and each 5288 by 3506 pixel image will cover 455 by 302 feet. Flight plans for each project were designed with 80% overlap and 60% to 70% sidelap. These overlaps have become common practice for UAS data collection because they make automated aerial triangulation more reliable.
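As a sanity check on the numbers above, the following minimal sketch evaluates Equations 2-2 and 2-3; the function and variable names are illustrative assumptions, not from the dissertation.

# Minimal sketch of the motion-blur design rule (Equations 2-2 and 2-3).
# Function and variable names are illustrative assumptions.

def motion_blur_ft(speed_ft_s: float, exposure_s: float) -> float:
    """Equation 2-2: ground-unit blur accumulated during one exposure."""
    return speed_ft_s * exposure_s

def max_speed_ft_s(optimal_gsd_ft: float, exposure_s: float) -> float:
    """Equation 2-3: fastest speed keeping blur under one tenth of the GSD."""
    return optimal_gsd_ft / (10.0 * exposure_s)

# DJI: a 0.3 ft design GSD and a 1/1250 s exposure allow 37.5 ft/s,
# about 63% of the 60 ft/s maximum flight speed.
print(max_speed_ft_s(0.3, 1 / 1250))                 # 37.5

# Nova: at 1/4000 s, cruising between 52 and 65 ft/s blurs 0.013-0.016 ft.
print(motion_blur_ft(52, 1 / 4000), motion_blur_ft(65, 1 / 4000))

# Image footprint at the 400 ft ceiling: 7952 x 5304 pixels at a
# 0.064 ft GSD covers roughly 509 by 339 feet.
print(7952 * 0.064, 5304 * 0.064)                    # 508.9, 339.5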

PAGE 28

Flight planning for UAS requires consideration of batteries. Using 16000 mAh batteries in the DJI provides 15- to 18-minute flight times (with a small reserve) in 75 to 80 degree Fahrenheit weather near sea level with light winds. Flight times are negatively impacted by high temperatures, low temperatures, wind, and higher elevations. For ease, a 15-minute flight time is used for design. Further, in flight planning, the time needed to climb and land must be considered. Four minutes were reserved for ascending, descending, and landing; this leaves 11 minutes of cruising time.

Managing battery swapping and charging is critical to planning sequences of flights. Some time is needed to recover the UAS and swap the batteries. DJI batteries require 5 to 7 minutes to swap; Nova batteries require less than 1 minute. On paper, the charging time for the batteries is 60 minutes, but in practice, it can be as much as 90 minutes. Batteries must also be allowed to cool down before and after charging: never charge a battery that is still warm from usage, and never use a battery that is still warm from charging [7]. In Florida, this has required 30 to 60 minutes before and after charging. The total between-flight time for a battery is thus 120 to 210 minutes.

For the DJI, planning on 15-minute flights and 12 minutes between flights (5 minutes for recovery and 5 to 7 minutes to change the battery), batteries are 5 to 8 flights away from being reused. Thus, to operate efficiently with that configuration for large projects, use at least six batteries and be able to charge at least four batteries simultaneously. For the Nova, batteries are 2 to 3 flights away from being reused. Thus, uninterrupted operation requires at least three batteries, along with the ability to charge two batteries simultaneously.
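The battery-turnaround arithmetic above condenses to a single calculation; the following minimal sketch is an illustrative assumption (the names and the ceiling-division bookkeeping are not from the dissertation).

# Minimal sketch of the battery-turnaround arithmetic in this section.
import math

def flights_until_reusable(cool_min: float, charge_min: float,
                           flight_cycle_min: float) -> int:
    """Flights that pass before a just-used battery is ready again, given
    cool-downs before and after charging and a fixed per-flight cycle."""
    turnaround = cool_min + charge_min + cool_min   # cool, charge, cool again
    return math.ceil(turnaround / flight_cycle_min)

# DJI: 27-minute flight cycle (15 min flight + 12 min recovery and swap).
print(flights_until_reusable(30, 60, 27))   # best case: 5 flights away
print(flights_until_reusable(60, 90, 27))   # worst case: 8 flights away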

PAGE 29

Consider also that manufacturers recommend charging in open areas away from any flammable materials [8], and at least one specifically forbids charging inside of vehicles [9]. Finally, flight planning that places battery replacements in spots convenient for recovery and relaunch is essential to efficiency.

The flight plan shown in Figure 2-1 is for the square farmland site. It is one by one mile square and made up of agriculture, swamps, and stands of trees. There is a road on the eastern side and a road that runs east-west through the middle. The DJI maximum cruising distance is 16500 feet (11 minutes at 25 feet per second). Thus, it captured six strips per flight. Notice that the DJI is always launched and recovered from the middle east-west road. Convenient recovery and relaunching are essential to efficiency. All eight flights were near the maximum length. The first strip covered 509 by 5230 feet in stereo images, and each successive strip newly covered 40% of that, or 230 by 2640 feet. Hence, the whole area was covered in 23 strips, or 8 flights. If there are sufficient batteries, and battery cooling, charging, replacement, and UAS recovery go smoothly, then those eight flights could be done in about 3.6 hours (27 minutes per flight). The site is 615 acres, which gives a coverage-per-hour (CPH) rate of 171 acres per hour. In all, 15.6 hours were billed for the job. This included driving, mobilization, and pre- and post-flight time on site.

The Nova flight plan looked similar to the DJI's (Figure 2-1) because the image footprints are similar. However, the Nova would require only two flights. There would be 39 strips, instead of 23, and 2422 total photos.

Figure 2-2 shows the flight plan for the second job, a two-mile route survey of a major commercial street. The maximum width of the area of interest is about 480 feet, so a single strip could have covered the area for the DJI, but the Nova would have to collect two strips. The DJI would collect 250 images in 15

PAGE 30

minutes (including 4 minutes for takeoff and landing). The Nova would collect 553 images in 18 minutes. The area to map was approximately 78 acres, so the coverage rate for this job was 390 acres per hour for the DJI and 260 for the Nova. This CPH is the highest of any of the DJI jobs because no recovery and relaunching was required.

Figure 2-1. The flight plan for the DJI at the square farmland site. The thick black lines are the boundary of the area to be mapped. The thick white lines mark where the DJI will be launched and recovered and are always in convenient locations. The thin white lines are the individual flight paths.

PAGE 31

Figure 2-2. The DJI flight plan for the route survey. The thick black lines are the boundary of the area to be mapped. The thick white bar marks where the DJI will be launched and recovered. The thin white line is the flight path.

Figure 2-3 shows the flight plan for the third example, a 375-acre quarry. The flights were planned to avoid overflying the roadway. The irregularity of the shape and the sparse distribution of suitable recovery areas made it more difficult to use each DJI flight optimally. The cruise distances for the flights range from 5416 to 16452 feet, or from 32.8% to 99.7% of the maximum 16500 feet. The pilot also has to visit scattered launch and recovery locations. If the pilot manages to maintain a rate of 27 minutes per flight under these circumstances, this job could be collected in 2.7 hours (approximately 139 acres an hour) and would consist of 1200 photos in 36 strips. The Nova could cover the area in 1.2 hours in a single flight (if permission was granted to overfly the road). There would be 39 strips and approximately 1680 photos.
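A minimal sketch of the coverage-per-hour bookkeeping used to compare these jobs follows, with the farmland and quarry examples above as checks; the function name and rounding are illustrative assumptions.

# Minimal sketch of the coverage-per-hour (CPH) bookkeeping.

def coverage_per_hour(acres: float, n_flights: int, mins_per_flight: float) -> float:
    """Acres mapped per hour of on-site flying, including swap time."""
    return acres / (n_flights * mins_per_flight / 60.0)

# Square farmland site: eight 27-minute flight cycles over 615 acres.
print(round(coverage_per_hour(615, 8, 27)))   # about 171 acres/hour

# Quarry: six 27-minute cycles (2.7 hours) over 375 acres.
print(round(coverage_per_hour(375, 6, 27)))   # about 139 acres/hour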

PAGE 32

Figure 2-3. The flight plan for the DJI at the quarry. The thick black lines are the boundary of the area to be mapped. The thick white lines mark where the DJI will be launched and/or recovered. The thin white lines are the individual flight paths.

The CPH for the DJI varied from 124 to 390 acres/hour among the nine jobs. The median CPH was 200 acres per hour. The CPH for the Nova ranged from 110 to 477 acres/hour, and the median was 310 acres per hour.

UAS Overhead Costs

UAS insurance was discussed in a February 2015 article published on dronelife.com [10]. Mr. Amato interviewed Terry Miller, the President of Unmanned Risk

PAGE 33

Management. According to the article, insurance for a Phantom covering liability up to $1 million and hull damage up to $1,500 can run as little as $1,350 a year (with a 10% deductible); rates vary depending on risk and the number of aircraft insured. Hulls are covered at a rate of 6.5% to 13% depending on risk and value. This is consistent with the $1250 insurance rate MRG was quoted by an aviation insurance company [11].

Annual operating costs are hard to predict. One of the surveyors interviewed (Volkmann, W. 2015; Volkmann, O. 2015) indicated that it is imperative to have a complete spare system. Many of the users interviewed emphasized the need for spare parts and on-site repair [12] [13] [14] [15]. The operator of a rotary-wing craft in the tropics reported the near-constant need for repairs and replacement of parts [16]. In contrast, some operators insist that costs of operations are negligible. Clearly, the cost of operation varies a great deal. Factors in the variation appear to include the model of the UAS, the amount of use, and the location.

This study aims to compare bids for aerial image collection only; the bids do not include any processing or ground control collection. There is thus no cost for GPS equipment, software, or targets. Further, the UAS bids made the vehicle a variable cost by charging the 2016 standard mileage rate for the use of the work vehicle, 57.5 cents per mile [17]. This makes the cost of the extra equipment negligible. The cost of extra equipment is left in the derivations below for the ease of readers intending to modify this analysis to match their specific circumstances.

PAGE 34

The overhead rate is the sum of the annual costs (insurance, depreciation, operating, and extra equipment costs) divided by the usage, u (the number of jobs predicted), and marked up by a minimum acceptable rate of return (MARR), r (e.g., 12%):

    overhead per job = (C_dep + C_ins + C_op + C_extra)(1 + r) / u

For the DJI, the cost of depreciation, C_dep, was the $10,000 cost of the UAS depreciated over three years, or $3,300 per year. The insurance cost, C_ins, used was $2,100 ($1,500 for liability plus 6% of the cost of the UAS). As described above, the annual operating costs, C_op, are difficult to model. It was decided to use 5% of the cost of the UAV and sensor package ($500) because 5% was on the low end of the scale that insurance companies use for offering hull insurance (it was acknowledged that this was conjecture). The total annual overhead for the DJI is $3,300 + $2,100 + $500, or $5,900.

For the Nova F7200AT, the cost of depreciation, C_dep, was $22,100 (the cost of the UAS, MP22CV sensor package, and Flare GS10 ground station) depreciated over five years, or $4,420 per year. The insurance cost, C_ins, was $2,750 ($1,500 for liability plus 6% of the cost of the UAS). As described above, the annual operating costs, C_op, are difficult to predict. It was decided to use 5% of the total cost of the UAS ($1,100) for the bids because 5% was on the low end of the scale that insurance companies use for offering hull insurance (it was acknowledged that it was largely a guess). The total annual overhead for the Nova, including extra equipment costs, is approximately $9,300.
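As a worked illustration of this overhead model, the following sketch (in Python) spreads the annual costs over the predicted usage and applies the MARR markup. Interpreting the markup as a factor of (1 + r), and the helper name overhead_per_job, are assumptions made for illustration; this is not the exact computation used in the bids.

def overhead_per_job(depreciation, insurance, operating, extra=0.0,
                     usage=24, marr=0.12):
    """Per-job overhead: annual costs spread over the usage, marked up by MARR."""
    annual = depreciation + insurance + operating + extra
    return annual * (1.0 + marr) / usage

# DJI: $10,000 system depreciated over 3 years, $2,100 insurance, 5% operating.
print(round(overhead_per_job(10000 / 3, 2100, 0.05 * 10000)))   # ~275 per job

# Nova: $22,100 system depreciated over 5 years, $2,750 insurance, 5% operating.
print(round(overhead_per_job(22100 / 5, 2750, 0.05 * 22100)))   # ~386 per job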


UAS Variable Costs

Salaries are the biggest part of the variable costs. Assuming that a drone pilot will be compensated similarly to a survey party chief, an hourly billing rate of $100/hour is reasonable. This is consistent with billable rates for one-person survey crews found online [18] [19] [20]. In addition, travel time to and from the site, lodging, and per diem are all considerations. Budgeting some time for preparing to leave from the office and re-stowing equipment, downloading data, etc. is highly recommended (a constant 2.0 hours per job is used in the bids).

The time required to prepare for flight and stow equipment after flight is usually constant. This does not include flight planning or other office work. Rather, it is assembly, uploading the flight plan, pre-flight checks, equipment diagnostics, etc. Reports on the amount of prep time required varied from 20 minutes to two hours depending on the UAV [15]. The large time variation is a function of drone assembly time, landing area preparation/identification, payload preparation, etc. The rotary wing UAS used here requires little time to prepare for flight (0.5 hours is used in the model). The Nova requires no more time to prepare for flight than a rotary wing UAS, but some landing site identification and preparation may be needed (1.0 hours was used for the Nova).

The cost of fuel for battery powered drones is small but perhaps not as negligible as might be assumed. The two 16,000mAh batteries that were used for testing the DJI were bargain brands that are now only available from overseas sources. They cost $120 each. The more reliably sourced batteries are $310-$350 from domestic suppliers. The Nova uses 6S 13,000mAh LiPo batteries that cost about $220 each. The manufacturers claim these batteries are good for 150 charge cycles.

Assuming that they are as good as claimed, every flight costs $0.87-$2.33 in battery wear (each flight consumes one charge cycle, so a battery's purchase price divided by 150 cycles gives its per-flight cost), which puts the battery cost of the specific jobs between $15 and $38. As mentioned previously, the UAS crews will pay the 2016 standard mileage rate for the use of the work vehicle, 57.5 cents per mile [17].

The General Bid Model Fit For UAS Data Collection

As the previous section illustrated in detail, there are many variables in UAS bidding. Some of these (e.g., per diems, CPH, and drive times) are nonlinear. Regardless, the general UAS bids were modeled as functions of three parameters and two independent variables (exactly as was done for the LMAV bid models). The three parameters are: a constant amount charged in each bid; the charge per acre; and the charge per one-way straight-line mile to the site from the office. The two independent variables are: the acreage of the site and the one-way straight-line distance from the office in miles. Straight-line miles were used in the modeling so that the inputs are as similar as possible for the UAS as for the LMAV and SMAV. A different parameter fit was done for the DJI and the Nova. As with the LMAV, these fits were calculated as ordinary least squares optimizations [2], as sketched below. The maximum percent errors for the DJI and Nova were 8.7% and 11.7% respectively. The raw data for an assumed usage rate of 24 is provided in Table 2-4 and Table 2-5.
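The parameter fits themselves are small ordinary least squares problems. The sketch below (Python with numpy) fits the three bid-model parameters to the DJI data from Table 2-4; the variable names M, R, and C mirror Table 2-9, and the script is an illustration rather than the original fitting code.

import numpy as np

# Professional DJI bids from Table 2-4: (one-way miles, acres, bid in dollars).
jobs = np.array([
    [20, 78, 727], [77, 22, 1057], [15, 533, 1070],
    [164, 615, 2154], [180, 210, 1578], [105, 375, 1444],
    [175, 228, 1625], [57, 953, 1715], [40, 363, 1108],
], dtype=float)
d, a, bid = jobs[:, 0], jobs[:, 1], jobs[:, 2]

# Design matrix [distance, area, 1] maps to the parameters [M, R, C].
X = np.column_stack([d, a, np.ones_like(d)])
params, *_ = np.linalg.lstsq(X, bid, rcond=None)
M, R, C = params

residuals = bid - X @ params
print(M, R, C)                          # roughly 5.3, 1.0, 538 (cf. Table 2-9)
print(np.abs(residuals / bid).max())    # maximum percent error, ~0.087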


Table 2-4. Professional bid data from the DJI assuming a usage rate of 24 jobs per year.

Name          Distance  Area  Pro Bid  Model Bid  Error  % Error
Palm Coast    20        78    $727     $719       $9     1.2%
Jacksonville  77        22    $1057    $966       $92    8.7%
Tomoka        15        533   $1070    $1128      $58    5.4%
Sarasota      164       615   $2154    $1994      $160   7.4%
Buchanan      180       210   $1578    $1691      $112   7.1%
Doswell       105       375   $1444    $1452      $8     0.5%
Rocky Point   175       228   $1625    $1681      $57    3.5%
Strasburg     57        953   $1715    $1752      $36    2.1%
Winchester    40        363   $1108    $1097      $11    1.0%

Table 2-5. Professional bid data from the Nova assuming a usage rate of 24 jobs per year.

Name          Distance  Area  Pro Bid  Model Bid  Error  % Error
Palm Coast    20        78    $897     $869       $29    3.2%
Jacksonville  77        22    $1228    $1155      $74    6.0%
Tomoka        15        533   $1020    $1040      $20    2.0%
Sarasota      164       615   $2138    $1887      $251   11.7%
Buchanan      180       210   $1671    $1797      $126   7.5%
Doswell       105       375   $1448    $1461      $13    0.9%
Rocky Point   175       228   $1678    $1778      $101   6.0%
Strasburg     57        953   $1353    $1453      $100   7.4%
Winchester    40        363   $1109    $1102      $6     0.6%

Modeling Bids For Small Manned Aerial Vehicles

Outside professional bids for the hypothetical configurations of SMAV could not be requested, so the authors prepared the SMAV bids.

Small MAV Estimated Mapping Time

The Cessna cannot fly slowly enough to avoid image motion at the required GSD; therefore, its camera must have some form of forward motion compensation (FMC). At least two metric camera manufacturers provide digital FMC: the first is Phase One, on their iXU-RS 160/180 60- and 80-megapixel medium format models [21]; the second is

Illunis, on their 29- and 47-megapixel aerial cameras [22]. With lenses and an FMC license, the Phase One cameras cost between $70,000 and $75,000 [23]. The 47-megapixel Illunis camera with a Mamiya Schneider Kreuznach 28mm lens costs $25,800. The 29-megapixel Illunis camera with a Zeiss ZF.2 28mm Distagon lens costs $10,800 [24]. Table 2-6 breaks down the specifications of the cameras. Note that no cost effective means of mechanical forward motion compensation were identified.

Table 2-6. Parameters of sensors considered for use in SMAV. These sensors have digital forward motion compensation.

Camera                Focal (mm)  HFOV  GSD at 1000 ft  Swath Width at 1000 ft  Cost
Phase One iXU-RS 180  32          80    0.16            1678                    $75,000
Phase One iXU-RS 160  32          80    0.19            1685                    $70,000
Illunis 47            28          82    0.20            1740                    $25,800
Illunis 29            28          66    0.20            1292                    $10,800

The largest site has dimensions of approximately 5,200 by 17,000 feet. At a 0.3-foot GSD, the sensor with the smallest swath width (the Illunis 29 megapixel) would require five strips to cover it (with a 60% side lap). The sensor with the widest swath width (the Phase One iXU-RS 180) would need three strips. If the aircraft slows to 65 MPH while acquiring the imagery, each line will take 3 minutes; budgeting 2 minutes to turn the plane around, upgrading the sensor from the smallest to the largest decreases the data collection time from 25 minutes to 15 minutes. This 10 minute difference is nearly irrelevant to the bid, and therefore the more expensive sensors do not make economic sense for the projects considered.
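The strip-count and timing arithmetic above can be made explicit. The following sketch (Python) scales each sensor's swath from Table 2-6 to the 0.3-foot GSD flying height and estimates the collection time. The edge-handling convention in strips_needed is an assumption, and it can differ by a strip from the counts quoted above; the function names are illustrative.

import math

def strips_needed(site_width_ft, swath_ft, side_lap=0.60):
    """Parallel flight lines required to span the site width."""
    spacing = swath_ft * (1.0 - side_lap)   # new ground covered per strip
    return max(1, math.ceil((site_width_ft - swath_ft) / spacing) + 1)

def collection_minutes(n_strips, line_len_ft, speed_mph=65.0, turn_min=2.0):
    ft_per_s = speed_mph * 5280.0 / 3600.0
    line_min = line_len_ft / ft_per_s / 60.0
    return n_strips * line_min + (n_strips - 1) * turn_min

# Swaths from Table 2-6, scaled from their 1000 ft values to a 0.3 ft GSD.
for name, swath in [("Illunis 29", 1292 * 0.3 / 0.20),
                    ("iXU-RS 180", 1678 * 0.3 / 0.16)]:
    n = strips_needed(5200, swath)
    print(name, n, round(collection_minutes(n, 17000), 1))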


An argument for the more expensive sensors could be made on metric or image quality grounds, but that is beyond the scope of this paper.

The Mosquito XE285 is capable of flying slowly enough that FMC is not required. Further, at the altitudes being considered for the specific bids (over 1000 feet), it can safely fly slowly. However, it can only do so at the cost of a lower CPH. Also, as will be discussed later, the Mosquito can be flown lower for high accuracy work. At lower altitudes a fast airspeed should be maintained so that lifesaving recovery maneuvers are possible. For these reasons, it was decided to use the same payload as the Cessna (the Illunis 29 camera with FMC). Thus, the mapping rate of the Mosquito is nearly identical to the Cessna's. Both have a planned speed of 65 MPH, but the Mosquito can turn faster, saving about 1 minute per flight line.

Small MAV Overhead Costs

Early model Cessna 180/170s can be purchased for about $50,000. The total fixed cost for the Cessna is estimated at $10,000 a year. This includes depreciation, storage, and an overhaul every three years. In addition, the cost of liability and hull insurance (C_ins) is around $6,000 a year. The depreciation of the Illunis 29 sensor package over 8 years (C_dep) is $1,350. This brings the total annual costs to $17,350. Thus, the overhead rate for the Cessna has the same form as for the UAS. If the same MARR and usage are used for the Cessna as for the UAS (12% and 24 respectively), this simplifies numerically to approximately $750 per job.

Discussion board posts and published interviews with owners were used to estimate the Mosquito's costs. Depreciation for the $52,000 XE285

(C_dep) over 8 years is $6,500. Insurance for the Mosquito (C_ins) is likely more expensive than for the Cessna; one pilot reported being quoted $8,000 a year [25]. Another pilot estimated $1,000 a year as a ceiling for storage costs (C_sto) in Palo Alto, CA [26]. John Uptigrove, the Mosquito's designer, a pilot, and an enthusiast, recommended modeling all other costs as variable based on usage [27]. This brings the total annual fixed costs to $15,500. Thus, the overhead rate for the XE285 has the same form as for the UAS. If the same MARR and usage are used for the XE285 as for the UAS (12% and 24 respectively), this simplifies numerically to approximately $790 per job.

Small MAV Variable Costs

All of the costs for the Cessna save the fuel and pilot were modeled in the overhead. The small Cessna cruises at 104 MPH and costs around $0.50 per mile, or $52 an hour. Pilot salaries are incredibly variable. One of the mapping offices is near several flight schools where there is no shortage of semi-retired pilots that can be hired for $20 an hour. Further, it is not uncommon for young pilots to offer to fly for free. This is not something a company should rely on, since it carries an extra administrative burden in scheduling and may result in delays or re-flights. However, it does put downward pressure on the salaries. The Cessna's range on a single tank of fuel is 590 miles, which is sufficient to fly all of the specific jobs.

The XE285 burns high octane unleaded gasoline, mixed 50:1 with oil, at a rate of 5-6 gallons per hour. In addition to regular maintenance, a rebuild is required every 500 hours for around $5,000. The total cost per hour to operate is approximately $30 [27]. The Mosquito pilot could be billed at the same rate as the Cessna pilot, but probably should not be; the higher insurance rates suggest this is a riskier job. As an estimate, a 30% higher charge rate ($130 an hour) was used for the Mosquito pilot.

Three of the jobs are far enough away from the airport that the Mosquito cannot get there and back on a single tank of fuel. For each of these there is an airport close enough to the flight path that stopping there adds only negligibly to the flight distance. Regardless, to cover the time of landing and refueling, 1.5 hours of extra pilot time and 0.5 hours of extra flight time to land, refuel, and get back in the air were billed to each of these three bids.

For both SMAV, the time required to prepare for flight and stow equipment after flight is fairly constant. This does not include flight planning or other office work. Rather, it is preflight checks, payload preparation, time to taxi and take off, etc. In the specific bids, 2 hours preflight and 1 hour post-flight were billed.

The General Bid Model Fits For SMAV Data Collection

SMAV bids were also modeled as functions of three parameters and two independent variables. The three parameters are: a constant amount charged in each bid; the charge per acre; and the charge per one-way straight-line mile to the site from the airport. The two independent variables are: the acreage of the site and

the one-way straight-line distance from the airport in miles. The raw data is provided in Tables 2-7 and 2-8.

Table 2-7. The professional bid data for the Cessna assuming a usage rate of 24 jobs per year.

Name          Distance  Area  Pro Bid  Model Bid  Error  % Error
Palm Coast    20        61    $1208    $1187      $21    1.8%
Jacksonville  77        25    $1316    $1321      $5     0.4%
Tomoka        15        620   $1173    $1212      $39    3.3%
Sarasota      164       630   $1601    $1570      $31    2.0%
Buchanan      180       270   $1625    $1584      $41    2.5%
Doswell       105       450   $1429    $1416      $12    0.9%
Rocky Point   175       170   $1610    $1566      $45    2.8%
Strasburg     57        1150  $1395    $1348      $47    3.4%
Winchester    40        630   $1231    $1272      $42    3.4%

Table 2-8. The professional bid data for the Mosquito assuming a usage rate of 24 jobs per year.

Name          Distance  Area  Pro Bid  Model Bid  Error  % Error
Palm Coast    20        61    $1310    $1293      $17    1.3%
Jacksonville  77        25    $1545    $1527      $18    1.2%
Tomoka        15        620   $1316    $1328      $12    0.9%
Sarasota      164       630   $1945    $1950      $5     0.3%
Buchanan      180       270   $1996    $1980      $15    0.8%
Doswell       105       450   $1692    $1686      $6     0.4%
Rocky Point   175       170   $1974    $1949      $24    1.2%
Strasburg     57        1150  $1578    $1556      $22    1.4%
Winchester    40        630   $1413    $1433      $19    1.4%

Results

Ordinary least squares optimization was used for each of the parameter fits. The parameters calculated are given in Table 2-9. The linear model worked well for two of the LMAV companies and the SMAV; for these, the maximum percent error ranged from 2.0% to 5.7%. The higher error LMAV model turned out to be largely irrelevant because it was a cost outlier. The maximum percent error for the Nova was 11.7%.

Table 2-9. The bid model parameters and some data associated with the fits. The UAS and SMAV models assumed a usage rate of 24 jobs per year. M is the charge per one-way straight-line mile from the airport or the office to the job site. R is the charge per acre. C is the constant charge per job. Max percent error and root mean squared error (RMSE) describe the quality of the model fit. Sites is how many bids were used in the model fit.

Platform  M    R    C       Max % error  RMSE  Sites
LMAV1     7.4  1.1  1262.2  17.2%        $193  9
LMAV2     4.8  0.7  1191.6  5.4%         $54   9
LMAV3     8.4  0.5  959.1   5.7%         $60   9
DJI       5.3  1.0  537.7   8.7%         $78   9
Nova      5.4  0.4  725.9   11.7%        $108  9
Cessna    2.7  0.1  735.7   4.3%         $22   9
Mosquito  4.2  0.1  818.7   2.0%         $13   9

Figure 2-4 shows the regions where there is a single model that is significantly less expensive than all the others. In each of the 20 graphs, the horizontal axis is the one-way straight-line distance to the site and ranges from 0 to 200 miles. The vertical axis is the site area in acres and ranges from 0 to 1000. At each pixel in the output graphs, the predicted bid and propagated standard error for each model were calculated. A Welch-Satterthwaite t-test was used to calculate the probability that a bid was less than each other bid. The product of the individual probabilities was then the probability that a bid was less than all other models. The gray scale shading draws contours of these probabilities. For these graphs, the LMAV are grouped and the lowest cost among the three models is used in the comparisons. The graphs show that at a usage of 12 the LMAV is the most cost effective. The Nova becomes the least expensive option for most of the domain by the time the usage reaches 24. The DJI and the Cessna also pick up some territory. The Nova slowly loses ground to the Cessna as the usage increases beyond 24.
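The probability calculation at each pixel can be sketched as follows (Python with numpy and scipy). The bid means, standard errors, and degrees of freedom below are placeholders for one hypothetical (distance, area) pixel, not values from the fits; treating the pairwise comparisons as independent mirrors the product rule described above.

import numpy as np
from scipy import stats

def prob_lowest(means, ses, dofs, idx):
    """P(bid[idx] is less than every other bid), product of pairwise Welch tests."""
    p = 1.0
    for j in range(len(means)):
        if j == idx:
            continue
        se = np.hypot(ses[idx], ses[j])
        # Welch-Satterthwaite effective degrees of freedom.
        dof = se**4 / (ses[idx]**4 / dofs[idx] + ses[j]**4 / dofs[j])
        p *= stats.t.cdf((means[j] - means[idx]) / se, dof)
    return p

means = [1450.0, 1300.0, 1350.0]   # hypothetical predicted bids at one pixel
ses = [60.0, 110.0, 80.0]          # propagated standard errors
dofs = [6, 6, 6]                   # nine bids minus three parameters each
print(prob_lowest(means, ses, dofs, idx=1))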


Figure 2-4. Regions where there is a single model that is significantly less expensive than all the others. In each of the 20 graphs, the horizontal axis is the one-way straight-line distance to the site and ranges from 0 to 200 miles. The vertical axis is the site area in acres and ranges from 0 to 1000.

Statistical significance is interesting but is ultimately of less importance than economic significance. Figure 2-5 is similar to Figure 2-4 but contours the dollars over the minimum bid. Note that for usages over 24 the differences among the DJI, Nova,

Cessna, and Mosquito are largely in the range of $0-$300, or 0%-40%. Given the magnitude of the bids, these differences are minor over much of the parameter space.

Figure 2-5. How much more a method is predicted to cost than the least expensive method, shown in dollars. In each of the 20 graphs, the horizontal axis is the one-way straight-line distance to the site and ranges from 0 to 200 miles. The vertical axis is the site area in acres and ranges from 0 to 1000.

Figure 2-6. How much more a method is predicted to cost than the least expensive method, shown in percent. In each of the 20 graphs, the horizontal axis is the one-way straight-line distance to the site and ranges from 0 to 200 miles. The vertical axis is the site area in acres and ranges from 0 to 1000.

Discussion of Platform Limitations

The bidding and modeling discussed to this point focused on very specific job requirements. This section discusses how suitable the sensors are for other work.

LMAV Limitations

LMAV are not legally allowed to fly under 1000 feet above the tallest thing in the area; this puts a floor on their GSD of approximately 0.08 feet. They also must wait for clear weather. However, they can fly for hours and collect data with much larger GSD for wide area mapping.

DJI Limitations

The DJI is capable of vertical takeoff and dynamic flight patterns. It can be flown low, so the effective floor of its GSD approaches 0, and it can be flown below clouds. The 400-foot limit on the flying height makes the ceiling of the current configuration's GSD around 0.064 feet. The tedious requirements for battery swapping, cooling, and charging make it impractical to use for areas that require more than 2 or 3 flights. Further, clients may not be pleased with pilots having to drive all over their properties for recoveries and battery replacements. Also note that the DJI specifically, and multi-rotor UAS in general, are not as stable in high winds as the Nova.

It has probably become apparent that the DJI payload was not engineered well for these specific jobs. The GSD at 400 feet is 0.064 feet, much smaller than required. The payload is also heavy (5.1 pounds). A simple alteration to the processing that could help the efficiency is binning the images. If every 3x3 block of pixels were binned into one, the GSD would be 0.24 feet. This binning would improve the signal-to-noise ratio and prevent operators from collecting unnecessary detail. Note that this reduces the 42.4-megapixel camera to a 4.7-megapixel camera that observes the same ground area. It follows, then, that if hardware changes are to be considered, the Sony A7RII could be exchanged for a lightweight small format metric camera (e.g., a Mako G419C), saving as much as 3 pounds of payload weight. This lighter payload, together with a

battery upgrade (from 16,000 mAh to 22,000 mAh), could conceivably double the flight time and the CPH. These proposed configuration changes are possibly counterintuitive, but they could make the DJI competitive for 1.0-foot contour mapping in areas from 400 to 900 acres.

Nova Limitations

Perhaps the primary concern users have with fixed wing UAS is the takeoff and landing. Based on the published accounts of their work, this fear is largely manageable [3]. One crew described landing in some of the roughest areas around, including a plateau of solid, compact rock scattered with sand, gravel, and small rocks: some of the rocks were big enough to stop a car from passing, and it took an hour to clear away the largest of them to form a safe landing zone [28]. In short, some time for landing site selection and/or preparation is necessary, but it is a manageable issue.

Like the DJI, the Nova can fly below the clouds. The Nova turns around quickly (10 seconds), but it is not as versatile as multi-rotor systems. At a 400-foot altitude, the GSD was 0.09 feet, so it is conceivable that 0.5-foot contour work could be done from the images (half of the requirement for the bids in this study). At the nominal 300-foot altitude, the GSD is 0.064 feet. However, flying at that altitude at the minimum cruising speed, there will be 0.2 pixels of motion blur. Thus, without upgrading to a camera with FMC, the floor for the GSD is around 0.07 feet. Note that this is not much smaller than the LMAV's (0.08 feet).
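The motion blur figure quoted above follows from the ground distance traveled during the exposure, expressed in GSD units. The sketch below (Python) shows the relation; the speed and shutter values are illustrative placeholders, not the Nova's specifications.

def motion_blur_pixels(speed_mph, shutter_s, gsd_ft):
    """Ground motion during the exposure divided by the ground sample distance."""
    ground_ft_per_s = speed_mph * 5280.0 / 3600.0
    return ground_ft_per_s * shutter_s / gsd_ft

# For example, 18 mph at 1/2000 s with a 0.064 ft GSD gives ~0.2 pixels of blur.
print(round(motion_blur_pixels(18, 1 / 2000, 0.064), 2))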


Cessna Limitations

The Cessna, like the LMAV, cannot fly below 1000 feet. Hence, the GSD floor is 0.2 feet. This GSD is adequate for the specific bids but more than twice the GSD floor of the LMAV or the Nova. The Cessna configured as suggested may not reliably achieve 0.5-foot mapping. Switching to a longer focal length is possible, but the field of view would be narrower and the vertical measurement resolution would suffer. To maintain a wide field of view and get a smaller GSD would require a longer focal length and a larger sensor. The upgrade would require investing an additional $15,000 to $65,000 to purchase the Illunis 47-megapixel camera or one of the Phase One models (Table 2-6). The best payload for mapping accuracy among the sensors considered may be the Phase One iXU-RS 180 with a 50mm lens. It would have a field of view of 56.4 degrees and a GSD of 0.1 feet at an altitude of 1000 feet. However, the purchase of such an expensive camera is likely unjustifiable considering the assumed usage rates.

Mosquito Limitations

Figures 2-5 and 2-6 showed that the Mosquito is never the most economical option. However, it becomes competitive as the usage increases, and it may be the most versatile. It does not have the regulatory hurdles of the Nova, and it can fly lower than the Cessna. The GSD floor is a function of how low the pilot is willing to fly, and it is a safety issue: airspeed and altitude are necessary to complete an auto-rotation landing in the event of engine failure. Practice for auto-rotations is typically done at about 70 mph [29]. At 300 feet, the GSD is 0.06 feet. It is possible to fly lower or upgrade the camera to get even better accuracy potential. The GSD ceiling is around 1.4 feet (flying at 7000 feet). There is also a lot of

unexplored potential in payload customization. The Mosquito is not limited by the size of a camera hole and could be outfitted with multiple cameras or LIDAR.

Discussion of Assumptions

Clearly, there have been assumptions made in the bidding. Some (e.g., salaries, insurance rates, etc.) were discussed above, and a few others are discussed in more detail below. Consider that many of these are the same assumptions that a company would have to make before deciding on an investment: accurate insurance bids are not available until you have a pilot and a vehicle serial number, the amount of work (usage) is unknown, annual operating costs and pilot salaries vary, etc. Others (e.g., UAS ground control collection) are treated in their own subsections below.

Assumption: All Jobs Collected Individually

It was assumed that all of the jobs would be mapped one at a time (one job per flight). While this might be typical for UAS, LMAV planners try to avoid it. Covering multiple jobs in a single flight allows LMAV costs for mobilization and flight time to be split among projects. For this reason, when possible, the operators will group multiple jobs and fly them together. To illustrate the volume saving potential for each collection method, the five quarries were bid as a group. LMAV could easily fly the five quarries in a single day. The professional bids to fly them together from three LMAV companies were $9,700, $6,900 [30], and $7,300 [31]. The first bid is $4,100 or 43% lower than the sum of the separate bids. The second bid is $4,200 or 61% lower. The third company did not bid them individually. To map the five quarries, the DJI crew would spend four days on the road. Assuming a usage rate of 24, this was bid at $4,985 (13.5 hours to drive 723 miles, 21.2 hours of data collection time, and 2.5 per diems). The grouped bid is $1,798 or 36% less

than the separate bids. The Nova crew would also spend three days on the road for $4,516 (13.5 hours to drive 723 miles, 13.7 hours of data collection, and 2.5 per diems). The grouped bid is $1,775 or 39% less. The Cessna can easily fly all the quarries in a single day for $5,000, saving $2,200 or 44%. The Mosquito would require 8 hours of flight and as much as 12 hours of pilot time, but could conceivably fly them in a single day for $5,900. This would save $2,800 or 47%.

All the bids illustrated potential savings for grouped jobs: the LMAV saved 43-61%, the Mosquito saved 47%, the Cessna saved 44%, the DJI saved 36%, and the Nova saved 39%. The savings are significant but left the order of methods from most economical to most expensive unchanged. The Nova was the least expensive option, followed by the DJI, the Cessna, the Mosquito, and finally the LMAV.

Assumption: Usage Rates

Usage may be the biggest concern for entrepreneurs considering investment in a UAV or small SMAV. It is an essential input to the cost estimation, and the conclusions are not robust to it. The UAS, and more particularly the Nova, become competitive at a usage rate of 12 jobs per year. It takes a usage rate of at least 24 to make the SMAV appealing.

Assumption: UAS Can Legally Fly These Jobs

During the bidding and modeling process, it was assumed that the UAS could legally fly these jobs as planned. In practice, some of the flights would pass over non-participating persons, and at least one site has

nearby glider traffic that must be considered [6]. It is also debatable whether line of sight could be maintained for some of the flights.

Table 2-10. The number of photos collected by the different platforms. The UAS collected approximately 60 to 90 times as many photos as the LMAV. The SMAV collected an average of six times as many photos.

Site          LMAV  DJI   Nova  SMAV
Palm Coast    12    250   553   37
Jacksonville  2     80    143   3
Tomoka        18    1800  2202  126
Sarasota      18    1800  2422  108
Buchanan      11    700   937   39
Doswell       23    1200  1683  108
Rocky Point   12    750   1017  54
Strasburg     40    2500  4125  357
Winchester    24    1130  1568  126

Assumption: Aerial Data Collection Savings Will Not Be Lost During Aerial Triangulation and Vector Extraction

The models predict that using UAS or SMAV, $0.00 to $750 can be saved depending on the usage, the size of, and the distance to the project. This savings is equivalent to 0-9.5 hours of billed office work (using an $80 per hour billing rate). The assumption, then, is that these savings will not be lost to longer aerial triangulation and vector extraction times. With automatic point matching, fast model swapping, and automatic surface extraction as standard features of mapping software, that is conceivable but undemonstrated. Consider that if the number of models still drove the time required to do mapping, then the industry would never have switched away from film cameras. That said, using the DJI resulted in approximately 64 times as many photos as the LMAV, using the Nova resulted in 90 times as many photos, and using the SMAV resulted in 6 times as many photos (Table 2-10). Mapping from automatically extracted point clouds is possible, but not as reliable. The more volatile orientations

and coverage may also tax existing project management methods. Thus, while nearly equal time may be believable, it is unproven.

Assumption: There Will Be No Cost Difference In The Ground Control Collection

The ability to collect the ground control while on site with the UAS is often cited as having the potential to reduce cost. The savings this offers is not bid or modeled because of the sparse and conflicting data available regarding how much control is needed and whether or not it is more than the LMAV requires for the same area. Consider that the actual number of GCPs used in UAS blocks varies wildly, from 4 [32] to 130 [33]. Gonçalves and Henriques [34] state that having at least nine well-distributed control points prevents drifting in large blocks. One study analyzed the difference between using eight control points (two in each corner of the rectangular block) and ten control points (two control points in the center of the block in addition to the original eight). They demonstrated that, depending on other parameters (flight mode, height above the ground, etc.), the addition of the two control points reduced the RMSE of check points by up to 35% [35]. Another study found a negligible difference when increasing the number of control points from 5 to 10 [36]. As a separate consideration, one of the surveyors interviewed emphasized the need to have 20 check points to conform to map accuracy standards [37] and provide a reliable 95% confidence interval of errors [15]. Among all this variation in the number of GCPs used, there was also a wide range of accuracies reported, ranging from 1 GSD vertically [38], to 2-3 GSD vertically [39], to 60 GSD vertically [40]. At present, there is no consensus on the issues of control requirements or expected accuracy. These are issues that will be resolved as time goes on and usage increases among consumers that demand accuracy.

For those that have dismissed small format camera blocks as inherently unstable, consider that these blocks benefit from more variant geometry and much greater overlap, and they could be flown with cross strips, varied flying heights, and even convergence. It is, therefore, conceivable that they will have stronger geometry than LMAV aerial blocks and thus be more stable and require fewer GCPs. What remains to be seen is what hardware, software, and procedure upgrades are required to achieve this.

Assumption: Customers Will Be Equally Satisfied With Small Format Camera Derived Products

It has also been assumed that the photographs, or the products derived from them, will be equally satisfying to clients. There may be great debate on this point. The users of high-end digital aerial cameras have expressed dissatisfaction with the quality of the images from low-end cameras. The general fact that the lenses, sensors, etc. are of lower quality and the images have lower signal-to-noise ratios is indisputable. There are also operational issues. If the UAS cameras are allowed to autofocus (for many, the autofocus cannot be turned off), it is common to get some blurry images. Some of the UAS flight configurations require shutter speeds so fast that the images are underexposed, or the ISO is increased to the point that it noticeably degrades the image quality. Some operators are using compression in the image capture that tends to remove the subtle qualities of surfaces and thus degrade the quality of feature matching and stereo collection. Despite all that, users unaccustomed to imagery from high-end sensors seem pleased with what they're getting from their UAS.

Conclusion

Models have been provided that allow bids to be compared for hypothetical jobs with areas from 0 to 1000 acres and one-way straight-line distances to the site from 0 to

200 miles for 1-foot contour mapping. These demonstrated that the UAS or SMAV can collect sufficient aerial data more economically than the LMAV for the entire parameter space considered, if they get enough usage. Predicted savings ranged from $150 to $450 (assuming a usage rate of 24). In the single example given, the UAS and SMAV were also more economical than the LMAV for collecting data for multiple jobs as a group.

The Nova is likely the most economical choice among all the options; it is competitive for almost the entire parameter space (for usage rates of at least 12). The DJI is consistently competitive for areas less than 500 acres. The Cessna and the Mosquito become nearly as efficient as the Nova as the usage approaches 36 (particularly for areas of hundreds of acres). The Mosquito may also be the most versatile option because it does not have the same regulatory burdens as the UAS, can fly lower than the Cessna, and has the most payload flexibility.

Whether or not these savings in the data collection phase mean that a project can be completed more inexpensively is still an open question. Largely, it hinges on whether or not 6, 64, or 90 times as many images (for the SMAV, DJI, and Nova respectively) can be processed as quickly and effectively as the LMAV images. Even so, the total savings were still modest, and not likely to be disruptive in the market. The most likely outcome is that many sensors that cost upwards of a million dollars will be replaced by sensors that cost thousands or tens of thousands. It is true that some of the workflows are unproven, and some of the methods are still crude. However, the transition appears inevitable. For existing mapping and aerial data collection companies, the low capital requirements and

increasing automation will make this a relatively smooth transition, but one for which they should be actively preparing. Those in most urgent need of altering their business models may be the manufacturers of the expensive sensors.

CHAPTER 3
THE IMPACT OF SMALL UNMANNED AERIAL VEHICLES IN THE TERRESTRIAL MAPPING MARKET

Introduction

In Chapter 2 it was established that UAS are unlikely to be disruptive in the manned aerial mapping market because the savings are too modest. However, there are examples of terrestrial mapping projects that have been completed dramatically more economically with UAS [41]. UAS use can also avoid the necessity of sending personnel into hazardous areas (e.g., rooftops, ledges, roadways, unstable soil, and steep embankments), and it enables mapping of inaccessible features such as cell phone towers and tree tops. Currently, however, they are used almost exclusively for volumetric and orthophoto production. We believe that disappointing UAS mapping accuracy reports (Table 3-1) and the limitations of Imagery-Derived Point Clouds (IDPC) have contributed to the slow acceptance of the UAS for other mapping applications.

Expected Photo Triangulation (PT) accuracy from UAS mapping projects has been difficult to quantify. This is, at least in part, because of the lack of consistency among sensors, flight patterns, processing techniques, and reporting methods. This quantification issue is further complicated by the expectation of a simple predictive formula that has bled over from manned aerial mapping. As an example, consider the empirical equation that was repeated among mapping professionals in the United States [42] [43]. The relation was simple, easy to apply, and worked well enough to be reliable. Further, this formula seems to have transitioned smoothly from film to digital cameras. It worked because the hardware and block geometries were consistent. Table 3-1 shows a representative selection of UAS accuracy reports. The

data demonstrate that there is no similarly simple relationship or formula for reported UAS mapping accuracies.

Table 3-1. Statistics from published UAS accuracy reports. The PPK column indicates whether the UAS was equipped with a survey grade GNSS used for Post Processed Kinematic measurement of image positions. The last column lists the RMSE in GSD units.

Reference  UAV    Camera           GSD    Images  GCPs  PPK  RMSE (GSD)
[44]       VTOL   Canon 550D       0.006  550     18    No   31.7
[44]       VTOL   Canon 550D       0.006  550     18    No   18.3
[45]       VTOL   Nikon D3X        0.003  585     5     No   2.7
[46]       VTOL   Nikon D800       0.012          13    No   7.2
[46]       VTOL   Nikon D800       0.012          12    No   10.1
[47]       VTOL   Canon EOS M      0.022  543     19    No   23.5
[38]       Fixed  Canon 450D       0.1    854     78    No   4.3
[48]       Fixed  NEX 5R           0.038          0     Yes  2.3
[49]       VTOL   Olympus PEN EP2  0.015  170     0     Yes  1.2
[1]        VTOL   Canon EOS Rebel  0.028  192     15    No   4.3
[1]        VTOL   Canon EOS Rebel  0.014  231     15    No   7.6
[1]        Fixed  Sony A7R         0.015  72      15    No   2.0
[1]        Fixed  Sony A7R         0.007  285     15    No   5.7
[50]              Sony NEX 5R      0.045  207     5     Yes  1.5
[51]       VTOL   Canon EOS M      0.02   386     20    No   2.9
[52]       Fixed  Sony A6000       0.045          0     Yes  1
[53]       Fixed                   0.02   748     19    No   10.4

Unfortunately, there seem to be other violated expectations that affect data quantification. Foremost among them is that flying lower sometimes resulted in less accurate data. Day et al. [54] tested the effect of reducing the altitude by half on two UAS flights. The vertical take-off and landing (VTOL) system showed a 76% increase in root mean square error (RMSE) in ground sample distance (GSD) units. The fixed wing showed a 186% increase. Similar results have been reproduced in undergraduate capstone projects at the University of Florida (Busscher, 2017). We hypothesized that flying progressively lower would have diminishing accuracy returns because the effects of motion blur, undamped vibrations, focus sensitivity, depth of field, rolling shutter distortion, and perspective effects (e.g., tree and building lean) are compounded at lower flying heights. However, with better hardware and procedures, we believed there should be some accuracy advantages to flying lower.

The second violated expectation was the reported limited efficacy of adding more Ground Control Points (GCPs). There have been numerous studies on the effect of ground control density on accuracy [35]. Gindraux et al. [55] is of note; they experimented with varying the control point density in eight different blocks that ranged in size from 1.4 to 7 square kilometers. In these blocks, the accuracy improved rapidly from 1.4m to 0.2m vertical RMSE as the control density was increased from 0 to 5 points per square kilometer. Further increasing the point density did little to improve the block accuracy. The asymptote of the accuracy was around 5 GSD vertical RMSE. Control points improved the accuracy to a point, but results were disappointing when compared to manned aerial blocks, which commonly achieve 1 GSD vertical RMSE. Figure 3-1 shows the relationship between the number of images per GCP (a

measure of control density) and the RMSE in GSD units for the reports in Table 3-1. In Figure 3-1, the density of the ground control is nearly perfectly uncorrelated with the reported accuracies. However, if the outlying low control density reports (the three furthest to the right) are excluded, there is a weak correlation that suggests that more ground control made projects more accurate. Based on the theoretical foundations of photogrammetry and experience with similar data, we expect that the actual limit of accuracy should approach 1.0 GSD and that the correlation with GCP density should be stronger.

Figure 3-1. Reported UAS accuracy plotted against ground control density. These reports indicate a weak relationship between control density and reported accuracy.

To observe the third violated expectation, consider the two reports from Tonkin et al. [56] [57]. In these two studies, the same researchers used the same UAS, the same sensor, and the same flying height. They flew over the same type of terrain (hummocky

moraine) and described their processing similarly. In the former study they reported a 23.5 GSD RMSE; they reported 2.85 GSD RMSE in the latter. This is an extreme example of how variant results can be, even in similar circumstances, and it further makes the case that hardware and methodology may be suspect.

The users that consistently reported low RMSEs were those using UAS equipped with Post-Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) receivers. The greatest RMSE reported among them was 2.3 GSD. Three of these studies used no ground control and one used five poorly distributed points. This suggests that, with high-end GNSS, UAS accuracy can be largely independent of the ground control.

In addition to difficulties obtaining good PT results, the common practice of mapping from IDPC can cause issues with accuracy. IDPC work well for open, high contrast areas without abrupt surface breaks. However, they frequently miss or distort above-ground features (e.g., powerlines, driplines, signs, and building edges), they round and distort break lines (e.g., curbs, gutters, and walls), and they provide little to no penetration through vegetation. Figure 3-2 shows examples of IDPC distortions from the experiments in the present project.

Using data collected by the engineering, surveying, and planning firm McKim & Creed, we triangulated a well-controlled test area 7 times to test our hypotheses that UAS can accurately and reliably compete with terrestrial mapping methods, that accuracy should improve at lower altitudes, and that UAS stereo pairs could be used to collect features missing or distorted in the IDPC. In addition, to demonstrate economic competitiveness, we compared UAS versus conventional survey bids for three McKim & Creed test projects.

Figure 3-2. Examples of features distorted in imagery-derived point clouds. The top row shows profiles of terrestrial laser scanner data. The bottom row shows profiles of the same features in the imagery-derived point cloud. A\E) Curb at the edge of the parking lot, B\F) rooftop exhaust pipe, C\G) rooftop cross, D\H) roof and gutter profile.

Experiment Design

The Rowlett Church test area encompassed a 4-acre lot that included a church, two outbuildings, and a parking area (Figure 3-3). The small area was chosen so that it could be flown multiple times and be mapped quickly using conventional ground surveying and terrestrial LiDAR. It also represents the type of job in which UAS can be effective for efficient mapping [58].

The UAS hardware was a DJI Inspire 2 quadcopter equipped with a Loki GNSS system and a Zenmuse X4S camera. This hardware configuration was chosen because it was economical ($12,000 for the complete system, including extra batteries and a custom hard case) and accurate enough for many mapping projects based on its specifications. The airframe provides 27 minutes of flight time, which will map approximately 80 acres per flight at a 400-foot altitude (20 acres at a 200-foot altitude). The camera is an inexpensive ($600) 20-megapixel model with a mechanical leaf shutter designed to avoid rolling shutter distortion. Rolling shutter distortion has been shown to cause significant systematic errors in photogrammetrically derived products [59]. It is

possible to correct the distortions by modeling the images as line scanners, but this adds several unknowns to the system, introducing potential instability and making the stereo compilation more specialized.

Figure 3-3. The primary accuracy test site, which includes a church and the parking area around it. The church lot is approximately 4 acres. The peaks of the buildings are approximately 20m above the parking lot.

The UAS was flown seven times. In every flight, the manual focus was set to infinity, the ISO and white balance were set to auto, the camera mode was set to shutter priority (holding the shutter speed to 1/800th of a second), and the forward and side laps were 80%. To test the consistency, flights 1, 5, 6, and 7 were planned identically, except for the date and time. Flights 2-4 varied the flying height to test its effect on the accuracy. The flight settings are summarized in Table 3-2.

64 except for the data and time. Flights 2 4 varied the flying height to test the effect on the accuracy. The flight settings are summarized in Table 3 2. Table 3 2. UAS flight settings. To test consistency, f lights 1, 5 7 are identical except for the data and time. Flights 2 4 have different flying heights. Flight Local Time Date Lines Photos GSD (CM) Altitude (fee t) 1 11:00 AM 5/23/2018 6 80 1.7 200 2 11:51 AM 5/23/2018 5 107 1 125 3 12:19 AM 5/23/2018 4 36 2.5 300 4 12:33 PM 5/23/2018 3 20 3.3 400 5 1:13 PM 5/23/2018 6 80 1.7 200 6 4:05 PM 5/23/2018 6 80 1.7 200 7 8:57 AM 5/24/2018 6 80 1.7 200 On the chu rch test site, control was established using a Trimble S5 Robotic Total Station and a Trimble R10 GPS receiver. The Trimble R10 receiver was used on two control points to establish a Texas State Plan e position and orientation. Three 180 second occupatio ns were recorded on these two control points and averaged together for their final position. In addition, one of these points was used for the GNSS base station during the UAS flights A four point traverse was then run, using the Trimble S5 Robotic Tota l Station. One of the two control points was held for position and the second was used to establish the azimuth for the initial backsight. The traverse adjusted precision was 1:50,000 horizontally and 1:25,000 vertically. A Leica C10 terrestrial laser sc anner was used from ground level and rooftop locations. Four high definition survey (HDS) targets were set up as primary control points for orienting the registered lidar point cloud to the control. Six additional HDS targets were used for cloud registra tion. Every visible HDS target was acquired twice resolution. After site scans were completed, the data was imported into Leica Cyclone

PAGE 65

for cloud registration. The clouds were registered first to each other, and then to the check points. The mean absolute error for the point cloud registration was 0.01m horizontally and 0.002m vertically. The terrestrial scan was then compared to the elevations of the points collected with the total station. The RMSE of the point cloud relative to these points was 0.003m vertically, with a total vertical error range of -0.008 to 0.003m. The vertical error mean was 0.001m.

Check points are probably the most common method of assessing PT accuracy. They are surveyed points compared to independently measured coordinates from the PT. We included check point statistics so that our results can be more easily compared to other accuracy reports. However, check point statistics are not an entirely appropriate measure of accuracy in the context of real-world mapping. Consider that the check points are generally high contrast features or targets that are carefully measured in dozens of photos during PT. The PT reports include check point statistics and allow operators to remeasure the points in a feedback loop. In contrast, UAS mapping is done from point clouds and stereo pairs, which do not benefit from as much redundancy, must include low contrast features, are not generally measured as carefully as check points, and do not have such direct accuracy feedback. Thus, we hypothesized that check points measured in stereo pairs and point clouds, in a manner more like the mapping, would be more appropriate.

In UAS blocks, there are usually dozens of potential stereo models that show any given feature. Choosing which stereo models to create and use is an area of ongoing research. An example approach is that employed by Cardinal Systems' (the author's employer) software [60], which is to allow the user to choose a range of acceptable

image overlaps (defined as a percentage of the image area). Additionally, the software gives users the option to allow or disallow cross-flight-line models (cross-flight-line models may have excellent geometry, but they result in a rotated perspective that some users find undesirable). This may still create a vast number of models, which are effectively navigated using model jumping and swapping commands. We relied on the Cardinal software's error propagation information for each potential model. For example, Figure 3-4 shows the relationship between the overlap of the stereo models and the volume of the error ellipsoid from Flight 2 of the experiment. It indicates that accuracy degrades slowly for most of the domain but accelerates rapidly as the overlap exceeds 80%. The difference in ellipsoid volume between 60% and 80% overlap models is approximately a factor of 2. Based on this information, we limited our stereo accuracy tests to models with 40%-60% overlap (a selection sketch is given at the end of this subsection).

The procedure for measuring the check points in the stereo models was designed to make each measurement as independent and unbiased as practical. Operators were driven from model to model, rather than making all the measurements for single points sequentially. This saved time in opening and closing models. More importantly, it randomized the order of measurements so that they were more independent. To save time, the operators were driven near the features they were to measure. However, to avoid bias, they were intentionally placed at a location randomly offset in all three axes by a distance of up to 20 times the expected measurement noise. Thus, they were required to reposition their measuring mark each time they observed a point.
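A minimal sketch of the overlap screening is given below (Python). Overlap is approximated for nadir images along a single strip from the baseline and footprint width; the function names are illustrative and are not Cardinal Systems' API.

def overlap_fraction(baseline_ft, footprint_ft):
    """Approximate forward overlap of two nadir images as a fraction of the image."""
    return max(0.0, 1.0 - baseline_ft / footprint_ft)

def select_models(stations_ft, footprint_ft, lo=0.40, hi=0.60):
    """Return index pairs of exposures whose approximate overlap is within [lo, hi]."""
    pairs = []
    for i in range(len(stations_ft)):
        for j in range(i + 1, len(stations_ft)):
            base = abs(stations_ft[j] - stations_ft[i])
            if lo <= overlap_fraction(base, footprint_ft) <= hi:
                pairs.append((i, j))
    return pairs

# Exposures every 50 ft with a 250 ft footprint: adjacent pairs overlap 80%,
# while pairs two or three stations apart fall in the 40-60% band kept here.
stations = [50.0 * k for k in range(8)]
print(select_models(stations, footprint_ft=250.0))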


Figure 3-4. Predicted average error ellipsoid volumes for 3D points reconstructed from stereo models, plotted against model percent overlap.

For ease, check points are typically biased toward open areas that are readily accessible, easy to map using GNSS, and easy to see from the air. For the purposes of this research, the body of points we refer to as check points fits this description and is thus comparable to many other accuracy reports (Figure 3-5). We included a second group of points that were chosen to assess the reliability of extrapolating vertically beyond the domain of the ground control, as well as to measure break lines and above-ground features (Figure 3-6). For comparison, the check and test points were measured in the IDPC as well as in stereo.

Figure 3-5. The red crosses are observations of check point locations in an image from flight 5.

Figure 3-6. The white circles are example test point locations. These points represent features that are frequently missing or distorted in imagery-derived point clouds.

Photo Triangulation Processing

The PT work began with camera calibrations. Camera calibration is commonly skipped because of the touted capabilities of self-calibration bundle adjustments. We chose instead to control this variable because the process is not difficult or time consuming, and aerial blocks (particularly those with only nadir-looking images) are prone to systematic distortions if processed with uncalibrated cameras [61] [62]. The calibration was done using a signalized target rolled out on the ground and weighted down by some pieces of rebar.

Figure 3-7. The Inspire 2 with the Zenmuse X4S, calibration picture, May 24, 2018. Courtesy of McKim & Creed.

The calibration images were collected by manually holding the UAS and moving it in circles while pointing the camera at the target (Figure 3-7). The camera settings mimicked those used in the mapping flights (manual focus at infinity, mechanical shutter enabled, and shutter priority set at 1/800 of a second). The intent was for each circle of images to create a cone of principal vectors whose apex was the center of the target. The first circle was large, close to the ground, and highly oblique. The following two circles were progressively smaller, higher, and more nadir looking. The sensor was

then flipped upside down and the process was repeated. We collected a total of 65 images. The auto-tie was done in Agisoft Photoscan and the final optimization was done in the Cardinal triangulation software. The tie points were not limited to the target, but rather extended through nearly the entire field of view, and they were measured in up to 40 images. The calibration was done in a free network adjustment because it was not reasonable to assume that the target was planar. The final bundle rejected 912 of the tie points (0.3%) and had a final RMSE of 0.09 pixels. The 99th percentile of the 2D image residual magnitudes was 0.48 pixels.

The blocks of mapping images were auto-tied in Agisoft Photoscan. The check points were measured, and the final optimization was done, in the Cardinal triangulation software. In the bundle adjustment, the GNSS-measured photo positions were weighted like control observations and no GCPs were used. There was then a post-bundle transformation to the check points to remove any GNSS or boresight related bias. The PT check point statistics are thus the residuals of the post-bundle transformation. This approach avoided concerns about the stability of the boresights and complications about deciding exactly how to model the GNSS shift and drift. It was also the fastest, the easiest to implement, the simplest to teach to personnel, and the most independent of the ground control distribution and density. There was some concern about how well this approach would scale to large blocks with multiple flights. We do not yet have extensive data on larger blocks; however, for comparison we processed a 260-acre, 695-image, 27-check-point, four-flight project of a country club in south Florida the same way (Figure 3-8).

Figure 3-8. This secondary, much larger test site was flown with the same UAS hardware and processed with the same photo triangulation procedures. The accuracy results were remarkably similar to those of church Flight 4.

Results and Discussion

The Effect of the Altitude on the Accuracy

Consider the effect of the flying height on the accuracy (Figure 3-9). The expectation of accuracy improving with lower altitudes appears to be justified. The R-squared between the PT check point vertical RMSE and the altitude was 0.79. The R-squared between the stereo check point vertical RMSE and the altitude was 0.57. The IDPC check point statistics were noisy (linear correlation 0.12). Converting the units to feet to make it analogous to the manned aerial rule of thumb, the PT check point vertical accuracy was reasonably approximated by 5 hundredths of a foot plus 2 hundredths of a foot per 100 feet of altitude. Note that this relationship applies only to this hardware configuration.

Figure 3-9. The relationship between vertical RMSE (cm) and the flight altitude for the PT, stereo, and IDPC check points.
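Expressed as a function, the rule of thumb above is simply the following (a sketch in Python; the function name is illustrative):

def predicted_vertical_rmse_ft(altitude_ft):
    """Empirical vertical RMSE estimate; valid only for the hardware tested here."""
    return 0.05 + 0.02 * (altitude_ft / 100.0)

for alt in (125, 200, 300, 400):
    print(alt, round(predicted_vertical_rmse_ft(alt), 3))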


Our hypothesis that PT check points are a biased indicator of mapping accuracy was verified by the data in Figure 3-9. The nearly parallel trend lines suggest that PT check points exaggerate accuracy relative to stereo and IDPC collection by a constant, regardless of the altitude. The stereo measurement accuracy was generally better and certainly more consistent than the IDPC (Figure 3-9). This was an unexpected result in the open flat areas where the check points were located; we believed that the IDPC would have the advantages of redundancy and smoothing (filtering) around these points.

Figure 3-10. The relationship between vertical RMSE (GSD units) and the flight altitude for the PT, stereo, and IDPC check points.

Figure 3-10 shows the relationship between the altitude and the vertical accuracy in GSD units. All three methods of accuracy verification improved relative to the GSD as the altitude increased. The improvement in the PT check points was minor. The IDPC check point improvement was the most dramatic. The minor improvement in PT check

point consistency suggests that redundancy (9-25 measurements per point in the bundle adjustment) compensates for the effects of flying lower. Redundancy compensation suggests that the sources of error can be reasonably modeled as random. We believe this is consistent with our hypothesis that motion blur, undamped vibrations, focus sensitivity, depth of field, and perspective effects (e.g., tree and building lean) can cause diminishing returns at lower altitudes.

Table 3-3. Photo triangulation and stereo check point residuals (m).

        PT Check Point RMSE (m)    Stereo Check Point RMSE (m)    IDPC RMSE
Flight  X      Y      Z            X      Y      Z                Z (m)
1       0.012  0.014  0.014        0.028  0.026  0.032            0.040
2       0.015  0.014  0.010        0.026  0.019  0.020            0.027
3       0.015  0.013  0.014        0.026  0.028  0.026            0.032
4       0.015  0.024  0.027        0.035  0.042  0.038            0.054
5       0.012  0.014  0.013        0.027  0.025  0.029            0.064
6       0.010  0.012  0.013        0.030  0.026  0.031            0.051
7       0.014  0.011  0.016        0.028  0.026  0.035            0.063

Table 3-4. Photo triangulation and stereo check point residuals in GSD units.

        PT Check Point RMSE (GSD)  Stereo Check Point RMSE (GSD)  IDPC RMSE
Flight  X    Y    Z                X    Y    Z                    Z (GSD)
1       0.7  0.8  0.8              1.6  1.5  1.9                  2.4
2       1.5  1.4  1.0              2.6  1.9  2.0                  2.7
3       0.6  0.5  0.6              1.0  1.1  1.0                  1.3
4       0.5  0.7  0.8              1.1  1.3  1.2                  1.6
5       0.7  0.8  0.8              1.6  1.5  1.7                  3.7
6       0.6  0.7  0.8              1.8  1.6  1.8                  3.0
7       0.8  0.6  1.0              1.7  1.5  2.0                  3.7

UAS Mapping Consistency

The check point accuracy results (in meters) for the seven UAS flights are given in Table 3-3. The same data in GSD units is given in Table 3-4. The PT check point horizontal RMSE errors for the four identical flights (1, 5, 6, 7) ranged from 0.6 to 0.8 GSD

(0.010 to 0.014m). The vertical RMSE for the same flights ranged from 0.8 to 1.0 GSD (0.013 to 0.016m). The horizontal stereo check point RMSE ranged from 1.5 to 1.8 GSD (0.025 to 0.030m). The stereo vertical RMSE ranged from 1.7 to 2.0 GSD (0.029 to 0.035m). There were no horizontal IDPC errors because the check points were draped vertically to the surface. The IDPC vertical RMSE errors ranged from 2.4 to 3.7 GSD (0.040 to 0.064m). The random (or time related) variation in the accuracy ranged from 0.2 to 0.3 GSD for the PT and stereo check points. We believe this demonstrates that UAS mapping can be consistent. The IDPC were less consistent; their vertical RMSEs varied by 1.3 GSD.

The Effect of GCP Quantity and Distribution

As stated, the Church blocks were processed to make them as independent of the ground control as possible. In these tests, we used 23 check points to compute the seven-parameter transformation to the state plane system; this was for redundancy and to provide enough data for meaningful statistics. During some experimentation, we noted that choosing different sets of four well-distributed check points could increase the vertical RMSE by 100% in some flights, compared to using all 23 points. This variability was due to the presence of noise in the PT points and ground control, and it indicated insufficient redundancy. If there are too few control points, or they are poorly distributed, the seven-parameter transformation is unstable. Quantifying "too few" control points is a project specific question because it depends on the locations of the points and how well they are measured. However, care should be taken to place the control so that the points will be measured in as many photos as practical and cover the edges of the working area.
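The seven-parameter transformation itself can be computed in closed form. The sketch below (Python with numpy) uses the Umeyama solution for the 3D similarity (Helmert) transformation of scale, rotation, and translation, and then reports per-axis residual RMSE. It is an illustration of the technique, not the authors' implementation, and the simulated coordinates are placeholders.

import numpy as np

def similarity_transform(src, dst):
    """Least squares fit of (s, R, t) such that dst ~ s * src @ R.T + t (Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / n)       # SVD of the cross-covariance
    d = np.ones(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        d[2] = -1.0                             # guard against a reflection
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() * n / (A ** 2).sum()      # scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t

# Simulated free-network coordinates and control: a small scale change,
# a translation, and 1 cm of noise on 23 points. With only a handful of
# poorly distributed points, the recovered parameters swing noticeably,
# which is the instability described above.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, (23, 3))
ctrl = 1.0005 * pts + np.array([10.0, 20.0, 5.0]) + rng.normal(0.0, 0.01, (23, 3))
s, R, t = similarity_transform(pts, ctrl)
res = ctrl - (s * pts @ R.T + t)
print(s, np.sqrt((res ** 2).mean(axis=0)))      # scale and per-axis RMSE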


If circumstances prevent the collection of well distributed control points, reasonable results can be obtained with a three parameter translation. For example, using a single random point to calculate the translation for Church Flight 1 degraded the check point accuracies by no more than 160% (typically 18-100%). Using four random points to calculate the translation reduced the degradation to typically 14-50% and no more than 110%.

PT Procedures Tested on a Larger Block of Images

The same PT procedures were used on the 260 acre country club block. It was flown at the same altitude as Church Flight 4 (400 feet). However, the blocks were otherwise dissimilar. The country club was 75 times the area and had 35 times the number of images. The country club required four flights with two different flight directions (north-south and east-west) and included 27 control points used for the seven parameter transformation of the model to the check points; Church Flight 4 required only one flight. The country club was composed almost entirely of soft features (e.g., grass, brush, trees, and water); Church Flight 4 was almost entirely hard features (e.g., cement, asphalt, and buildings). The country club had 26 images per control point, and Church Flight 4 had one image per control point. Despite the differences, the accuracy results were remarkably similar. The country club PT check point vertical RMSE was 0.90 GSD and horizontal RMSE was 0.65 GSD (27 check points). Church Flight 4 PT check point vertical RMSE was 0.81 GSD and horizontal was 0.72 GSD.

UAS Stereo Compilation Accuracy of Features Missing From IDPC

A separate group of 18 test points was selected from the terrestrial laser scans of the Church to demonstrate that stereo compilation could fill in above ground features that are frequently missing or distorted in IDPC. The points can be broadly categorized
as tops of poles (four points), building edges and peaks (five points), rooftop utilities (five points), curb and gutter (three points), and a handrail (one point). Individual point descriptions and IDPC vertical errors are given in Table 3-5. The IDPC was created in Agisoft Photoscan and worked well in the open areas (as was shown in the check point residuals). However, as can be seen from the residuals, most of the test features were distorted or absent. Only point one, the curb flow line, was consistently well measured. We acknowledge that different software or settings may perform better, but we believe we have shown that IDPC are generally ill suited for these types of features. Figure 3-2 has visual examples of the distortion.

Table 3-5. Accuracy of above ground test points in imagery derived point clouds (m).

Pt  Description          Flight 1  Flight 2  Flight 3  Flight 4  Flight 5  Flight 6  Flight 7
1   Flow line of curb    0.02      0.04      0.05      0.02      0.01      0.01      0.04
2   Top face of curb     0.10      0.06      0.08      0.09      0.13      0.13      0.15
3   Top back of curb     0.09      0.05      0.08      0.08      0.11      0.11      0.12
4   Top of cross         4.66      6.61      5.56      6.46      6.57      6.68      4.23
5   Top of cross         10.47     11.31     6.66      12.30     8.11      12.56     12.60
6   Overhead wire        7.48      7.47      7.46      7.48      7.54      7.51      7.52
7   Top of utility pole  7.77      7.77      7.71      7.70      7.82      7.80      7.78
8   Building corner      3.26      3.24      2.37      2.42      3.27      0.94      3.28
9   Building drip line   7.78      1.45      2.78      0.23      1.80      5.34      5.04
10  Building corner      5.68      7.82      7.66      3.91      7.57      4.44      7.90
11  Building corner      9.07      10.76     3.54      1.01      10.83     3.60      10.67
12  Building peak        1.93      1.89      1.90      1.88      1.94      1.94      1.93
13  Roof top conduit     0.07      0.06      0.01      0.01      0.11      0.12      0.08
14  Roof top AC          1.60      0.40      1.13      0.82      1.61      1.43      1.64
15  Roof top AC          0.44      1.09      0.63      0.35      1.06      0.20      0.61
16  Roof top pipe        0.15      0.12      0.13      0.15      0.19      0.16      0.14
17  Roof top conduit     0.22      0.26      0.23      0.47      0.22      0.24      0.18
18  Corner of handrail   0.95      0.90      1.01      0.98      0.99      0.86      0.94

The same above ground features were measured in stereo models. Each point was measured in 3-15 stereo models. The median residual vector length for each point is given in Table 3-6. Point 12 (a building peak)
was consistently the worst and averaged 11 cm in error. That was almost twice as much as any other point, and we believe this was due to outlying identification bias between the terrestrial laser scan and the imagery. Note that the tops of the crosses and the utility pole (points 4, 5, and 7) were consistently within about 3 cm. These points were 10-20 m outside the vertical range of the ground control (as much as 50% of the altitude). This indicates a stable vertical scale and the ability to safely extrapolate from ground based control to building top data collection. The altitude did not seem to significantly affect the accuracy of these points. We believe this is because the identification bias was hiding the impact. The same residual data in GSD units is given in Table 3-7. Note that the test point residuals in the 400 foot altitude flight (Flight 4) never exceeded 2.3 GSD. The RMSE was 1.3 GSD. This accuracy relative to the GSD is comparable to the best manned aerial systems.

Table 3-6. Accuracy of test points measured in stereo models (m).

Pt  Description          Flight 1  Flight 2  Flight 3  Flight 4  Flight 5  Flight 6  Flight 7
1   Flow line of curb    0.039     0.020     0.068     0.044     0.024     0.031     0.033
2   Top face of curb     0.026     0.033     0.024     0.024     0.024     0.016     0.014
3   Top back of curb     0.020     0.026     0.012     0.020     0.028     0.012     0.016
4   Top of cross         0.059     0.000     0.062     0.062     0.046     0.047     0.059
5   Top of cross         0.040     0.044     0.042     0.031     0.026     0.032     0.036
6   Overhead wire        0.045     0.064     0.073     0.011     0.071     0.073     0.075
7   Top of utility pole  0.027     0.038     0.042     0.031     0.031     0.034     0.032
8   Building corner      0.061     0.021     0.044     0.055     0.049     0.045     0.064
9   Building drip line   0.046     0.048     0.011     0.035     0.033     0.031     0.034
10  Building corner      0.073     0.095     0.058     0.056     0.083     0.065     0.088
11  Building corner      0.024     0.028     0.053     0.048     0.033     0.036     0.036
12  Building peak        0.105     0.077     0.135     0.075     0.130     0.111     0.138
13  Roof top conduit     0.043     0.027     0.034     0.036     0.031     0.024     0.036
14  Roof top AC          0.026     0.026     0.047     0.034     0.027     0.023     0.027
15  Roof top AC          0.032     0.014     0.028     0.025     0.018     0.017     0.027
16  Roof top pipe        0.044     0.076     0.092     0.034     0.093     0.062     0.107
17  Roof top conduit     0.032     0.035     0.036     0.039     0.044     0.023     0.041
18  Corner of handrail   0.048     0.072     0.021     0.039     0.095     0.000     0.097


Table 3-7. Accuracy of test points measured in stereo models (GSD).

Pt  Description          Flight 1  Flight 2  Flight 3  Flight 4  Flight 5  Flight 6  Flight 7
1   Flow line of curb    2.3       2.0       2.7       1.3       1.4       1.8       1.9
2   Top face of curb     1.5       3.3       1.0       0.7       1.4       0.9       0.8
3   Top back of curb     1.2       2.6       0.5       0.6       1.6       0.7       0.9
4   Top of cross         3.4       0.0       2.5       1.9       2.7       2.8       3.5
5   Top of cross         2.4       4.4       1.7       1.0       1.5       1.9       2.1
6   Overhead wire        2.7       6.4       2.9       0.3       4.2       4.3       4.4
7   Top of utility pole  1.6       3.8       1.7       0.9       1.8       2.0       1.9
8   Building corner      3.6       2.1       1.7       1.7       2.9       2.7       3.7
9   Building drip line   2.7       4.8       0.4       1.0       1.9       1.8       2.0
10  Building corner      4.3       9.5       2.3       1.7       4.9       3.8       5.2
11  Building corner      1.4       2.8       2.1       1.5       1.9       2.1       2.1
12  Building peak        6.2       7.7       5.4       2.3       7.7       6.5       8.1
13  Roof top conduit     2.5       2.7       1.4       1.1       1.8       1.4       2.1
14  Roof top AC          1.5       2.6       1.9       1.0       1.6       1.3       1.6
15  Roof top AC          1.9       1.4       1.1       0.7       1.1       1.0       1.6
16  Roof top pipe        2.6       7.6       3.7       1.0       5.5       3.7       6.3
17  Roof top conduit     1.9       3.5       1.4       1.2       2.6       1.3       2.4
18  Corner of handrail   2.8       7.2       0.8       1.2       5.6       0.0       5.7

UAS Economic Viability

Companies will not invest in new hardware or training if there is not sufficient economic motivation to do so. To demonstrate the economic advantage of UAS mapping, three projects were selected for UAS stereographic compilation compared to conventional survey. The projects consisted of a rooftop survey for a solar panel installation, a 260 acre golf course requiring a 50 scale plan, and the Rowlett church test site for which we collected a 50 scale plan. The savings in time and money of UAS versus conventional survey are summarized in Table 3-8.

Table 3-8. Time and cost savings from using UAS stereo compilation compared to terrestrial surveying.

Project      Survey Total Days  UAS Total Days  % Savings (Time)  % Savings (Cost)
Roof Tops    12                 7               58%               41%
Golf Course  30                 15              50%               75%
Church       3                  2               33%               58%


Rooftop Survey for Solar Panel Installation

In the rooftop example, we collected UAS data on three grocery store rooftops that ranged from 3 to 7 acres in size. All three sites were collected in the same day. All rooftop features were extracted in stereo, including gas lines, roof vents, HVAC units, skylights, and electrical panels. Flight to delivery took 7 days compared to 12 days using conventional survey and realized a 41% cost savings and a 58% time savings. In addition, because some of the roofs did not have parapet walls, a lift and harnesses would have been required for much of the work if using conventional survey. Using the UAS, it was not necessary to put personnel near the edge of the roof.

260 Acre Golf Course

In the golf course example, a full topographic/planimetric survey was required for the 260 acre site. It was estimated that it would take 3 field crews 30 days to collect all the data conventionally. This, in turn, would result in significant downtime for the course. We flew the site with UAS in one day. 95% of the course was mapped using a combination of stereo compilation for all vector features and IDPC for mass points in open areas. Some heavy tree lines required conventional survey, but many semi obscured locations were mapped via stereo compilation by selecting suitable stereo models. The site was mapped in 15 days by a single compiler at a cost savings of 75% and a time savings of 50%.

Rowlett Church Test Site

While the Rowlett Church test site was not an active project, we compiled the site to study the level of effort required for UAS versus conventional survey. We conducted 22 flights in two days. Had this been a typical project, we estimated that collecting and mapping using UAS would have taken half a day of field time and a day and a half of
office work. Further, it was estimated that three days would have been required for conventional survey including office time, resulting in a 33% time savings and a 58% cost savings.

Conclusion

We have shown that, with specific hardware and processing, UAS mapping from stereo models is consistent and as accurate as 0.021 m RMSE (at a 125 foot altitude). The RMSE of data collection from the imagery derived point clouds in open areas was 0.027 m at the 125 foot altitude but was less consistent from flight to flight. This is accurate enough for many applications typical of terrestrial surveying. Accuracy is improved by flying lower, but there are diminishing returns. We believe this is a result of apparently random errors we attributed primarily to motion blur and undamped vibrations, which are more of an issue at lower altitudes. Another potential contributing factor is image compression. We also demonstrated that a well calibrated camera can be used to accurately measure features at least 50% of the altitude outside the vertical domain of the ground control. This empowers surveyors to measure hazardous and inaccessible areas like rooftops, trees, towers, and steep inclines safely and economically. We showed that data collection from stereo image pairs can be accurately and economically used to fill in features missed or distorted in IDPC. In our experiments, these features included the tops of the crosses on the church, overhead wire attachment points, tops of power poles, and building drip lines. These points were generally measured within 4 cm (three axis RMSE), with a few exceptions. The 400 foot altitude flight yielded errors no greater than 2.3 GSD.


Our PT procedures demonstrated that, for our UAS equipped with high end GNSS, accuracy depended little on ground control. However, we still recommend 20 well measured points around the perimeter of a project to verify accuracy and remove any GNSS bias.

Future Work

The relative accuracy achieved at the 400 foot altitude was nearly the limit of what was expected (maximum errors around 2 GSD). Hence, our primary interest is in improving the marginal returns from flying lower. Unfortunately, we believe this is going to require better (more expensive) hardware. For instance, we would like to test cameras with digital forward motion compensation and better vibration dampening. There is also general suspicion among the people who worked with the imagery that JPEG compression hurts the accuracy. However, DJI does not currently allow saving images in raw or uncompressed formats when using Ground Station Pro. Thus, testing this will at least require different software. Regarding data processing, we are interested in testing the improvement that can be achieved using more flexible camera calibration models and different PT procedures. However, we do not believe any improvement from these experiments will be highly correlated with altitude and, therefore, expect it to be small.


CHAPTER 4
THE INFLUENCE OF MISSION PARAMETERS ON UNMANNED AERIAL SYSTEMS MAPPING ACCURACY

Introduction

In Chapter 3 it was established that specific UAS hardware and processing methods can compete effectively with terrestrial mapping methods. Chapter 4 covers the effects of additional mission parameters and hardware variations on mapping accuracy.

The influences of some mission parameters on 3D reconstruction accuracy from Unmanned Aerial System (UAS) imagery have been previously studied. The analysis of the effect of Ground Control Point (GCP) density and distribution has been the most popular. Most basically, it has been reported that using some GCPs is better than none [32]. It was further shown that 5 GCPs are better than 4 [35], 9 GCPs are better than 8 [57], and 20 GCPs are better than 15 [58]. One recommendation following the more-is-better trend was to distribute GCPs throughout the focus area with a spacing of one fifth to one tenth of the flying height [59]. While the statement may be accurate, it suggests placing points so that dozens are visible in each photograph, which is not a practical approach. While agreeing that more is better, [53] reported that increasing the GCP density beyond 5 points per square kilometer had little marginal benefit. It has been recommended to distribute GCPs evenly [59], near the limits of the mapping [35], and based on a universal kriging model [60]. It may also be advantageous to distribute the GCPs in three dimensions and near discontinuities in the terrain [61].

The accuracy impact of the overlap percentages of nadir looking images has also been studied. These overlaps are typically referred to as forward overlap (or simply
overlap) and sidelap. Forward overlap is the percentage by which two consecutive images overlap, calculated as a linear ratio (not a ratio of areas). Sidelap is the percentage by which two adjacent image strips overlap (again, calculated as a linear ratio). In one test, 80% overlap and 50% sidelap were more accurate than 70% and 40%. The differences were minor at higher altitudes, but significant at lower altitudes [32]. In another case study, increasing the overlap from 40% to 60% improved the accuracy by about 58%; further increasing the overlap to 80% improved accuracy by about 18% [62]. 95% overlap has been suggested as the optimum balance between accuracy and processing time [63].

For nadir looking images, the Ground Sample Distance (GSD) is a function of the camera focal length, pixel size, and altitude. The expectation that flying lower, and thus getting a smaller GSD, improves accuracy has been verified by several studies that were concerned only with horizontal accuracies [35] [32]. The relationship may not be as straightforward in three dimensions [1].

Users of UAS equipped with high end PPK GNSS have consistently reported good results [48, 49, 52]. However, there are no previous published controlled comparisons. Other researchers at the University of Florida are working on this issue as well.

Simulations have suggested that oblique images should be included to make blocks of mapping images stable, particularly if using an uncalibrated camera [56] [70]. At least two case studies have failed to demonstrate any significant advantage from using oblique images [45] [64]. In contrast, [70] showed a 50% reduction in vertical uncertainty that was fairly consistent with his simulated predictions.
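For reference, the nadir GSD relation noted above can be computed directly from the pixel pitch, focal length, and flying height by similar triangles. A minimal sketch; the parameter values are illustrative assumptions, not specifications from the cited studies:

    def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
        """Nadir GSD by similar triangles: the ground footprint of one pixel."""
        return pixel_size_m * altitude_m / focal_length_m

    # Example: a 2.4 micrometer pixel behind an 8.8 mm lens at 61 m (200 ft)
    print(f"{ground_sample_distance(2.4e-6, 8.8e-3, 61.0) * 100:.1f} cm")  # ~1.7 cm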


Finally, rolling camera shutters (mechanical curtain shutters or line by line digital shutters) have been shown to cause significant distortions if they are not explicitly modeled [54]. This is a result of the rolling shutter causing each line of pixels to be exposed at a slightly different time, effectively turning the camera into a line scanner.

This study is concerned with the accuracy impact of some previously untreated mission parameters: airframe speed, camera shutter speed, camera shutter type (leaf or rolling curtain), PPK GNSS, and convergence. The effects of flying speed and shutter speed are of interest because they affect how much area can be covered in a single flight. The intent of testing the shutter mode is to determine if cameras with leaf shutters can be reasonably modeled as frame sensors. PPK GNSS users have reported some remarkable results; here we provide a controlled comparison by processing the same data with and without the PPK. Finally, we revisit the subject of convergent images.

Experiment Design

The test site was a three acre lot near Dallas, Texas. The lot contains a church and a large parking area (Figure 4-1). A small area was chosen that could be flown many times and quickly mapped using a robotic total station. The check points were collected with a Trimble S5 Robotic Total Station. They were all natural targets created by paint in the parking lot. Examples of check point locations are shown in Figure 4-2. The check points were collected as side shots from a four point traverse that was run around the church. The adjusted horizontal traverse precision was 1:50,000 and the vertical precision was 1:25,000. We used a Trimble R10 GNSS receiver on two of the traverse points to establish the Texas State Plane
coordinates. Three 180 epoch GNSS observations were recorded on each of these two traverse points.

Figure 4-1. Red crosses indicate example check point locations.

Figure 4-2. Mosaic of the test site in Dallas, TX: a five acre lot with several buildings and a large parking area.


The UAS airframe was a DJI Inspire 2 quadcopter equipped with a Loki GNSS system and a Zenmuse X4S camera. This airframe was economical ($12,000 for the system including extra batteries and a custom case) and accurate. It was capable of mapping 80 acres per flight at a 400 foot altitude (20 acres at a 200 foot altitude). The camera is an inexpensive ($600), 20 megapixel model with a mechanical leaf shutter designed to eliminate distortion from the rolling shutter. A DJI Phantom 4 Pro UAS (P4P) was also flown for comparison. The P4P camera had the same complementary metal oxide semiconductor (CMOS) sensor, same focal length, same ISO range, same aperture range, and same mechanical shutter as the Zenmuse X4S. This is to say, they looked the same on paper, but their manufacturing and housing materials were clearly different. The P4P camera housing and mount is plastic, while the Zenmuse X4S is aluminum. The motivation for the different materials was likely cost versus physical robustness. The more robust aluminum may imply a more stable geometry and calibration, but that would probably be an unintended consequence.

The Inspire 2 was flown 20 times and the P4P was flown twice. The flight parameters are shown in Table 4-1. The flights were grouped to test the effect of a single flight parameter; each group had two flights with identical parameters to estimate random variation. Flights 1-5 tested the effect of the altitude on the accuracy. The photo triangulated (PT) check point accuracy results from these flights were reported in Chapter 3. Flights 6-10 had different flying speeds because it was hypothesized that slower flights would be more accurate due to reduced motion blur.


Flights 11-15 had different shutter speeds; it was hypothesized that faster shutter speeds would be more accurate due to reduced motion blur. Flights 16-18 alternated between using the mechanical leaf shutter and the curtain shutter; it was hypothesized that the mechanical leaf shutter would improve accuracy. Flights 19 and 20 were flown with the camera 15 degrees off nadir. However, individually the off nadir flights (19 and 20) were not convergent; to save energy the airframe does not yaw on turns, hence the look vectors are all still essentially parallel. To create two blocks with convergent geometry, we mixed alternating strips from flights 19 and 20. It was hypothesized that this would make the PT check point observations more accurate because they are in open areas where occlusions will not be an issue. The P4P flights (21 and 22) were hypothesized to be less accurate based on prior experience with the sensor and the cheaper camera housing construction. These two blocks differed only in the flight direction.

The following flight parameters were constant in every UAS flight: manual focus set to infinity, ISO and white balance set to auto, camera mode set to shutter priority, and forward and side laps of 80%. The flights were primarily designed to assess the accuracy impact of altitude, flying speed, shutter speed, and shutter mode. In addition, there were four pairs of consecutive flights that differed only by flight direction (flights 5 and 6, 10 and 11, 15 and 16, and 21 and 22). There were also two groups of four flights that were identically parameterized except for the time of day (flights 1, 5, 11, and 15, and flights 6, 10, 16, and 18). These last two groups were used to describe the random variation.
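The predicted forward motion blur invoked in these hypotheses is simply the ground distance flown during the exposure, expressed in pixels. A minimal sketch (the helper name is ours); the sample values reproduce the 0.15 pixel figure that appears later in Table 4-2:

    def forward_motion_blur_px(speed_m_s, exposure_s, gsd_m):
        """Pixels of forward smear: distance traveled during the exposure / GSD."""
        return speed_m_s * exposure_s / gsd_m

    # 2.1 m/s at a 1/800 s shutter with a 1.7 cm GSD -> about 0.15 pixels
    print(round(forward_motion_blur_px(2.1, 1 / 800, 0.017), 2))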


We emphasize that these experiments and results are probably relevant only to the specific hardware tested. They may also only be relevant to the specific procedures employed. We acknowledge that there is also insufficient redundancy for good estimates of the random variation and hence weak grounds for establishing statistical significance. We were not, therefore, looking for a detailed model of the accuracy impact of each parameter, but rather some practical indication of how important they are.

Photo Triangulation Processing

It has been reported that aerial blocks (particularly those with only nadir looking images) are prone to systematic distortions if processed with uncalibrated cameras [55] [56]. Thus, to be conscientious, we began our PT work with camera calibrations. The process was not difficult nor time consuming and gave some indication of the metric capability of the sensors. The calibration was done using a signalized target rolled out on the ground and weighted down by pieces of rebar.

Figure 4-3. Inspire 2 with Zenmuse X4S calibration picture, May 24, 2018. Courtesy of McKim & Creed.


Table 4-1. UAS flight settings. Bold text was used to indicate groups that were designed to test the impact of a single flight parameter. There were four pairs of consecutive flights that differed only by flight direction (flights 5 and 6, 10 and 11, 15 and 16, and 21 and 22). There are also two groups of four flights that are identically parameterized except for the time of day (flights 1, 5, 11, and 15, and flights 6, 10, 16, and 18).

Flight  Airframe  Start  Day  Lines  Photos  Direction  GSD (cm)  Shutter (1/s)  Speed (MPH)  Altitude (ft)  Off Nadir (deg)  Leaf Shutter
1                 11:00  1    6      80      NS         1.7       800            6.9          200            0                Y
2                 11:51  1    5      107     NS         1.0       800            4.3          125            0                Y
3                 12:19  1    4      36      NS         2.5       800            10.3         300            0                Y
4                 12:33  1    3      20      NS         3.3       800            13.8         400            0                Y
5                 13:13  1    6      80      NS         1.7       800            6.9          200            0                Y
6                 13:59  1    8      72      EW         1.7       800            6.9          200            0                Y
7                 14:18  1    7      59      EW         1.7       800            9.2          200            0                Y
8                 14:42  1    7      59      EW         1.7       800            13.8         200            0                Y
9                 15:19  1    7      59      EW         1.7       800            4.6          200            0                Y
10                15:42  1    7      59      EW         1.7       800            6.9          200            0                Y
11                16:05  1    6      80      NS         1.7       800            6.9          200            0                Y
12                16:19  1    6      80      NS         1.7       1000           6.9          200            0                Y
13                16:51  1    6      80      NS         1.7       500            6.9          200            0                Y
14                17:06  1    6      80      NS         1.7       2000           6.9          200            0                Y
15                8:57   2    6      80      NS         1.7       800            6.9          200            0                Y
16                9:12   2    7      59      EW         1.7       800            6.9          200            0                Y
17                9:30   2    7      59      EW         1.7       800            6.9          200            0                N
18                9:45   2    7      59      EW         1.7       800            6.9          200            0                Y
19                9:59   2    6      80      NS         variant   800            6.9          200            15               Y
20                10:15  2    6      80      NS         variant   800            6.9          200            15               Y
21      P4P       10:59  2    7      59      EW         1.7       800            6.9          200            0                Y
22      P4P       11:11  2    6      80      NS         1.7       800            6.9          200            0                Y


The calibration images were collected by hand holding the UAS and moving it in circles while pointing the camera at the target (Figure 4-3). The camera settings mimicked those used in the mapping flights: manual focus at infinity, mechanical leaf shutter enabled, and shutter priority set at 1/800 of a second. The intent was for each circle of images to create a cone of principal vectors whose apex was the center of the target. The first circle was large, close to the ground, and highly oblique. The following two circles were progressively smaller, higher, and more nadir looking. The three circles were repeated with the camera rotated 180 degrees. We collected a total of 65 images.

The auto tie was done in Agisoft Photoscan, and the final optimization used 304,488 tie points. The tie points were not limited to the target, but rather extended through nearly the entire field of view, and were measured in up to 40 images. The calibration was done in a free network adjustment because it was not reasonable to assume that the target was planar. The final bundle rejected 912 of the tie points (0.3%) and had a final square Root of the Mean Squared Error (RMSE) of 0.09 pixels. The 99th percentile of the 2D image residual magnitudes was 0.48 pixels.

The P4P camera was calibrated similarly. The data consisted of 71 photos and 405,772 tie points. The final bundle rejected 925 of the tie points (0.2%) and had a final RMSE of 0.07 pixels. The 99th percentile of the 2D image residual magnitudes was 0.44 pixels.

Like the calibrations, blocks of mapping images were auto tied in Agisoft Photoscan. The check points were then measured, and the final optimization was completed.


Our first approach was to try to optimally balance the weighting of the image measurements, control, and GCPs. This turned out to be tedious and difficult to quantify due to some systematic bias between the GNSS and the ground control. Correcting the bias could have been done with GNSS shift and drift parameters, through a rigorous boresight calibration, or, most simply, by holding the airborne GNSS during the bundle and performing a post PT 3D similarity transformation to the check points. This simplest approach worked well for this data and was desirable because it was the fastest, easiest to implement, simplest to teach to personnel, and the most independent of the GCPs.

Figure 4-4. Accuracies of flight groups (3 axis RMSE in meters). The four groups on the left are processed with post processed kinematic (PPK) global navigation satellite system (GNSS) data. The East West (EW) flights required an alternative camera model to be in family with the North South (NS) flights. The three groups on the right are the NS and EW flights reprocessed as free networks for comparison to the Phantom 4 Pro UAS flights.

To clarify, we describe the method as the most independent of the ground control because no GCPs were used. The check points affected only the placement of the final solution but had no effect on its shape. It also avoided concerns about the stability of
the boresights and complications about deciding exactly how to model the GNSS shift and drift. PT check point residuals from blocks processed this way were used to compare the accuracy of flights with different airframe speeds, shutter speeds, shutter modes, and flight directions. PT check points were used for this comparison because they benefited from a lot of redundancy and hence should have low noise.

The P4P was not equipped with a PPK GNSS. For these blocks we ran free network adjustments and then did post PT similarity transformations to move into the frame of the check points. To perform fair comparisons, we reprocessed eight of the Inspire 2 flights the same way.

Results and Discussion

To begin, we looked at comparing the accuracy among groups of similar flights. This was necessary to explain some of the PT processing and to establish an estimate of the random variation. There were four groups of interest: the group of four North South flights (NS; flights 1, 5, 11, and 15) and the group of four East West flights (EW; flights 6, 10, 16, and 18) with the same flight parameters, the P4P flights (flights 21 and 22), and the convergent flights (flights 19 and 20). Additionally, the NS and EW groups were processed both with and without PPK GNSS. They were processed with PPK for every comparison except against the P4P flights. Finally, the EW flights were processed with an alternative camera parameterization described in the next subsection. The accuracy data for these groups are summarized in Figure 4-4.

Comparing North South and East West Flight Groups

First, we compared the NS and EW accuracies and explain why the EW flights were processed with an alternative camera model. The 3 axis RMSE of the PT
check point residuals for NS with PPK and EW with PPK were similarly consistent (Figure 4-4). The 3 axis RMSE varied within 0.1 GSD in both cases. We used a single factor analysis of variance (ANOVA) to test if the NS flights had the same mean. The factor was the flight number and the response was the 23 check point residual vector magnitudes from each flight. This assumes that the individual point residuals can be treated as independent, which, while not strictly true, we believe is reasonable. The null hypothesis that the means were equal was not rejected (p=0.43). A similar test for the EW flights failed to reject the null hypothesis that the EW flights had the same mean (p=0.71). However, the means of the two groups were separated by 0.4 GSD, and an ANOVA test rejected the null hypothesis that the EW and NS flights all had the same mean (p=0.002). Contrary to our expectations, the evidence strongly suggested that the different flight directions resulted in different accuracy distributions.

The difference in the NS versus EW flights was unexpected. Our first hypothesis was that it was a result of the EW flights having fewer measurements of the control points. There was a group of eight (out of 23) check points that were measured in about half as many photos in the EW flights compared to the NS flights due to occlusions and image distribution differences. However, the accuracy discrepancy remained when we considered only the more consistently visible check points. To gauge independence, we verified that rotating the working coordinate systems had no effect on the accuracy. Eventually, we discovered that the EW flights could be processed using an alternative camera parameterization to get results consistent with the NS flights using the same PT procedures (third group in Figure 4-4). The alternative camera definition
was computed by performing a self calibration bundle adjustment on one of the EW flights using 23 GCPs and the PPK GPS to constrain the solution. Once we had the alternative camera, it could be used for all four of the EW flights with the previously described, largely GCP independent, PT procedures. An ANOVA using the flight as the factor and the check point PT residuals as the response failed to reject the null hypothesis that the four NS flights and the four EW flights processed with the alternative camera model had equal means (p=0.54).

Figure 4-5. A raw Zenmuse X4S image (A) has obvious barrel distortion and vignetting. The recorded JPG image (B) does not have either. The field of view has also been cropped and a white balance correction applied. Evidence suggests that the geometric corrections (distortion and cropping) may vary from image to image. Inspire 2 with Zenmuse X4S calibration picture, May 24, 2018. Courtesy of McKim & Creed.

The implication that the camera internal geometry varied significantly with flight direction was unsettling, and physically improbable. However, the facts were these: one camera calibration done independently worked consistently for all the NS flights flown at different times and on two days. Another bundle self calibration camera model worked consistently for all the EW blocks flown at different times on two days. The NS and EW flights were flown in alternation spanning the same period. Hence it was unlikely that
the difference was the result of temperature, lighting, weather, or another physical influence. We believed the explanation lay in the pre processing of the raw images. Raw images had obvious distortion and vignetting in the image corners; the JPEG images saved during the flights do not (Figure 4-5). From this we inferred that during preprocessing the corners of the raw images were stretched, to straighten the projection of linear features, and then cropped to the size of the original sensor chip. The difference between the NS and the EW flights suggested that this was not done the same way for every image. Specifically, the difference between the two camera models is predominately a difference in the principal point offsets (PPOs). Solving for complementary PPOs to equate the two camera models reduces the difference between them from an RMSE of 22.4 pixels to an RMSE of 0.36 pixels (a 98.4% reduction). This is consistent with saving slightly different subsets of the rectified images. However, we do not know why this would be influenced by the flight direction.

Assuming variant preprocessing is the correct explanation, the best way to avoid this complication is to work with the raw images. Currently, the DJI flight application does not allow users to save raw images. Verifying this theory will thus require finding or writing alternative flight control software. For the remainder of this analysis, the NS flights were processed using the camera parameters from the pre calibration, and the EW flights using the alternative calibration. On a practical note, PT with this sensor should include a limited self calibration that solves only for PPOs.

Comparing Nadir Looking and Convergent Flight Groups

The convergent flight group appeared to be slightly less accurate than the nadir looking NS and EW groups (Figure 4-4). However, a two sample T test (two tailed,
heteroscedastic) did not show the difference between the residual vector magnitudes to be significant (p=0.19). This was consistent with other tests [45] [64] but contrary to theoretical predictions [56].

Comparing the Zenmuse X4S and Phantom 4 Pro UAS Sensors

Despite some past struggles with the P4P system, it appeared to perform in family with the NS and EW free network groups (rightmost three groups in Figure 4-4). We verified this appearance with a two sample T test comparing the residual vector magnitudes (p=0.90). We believe PT processing procedures are the most likely explanation for past struggles with the P4P sensor, but other experiments are required to verify this. We also note that the NS and EW P4P flights had similar accuracy results; a two sample T test comparing their means resulted in p=0.74. This may be coincidental, or it may be that the P4P preprocessing is more consistent.

Comparing Blocks Constrained With PPK GNSS and Without It

Consider the difference between the PPK constrained groups and the free network groups (Figure 4-4). Among the NS and EW groups, for each check point in each flight there was one residual vector magnitude from the PPK constrained PT and one from the free network. The difference between them was modeled as a paired T test. The sample size was 184, and the resulting p value was far below any conventional significance threshold. There was clearly an advantage to including the PPK. The magnitude of the difference in these cases was small (approximately 0.01 m); however, these blocks were small and full of well defined man made features for image matching. Larger blocks with vegetation will likely be more variant without PPK.
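The hypothesis tests used throughout this section (single factor ANOVA, paired T tests, and two sample heteroscedastic T tests) are standard; a minimal sketch with scipy, using synthetic residual magnitudes in place of the study data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic residual vector magnitudes (m): 23 check points per flight
    flights = [np.abs(rng.normal(0.02, 0.005, 23)) for _ in range(4)]

    # Single factor ANOVA: flight as the factor, residual magnitude as response
    _, p_anova = stats.f_oneway(*flights)

    # Paired T test: PPK-constrained vs free-network residuals, point by point
    ppk = np.abs(rng.normal(0.020, 0.005, 184))
    freenet = ppk + np.abs(rng.normal(0.010, 0.003, 184))
    _, p_paired = stats.ttest_rel(ppk, freenet)

    # Two sample, two tailed, heteroscedastic (Welch) T test between groups
    _, p_welch = stats.ttest_ind(flights[0], flights[1], equal_var=False)
    print(p_anova, p_paired, p_welch)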


Influence of Airframe Speed on PT Accuracy

The changing airframe speed in flights 6-10 varied the predicted forward motion blur from 0.15 pixels to 0.45 pixels. Flights 6 and 10 were flown at the same speed to provide some indication of random variation. Accuracy statistics for these five flights are given in Table 4-2. Figure 4-6 shows the relationship between the flight speed and the RMSE; the R squared was 0.05. The total spread in the RMSE was 0.003 m; this is a tenth of the spread among the four equivalently parameterized EW flights. The flight speed was insignificant compared to random variation (Table 4-2).

Figure 4-6. The relationship between the flight speed (m/s) and the 3 axis RMSE (m).

The lack of correlation between the airframe speed and the check point accuracy led us to question whether image sharpness varied measurably among the flights. Image sharpness depends largely on the image content. Fortunately, the five flights were so similar that we had a one to one image correspondence among them. We used the variance of a Laplacian image convolution to quantify the sharpness as a single continuous variable [65]. In our first ANOVA analysis we used the flight as the factor and the sharpness as the response. It was highly improbable that the flights had
equally sharp images (p=0.008); however, if the flight speed was made the factor, then the null hypothesis that the groups have the same sharpness could not be rejected (p=0.36). Hence, the sharpness varied among the flights, but it was not caused by the speed. There was somewhat more correlation between the average image sharpness and the RMSE, though the relationship was still weak.

Table 4-2. Accuracy statistics for flights that varied the airframe speed. Higher sharpness values indicate sharper images.

Flight  Speed (m/s)  Forward Motion Blur (pixels)  Checkpoint RMSE (m)  Average Image Sharpness
9       2.1          0.15                          0.021                36.5
6       3.1          0.22                          0.023                34.1
10      3.1          0.22                          0.021                37.1
7       4.1          0.30                          0.022                34.9
8       6.2          0.45                          0.022                35.9

Influence of Shutter Speed on PT Accuracy

The shutter speeds in flights 11-15 varied the expected forward motion blur from 0.09 to 0.36 pixels. The accuracy statistics for these flights are given in Table 4-3. In Figure 4-7 the PT checkpoint RMSE is plotted against the shutter speed. The plot suggests that the accuracy improved with faster shutter speeds, but the relationship was weak (R squared = 0.44). The difference between the fastest shutter and the slowest shutter was about 15% greater than the difference between the two flights with the same shutter speed. The widest spread between any two flights was 0.004 m. Thus, the implication is primarily that the camera shutter speed had little effect on accuracy. It is interesting that the shutter speed appeared to be more influential on PT checkpoint accuracy than the aircraft speed (airframe speed R squared = 0.0465).
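The variance of Laplacian sharpness score used in these comparisons is compact to compute; a minimal sketch with OpenCV (the file name is hypothetical):

    import cv2

    def sharpness(path):
        """Variance of the Laplacian response; higher means sharper."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    print(sharpness("DJI_0001.JPG"))  # hypothetical image file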


The greater influence of the shutter speed relative to the flight speed suggests that the data is more impacted by high frequency motion (undamped vibrations) than by the dominant translation.

Table 4-3. Accuracy statistics for flights that varied the shutter speed. Higher sharpness values indicate sharper images. Flight 15 was flown separately and thus did not have a comparable set of images for sharpness comparison.

Flight  Shutter Speed (1/s)  Forward Motion Blur (pixels)  Checkpoint RMSE (m)  Average Image Sharpness
13      500                  0.36                          0.024                41.5
11      800                  0.22                          0.020                42.2
15      800                  0.22                          0.024                N/A
12      1000                 0.18                          0.022                43.8
14      2000                 0.09                          0.020                37.4

Figure 4-7. The relationship between the shutter speed (1/s) and the 3 axis RMSE (m).

Only flights 11-14 could be included in the sharpness analysis because flight 15 was captured in the morning of the following day and hence had dissimilar images. Excluding flight 15 removed the flight with the duplicate shutter speed; thus modeling the flight or the shutter speed as the ANOVA factor was equivalent, and the null hypothesis that they had equal means was strongly rejected. However, the flight with the fastest shutter
speed was the blurriest. As before, it appears sharpness varied between flights, but it was not driven by the mission parameter we were changing. The linear correlation between the average sharpness and the accuracy was also weak (R squared = 0.17), and, if anything, was the reverse of the expected trend.

Influence of Shutter Type

The Zenmuse X4S uses a digital shutter when the leaf shutter is disabled. This type of shutter reads out individual columns of pixels sequentially. This results in an image whose features are systematically displaced from where they would have appeared had the frame been captured simultaneously. The resulting geometry is similar to that of line scanners. The leaf shutter closes from the outside inward; thus, the edge pixels are dimmer and have less motion blur.

Table 4-4. Accuracy statistics for flights that used different shutter types. Higher sharpness values indicate sharper images.

Flight  Leaf Shutter  Forward Motion Blur (pixels)  Checkpoint RMSE (m)  Average Image Sharpness
17      No            0.22                          0.046                40.8
16      Yes           0.22                          0.019                47.6
18      Yes           0.22                          0.021                41.9

The leaf shutter was notably more accurate when modeling the images as frame sensors. The 3 axis RMSE for the flight with the leaf shutter disabled was 0.046 m. The difference between the mean RMSE of the two shutter modes was 0.026 m, 12 times the spread between the two similar flights (Table 4-4). A two sample T test of whether the 46 residual vector magnitudes from the flights with the leaf shutter enabled had the same mean as the 23 magnitudes from the flight with the leaf shutter disabled rejected that hypothesis decisively. Modeling the distortion from the
leaf shutter as random, as is assumed in adjustment computations, works significantly better than modeling the distortions from the rolling curtain shutter as random.

Next, we considered the effect of the shutter type on the image sharpness (Table 4-4). First, note that the rolling shutter flight was the blurriest flight. An ANOVA analysis of the three flights showed it was unlikely they were equally sharp. However, the hypothesis that the two flights that used the leaf shutter had equal sharpness was also considered and rejected using a paired T test. Once again, the sharpness varied, but not as predicted based on the mission parameter being tested.

Conclusion

Unmanned Aerial Systems (UAS) cameras with mechanical curtain shutters (i.e., rolling shutters) should not be modeled as frame sensors if it can be avoided. The systematic errors result in significant distortions in the 3D reconstruction. The Zenmuse X4S UAS camera with a mechanical leaf shutter can be modeled as a frame sensor; therefore, we conjecture that other leaf shutter sensors could be as well.

Photo Triangulation (PT) solutions constrained by Post Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) data were significantly more accurate than free network PT. The magnitude of the improvement was small (0.01 m) but would likely be larger in a more challenging project. Our tests did not show any significant PT check point accuracy change from convergent flight geometry. Neither did they show significant differences between the P4P sensor and the Zenmuse X4S when similar PT procedures were used.


Varying the airframe flight speed or the camera shutter speed so that the predicted forward motion blur increased from 0.1 pixels to 0.45 pixels had little effect on accuracy. The flight speed, shutter speed, and shutter mode also did not have the anticipated impact on image sharpness. Sharpness was significantly different from flight to flight, but the changes were not attributable to the mission parameter variables we tested. This is convenient for mission planning because it permits the pilot to increase the flight speed or slow the shutter within the bounds tested without paying a high accuracy cost. However, it implies that there are still some sources of error masking the impact of the blur that are at least on the order of half a pixel.

We presented evidence that the DJI Zenmuse X4S camera has a variant camera geometry. We theorized this was due to inconsistent on-UAS image processing. To avoid this source of error we propose to work with the raw images, but the flight control software does not currently allow saving them. Alternatively, we recommend that the images be processed using a limited self calibration PT that solves for the PPOs.


CHAPTER 5
GENERAL INCREMENTAL POSE ESTIMATION (GIPE)

Introduction

Chapter 5 presents the first of the algorithms proposed for faster and more accurate 3D reconstruction from imagery. It deals with robustly obtaining good initial values for PT, which is an area of active research. Methods for obtaining initial values can be categorized as either incremental or simultaneous. Simultaneous methods, which orient more than three images at once, have been given by Triggs [66], Hartley et al. [67], and Bloquist and Pack [68]. These simultaneous methods make limiting assumptions about the scenes and point distributions (e.g., observations of planar scenes or a set of points observed in all images). Incremental methods build sets of initial values by sequentially orienting images with respect to a body of n > 1 previously oriented images; this is referred to as the Incremental Pose Problem, or IPP.

Luhmann et al. [69] provided three illustrative methods of solving the IPP. To discuss these methods, let us establish a description of the data. For ease, the 3D primitives are limited to points. Let A be the set of previously oriented images, and let b be the image to be oriented. Let P(A) be the set of all object points observed in images of A, and P(b) the set of points observed in the image b. The set of object points common to A and b is thus P(A) ∩ P(b). Let x(p, F) denote the set of image observations of an object point p in an image set F, and x(p, f) the image observation of object point p in a single image f.

In the first IPP method given by Luhmann et al. [69], an initial image pair
is selected, and relative orientation is used to calculate model space poses. 3D model coordinates of conjugate points in the oriented images are calculated (an operation referred to as space intersection), and orientations of additional images are computed from observations of those points (an operation referred to as image resection). The block of oriented images grows as the operations of intersecting 3D points and resecting images are repeated. The subset of data involved in each IPP solution is the set of observations of the points common to the oriented images and the new image [69].

The second method given by Luhmann et al. [69] also begins with a relative orientation, in this case between the new image b and a single previously oriented image i; it therefore requires enough conjugate points between b and i for a relative orientation. Further, the relative orientation solves for the direction to, and orientation of, the next image, but cannot resolve the distance to the new image (this is frequently referred to as the scale ambiguity). The authors do not detail how the scale ambiguity is resolved. There are at least two ways it could be done. First, if some of the points used in the relative orientation already have block space coordinates, then these points can be intersected both in the space of the block and in the space of the relative orientation. The average ratio of the distances among point pairs in the two systems is an estimate of the unknown scale. Alternatively, additional relative orientations could be done. The 3D lines implied by the relative orientations can be intersected to find the coordinates of the new image perspective center. This approach requires relative orientations with at least two oriented images, and those image stations cannot all be collinear (otherwise the 3D line intersection is ill conditioned).

Luhmann et al.'s third method also starts with a relative orientation and intersection of 3D model
points. Pairs of unoriented images are then incrementally added; this requires that each new pair share enough conjugate points for a relative orientation and that some of its object points already have block coordinates. Each pair of images is relatively oriented, and the object points are intersected in the relative and block spaces to compute a 3D conformal transformation to the block space.

A trifocal based IPP solution is possible if conjugate points are observed in the new image b and two previously oriented images i and j. The IPP application of the trifocal differs from the common uses in that the parameters of two of the involved images are known and held constant; only the pose of image b is unknown. Unlike relative orientation methods, this method has no scale ambiguity and is simultaneous. However, it uses a potentially small subset of the available data (only the points common to the three images). Mikhail also proposed a method for solving the IPP using sets of three images. It used more data than a pure trifocal method, drawing on the points common to image b and either of two previously oriented images i and j. However, it relied on assumptions of nadir looking cameras in its derivation [70].

The authors of [67] used a trifocal tensor method to start the block (instead of a relative orientation) and a linearized Direct Linear Transformation (DLT) resection instead of an iterative resection. A related method (2003) replaces relative orientation with the linear fundamental matrix method for pairs of images or trifocal tensor methods for triplets of images. The two or three
image groups are then combined using conformal transformations calculated from DLT intersected object points [71].

The authors of [72] presented a method which reduced all conjugate point and line observations to constraints on the elements of the rotation matrix and position of the new image. To make the solution linear, they limited the data used to observations of the set of points observed in a single oriented image. It is a DLT style approach; the cost function minimized is not a function of the measurement residuals, and the object points used cannot be coplanar. Although it is not as obvious, this method is also a variation of the intersection and resection approach.

These methods use combinations of relative orientations, trifocal tensors, intersections, and resections to solve the IPP. At least one of the following was true of all previous methods:

- Only subsets of the available data were used
- They had geometric weaknesses not intrinsic to the IPP
- They had unnecessarily high minimum data requirements
- They were not simultaneous
- They unnecessarily required 3D reconstruction

Herein is proposed a General Incremental Pose Estimation method (GIPE) that has none of these weaknesses (Table 5-1). The primary motivation for this research was to create a maximally robust IPP solution. A simultaneous method that used all the data was needed to accomplish this. Using RANSAC (or one of its derivatives) it is possible to find solutions even if more
than 50% of the data is blundered in simultaneous systems [73]. However, this study focused on a preliminary subject: demonstrating the superiority of GIPE over legacy IPP alternatives. To that end, specific geometries and experiments were chosen to test the limits of the IPP algorithms. However, the experiments used only clean data (data that has only random Gaussian errors) and calibrated cameras. Hence, the robustness advantages of GIPE were not explored here.

Table 5-1. Properties of IPP algorithms.

Property                       IntRes  KMa  MRel  Trifocal  GIPE
Uses all the data              no      no   no    no        yes
Adds no geometric weakness     yes     no   no    yes       yes
Works for minimal data sets    no      no   no    no        yes
Simultaneous                   no      yes  no    yes       yes
Requires no 3D reconstruction  no      no   yes   yes       yes
Minimizes measurement error    yes     no   no    yes       yes

Legacy Models

The most obvious way to approach the IPP is an intersection resection (IntRes) approach. For IntRes, a set of points observed in the oriented images is reconstructed in object space, and the pose of the new image is resected based on observations of those points. This method is not simultaneous (the intersections and the resection are separate steps). It only uses the data for points observed in both the oriented images and the new image, and it relies on 3D reconstructions that may not be stable. The linearized variation of this approach given in [72] is referred to as KMa. As explained above, to make the solution linear, KMa limits the data used to observations of the set of points observed in a single image.
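To make the IntRes baseline concrete, the space intersection step can be written as a small linear least squares problem: find the point minimizing the summed squared perpendicular distances to the observation rays. A minimal sketch with synthetic rays (the names and values are ours, not from the implementation tested here):

    import numpy as np

    def intersect_rays(rays):
        """Least squares intersection of rays given as (origin C, direction u)."""
        A, b = np.zeros((3, 3)), np.zeros(3)
        for C, u in rays:
            u = u / np.linalg.norm(u)
            P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the ray
            A += P
            b += P @ C
        return np.linalg.solve(A, b)

    # Synthetic check: two cameras observing the object point (0, 0, -10)
    X = np.array([0.0, 0.0, -10.0])
    C1, C2 = np.zeros(3), np.array([5.0, 0.0, 0.0])
    print(np.allclose(intersect_rays([(C1, X - C1), (C2, X - C2)]), X))  # True

The same construction also intersects the 3D lines implied by multiple relative orientations, which is how the multi relative method described next locates the new perspective center.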


The multiple two image relative orientations IPP method implemented for testing is referred to as MRel. The specific form of MRel tested solves all possible relative orientations between the new image and the oriented images and then combines them using Gauss Helmert least squares [74]. Hence it uses the subset of data consisting of the observations, in the new image and each oriented image, of their common points. As mentioned above, MRel requires enough conjugate points with at least two oriented images and adds the geometric weakness that the image stations cannot be collinear. Hence, two or more relative orientations were solved, and the 3D lines they implied were intersected to find the position of the new perspective center. In the optimization, the relative orientation solutions were modeled as observations, and the full covariance data was used. The data was centralized and normalized to maximize numerical stability [75]. For additional stability, the covariance matrices were also scaled so that the average magnitude of the diagonal elements was one. This method is not simultaneous and uses a subset of the data. It does not, however, require any 3D reconstruction.

The third IPP solution is MRIR, which combines the MRel and IntRes methods. The addition of the IntRes operation removes the MRel geometric weakness that the image stations must not be collinear. It also uses a larger subset of the data. However, it is not simultaneous and uses 3D reconstruction. The specific version of MRIR tested solved all possible relative orientations and intersections. They were then combined using Gauss Helmert least squares. The resection equations in the optimization used the full covariance matrices of the intersected points. As with MRel, the data was centralized and normalized, and the covariance matrices were scaled to maximize numerical stability.
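One common form of the centralization and normalization mentioned above is to shift the coordinates to their centroid and rescale them to unit average magnitude before forming the normal equations; the exact scheme in [75] may differ. A minimal sketch:

    import numpy as np

    def centralize_normalize(pts):
        """Shift to the centroid and scale to unit average coordinate magnitude,
        which improves the conditioning of the least squares systems."""
        centroid = pts.mean(axis=0)
        centered = pts - centroid
        scale = 1.0 / np.abs(centered).mean()
        return centered * scale, centroid, scale

    pts = np.array([[1000.5, 2000.1, 30.0],
                    [1001.2, 2002.8, 31.5],
                    [998.9, 2001.4, 29.7]])
    norm_pts, centroid, scale = centralize_normalize(pts)
    print(np.abs(norm_pts).mean())   # 1.0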


A trifocal method (Tri) was also used to solve the IPP. This method is inherently a three image method; thus, it can only use the data from the new image and one pair of oriented images i and j: the observations of the points the three images share. Similarly to MRel, multiple trifocal solutions could be done and then combined. However, solving all possible sets of three images (analogously to MRel) is impractical because of the potentially unwieldy number of three image combinations. The specific version tested chose the triple of images with the most three ray points to calculate the pose estimation. This model was implemented as a nonlinear Gauss Helmert least squares optimization that minimizes the sum of the squared measurement residuals. Note that this method is simultaneous and uses one of the same mathematical models as GIPE (detailed in the next section). However, it uses a potentially small subset of the data.

General Incremental Pose Estimation Mathematical Model

The model used in GIPE is now given in more detail. A conjugate point measurement in two images implies a bilinear or coplanarity constraint:

    \hat{x}_i^T M_i [C_j - C_i]_\times M_j^T \hat{x}_j = 0    (5-1)

where \hat{x}_i and \hat{x}_j are the ideal image coordinates (distortion corrected, and scaled by the negative focal length) of some object point in images i and j, C_i is the position of image i, and M_i is the rotation matrix from the world frame to the camera frame of image i (and likewise for image j). Finally, [v]_\times is the skew symmetric matrix formed from the column vector v as follows:

    [v]_\times = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}
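A numeric sanity check of Equation 5-1 as reconstructed above, assuming the stated conventions (rotation matrices map world to camera; ideal image coordinates are homogeneous vectors scaled by the negative focal length). All values are synthetic:

    import numpy as np

    def skew(v):
        """[v]x such that skew(v) @ w equals np.cross(v, w)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def coplanarity(x_i, M_i, C_i, x_j, M_j, C_j):
        """Bilinear constraint of Equation 5-1; zero for perfect conjugates."""
        return x_i @ M_i @ skew(C_j - C_i) @ M_j.T @ x_j

    f = 0.0088                                 # focal length (m)
    def ideal(X, M, C):                        # homogeneous image vector
        p = M @ (X - C)                        # point in the camera frame
        return np.array([-f * p[0] / p[2], -f * p[1] / p[2], -f])

    X = np.array([0.5, 0.2, -50.0])            # synthetic object point
    M = np.eye(3)                              # both cameras look down -Z
    C_i, C_j = np.zeros(3), np.array([1.0, 0.0, 0.0])
    r = coplanarity(ideal(X, M, C_i), M, C_i, ideal(X, M, C_j), M, C_j)
    print(abs(r) < 1e-15)                      # True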


A set of images observing the same object point implies independent trilinear constraints of the following form:

    (l_i^T M_i M_k^T \hat{x}_k)(l_j^T M_j (C_k - C_j)) - (l_j^T M_j M_k^T \hat{x}_k)(l_i^T M_i (C_k - C_i)) = 0    (5-2)

Three of the images have been arbitrarily labeled i, j, and k, and l_f is the homogeneous form of an image line containing the image point observed in image f. Homogeneous coordinates of image lines through the points can be usefully constructed as a function of a single parameter, theta, the rotation of the line from horizontal. The choice of theta is largely discretionary but should be chosen to avoid degeneracies [76]. Equation 5-2 enforces that the two planes resulting from the projections of l_i and l_j into object space and the line resulting from the projection of the image observation \hat{x}_k have a nontrivial null space (Thomas and Oshel 2012). It is the key to enforcing incidence relationships among lines and points. The most obvious extension of this concept gives constraints for observations of a point incident to lines in other images.

Not as straightforward are the constraints implied by a set of observations of the same object line. The first three images, arbitrarily labeled i, j, and k, are constrained by two equations of the form of Equation 5-2, where the participating image points are two non coincident (and ideally widely separated) image coordinates on the line. Two additional constraints are needed for every additional image observing the line. One intuitive way to form those two constraints is with two further equations of the form of Equation 5-2,
where i and j indicate any images whose incidence was enforced by other previous constraints. Note, however, that a line must be observed in at least three images to provide any constraint on the camera poses.

The general method of linking an unoriented image is solving a simultaneous system of all the equations implied by incidence relations to every previously oriented image. Assuming incidence relations are limited to points and lines, they can all be expressed either in the form given in Equation 5-1 or Equation 5-2. The methods given below address finding initial values and solving the system for any combination of these constraints, provided there are at least six of them. Note that this makes the minimum data requirements for GIPE the theoretical minimum. It can, for instance, find the finite set of pose solutions given six bilinear constraints (one common point with each of six oriented images). Equations 5-1 and 5-2 are forms of what have been termed multilinearity equations. They are explored in detail by various authors [77] [78] [79].

GIPE Initial Values

A typical approach to generating initial values is to randomly select a minimal set of equations and solve for the solution(s) that exactly satisfy that set of equations. The conceptual beginning of this process was to convert constraints of the forms given in Equations 5-1 and 5-2 into compatible formats that can be easily intermixed. To this end, the quaternion formulation for the rotation matrix given in Mugnier et al. [80] was used:

    M = \begin{bmatrix}
        q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2+q_0 q_3) & 2(q_1 q_3-q_0 q_2) \\
        2(q_1 q_2-q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3+q_0 q_1) \\
        2(q_1 q_3+q_0 q_2) & 2(q_2 q_3-q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2
        \end{bmatrix}
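As a numeric check of the rotation formula reconstructed above: the matrix is homogeneous and quadratic in the quaternion elements and is exactly orthogonal when the unit length constraint holds. Element signs vary between references, so treat this layout as an assumption:

    import numpy as np

    def quat_to_matrix(q):
        """Rotation matrix, quadratic in q = (q0, q1, q2, q3); orthogonal
        only when q has unit length (the constraint enforced in the bundle)."""
        q0, q1, q2, q3 = q
        return np.array([
            [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
            [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
            [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
        ])

    q = np.array([0.8, 0.2, -0.3, 0.4])
    q /= np.linalg.norm(q)                       # enforce unit length
    M = quat_to_matrix(q)
    print(np.allclose(M @ M.T, np.eye(3)), np.isclose(np.linalg.det(M), 1.0))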

PAGE 113

113 The algebraic rotations formula lends its self well to the solution algorithm that follows and extends the convergence zones [69] The vector of unknown parameters, consists of the quaternion elements and position of the camera perspective center. When Equation 5 1 is expanded as a polynomial, it results in a 43 term third order polynomial in the elements of Equation 5 2, when similarly expanded, results in a 37 term third order polynomial. Conveniently, the 37 terms resulting from Equation 5 2 are a subset of the terms from Equation 5 1. The coefficients of both polynomials are functions of the image measurements and previously determined camera poses. All the constraints can be converted into 43 element row vectors of coefficients, and whether they begin as constraints of the form given in Equation 5 1 or Equation 5 2 is irrelevant to the solution algorithm. The be ginning of the minimal solution set process began with the selection of 6 constraints and their conversion to coefficient vectors. This results in a system of the following form: The field of mathematics that addresses solutions to systems of polynomial equations is algebraic geometry. This material is covered in Cox et al. [81] Algebraic geometry methods including Grobner bases [82] multi polynomial resultants [83] and polynomial eigenvalue methods [84] have been applied successfully to rela tive

PAGE 114

114 orientation and are worth considering. They may provide more elegant solutions than the method suggested here. However, a functional solver was written following the Monte Carlo approach suggested for relative orientation by Horn [85] was to base a search of the solution space on selections of quaternions. Vertices of polychora inscribed in a unit 4 sphere are used as starting points because they are evenly distributed in space. Once a quaternion is sel ected, the polynomials are linear in the camera position coordinates, which can be found by using linear Gaussian least squares [74] These initial values are then further optimized with an iterative gradient descent Newton me thod toward the common root. If a root is found, the solution vector is saved. The set of solutions resulting from the Monte Carlo search is finite but may be large. Each unique solution was then optimized using all the data. Details of the optimizati on method are given in the next section. Optimization of Initial Values The data used in these experiments was known to have no outliers. Hence there was no need for robust analysis. The optimization was, therefore, implemented as a least squares optimiza tion of the type that Uoltila Helmert Model with [74] The system involved equations that include measurements and thus have residuals (Equations 5 1 and 5 2), and a u nit length quaternion constraint that must be perfectly satisfied. Given a vector of unknowns, that have approximate initial values, and a vector of measurements, the system of equations can be written :

PAGE 115

115 The system of equations is the multilinearity constraints of the forms given in Equations 1 and 2. is the unit length quaternion equation, which is only a function of the unknown parameters. Adding symbols for the residual vector, correction vector, ; and linearizing with a first order Taylor series expansion gives: Using the following substitutions this the systems is rewritten in matrix form. Given a covariance matrix for the measurements, we define:

PAGE 116

116 The corrections to the initial values, are thus expressed as a sum of the contributions of the systems and : These corrections are added to the initial values, and the procedure iterates until the corrections become negligible (or the solution converges). Numerical Simulations Numerical simulations were designed to test the various algorithms for weaknesses, simulate common geometries, and be analogous t o real data. They were intended to be effectively infinite sequences so that the rate of error accumulation could be compared to truth values. The error accumulation rate was the quality metric of greatest interest because building initial values require s sequences of IPP solutions. Further, it was not the purpose of this study to test how well a block is started. Hence, all the algorithms are seeded with two perfect initial camera poses. It is acknowledged that each of the configurations below could be compounded by using different cameras, image spacing, object space variation, and varying the magnitude of the image observation noise. The parameter space was too broad to sample comprehensively, and thus some selections were made. The amount of image noise was simulated to be consistent with real data bundle adjustments done with these cameras. The simulations were run hundreds of times for each IPP algorithm with small random variations to camera pose and random normal image measurement noise. projection centers were all circular and looked radially inward toward a cylinder. The cylinder had a radius of 25 units, and the camera moved in a circle with a radius of 35 units. There were 36 images in every circle. The camera was simulated to mimic a

PAGE 117

117 Canon SL1 with a 20mm lens. A grid of 25 points was projected from each image to the cylinder and then back into other images to create the conjugate points. Ties were limited to images within two circular passes of each other to prevent the image measurement stacks from becoming unwieldy. There were 115 points per image and a maximum of six rays per point. This series has a large amount of rotation, which has been shown to be advantageous for unambiguous solutions [86] It was predicted that all the algorithms would perform at their best in this geometry and thus provide a baseline. The next simulations were of two aerial strips. All the images w ere nadir looking, and the camera motion was parallel to the image x axis. The first of these mimics a DMCII camera with 60% overlap. This geometry was chosen because it is of interest to the photogrammetry community, there is analogous real world data, and it was predicted to challenge three of the algorithms. MRel was predicted to have difficulty resolving the along strip distances due to the nearly linear camera positions. Tri and KMa were predicted to accumulate error quickly due to the algorithms ha ving only narrow three image overlap regions. Simulations of an aerial strip using a Canon SLI1 with a 20mm lens and 90% overlap were also done. This geometry was intended to be representative of strips collected by unmanned aerial vehicles. Next was g eometry typical of mobile terrestrial systems. These series mimicked an Olympus E520 with 25mm lens moving straight down the center of an infinite 1.79 m by 2.44 m hallway. The camera moved in 1/3 m increments. Eight observed points ring the hallway eve ry 2/3 m (two points on each surface). Images observed only points less than 7m in front of the lens. There were 25 30 tie points for each IPP calculation.

PAGE 118

118 Points were measured in as many as 11 images. It was predicted to be challenging because the ima ge positions are collinear and the image motion was parallel to the look directions (which makes 3D reconstruction difficult). MRel, MRIR, and KMa were all predicted to struggle. Finally, a radial panorama was simulated. The image projection centers were arranged in a horizontal circle with a radius of 1 unit. There were 50 images in each circle. The images looked radially outward toward points that lie on a cylinder with a ter of the circle of image positions. There were approximately 500 points per image. The maximum number of measurements of any point was 14. This geometry was simulated because it approaches impossibility, and yet is of some interest. Applications of p anorama processing frequently model image perspective centers as coincident. However, better IPP algorithms may reduce the need to rely on the assumption. Simulation Results It was decided that a useful way to present the rate of error accumulation for th e simulations was to show the distribution of the number of photos that can be incrementally linked before the position error exceeded the nominal image spacing or the angular error exceeded five degrees. These thresholds were somewhat arbitrary, but they were scale invariant and adapted to different geometries.

PAGE 119

119 The cylinder simulation data sets did not turn out to be a good baseline. KMa, MRel, and Tri all consistently failed before completing a single revolution (less than 36 images). MRIR consistentl y made exactly one revolution before failing (exactly 36 images). The fact that the image sequence closed caused an unexpected difficulty. If the pose solutions wandered too much on their trip around the cylinder, the ties back to the starting images were numerical outliers and including them caused the MRIR solution error to jump over the error threshold. In contrast, GIPE could hold firmly enough to the true geometry that the solution corrected itself as it closed each loop, and the simulations consiste ntly ran until it hit the 10,000 image limit. A graph of the GIPE position error estimation for the first 300 IPP is given in Figure 5 1. GIPE oriented at least 277 times as many images as any other method. Figure 5 1. Graph of cylinder simulation GIPE position error as a function of the number of sequential pose estimations. The error corrects every 36 images as the circles of images close. 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0 50 100 150 200 250 300 Magnitude of Position Error Number of Images Sequentially Added Mangnitude of Incremental Pose Position Estimation Error Using GIPE

PAGE 120

120 The results from the DMCII aerial strip simulations with 60% overlap are shown in Figure 5 2. This case demonstrates that a typical strip of aerial images is a difficult geometry and that the algorithm used matters a great deal. Both the Tri and KMa methods had narrow three image overlap regions to work with (approximately 10% of the image width) and failed quickly as predicted. It was surprising that MRel performed as well as it did because the image positions were nearly colinear. MRIR outperformed MRel as expected. The addition of IntRes improved the results because the point intersections ar e stable, and helped resolve the along strip position ambiguity. The median number of images linked before failure for GIPE was 3.5 times that of the nearest competitor (MRIR). Figure 5 2. The results from the DMCII aerial strip with 60% overlap Distri butions of how many images could be linked before the position error exceeded the nominal image spacing. The results from the Canon SL1 aerial strip simulations with 90% overlap are shown in Figure 5 3. Increasing the overlap in an aerial strip made IPP less sensitive to the choice of algorithm. The two simultaneous algorithms built from the models given in Equations 5 1 and 5 2 (GIPE and Tri) performed similarly well. The tails of the 0 500 1000 1500 2000 2500 3000 DMCII, Distributions of Images Linked Before Error Theshold Exceeded GIPE Tri MRIR MRel KMa

PAGE 121

121 distributions constricted when more data was included (recall GIPE allowed for more observations than than Tri), but the medians were almost identical. MRIR was no better than MRel in this case. The point intersections (particularly during the first few IPP) had poor geometry and seemed to have been a hindrance as often as a help. As with the DMCII scenario, KMa failed earliest. Figure 5 3. The results from the Canon SL1 aerial strip with 90% overlap. The distributions of how many images could be linked before the position error exceeded the nominal image spacing. Th e results from the Olympus E520 hallway simulations are shown in Figure 5 4. As, predicted the simultaneous algorithms that require no object space reconstruction (G IPE and Tri) outperformed the others. The median of the GIPE distribution is 6.5 times gr eater than the median of the closest non multilinearity based method (MRIR) and 2.3 times greater than any other method. 0 20 40 60 80 100 120 140 160 Cannon SL1 40mm Distributions of Images Linked Before Error Theshold GIPE Tri MRIR MRel KMa

PAGE 122

122 Figure 5 4. Olympus E520 hallway simulations results, the distributions of how many images could be linked before the position erro r exceeded the nominal image spacing. The results from the Olympus E520 panorama simulations are shown in Figure 5 5. As predicted, this geometry was challenging for all the algorithms. The median of the GIPE distribution is approximately twice that of any other method. Figure 5 5. Olympus E520 panorama simulations results, the distributions of how many images could be linked before the position error exceeded the nominal image spacing. 0 100 200 300 400 500 Olympus E520 Hallway, Distributions of Images Linked Before Error Theshold Exceeded GIPE Tri MRIR MRel KMa 0 5 10 15 20 25 30 Olympus E520 Panarama, Distributions of Images Linked Before Error Theshold Exceeded GIPE Tri MRIR MRel KMa

PAGE 123

123 It is also instructive to compare the quality of the first IPP. Figure 5 6 compares the distributions of the position errors for the first IPP from the DMCII aerial strip. Note that the Tri first pose estimation error distribution is very similar to the MRel and MRIR despite the later dramatically outperforming in th e rate of error accumulation. Figure 5 6. Distributions of the first pose position estimation error for the DMCII aerial strip with 60% overlap. In this example analysis of the first IPP position error would lead to similar conclusions as analysis of the error accumulation rate Figure 5 7 shows the first IPP error distributions for the Canon SL1 aerial strip with 90% overlap. Based on these distributions only, there would appear to be little difference in any of the algorithms (except KMa) despite Fi gure 5 3 demonstrating the dramatic differences in their error accumulation rates. 0 0.001 0.002 0.003 0.004 0.005 0.006 Units are normalized, 1.0 is the nominal distance between images DMCII First Pose Estimation Position Error GIPE Tri MRIR MRel KMa

PAGE 124

124 Figure 5 7. Distributions of the first pose position estimation error for the Canon SL1 aerial strip with 90% overlap. In this case, considering only the first po se error would lead to erroneous conclusions about the relative usefulness of the algorithms. Real Data Tests IPP results for a block of 233 DMCII images of semi rural Pennsylvania are shown in Figure 5 8. Each image was 15552 by 14144 pixels (220 megapix els). The camera has a focal length of 92 mm and horizontal field of view of 50.65. There was an average of 370 points per image and 17 surveyed control points in the project area. The various IPP algorithms were seeded with two image poses from the fi nal bundle and used to calculate the remaining 231 in the order the images were captured without any intermediate bundles or reference to the ground control. Some of the curves terminated early because the IPP was halted when the position error exceeded t he nominal image spacing, the orientation error exceeded five degrees, or the algorithm otherwise failed. Only GIPE linked all the images, and the GIPE error never exceeding 13% of threshold. The error shows a saw tooth pattern because the solutions corr ect as the meandering strip lines close back to near the initial image pair. 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 Unit are normalized, 1.0 is the nominal distance between images Cannon SL1 40mm Distribution of First Pose Estimation Position Error GIPE Tri MRIR MRel Kma

PAGE 125

125 Figure 5 8. Error accumulation in IPP by algorithm calculating initial values for a 233 image DMCII block. Linear unit is the nominal image spacing. Figure 5 9. Error accumu lation in IPP algorithms calculating initial values for a 58 image DJI FC6310 block. Linear unit is the nominal image spacing. 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0 50 100 150 200 Linear Position Error Number of Images Sequentially Linked DMCII Block, Sequential IPP Position Error GIPE Tri MRIR MRel KMa 0 0.05 0.1 0.15 0.2 0.25 0 10 20 30 40 50 Linear Position Error Number of Images Sequentially Linked DJI FC6310 Block, Sequential IPP Position Error GIPE Tri MRIR MRel KMa

PAGE 126

126 IPP results for a block of 58 DJI FC6310 images of a construction site in upstate New York are shown in F igure 5 9. Each imag e was 5472 by 3648 pixels (20 mega pixels). The focal length was calibrated as 3654 pixels which gave a horizontal field of view of 73.6. There was an average of 1500 points per image and 9 surveyed control points in the project area. The various IPP a lgorithms were seeded with two image poses from the final bundle and used to calculate the remaining 56 in the order they were collected without any intermediate bundles or reference to the ground control. GIPE and Tri (the simultaneous multilinearity ba sed IPP) linked all images within the threshold limitations. GIPE never exceeded 2.5% of the error threshold. Figure 5 10. Error accumulation in IPP algorithms calculating initial values for a 25 image Olympus E520 hallway series. Linear unit is the nomi nal image spacing. IPP results of a series of 25 Olympus E520 images moving straight down a hallway are shown in F igure 5 10. Each image was 3648 by 2736 pixels (10 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 5 10 15 20 25 Linear Position Error Number of Sequentially Linked Images Olympus E520, Hallway Data GIPE Tri MRIR MRel KMa

PAGE 127

127 megapixels). The focal length was calibrated at 25.1 mm, which gave a horizontal field of view of 37.9. There was an average of 22 points per image and 44 surveyed control points in the project area. The various IPP algorithms were seeded with two image poses from the final bundle and used to calculate the remaining 23 in the order they wer e captured without any intermediate bundles or reference to the ground control. GIPE, Tri, MRel, and MRIR linked all 25 without exceeding the error threshold. GIPE had the lowest overall error, but MRIR performed similarly well. Figure 5 11. Error accu mulation in IPP algorithms calculating initial values for an 88 image Olympus E520 panorama. Linear unit is the radius of the circle of camera centers IPP results for a 0.17m radius horizontal circle of 88 Olympus E520 images of the interior of a racquetb all court are shown in F igure 5 11. Each image was 3648 by 2736 pixels (10 megapixels). The focal length was calibrated at 25.1 mm which gave a horizontal field of view of 37.9. There was an average of 11 points per image and 95 surveyed control points in the project area. The various IPP algorithms were seeded 0 0.1 0.2 0.3 0.4 0.5 0 20 40 60 80 100 Linear Position Error Number of Images Sequentially Linked Olympus E520, Panorama GIPE Tri MRIR MRel KMa

PAGE 128

128 with two image poses from the final bundle and used to calculate the remaining 86 in order around the circle without any intermediate bundles or reference to the ground control. Only GIPE linke d all 88 images without exceeding the error threshold. Conclusion GIPE is the only published general and simultaneous solution to the increment image pose problem. It was the only algorithm that consistently performed well In the numerical simulations depending on the geometry, it linked between 2 and 277 times as many images as any other IPP algorithms before exceeding the error threshold. Similarly, GIPE was the only method that successfully linked all the images in each real data test. As expect ed, the other algorithms performed well in some of the tests, It was observed that errors in single IPP solutions are not necessarily a good indicator of error accumulation rates.

PAGE 129

129 CHAPTER 6 RELATIVE POSE ESTIMATION CRITICAL VOLUMES Introduction This Chapter presents the second of the algorithms proposed for faster and more accurate 3D reconstruction from imagery. It deals with making relative pose parameter estimation more reliable The r elative pose problem is the calculation of relative position and angular orientation of one calibrated image of a scene with respect to another. It has also been referred to as the motion vision problem [87] and relative orien tation [85] It is commonly the first step in 3D scene reconstruction and is used extensively in processing of image sets and video streams. A relative pose may be ambiguous or unstable due to noise, mismatches, lack of infor mation or poor geometry [87] Various methods of predicting and quantifying ambiguity have been proposed; analyzing the covariance matrix of the calculated parameters is probably the most straightforward. Large variances, lar ge correlations, or, equivalently, design matrices with large condition numbers indicate ambiguity or instability [88] meaning there exists a potentially large but continuous volume in the parameter space that results in stati stically similar solutions. This is analogous to the freedom and mixed units. [89] and [90] used Cramer Rao lower bo unds to study ambiguity. Ambiguity can be also be described by the epipole locations [91] [92] and related to camera internal parameters (e.g. field of view [93] [94] ). [95] studied the stability empirically for the case of calibrated cameras, showing that relative pose based on the projection of planar scenes could become unstable if the fields of view are nar row, the depth variation is small, or the data is from a small region of the image.

PAGE 130

130 objects was large compared to the separations between the images. Note that these desc riptions of ambiguity will not detect uncertainties associated with multiple local minima [88] even with perfect knowledge of the motion fi eld. A motion field is the projection of the 3D velocities of objects in the scene, i.e., the time derivative of image positions as they pertain to scene points. It is an idealized quantity that can be formally defined but can only be approximated in pra ctice. These approximations are referred to as optical flow or displacement fields, and they most commonly take the form of feature matches. Thus, assuming a known motion field is assuming errorless infinite feature matches that cover the whole field of view. For ease in discussion, we categorize the measurements used for approximating the motion field as ideal (errorless data), clean (data that has only zero mean Gaussian noise with a reasonably approximated standard deviation), or regular (clean data corrupted by outliers). The best data ever available in practice is clean data, and it is only available rarely (e.g. from careful manual measurement). Most applications must work with regular data. Each of the solutions of a critical relative pose imply different 3D reconstructions, called critical surfaces. Maybank gave the first derivation of the critical surfaces [96] Horn derived another expression for the critical surface parameterized by the motion field [97] Luong and Faugeras demonstrated that the two formulations are equivalent [98] The critical surfaces are a family of ruled quadratic surfaces and some degenerate forms (hyperbolic paraboloids, circular cylinders, or pairs of perpendicular planes) The

PAGE 131

131 scene points and the camera positions must lie on one of these surfaces for it to be critical [96] [97] Though it was not expressly stated, thes e derivations also assumed spherical cameras, or, more precisely, they allowed infinite focal planes and projections of points behind the camera. It has further been shown that the shape of the critical surface is related to the number of ambiguous solutio ns. Hyperboloids and paraboloids have up to three solutions. Cylinders and plane pairs have up to two [87] It is interesting that Horn considered these critical surface derivations and [97] while Longuet Higgins, disturbing, in that they remind us of the untrustworthiness of vision algorithms based on the implicit assumption that ther e [100] It was later shown that the directions of the motion field (no magnitudes) were sufficient to define the relative pose, and that it could only be critical if the translation vectors ( and rotational velocities ( were related by [101] They also demonstrated that if the visibility constraint was considered (i.e. hemispherical camera) relative pose was never c ritical given perfectly known motion field directions [101] This work is concerned with practical means of dealing with the interplay of ambiguity and criticality in relative pose. Related work was done by [98] [91] for the describe the effect of having near critical data. They demonstrated that solution accuracy could be better predicted by accounting for factors related to instability and

PAGE 132

132 near criticality than by either separately. Thus, verifying the intuitive interplay between the two. In addition, [91] estimated how frequently nearness to the qua dratic critical by small camera separation and planar scenes (which are critic al in uncalibrated relative random geometries). This work contributes: Algebraic descriptions of the critical volumes for calibrated relative pose estimation. These deri vations account for measurement noise and realistic camera models with finite fields of view. Examples of critical geometries found among production data sets are used to illustrate these volumes and compare them to the geometries of surfaces that assume spherical or hemispherical cameras and motion fields. Practical discussion of how including measurement noise affects the modification to robust parameter estima tion algorithm RanSaC is proposed to detect near criticality during parameter estimation with little additional computational cost. The new algorithm is used to report on the prevalence of ambiguous data among real data examples. Critical Volume Derivati on The following derivation of the critical surfaces of the relative pose problem is like that given by Horn [97] parameterizes the surface in terms of the unknown relative pose parameters instead of

PAGE 133

133 the motion field It is presented as a precursor to the novel derivations of the critical volume. Without loss of generality, the first image can be assumed to be positioned at the origin, aligned so that the rotation f rom the 3D reference frame to image frame is identity, and have a focal length of 1. Thus, the projection from the world coordinates, to ideal image coordinates, is simply The transf ormation from image coordinates in image 1 to world coordinates is thus, If is the 3 x 3 rotation from the world frame into the image frame of the second image, is the position of the that image, and it also has a focal length of 1 th en the projection of reference frame coordinate in image 2 is given by ( 6 1) T he observation is the general homogenous form of the image observation bec ause M is not necessarily identity. T he general form of the critical relative orientation has two solutions: (the special cases with three solutions are subsets of the general form). is the co mponent of the reference ), i.e. the two different range maps. Note that the vectors , and the critical surface di fferently, the use of in the derivations can be confusing. Hence, and are used instead until one of the critical surfaces is eliminated from the expression (thus avoiding potentially confusing subscripts on Once the expr ession contains only one critical surface, is substituted to eliminate the image coordinates,

PAGE 134

134 The image point vectors for each solution, in the image space of the second image (Equation 6 1) must be parallel which is to say they are equal up to scale factor, ( 6 2) To solve Equation 6 2 for the expression of the critical surface it is convenient to take the dot product with to eliminate the entire second term (recalling ( 6 3) The equation for the second critical surface can be derived analogously, by taking the dot product of Equation 6 2 with ( 6 4) The derivation of the critical surfaces (Equations 6 3 and 6 4), as with those given by [97] [96] placed no domain limit on the image coordinates, The projection equation also allowed points behind the sensor. Thus, while not explicitly stated, the cameras implied are spherical. There was also no account taken fo r measurement noise. Thus, the derivations assumed spherical cameras and a known motion field (infinite ideal data). Critical Volume Derivation Let us revisit the derivations of Equations 6 3 and 6 4 to derive the critical volume for spherical cameras con sidering the stochastic nature of the image measurements. If the relative orientation is critical then there are two solutions as before, In addition, each solution implies a different error in each of the observations. Thus, the observations of the points in first image is

PAGE 135

135 where is the ideal (errorless) obs ervation and is the residual (error) in the measurement of the image point with respect to solution Similarly, the observation in the second image is where is the residual (error) in the measurement of the image point with respect to solution The projection into image 2 (Equation 6 1) becomes where scales the projection so that The scaling is necessary in this derivation to have constant scale residuals. Equation 6 2 becomes: ( 6 5) It can be solved for the critical surface by dot product with respect to the vector, ( 6 6) Equation 6 6 is a hybrid mix of image coordinates and scene coordinates. To eliminate the image coordinates we return to the projection into image 1 which becomes, with error term, or Substituting Equation 6 6 becomes: We observe that the vectors and can be scaled without affecting the equality. We chose to scale the former by and the later by resulting i n: (6 7)

PAGE 136

136 Thus scaled, Equation 6 7 maintains quadratic form and reduces simply to Equation 6 3 if the residuals are assumed to be zero. Similarly, the dot product of E quation 6 5 with substitution of and analogous scaling of the sub vectors yields the equation of the twin critical surface: ( 6 8) Equations 6 7 and 6 8 are expressions that include the scene coordinates and weighted sum of the squared residuals is not statistically significant. In this case there are four residuals related to solution 1, and four related to solution 2, All the image measurements have been scaled by the negative focal length to simplify the derivations. Hence, a reasonable value for the variance is where is the focal length of image in pixels (implying a standard error of 1 pixel). The weight matrix, is the inverse of the covariance matrix, such that is the sum of the weighted squared residuals and it follows a Chi square distribution with 4 degrees of freedom. A 95% confidence threshold is approximately (0 9.49). Thus, we are looking for the region of space where < 9.4 9. Adjustment theory uses first order a Taylor series expansion and Lagrange multipliers to solve for the The first order Taylor series expansion of is or in matrix form Using the matrix

PAGE 137

137 notation, where In this case both and are 1x1 matrices so the expression simplifies to Observe from Equations 6 7 and 6 8 that and therefore and Thus, the sums of the weighted squared residuals are equal and only one set of four residuals needs to be considered to define each critical vol ume ( The two critical volumes for spherical cameras are therefore, ( 6 9) ( 6 10) We acknowledge that whether a solution will pass a chi square test for goodness of fit depends on how the data is sampled from the critical volume (as described by Equations 6 9 or 6 10) and what is assumed about the data. For example, a single point observed outside of the volume would have statistica lly significant residuals. If the data is known to be clean (no outliers) then even a single point outside the volume would disambiguate the two possible solutions. However, in regular data the point may simply be classified an outlier with respect to on e of the solutions. This is enough for a greedy algorithm that simply takes the solution with the most inliers, to make a choice between the two solution possibilities. Consider how flimsy the justification for that choice may be if inlier counts are sta tistically similar. This is discussed in more detail in [91] and we return to the topic when presenting our practical solution for detecting near criticality.

PAGE 138

138 A more detailed description of critical volume that includes probab ility as a fourth dimension can be defined by using the chi square statistic to define a probability for each We chose not to present it in this fashion due to difficultly in visualizing it, and because our prosed practical solution does not require it. Next, we limit the critical volume to the region that can be seen by regular cameras with horizontal (image x direction) fields of view, and vertical (image y direction) field of view, For the first image fixed at the origin the co nstraints are: ( 6 11) ( 6 12) ( 6 13) The look vectors, are parallel in the image space. Therefore, it is appropriate to constrain any to be visible for a ll critical surfaces. ( 6 14) ( 6 15) Constraining points to be in front of the camera, however, is specific to each critical solution. ( 6 16) The completed description of the critical volume as seen by solution 1 is given by th e set of polynomial inequalities in Equations 6 9, and 6 11 through 6 16 ( The volume as seen by solution 2 is described by Equations 6 10, 6 11 through 6 16 ( These systems consider measurement noise and visibility conditions. Examples in t he

PAGE 139

139 following section demonstrate how describing the systems in this manner differ from the classical descriptions. Examples of Critical Volumes The pair of images in Figure 6 1 were chosen for measuring features of interest on the nose of the Space Shuttl e after launch. The images were taken from the software packages used at Johnson Space Center at that time used either assumed initial values typical of aerial image ge ometry or a nother fixed set of initial values to the relative orientation. Both methods consistently reported a relative motion that was almost perfectly parallel to the optical axis of the images. However, this motion was impossible because during RPM the vehicle was rotating (not translating) with respect to the observer Using our proposed algorithm, the two statistically similar solutions in Table 6 1 were identified. Note that the points are as well distributed as possible and neither solution has any outliers. The critical surface and volume are shown in Figure 6 2 (A D). Note that the critical surface became a non trivial critical volume. Also, our analysis did not show the critical surface vanishing under the visibility constraint as predicte d by [110] The measurement data is given in appendix A. Consider another relative pose example in Figure 6 3 This image pair shows the ground under the Mars Phoenix lander. Scientists at Marshall Space Flight Center provided these images and desired to measure the rocket erosion crater under the vehicle. Table 6 2 shows the four statistically acceptable relative orientation solutions for these images. Note that a ll four solutions share at least 23 of the 26 points t he least overdetermined solution had a redundancy of 18 (more than 3 times the number of

PAGE 140

140 degrees of freedom) and t he four solutions are observably separate in solution space but have acceptable RMSEs and outlier counts. Table 6 1. Two statistically ac ceptable solutions to the Space Shuttle nose cone relative pose. The vector is a unit vector in the direction from one image to the other. The orientation of the second image with respect to the first is given as Euler angles (Omega, Phi, an d Kappa) in degrees. The RMSE error is given in pixels. The last two columns are the number of inlier and outlier image matches. X Y Z Omega Phi Kappa RMSE Inliers Outliers 0.33 0.05 0.94 1.04 5.79 1.19 0.51 142 0 0.97 0.12 0.19 0.30 0.15 1 .08 0.16 142 0 Figure 6 1. Space shuttle nose stereo pair showing the 142 conjugate points. Two statistically acceptable relative orientation solutions are given in Table 6 1. Top: iss013e47326 Mapping sequence performed during the STS 12 1 R Bar Pitch Maneuver July 6, 2006, 13:52:11 GMT C ourtesy of NASA. Bottom: iss013e47320 Mapping sequence performed during the STS 121 R Bar Pitch Maneuver July 6, 2006, 13:52:02 GMT C ourtesy of NASA.

PAGE 141

141 Figure 6 2. Each column contains, from to p to bottom, the critical surface, the critical surface constrained by the field of view, the critical volume, and the critical volume constrained by the field of view. The left column (A D) is one of the critical surfaces/volumes from the Nose Cone examp le. The middle (E H) is the Phoenix Lander example, and the right column (I L) is the UAS mapping example.

PAGE 142

142 The critical surface and volume for a pair of these solution s is shown in Figure 6 2 (E H). The difference between the critical surface and volume is particularly dramatic the criticality and a mbiguity compound The measurement data is given in appendix B. Figure 6 3 A Pair of image views under the Mars Phoenix lander. The 26 conjugate points ar e marked with white squares. Four statistically acceptable relative orientation solutions are given in Table 6 2. Top: rs005ffl896663219_11730mdm1 May 5, 2008 Mars Phoenix Lander, robotic arm camera C ourtesy NASA Planetary Data System. Bottom: rs14 2rsl908818700_1fca0mdm October 10, 2008 Mars Phoenix Lander, robotic arm camera C ourtesy NASA Planetary Data System

PAGE 143

143 Table 6 2. Four statistically acceptable solutions to the overdetermined Phoenix lander relative orientation. The vector is a unit vector in the direction from one image to the other. The orientation of the second image with respect to the first is given as Euler angles (Omega, Phi, and Kappa) in degrees. The RMSE error is given in pixels. The last two columns a re the number of inlier and outlier image matches. X Y Z Omega Phi Kappa RMSE Inliers Outliers 0.73 0.68 0.02 1.07 4.10 1.31 0.49 25 1 0.03 1.00 0.05 18.33 0.40 0.11 0.28 24 2 0.05 0.99 0.11 29.71 1.32 0.82 0.60 25 1 0.02 0.30 0.95 5.70 0.05 0.09 0.15 26 0 Figure 6 4 A stereo pair captured by an unmanned aerial system with three statistically similar relative pose solutions given in Table 6 3. The 91 conjugate points are shown as gray squares. Drone accuracy test Septem ber 2016. Courtesy of Cooper Aerial Surveys Co.

PAGE 144

144 An image pair from an unmanned aerial system (UAS) mapping project in Arizona, USA is shown in Figure 6 4 It was processed with 91 well distributed conjugate points, the images overlap by 41%, and the b ase to height ratio was approximately 1:10. Which is to say, there was nothing unusual about this pair, and that was one of the primary reasons this example was selected. The nose cone and Phoenix examples suffer from classic sources of ambiguity. The n ose cone focal length was 800mm (a very narrow field of view) and the Phoenix images were heavily compressed 640 by 480 pixel images (high noise and few features). However, the UAS model had no obvious reason for ambiguity, demonstrating that subtle attr ibutes can lead to criticality. The three statistically similar relative pose solutions are given in Table 6 3. This example is also interesting because the RMSE for the true geometry (the first solution in the T able 6 3 ) is higher than the other two. T he critical surface and volume are shown in Figure 6 2 (I L). Note that the shape of the critical surface closely approximates two perpendicular planes. Considering measurement noise, the two planes become a critical volume that could approximate planar scenes. The measurement data is given in appendix C. Table 6 3. Statistically similar relative p ose solutions for a pair of images captured from an unmanned aerial system. The vector is a unit vector in the direction from one image to the other. The orientation of the second image with respect to the first is given as Euler angles (Omega, Phi, and Kappa) in degrees. The RMSE error is given in pixels. The last two columns a re the number of inlier and outlier image matches. X Y Z Omega Phi Kappa RMSE Inliers Outliers 0.04 1.00 0.03 4.17 0.39 3.68 0.13 91 0 0.02 0.09 1.00 9.90 0.64 3.59 0.11 91 0 0.02 0.09 1.00 9.90 0.64 3.59 0.11 91 0

PAGE 145

145 Other Implications of Regular Data Transcending the assumption of ideal data has other noteworthy implications for relative pose algorithms. First, it has been said that a relative pose parameterization can only be valid if all the points are in front of the cameras [85] [100] This is the visibility constraint that was part of the critical volume definition. Horn called points in [2] We observe, trivially, that this statement is a bsurd in the case of regular data because a solution should not be rejected if an outlier seems to be behind a camera. What of restricting the statement to apply only to the numerical inliers? This is also inadequate, since error can only be detected per pendicular to the epipolar lines. This implies that the range errors cannot be checked. Hence, a feature match could be a numerical inlier and be behind a sensor. This should not be assumed to invalidate the entire solution, but rather should be used to classify feature matches as outliers. To illustrate a final point regarding positive feature matches, consider a pair of sensors on a fixed stereo base such as would be used for navigation. If the images are pointed in the direction of travel they likely have large portion of the field of view that shows features that are at effectively infinite range (i.e. the range is >> greater than the length of the stereo base). Due to measurement noise, any inlier feature match that is far away has approximately a 50% probability to be numerically negative. These measurements contribute to the solution and half of them should not be classified outliers. Thus, the classification is more appropriately cast as a hypothesis test. Using a hypothesis test to determine if feature matches are positive ensured that points that could be positive within the expected measurement noise were not rejected.

PAGE 146

146 pair is a name that has been giv en to pairs of relative pose solutions. Specifically, if is relative pose solution and is a half revolution about then is also a solution. However, at most one of the pair can satisfy the visibility constraint [100] This does not hold for clean data. The second and third solutions in Table 6 3 are a twisted pair, and both satisfy the visibility constraints within the measurement noise. Observed Prevalence of Ambiguous Relative Pose Solutions We used our proposed algorithm to obse rved prevalence of critical relative orientations (Table 6 4). The data we analyzed consisted of camera calibration sets and aerial mapping projects. The data sets were collected by colleagues and given to us for testing and software development. Most o f them are from unmanned aerial systems projects, but two were from larger format cameras. The camera calibration sets are highly convergent and representative of close range photography. Mapping projects primarily consisted of images with parallel nadir looking look vectors. We present this data before giving the details of the algorithm used to produce it because these observations justify some of algorithm design. To avoid having ambiguity dominate the data, only relative pose problems with at least 100 points and at least 20% represented the actual geometric motion of the camera could be distinguished from the e pose problem with no additional information there is no way to do this because it requires additional data. In the Shuttle Nose cone example, we used the design shape of the shuttle as secondary information. In the Phoenix Lander example, we used the g

PAGE 147

147 projects, the secondary information consisted of ties to other images and observations of points with known coordinates. Table 6 4. Observed prevalence of critical relative pose problems among real data projects. G solution that did not represent the true geometric motion had a lower solutions that had inlier RMSE that were significantly less than the t rue solution (at the 5% level). Sensor Type Metric Project Type Relatives Critical Percent Ghost < RMSE Ghost << RMSE Small No Mapping 269 0 0.00% 0 0 Small No Calibration 3614 0 0.00% 0 0 Small No Mapping 2059 0 0.00% 0 0 Small No Mapping 6076 3 0.05 % 0 0 Small No Mapping 293 1 0.34% 2 2 Small No Calibration 4899 22 0.45% 3 0 Medium Yes Mapping 294 4 1.36% 0 0 Large Yes Mapping 531 8 1.51% 2 1 Small No Mapping 888 19 2.14% 2 2 Small No Mapping 406 9 2.22% 1 0 Small No Calibration 964 62 6.43% 3 1 Small No Mapping 943 97 10.29% 11 2 Small No Mapping 8180 987 12.07% 215 67 Small No Calibration 253 34 13.44% 12 9 Small No Calibration 820 271 33.05% 22 19 Collectively our observations in Table 6 4 cannot be said to agree with relative pose bei [97] or that that the critical surface was omnipresent [91] Criticality was vanishingly rare in some projects and common in others. Analyses of the factors affecting the prevalence are beyond the scope of this work, but one phenomenon does need to be discussed because it affects the algorithm design. Originally, a design assumption was that the true solution would have an inlier RMSE that was statistically similar or bet ter than any ghost solutions. This assumption caused failures in some projects. The last two columns in Table 6 4 indicate how often ghost solutions had better and significantly better inlier RMSEs (F Test for similarity of

PAGE 148

148 variance significant at the 5% level). G host solution s with significantly better residuals were rarer than our 5% probability threshold would have predicted (0.03% of all relatives or approximately 1:300). Thus, a form multiple hypothesis testing correction would eliminate the issue, however that would require prior knowledge of not only how many relative pose estimations will be attempted but how many solutions hypothes e s will be generated for each parameter estimation. Detecting Near Criticality during Parameter Estimation In practi ce algorithms must use a finite set of regular data. It is absurd to assume the data is ideal or infinite and unwise to assume it is clean. The practical problem is thus determining whether a finite set of displacement vectors (feature matches) that prob ably contains some outliers can be explained by more than one relative pose solution. For this distinction to be meaningful there must be some way to define what amounts to significant change in the parameters. Given the legacy descriptions of the criti cal surfaces, the most obvious approach was to examine the object space reconstruction to determine if it is an approximately quadratic. This approach is computationally expensive and unreliable. It is expensive because it requires 3D reconstruction and a second parameter estimation (fitting 3D measurement noise, measur e density, scene geometry, outliers). For example, consider that a single mismatched feature that is a numerical inlier could make a reconstruction appear non critical. [98] presented a method for detecting near criticality f rom the image space measurements. This is more practical than analysis of 3D reconstructions because the dimensionality is reduced, and it is more directly related to

PAGE 149

149 the measurement noise. However, this method is still computationally expensive because address the possibility of different solutions having differing sets of inliers and outliers nor does it define how different the second solution must be to be considered distinct. We propose to detect near criticality during parameter estimation with a modification to the RANdom SAmple Consensus (RANSAC) algorithm. RANSAC is an effective and widely used robust analysis algorithm because it works well for real data, is conceptually simple, and has a catchy acronym [102] The original RANSAC algorithm can be summarized as follows. Let be the input data, a finite set of samples from sample space If is the minimum number of samples required to solve for a finite set of parameter estimations, then and every set implies a finite set of parameter vector estimations, where is the length of the parameter vector. The function is a measurement space distance that represents the residual of the measurement with respect to the model parameterization The cost fu nction maps a residual to a cost. The total cost associate with a hypothesis is The original cost function was where is a prior residual thresholding parameter. The set of inliers for a hypothesis is and the outliers are Thus, a shorter notation for the original cost is the number of outlier s. RANSAC then iterates as follows: 1. Randomly select a set

PAGE 150

150 2. Solve for the finite set of solution hypotheses, 3. Evaluate the cost of each hypothesis Keep track of the best hypothesis, and its associated set of inliers. 4. Repeat steps 1 3 as many times as necessary to have a high confidence of having found the best solution. 5. Optimize using all the inliers. The enthusiasm to utilize and improve the algorithm has led to a wealth of research and a mess of emulator acronyms. A nearly comprehensive review of the additions and modifications proposed to RANSAC is given in [73] where they attempted to combine all the contributions into a universal framework. ANTSAC [103] and RANSAC Least Entropy Like [104] are noteworthy contributions not covered in that review. There is an assumption in RANSAC, and all the proposed adaptations, that there is a single optimum solution to be found for a given set of inlie rs. There are scattered references to, and examples of, near criticality among proposed solution algorithms. In a proposed RANSAC modification, MLESAC, one of the real data examples was near [105] Szeliski and Kang acknowledged that their relative orientation algorithm would not detect multiple solutions but discussed solution stability and correlations among parameters at length [86] Numerica l techniques for determining if randomly selected sets of minimum data are near critical for projective relative orientation and resection (relating to RANSAC step 2) have been proposed [106] [107] but the assumption that the overall system has a single solution is consistently maintained. Quasi Degenerate RANSAC, QDEGSAC, came the closest to addressing the issue, but is limited to linear systems. Further, what it detects are multidimensional null spaces. Hence, there is an infinite set of similar solutions forcing a fall back to using fewer parameters [108] [109]

PAGE 151

151 The modification proposed here is to retain the set of set of statistical ly similar solutions in RANSAC step 3 instead of retaining the single best solution. Stated more formally, let be the current (possibly empty) set of best solutions, and let again be the finite set of minimum data solutions (RANSAC step 2). The set of solution hypotheses calculated from the new random minimum data set is merged with the current set of bes t hypotheses, and winnowed with respect to their associated costs and the variance of their inlier residuals. All similarly good solutions are retained as the new set of best solutions. For most image pairs, image matching algorithms ha ve predictable accuracies in pixels this makes an a priori estimation of inlier variance or residual threshold reasonable. However, the ratio of inliers to outliers varies widely from image pair to image pair. Hence, it is recommended to winnow first wi th respect to the more unpredictable quantity. Therefore, we winnowed the solution hypotheses by the number of outliers first. If is the lowest cost for any then for each test the null hypothesis that If the null hypothesis is rejected at some suitably high significance level (e.g. 95%) then is removed from If the or iginal RANSAC cost function is used (as in this investigation) this can be implemented as a test for equivalence between two binomial distributions. Inlier variance winnowing can be done in many ways. At first, we used an adaptive method: let be the lowest variance of inlier residuals for any then for each test the null hypothesis that If the null hypothesis is rejected at some suitably high significance level (e.g. 95%) then is removed from This was implemented as an F test for equality of variance. Using the adaptive F test to winnow

PAGE 152

152 solutions resulted in fewer parameter estimations being identified as near critical. However, it assumed that true solutions would have similar or bett er residuals. While this is generally the case, in our test data this approach would cause about 100 (out of 30500) true solutions to be rejected in favor of ghost solutions. This could be corrected with compensation for multiple hypothesis testing, but a priori knowledge of the number of tests is typically unavailable. We fell back to using a more traditional chi squared test for goodness of fit base on a prediction of the inlier variance (typically 1 pixel). Consider that a solution with an inlier sta ndard error of 0.2 pixels may be significantly better than a solution with a standard error of 0.3 pixels. There is a theoretical justification for eliminating one of them. However, both are well within the range of expected noise and hence both would pa ss the chi squared test for goodness of fit. Statistical theory aside, how sure should we be that the 0.3 pixel standard error solution is wrong? Finally, the set was winnowed so that it contains only distinct solutions (those that are significantly separated in solution space). It may seem counter intuitive to do this step last, but it is computationally the most expensive of the three. Doing it last is the refore more efficient. For this test, let the difference between two solution hypotheses, The variance of the quantity is by the general law of propagation of variance as where and are the co variances matrices of hypotheses i and j respectively (which are calculated as part of the least squares regression or by error pro pagation). Thus, is a standard normal test statistic that can be used to test if hypotheses are distinct. One effective method of using to ensure that all hypotheses in are

PAGE 153

153 distinct is to sort the h ypothesis set by cost and loop through it removing any higher cost considers the inlier residuals as well as the outlier count for this comparison, e.g. the maximum lik elihood cost proposed by [105] Winnowing methods used represent a tradeoff between the probability of categorizing a non critical solution as near critical and categorizing a near critical solution as non critical. In general, we found that falsely classifying a system as near critical is more acceptable than the reverse. Consider that relative pose is often used to start a sequential structure from motion operation. The first relative pose is the seed or foundation the remai nder is built on. For that operation there are typically many thousands of possible image pairs that could be used. If one pair appears near critical simply use another. After the initial image pair, the relative pose may be part of the operation to com pute other image positions and orientations. If one image appears to be near critical and thus ill suited to be the next image merged into the growing mass, then the algorithm can simply skip it and add a different image. The near critical image would th en be revisited when there was more data. Thus, it became primarily a flow control mechanism. We do not claim these methods are absolute or even optimal. Rather, we present them as recommendations and emphasize that they can and should be modified for s pecific tasks based on the risk associated with different types of errors. Other RANSAC variations were evaluated in comparison to the previously proposed algorithms. In this case there was no previously proposed algorithm that finds critical solutions se ts. Hence, the best way to quantify the performance of this algorithm is by quantifying how often other algorithms would have failed to find the true geometry.

PAGE 154

154 The exact answer depends on the cost function they use in RANSAC step 3. However, the number of true solutions that had worse inlier statistics than ghost solutions provided a reasonable estimate (Table 6 4). This implies that our algorithm flagged 273 out of 30489 (1:112) relative pose estimations as near critical for which previous algorithms w ould have chosen ghost solutions 0.9% of relative pose estimations tested. Conclusion We demonstrated the difference between relative pose critical volumes (that considering measurement noi se and sensor fields of view) and critical surfaces (that assume errorless infinite data and spherical cameras ). A modification of the popular RANSAC algorithm that maintains a set of statistically acceptable solutions (instead of a single best solution) was used to search for critical relative pose among real data projects. The prevalence of criticality ranged from 0.00% to 33.05% among projects tested. This is in apparent contradiction to previous theoretical work [97] [105] that concluded they should be rare to non existent. It is more consistent with previous [91] We conclude that critical s urfaces derivations (that assume errorless infinite data) offer an incomplete and possibly misleading characterization of the practical problem that must deal with finite data corrupted by noise and outliers. Our modification of the RANSAC algorithm is pr oposed as a useful and computationally inexpensive means of detecting near criticality during parameter estimation. We estimate that previous RANSAC algorithms would have rejected the true geometry in favor of statistically superior numerical ghost soluti ons in 0.9% of the relative pose estimations we tested.


CHAPTER 7
THE EXPANDING NICHE OF UAS IN HIGH ACCURACY MAPPING

The current niche of UAS in high accuracy mapping is predominantly in earthwork and orthophoto production. Disappointing and inconsistent published accuracy assessments and reliance on Imagery-Derived Point Clouds (IDPC) have been impediments to wider application. Chapter 3 demonstrated that it is technically feasible for UAS to do a wider variety of work. Specifically, with appropriate hardware and techniques, UAS mapping was consistently accurate (varying by only 0.2 to 0.3 GSD in RMSE). Further, stereo compilation was shown to be an accurate and reliable method for extracting features missing or distorted in IDPC, e.g. overhead wire attachment points, tops of power poles, and building drip edges. The maximum measurement error for these features in the 400-foot altitude flight was 2.3 GSD (0.076 m). These findings extend the versatility of camera-equipped UAS to projects requiring accuracies as low as 0.04 m RMSE, and to projects that require accurate extraction of break lines and above-ground features. Technical feasibility, while necessary, is insufficient. The UAS must also offer an economic advantage dramatic enough to move the market. Chapter 1 tested the hypothesis that UAS could compete on projects currently using Large Manned Aerial Vehicles (LMAV). There was consistent potential for savings on projects ranging from 0 to more than 1000 acres within 200 miles of the base of operations. However, the savings were small and hence unlikely to move the market. In contrast, it was shown in Chapter 3 that UAS mapping can save approximately 50% on a variety of projects when competing against terrestrial mapping methods. Thus, UAS are not coming for the LMAV work; they are coming for the survey work.
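As a quick check on the figures quoted above (simple arithmetic on the stated numbers, not new results), the implied ground sample distance of the 400-foot flight is

\[ \mathrm{GSD} = \frac{0.076\ \mathrm{m}}{2.3} \approx 0.033\ \mathrm{m\ per\ pixel}, \]

so the quoted 0.04 m RMSE capability corresponds to roughly 1.2 GSD at that flying height.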


To further improve UAS mapping accuracy, the influence of two mission parameters and several hardware configurations was tested in Chapter 4. As predicted, the use of a mechanical leaf shutter was significantly more accurate than a rolling digital shutter. The DJI Phantom 4 Pro camera was shown to be as suitable a metric sensor as the DJI Zenmuse X4S. The Loki PPK GNSS was shown to improve the consistency and accuracy of mapping; the magnitude of improvement was small but may be more dramatic in larger, more challenging projects. Little to no evidence was found to support the hypothesis that flying slower or using a faster camera shutter speed (to improve image sharpness) would improve mapping accuracy. The differences in RMSE among these flights were remarkably small (2-3 mm). Any advantage of slower flight or faster shutter speed was masked by other sources of error; the primary suspects are vibrations and image compression. It was disappointing to be unable to improve the accuracy with these flight parameters. On the positive side, being free to fly faster allows pilots to cover more area per flight, which increases the size of viable mapping projects. Two algorithms to improve the robustness and accuracy of automatic photo triangulation were also proposed. The first computes initial values for camera positions and orientations. The second detects numerical ghost solutions in nonlinear parameter estimations. It has been demonstrated that they are improvements to the current state of the art. However, how effectively they will extend the niche of UAS mapping remains to be seen and depends in large part on whether the industry takes notice.
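To make the speed-versus-coverage point concrete, the sketch below estimates area mapped per flight. It is our illustration, and every parameter value in it is hypothetical rather than taken from the Chapter 4 flights.

```python
def area_per_flight_ha(speed_mps, endurance_s, gsd_m, image_width_px,
                       sidelap=0.7):
    # Effective swath is the image footprint width reduced by the sidelap;
    # area covered is swath times distance flown. Turns, takeoff, and
    # landing overhead are ignored in this rough estimate.
    swath_m = gsd_m * image_width_px * (1.0 - sidelap)
    return swath_m * speed_mps * endurance_s / 10_000.0  # hectares

# Hypothetical example: 0.03 m GSD, 5472-pixel-wide sensor, 20-minute flight.
for v_mps in (5.0, 10.0):
    print(f"{v_mps:4.1f} m/s -> "
          f"{area_per_flight_ha(v_mps, 1200, 0.03, 5472):5.1f} ha")
```

Under these assumed numbers, doubling the flight speed doubles the area per flight (from about 30 to about 59 ha), which is why the freedom to fly faster matters more commercially than the few millimeters of RMSE at stake.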


APPENDIX A
CONJUGATE POINT DATA FOR THE SHUTTLE NOSE IMAGES

Distortion-corrected conjugate image measurements in the Space Shuttle nose example are given in Table A-1. The focal length used in the adjustment was 11111 pixels. This set of data has two statistically acceptable solutions, given in Table 6-1.

Table A-1. Distortion-corrected conjugate points for the Space Shuttle nose example in pixels
Left X    Left Y    Right X   Right Y
532.00    78.00     532.00    263.65
529.00    789.00    499.13    621.46
1397.12   558.24    399.35    371.99
232.80    216.80    847.44    399.94
178.36    432.80    1277.15   612.61
864.76    210.32    163.06    26.10
541.00    671.00    497.01    500.07
249.85    521.73    1337.39   359.51
23.86     499.34    1051.48   332.48
567.60    976.80    433.03    814.08
1482.08   682.08    501.75    497.84
424.80    967.20    586.88    806.20
241.20    366.80    835.42    552.23
439.20    878.40    583.85    714.50
1084.64   533.76    67.30     351.53
377.00    119.00    1500.10   50.41
536.40    218.00    526.17    405.87
164.40    620.40    896.62    454.03
157.20    555.20    916.82    741.78
353.00    369.00    1456.29   204.88
993.50    35.63     44.26     227.85
246.00    441.44    827.75    627.83
1467.39   718.66    490.36    535.82
762.64    42.13     288.06    145.05
172.00    160.00    1271.10   10.64
673.00    897.00    334.27    730.29
648.00    632.00    387.14    458.56
928.00    604.00    94.23     425.64
319.00    258.00    1426.54   91.08
1319.48   828.21    347.86    650.82
197.00    355.00    1290.59   188.48
179.76    474.72    889.08    304.95
476.48    512.84    575.01    339.38


Table A-1. Continued
Left X    Left Y    Right X   Right Y
27.00     217.00    1116.86   45.15
559.92    467.52    489.63    291.85
727.00    963.00    265.26    797.85
313.00    546.00    1415.76   725.09
18.00     741.00    1090.57   926.51
686.00    164.00    368.94    353.39
371.00    615.00    1474.08   794.07
628.00    859.00    387.49    691.77
137.37    869.68    903.29    709.98
955.00    132.00    84.14     325.09
863.00    176.00    180.99    368.37
2.68      62.86     1095.13   112.17
608.00    301.00    448.02    491.25
191.03    14.48     1293.98   158.46
761.00    58.00     289.96    247.00
567.60    305.60    488.32    126.50
66.03     487.24    1016.55   671.59
115.00    290.96    1207.20   121.93
160.00    744.00    1242.59   927.49
150.64    330.48    932.30    514.08
369.00    462.00    1480.81   639.58
968.00    192.00    68.69     386.13
211.20    251.60    865.40    76.80
698.00    408.00    348.06    600.93
15.00     681.00    1092.29   866.14
166.00    585.00    1258.63   766.94
162.00    768.00    1242.64   951.68
759.00    248.00    289.70    439.81
1153.00   391.00    132.67    204.46
1242.00   606.00    238.61    423.09
695.00    552.00    343.00    376.08
250.00    590.00    815.72    778.23
350.95    31.76     722.52    149.27
244.00    706.00    811.97    895.04
166.40    353.17    908.84    181.14
498.17    197.65    564.41    17.37
369.61    208.02    699.59    29.97
814.00    297.00    229.07    113.69
334.36    378.05    731.19    203.82
1167.00   202.00    141.59    11.35
1153.00   351.00    130.79    163.73


Table A-1. Continued
Left X    Left Y    Right X   Right Y
981.40    270.80    51.72     466.22
909.40    488.00    114.13    684.61
1161.00   678.00    158.83    498.22
38.00     408.00    1118.28   588.93
1185.20   14.00     159.52    180.15
1352.00   41.20     337.49    238.70
1172.00   756.00    178.60    578.24
1484.00   8.80      477.35    207.98
327.00    200.00    1441.89   374.55
489.88    166.54    573.94    14.20
1124.00   706.00    122.25    527.46
520.98    83.60     542.64    99.11
698.00    425.00    346.37    246.27
1049.80   158.00    17.70     353.04
561.60    522.00    486.69    714.13
1059.16   532.16    59.61     730.36
1430.24   167.12    419.87    28.03
1170.18   125.07    143.75    66.92
1265.15   430.31    253.48    242.65
1334.82   373.07    324.13    183.25
665.00    282.00    386.47    100.75
288.00    93.00     1399.51   266.40
687.99    482.27    354.05    675.74
120.78    226.26    961.24    52.38
922.00    521.00    105.65    340.79
265.00    1.00      1375.23   171.27
1302.00   112.00    285.64    309.86
691.00    661.00    328.89    855.39
706.00    624.00    318.81    818.67
807.00    108.00    240.57    78.65
1303.00   179.00    289.05    377.97
1398.00   63.00     386.72    261.55
1073.00   443.00    59.97     641.29
95.00     232.00    1193.21   410.37
140.68    69.50     943.91    107.56
356.20    266.00    1473.51   441.27
392.00    444.00    1494.52   281.99
193.00    904.00    1246.30   750.09
205.00    680.00    1279.75   520.70
241.00    847.00    1302.77   692.24
320.20    384.80    1431.26   561.76


Table A-1. Continued
Left X    Left Y    Right X   Right Y
1152.80   483.20    155.02    682.38
1169.00   835.00    186.30    659.65
220.00    600.00    1301.09   439.18
993.00    24.00     44.66     167.14
276.00    571.00    1361.83   410.28
55.40     203.60    1034.40   383.84
1038.80   348.80    14.21     546.03
969.40    204.80    67.54     17.36
407.00    590.00    1500.70   431.31
1484.00   200.00    486.19    401.50
343.00    861.00    1407.88   708.11
1270.40   324.80    263.28    524.81
1258.88   452.96    269.28    653.02
39.80     583.20    1029.84   417.89
252.00    48.00     827.51    228.93
649.00    170.00    406.51    13.06
251.80    922.56    1306.23   770.00
1372.14   390.20    385.23    591.52
1337.31   300.62    334.53    501.23
268.00    108.00    810.62    290.02
133.00    895.00    905.00    735.91
540.00    50.00     523.45    235.35
1057.62   930.81    84.83     760.44
1107.20   267.20    79.96     78.83
954.09    523.38    71.84     342.28
317.96    388.27    1428.68   565.54
211.18    826.66    1274.92   671.46


APPENDIX B
CONJUGATE POINT DATA FOR THE MARS PHOENIX LANDER IMAGES

Distortion-corrected conjugate image measurements from the Phoenix lander example are given in Table B-1. The focal length used in the adjustment was 12.5 mm. This set of data has four statistically acceptable relative orientation solutions, given in Table 6-2.

Table B-1. Distortion-corrected conjugate points for the Mars Phoenix lander example given in millimeters
Left X   Left Y   Right X   Right Y
1.587    0.437    1.553     1.563
2.691    1.357    2.689     2.554
4.421    0.485    4.350     1.626
0.207    0.874    0.205     2.006
2.990    0.207    2.876     0.886
5.566    1.288    5.572     2.489
1.175    0.154    1.141     1.280
2.415    0.092    2.344     1.202
1.909    0.299    1.840     0.801
2.402    0.816    2.370     1.966
0.299    1.265    0.295     2.401
3.772    0.759    3.733     1.921
0.057    0.094    0.057     1.210
2.898    1.150    2.889     2.333
0.851    0.621    0.808     0.473
2.668    1.311    2.661     2.497
2.958    0.308    2.927     1.461
0.032    0.768    0.047     0.321
4.255    0.115    4.019     0.936
0.078    1.050    0.099     2.167
0.910    1.908    0.900     2.910
2.985    0.671    2.934     1.845
3.503    0.964    3.296     0.117
1.827    1.305    1.691     0.268
5.630    0.290    5.173     1.189
3.721    2.859    3.006     1.782


APPENDIX C
CONJUGATE POINT DATA FOR THE UAS STEREO PAIR

Distortion-corrected conjugate image measurements from the unmanned aerial system stereo pair example are given in Table C-1. The focal length used in the adjustment was 9343.851 pixels. This set of data has three statistically similar relative orientation solutions, given in Table 6-3. A short usage sketch follows the end of Table C-1.

Table C-1. Distortion-corrected conjugate points for the unmanned aerial system stereo pair example given in pixels
Left X   Left Y   Right X   Right Y
2583.4   181.4    2394.5    1596.0
2349.3   13.5     2152.9    1750.4
2603.6   1306.0   2463.8    474.8
2645.8   4.1      2449.4    1778.3
2767.0   1128.3   2618.0    660.0
2748.6   1095.9   2598.5    690.7
2580.3   1313.1   2438.1    465.4
2313.5   1349.3   2176.5    412.9
2080.1   1218.9   1938.5    527.5
2091.7   13.8     1893.0    1733.4
1864.4   1140.6   1721.2    591.1
2226.4   1397.8   2092.3    360.0
2216.6   1410.4   2083.0    346.4
2699.4   292.0    2515.6    1492.3
1902.2   43.5     1702.5    1781.2
2169.6   120.9    1966.7    1876.6
2730.3   93.8     2529.2    1883.8
1804.3   52.6     1604.2    1783.9
2381.6   253.4    2196.3    1511.1
1931.1   1485.5   1803.8    254.8
2754.9   517.7    2580.6    1268.8
1971.5   260.9    1787.3    1478.2
2362.7   1904.7   2250.4    129.4
2928.6   1484.4   2792.6    318.2
1970.2   1841.8   1859.2    91.0
2818.7   119.2    2616.9    1915.0
2467.3   1506.4   2336.0    267.5
2424.9   1918.5   2311.7    139.4
2450.9   1907.9   2338.0    128.0
1902.5   1488.5   1775.5    250.4
2578.8   553.7    2406.1    1221.0


Table C-1. Continued
Left X   Left Y   Right X   Right Y
2169.5   494.9    1995.2    1254.6
2946.4   169.4    2755.4    1630.2
2311.0   1437.5   2177.7    325.6
2040.3   1450.2   1910.3    297.1
1743.2   1136.4   1601.5    587.1
1797.4   40.9     1597.7    1772.2
1875.4   1445.0   1743.8    292.0
2618.9   287.9    2870.9    1752.5
1793.0   11.3     1594.7    1716.5
1941.8   451.6    1765.6    1283.9
1689.3   1664.0   1777.0    145.2
1693.8   1635.6   1783.7    118.1
2549.8   1799.0   2623.2    330.6
1750.0   1151.6   1877.5    355.0
1832.8   445.7    1657.3    1282.9
1835.7   698.2    1672.1    1029.8
1843.8   1813.1   1732.8    70.3
1984.5   313.1    2180.7    1181.1
1838.8   339.6    2031.9    1162.3
2358.1   542.3    2187.0    1222.2
2487.5   62.8     2709.2    1403.0
1619.8   1849.7   1693.5    322.5
1773.1   442.6    1597.4    1282.8
1534.9   842.7    1686.7    675.3
1640.1   729.4    1801.3    784.1
1970.6   522.2    2149.5    971.0
1932.6   827.7    1774.6    907.2
1752.3   110.3    1551.0    1839.9
1586.1   1842.0   1660.8    312.9
373.6    912.3    520.0     676.8
1499.0   178.1    1732.0    1708.1
2407.4   417.0    2668.3    1897.6
1341.4   43.4     1140.1    1746.5
982.3    1240.5   850.0     435.8
891.7    1301.5   763.5     370.1
52.6     65.9     149.4     1553.2
666.7    21.7     875.6     1555.2
1211.5   36.4     1009.5    1730.5
140.4    77.4     60.3      1547.3
1056.7   194.0    845.2     1882.0
2164.9   554.0    1993.5    1196.4


Table C-1. Continued
Left X   Left Y   Right X   Right Y
492.0    1163.3   358.7     480.1
177.2    79.1     379.2     1525.4
559.6    3.8      769.5     1579.9
1292.1   12.3     1509.7    1551.7
110.1    151.4    105.4     1779.3
1001.3   140.6    1226.6    1700.0
802.8    181.1    1029.1    1753.6
2579.8   572.6    2408.4    1203.1
1216.5   47.9     1014.1    1742.5
1288.2   75.1     1509.2    1615.5
908.6    44.9     710.7     1628.7
978.4    1832.2   878.0     144.0
604.6    1310.1   723.3     268.9
434.9    1509.3   322.0     137.2
1715.0   1798.5   1604.7    64.3
291.4    1728.2   384.2     123.2
660.7    1499.9   766.0     78.1
1211.5   31.8     1010.0    1726.0
1795.3   460.5    1619.6    1265.3
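For readers who want to exercise these tables, the sketch below shows one way to feed them to a standard five-point RANSAC solver. It is a minimal illustration using OpenCV, not the dissertation's pipeline, and it assumes the points have been loaded as N-by-2 arrays in the tabulated units with the principal point at the coordinate origin.

```python
import numpy as np
import cv2

# Focal length for the UAS stereo pair of Table C-1; Appendix A uses
# 11111 pixels and Appendix B 12.5 mm in the same centered-coordinate format.
f = 9343.851
K = np.array([[f,   0.0, 0.0],
              [0.0, f,   0.0],
              [0.0, 0.0, 1.0]])

def relative_pose(left_xy, right_xy):
    # Estimate a single relative pose hypothesis with OpenCV's five-point
    # RANSAC. Note that one call returns only the best-cost solution; the
    # Chapter 6 modification exists precisely because a numerically superior
    # ghost can displace the true geometry here.
    E, inlier_mask = cv2.findEssentialMat(left_xy, right_xy, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, left_xy, right_xy, K, mask=inlier_mask)
    return R, t

# left_xy and right_xy would be float64 arrays of shape (N, 2) read from
# the Left X/Left Y and Right X/Right Y columns of Table C-1.
```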


LIST OF REFERENCES
[1] D. Day, W. Weaver and L. Wilsing, "Accuracy of UAS Photogrammetry: A Comparative Evaluation," Photogrammetric Engineering & Remote Sensing, vol. 82, no. 12, pp. 909-914, 2016.
[2] K. R. Koch, "Parameter Estimation in Linear Models," in Parameter Estimation and Hypothesis Testing in Linear Models, 2nd Edition, Berlin, Springer-Verlag, 1999, pp. 149-270.
[3] "Altavian 2016 Catalog," 27 December 2016. [Online]. Available: http://cdn.altavian.com/media/2016/11/09203929/Altavian_Catalog.pdf?ed2f26df2d9c416fbddddd2330a778c6=zvtwbtmbsfzvmnbpfpt.
[4] Mosquito Helicopters, "Mosquito_Model_XE285," 26 December 2016. [Online]. Available: http://www.innovatortech.ca/Mosquito_Model_XE285.html.
[5] D. Junkin, Interviewee, EX Question. [Interview]. 25 November 2015.
[6] K. Chance, Interviewee, Director of Business Development. [Interview]. 16 December 2016.
[7] J. Farrar, "15 Things Every LiPo Battery User Should Know," 7 February 2015. [Online]. Available: http://thedronegirl.com/2015/02/07/lipo-battery/.
[8] Horizon Hobby, "Lithium Polymer Batteries," 26 December 2016. [Online]. Available: https://www.horizonhobby.com/pdf/EFL-LiPoSafetyWarnings.pdf.
[9] Thunder Power RC, "Important Safety Instructions and Warnings," 26 December 2016. [Online]. Available: http://www.thunderpowerrc.com/core/media/media.nl?id=132007&c=1065743&h=8637782eee240ecb0782&whence=.
[10] A. Amato, "Do you need drone insurance," 15 February 2015. [Online]. Available: http://dronelife.com/2015/02/19/do-you-need-drone-insurance/. [Accessed 12 December 2016].
[11] J. Perry, Interviewee, Insurance Agent. [Interview]. 19 October 2015.
[12] M. Burgess, Interviewee, Program Coordinator, Unmanned Aircraft Systems Research Program. [Interview]. 4 September 2015.


[13] G. Barnes, Interviewee, Professor, University of Florida. [Interview]. 18 September 2015.
[14] T. Mamatas, Interviewee, Chief Technical Officer, Altavian. [Interview]. 13 November 2015.
[15] P. Ifju, Interviewee, Professor, University of Florida. [Interview]. 20 November 2015.
[16] S. Bohlman, Interviewee, Assistant Professor, University of Florida. [Interview]. 2 October 2015.
[17] Internal Revenue Service, "2016 Standard Mileage Rates for Business," 26 December 2016. [Online]. Available: https://www.irs.gov/uac/newsroom/2016-standard-mileage-rates-for-business-medical-and-moving-announced.
[18] Alan Divers, PLS Land Surveying, "Rates," 21 December 2016. [Online]. Available: http://www.adivers.com/rates.pdf.
[19] BFM Professional Land Surveying, "bfmcorporation.com," 21 December 2016. [Online]. Available: http://www.bfmcorporation.com/overview/rates.html.
[20] ProMatcher, "Land Surveying," 21 December 2016. [Online]. Available: http://land-surveyors.promatcher.com/cost/.
[21] Phase One, "Phase One Aerial Cameras," 21 December 2016. [Online]. Available: http://downloads.phaseone.com/7e71b0a2-f42d-42c7-896b-2c16968f79c3/English/iXU-1000_Brochure_A4_4print_singlP.pdf.
[22] Illunis, "Aerial Imaging," 21 December 2016. [Online]. Available: http://www.illunis.com/illunis/applications/aerial-imaging/.
[23] T. Hospod, Interviewee, Sales Engineer, 1st Vision, Inc. [Interview]. 26 December 2016.
[24] S. Elhardt, Interviewee, Partner and Chief Operating Officer. [Interview]. 15 November 2016.
[25] "Highlight Insurance," 15 July 2015. [Online]. Available: http://forum.mosquito-helicopter.info/showthread.php?tid=3509&highlight=insurance.


[26] Mosquito Helicopters, "thread 795," 14 April 2010. [Online]. Available: http://forum.mosquito-helicopter.info/archive/index.php?thread-795.html.
[27] J. Uptigrove, Interviewee, Engineer. [Interview]. 16 December 2016.
[28] "Drones Survey Erosion In Death Valley," 2 January 2017. [Online]. Available: https://www.altavian.com/case-study/drones-survey-erosion-death-valley/.
[29] N. Thomas, Interviewee, Helicopter Pilot. [Interview]. 16 November 2016.
[30] C. Crouse, "Quote#: M29695-0," Frederick, MD, 2015.
[31] D. Verges, Interviewee, General Manager, Midwest Aerial Photography. [Interview]. 15 December 2016.
[32] F. J. Mesas-Carrascosa, M. Notario García, J. Meroño de Larriva and A. García-Ferrer, "An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas," Sensors, vol. 16, no. 11, p. 1838, November 2016.
[33] Y. Zhang, J. Xiong and L. Hao, "Photogrammetric processing of low altitude images acquired by unpiloted aerial vehicles," The Photogrammetric Record, vol. 26, no. 134, pp. 190-211, 2011.
[34] J. Gonçalves and R. Henriques, "UAV photogrammetry for topographic monitoring of coastal areas," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 101-111, June 2015.
[35] F. J. Mesas-Carrascosa, J. Torres-Sánchez, I. Clavero-Rumbao, A. García-Ferrer, J. M. Peña, I. Borra-Serrano and F. López-Granados, "Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management," Remote Sensing, vol. 7, no. 10, pp. 12793-12814, 2015.
[36] B. Ruzgienė, T. Berteška, S. Gečytė, E. Jakubauskienė and V. Č. Aksamitauskas, "The surface modelling based on UAV photogrammetry and qualitative estimation," Measurement, vol. 73, pp. 619-627, 2015.
[37] S. Miller, "Photogrammetric Products," in Manual of Photogrammetry, 5th Edition, J. C. McGlone, Ed., Bethesda, Maryland, American Society for Photogrammetry and Remote Sensing, 2004, pp. 991, 1002-1003.


[38] M. Ai, Q. Hu, J. Li, M. Wang, H. Yuan and S. Wang, "A robust photogrammetric processing method of low-altitude UAV images," Remote Sensing, vol. 7, no. 3, pp. 2302-2333, 2015.
[39] R. Gini, D. Pagliari, D. Passoni, L. Pinto, G. Sona and P. Dosso, "UAV Photogrammetry: Block Triangulation Comparisons," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Rostock, Germany, 2013.
[40] S. Rhee and T. Kim, "Automated DSM extraction from UAV images and Performance Analysis," in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Toronto, Canada, 2015.
[41] C. Stallings, "Exploring Mapping Capabilities of Small Commercial Drones," in Imaging and Geospatial Technology Forum 2017 Conference Proceedings, Baltimore, Maryland, 2017.
[42] C. Smith, Interviewee, Owner, Mapping Resource Group. [Interview]. 15 November 2015.
[43] M. Kitaif, Interviewee, Owner, Cardinal Systems LLC. [Interview]. 21 August 2017.
[44] F. Mancini, M. Dubbini, M. Gattelli, F. Stecchi, S. Fabbri and G. Gabbianelli, "Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments," Remote Sensing, vol. 5, no. 12, pp. 6880-6898, 2013.
[45] E. Nocerino, F. Menna, F. Remondino and R. Saleri, "Accuracy and block deformation analysis in automatic UAV and terrestrial photogrammetry - Lesson learnt," in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Strasbourg, France, 2013.
[46] S. Ostrowski, G. Jozkow, C. Toth and B. Vander Jagt, "Analysis of point cloud generation from UAS images," in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Denver, Colorado, USA, 2014.
[47] T. N. Tonkin, N. G. Midgley, D. J. Graham and J. Labadz, "The potential of small unmanned aircraft systems and structure-from-motion for topographic surveys: A test of emerging integrated approaches at Cwm Idwal, North Wales," Geomorphology, vol. 226, pp. 35-43, 2014.


[48] M. Rehak and J. Skaloud, "Fixed-wing micro aerial vehicle for accurate corridor mapping," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, p. 23, 2015.
[49] W. Volkmann, "White Paper: Mapping Without Ground Control Points: Does It Work?," August 2015. [Online]. Available: http://v-map.net/white-paper/.
[50] M. Rehak and J. Skaloud, "Applicability of New Approaches of Sensor Orientation to Micro Aerial Vehicles," in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 2016.
[51] T. N. Tonkin and N. G. Midgley, "Ground control networks for image based surface reconstruction: an investigation of optimum survey designs using UAV derived imagery and structure-from-motion photogrammetry," Remote Sensing, vol. 8, no. 9, p. 786, 2016.
[52] L. Wijnber, "3DroneMapping completes PPK trials," 1 December 2016. [Online]. Available: https://community.emlid.com/t/3dronemapping-completes-ppk-trials/4613.
[53] S. Gindraux, R. Boesch and D. Farinotti, "Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles' Imagery on Glaciers," Remote Sensing, vol. 9, no. 2, p. 186, 2017.
[54] J. Vautherin, S. Rutishauser, K. Schneider-Zapp, H. F. Choi, V. Chovancova, A. Glass and C. Strecha, "Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 3, no. 3, pp. 139-146, 2016.
[55] C. S. Fraser, "Automatic camera calibration in close range photogrammetry," Photogrammetric Engineering & Remote Sensing, vol. 79, no. 4, pp. 381-388, 2013.
[56] M. R. James and S. Robson, "Mitigating systematic error in topographic models derived from UAV and ground-based image networks," Earth Surface Processes and Landforms, vol. 39, no. 10, pp. 1413-1420, 2014.
[57] K. Tahar, "An evaluation on different number of ground control points in unmanned aerial vehicle photogrammetric block," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, Istanbul, Turkey, 2013.


[58] F. Agüera-Vega, F. Carvajal-Ramírez and P. Martínez-Carricondo, "Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle," Measurement, vol. 98, pp. 221-227, February 2017.
[59] S. Harwin and A. Lucieer, "Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery," Remote Sensing, vol. 4, no. 6, pp. 1573-1599, 2012.
[60] J. Wang, G. Yong, G. B. M. Heuvelink, C. Zhou and D. Brus, "Effect of the sampling design of ground control points on the geometric correction of remotely sensed imagery," International Journal of Applied Earth Observation and Geoinformation, vol. 18, pp. 91-100, 2012.
[61] E. Ridolfi, G. Buffi, S. Venturi and P. Manciola, "Accuracy Analysis of a Dam Model from Drone Surveys," Sensors, vol. 17, no. 8, 2017.
[62] M. Bolognesi, A. Furini, V. Russo, A. Pellegrinelli and P. Russo, "Accuracy of cultural heritage 3D models by RPAS and terrestrial photogrammetry," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL, no. 5, pp. 113-119, 2014.
[63] J. Torres-Sánchez, F. López-Granados, I. Borra-Serrano and J. M. Peña, "Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards," Precision Agriculture, vol. 19, no. 1, pp. 115-133, 2018.
[64] C. Amrullah, D. Suwardhi and I. Meilano, "Product Accuracy Effect of Oblique and Vertical Non-Metric Digital Camera Utilization in UAV Photogrammetry to Determine Fault Plane," in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 2016.
[65] S. Pertuz, D. Puig and M. A. García, "Analysis of focus measure operators for shape-from-focus," Pattern Recognition, pp. 1415-1432, 2013.
[66] B. Triggs, "Factorization methods for projective structure and motion," in Proceedings CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, California, USA, 1996.


[67] R. Hartley, N. Dano and R. Kaucic, "Plane-based projective reconstruction," in Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 2001.
[68] K. F. Blonquist and R. T. Pack, "Network orientation using the scaled orthographic projection for parameter initialization," Photogrammetric Engineering & Remote Sensing, vol. 78, no. 5, pp. 505-517, 2012.
[69] T. Luhmann, S. Robson, S. Kyle and I. Harley, Close Range Photogrammetry, Wiley, 2006.
[70] E. M. Mikhail, "Use of triplets in analytical aerotriangulation," Photogrammetric Engineering, pp. 625-632, 1962.
[71] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, New York, NY, USA: Cambridge University Press, 2003.
[72] "... Applications: a Review," in Proceedings of the Tyrrhenian International Workshop on Digital Communications (IWDC 2002), Advanced Methods for Multimedia Signal Processing, Capri, Italy, 2002.
[73] R. Raguram, O. Chum, M. Pollefeys, J. Matas and J. Frahm, "USAC: a universal framework for random sample consensus," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 2022-2038, 2013.
[74] U. A. Uotila, "Notes on Adjustment Computations Part 1," Department of Geodetic Science and Surveying, The Ohio State University, Columbus, Ohio, 1986.
[75] R. I. Hartley, "In defense of the eight-point algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 6, pp. 580-593, 1997.
[76] O. Thomas and E. R. Oshel, "Geometric Derivations of Minimal Sets of Sufficient Multiview Constraints," The Photogrammetric Record, vol. 27, no. 137, pp. 74-93, 2012.
[77] O. Faugeras, "On the Geometry and Algebra of the Point and Line Correspondences Between N Images," in Proceedings of IEEE International Conference on Computer Vision, Cambridge, MA, USA, 1996.
[78] T. Moons, "A Guided Tour Through Multiview Relations," in 3D Structure from Multiple Images of Large-Scale Environments (SMILE 1998), Lecture Notes in Computer Science, vol. 1506, R. Koch and L. Van Gool, Eds., Berlin, Springer, 1998, pp. 304-346.


[79] A. Heyden, "Tensorial Properties of Multiple View Constraints," Mathematical Methods in the Applied Sciences, vol. 23, no. 2, pp. 169-202, 2000.
[80] C. J. Mugnier, W. Förstner, B. Wrobel, F. Paderes and R. Munjy, "The Mathematics of Photogrammetry," in The Manual of Photogrammetry, 5th Edition, Bethesda, Maryland, American Society for Photogrammetry and Remote Sensing, 2004, pp. 47-49.
[81] D. A. Cox, J. Little and D. O'Shea, Using Algebraic Geometry, Springer Science & Business Media, 2006.
[82] H. Stewénius, C. Engels and D. Nistér, "Recent Developments on Direct Relative Orientation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 60, no. 4, pp. 284-294, 2006.
[83] H. Li and R. Hartley, "Five-Point Motion Estimation Made Easy," in 18th International Conference on Pattern Recognition (ICPR 2006), Hong Kong, China, 2006.
[84] Z. Kukelova, M. Bujnak and T. Pajdla, "Polynomial Eigenvalue Solutions to the 5-pt and 6-pt Relative Pose Problems," in Proceedings of the British Machine Vision Conference, Leeds, United Kingdom, 2008.
[85] B. K. P. Horn, "Relative Orientation," International Journal of Computer Vision, vol. 4, no. 1, pp. 59-78, 1990.
[86] R. Szeliski and S. B. Kang, "Shape Ambiguities in Structure From Motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 506-512, 1997.
[87] S. Negahdaripour, "Multiple interpretations of the shape and motion of objects from two perspective images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 11, pp. 1025-1039, 1990.
[88] R. Szeliski and S. B. Kang, "Shape Ambiguities in Structure From Motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 506-512, 1997.
[89] G. S. Young and R. Chellappa, "Statistical Analysis of Inherent Ambiguities in Recovering 3-D Motion from a Noisy Flow Field," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 10, pp. 995-1013, 1992.


[90] K. Daniilidis and H. H. Nagel, "The Coupling of Rotation and Translation in Motion Estimation of Planar Surfaces," in Computer Vision and Pattern Recognition, Proceedings CVPR '93, New York, NY, USA, 1993.
[91] Q. T. Luong and O. D. Faugeras, "The Fundamental Matrix: theory, algorithms, and stability analysis," International Journal of Computer Vision, vol. 17, no. 1, pp. 42-75, 1996.
[92] J. Oliensis, "A New Structure-from-Motion Ambiguity," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 7, pp. 685-700, 2000.
[93] B. K. P. Horn and E. J. Weldon, "Robust Direct Methods for Recovering Motion," International Journal of Computer Vision, vol. 2, no. 1, pp. 57-76, 1986.
[94] S. Negahdaripour and C. H. Yu, "Robust recovery of motion: effects of surface orientation and field of view," in Computer Vision and Pattern Recognition, Proceedings CVPR '88, Ann Arbor, MI, USA, 1988.
[95] G. Adiv, "Inherent Ambiguities in Recovering 3-D Motion and Structure from a Noisy Flow Field," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 5, pp. 477-489, 1989.
[96] S. J. Maybank, "The angular velocity associated with the optical flow field arising from motion through a rigid environment," Proceedings of the Royal Society, vol. 401, pp. 317-326, 1985.
[97] B. K. P. Horn, "Motion Fields Are Hardly Ever Ambiguous," International Journal of Computer Vision, vol. 1, pp. 259-274, 1987.
[98] Q. T. Luong and O. D. Faugeras, "A Stability Analysis of the Fundamental Matrix," in European Conference on Computer Vision - ECCV '94, Stockholm, Sweden, 1994.
[99] S. Negahdaripour, "Wrong Multiple Interpretations of the Shape and Motion of Objects from Two Perspective Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 11, pp. 1025-1039, 1990.
[100] H. C. Longuet-Higgins, "Multiple Interpretations of a Pair of Images of a Surface," Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, vol. 418, no. 1854, pp. 1-15, 1988.


[101] T. Brodsky, C. Fermüller and Y. Aloimonos, "Directions of Motion Fields are Hardly Ever Ambiguous," International Journal of Computer Vision, vol. 26, no. 1, pp. 5-24, 1998.
[102] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[103] S. Otte, U. Schwanecke and A. Zell, "ANTSAC: A Generic RANSAC Variant Using Principles of Ant Colony Algorithms," in 2014 22nd International Conference on Pattern Recognition (ICPR), Stockholm, 2014.
[104] C. Distante and G. Indiveri, "RANSAC-LEL: An optimized version with least entropy-like estimators," in 18th IEEE International Conference on Image Processing, Brussels, Belgium, 2011.
[105] P. H. Torr and A. Zisserman, "MLESAC: A New Robust Estimator with Application to Estimating Image Geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138-156, 2000.
[106] Y. Wu, Y. Li and Z. Hu, "Detecting and handling unreliable points for camera parameter estimation," International Journal of Computer Vision, vol. 79, no. 2, pp. 209-223, 2008.
[107] T. Lan, Y. Wu and Z. Hu, "Twisted cubic: degeneracy degree and relationship with general degeneracy," in Asian Conference on Computer Vision - ACCV 2009, Xi'an, China, 2010.
[108] R. Harvey and A. Bangham, "The Analysis of Ambiguous Solutions in Linear Systems and its Application to Computer Vision," in British Machine Vision Conference (BMVC '03), Norwich, United Kingdom, 2003.
[109] J. M. Frahm and M. Pollefeys, "RANSAC for (Quasi-)Degenerate data (QDEGSAC)," in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 2006.


BIOGRAPHICAL SKETCH

Orrin Thomas completed his undergraduate work in geomatics engineering at California State University, Fresno. He wanted to be a land surveyor so that he could work outside and not have to live in a large urban area. Ironically, upon completing his bachelor's degree he was offered a position at Johnson Space Center as a photogrammetrist and moved to Houston, TX. There he supported Space Shuttle and Space Station remote sensing needs and completed a master's degree in civil engineering that focused on industrial photogrammetry topics relevant to his work. When American manned space flight was effectively cancelled, he resigned and found work supporting unmanned space flight at the USGS Astrogeology Unit. There he wrote sensor models for satellites, did mapping projects on the Moon and other celestial bodies, and added robust analysis support to their bundle adjustment. After a couple of years, he entered the private sector at Cardinal Systems, LLC, and has been adding automation to their photogrammetry and LiDAR processing packages ever since. His doctoral research work was inspired by the needs of his production work.