
LIDAR Aided Camera Calibration in Hybrid Imaging and Mapping Systems

Permanent Link: http://ufdc.ufl.edu/UFE0021836/00001

Material Information

Title: LIDAR Aided Camera Calibration in Hybrid Imaging and Mapping Systems
Physical Description: 1 online resource (80 p.)
Language: english
Creator: Singhania, Abhinav
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: adjustment, aerial, altm, bundle, calibration, camera, ilris, lidar, mapping, optech, photogrammetry
Civil and Coastal Engineering -- Dissertations, Academic -- UF
Genre: Civil Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Advancement in the fields of position and navigation systems has made possible the tracking of a moving platform with an accuracy of a few centimeters. This in turn has led to the development of the concept of direct georeferencing with the aid of Inertial Measurement Unit (IMU) and Differential Global Positioning System (DGPS). The aerial mapping systems have expanded their scope from aerial photography to include laser scanning systems, popularly known as LIDAR, which use laser pulses to map the earth surface resulting in high resolution surface models. The present century has seen the successful introduction and use of hybrid imaging and mapping systems which typically consist of a laser scanner and a digital camera mounted together on a platform. The strength of such systems lies in the use of two sensors that complement each other well. The laser scanning system provides high resolution and accurate three-dimensional positions, whereas the digital camera provides textural and spectral information. The successful co-registration of such systems depends on the accurate calibration of the mapping systems. Camera calibration is one such important component of this mapping process. Calibration includes the relative position and orientation of the camera with respect to the other sensors in the mapping system, as well as the internal geometry/orientation of the camera. Previously, aerial mapping was limited to the use of metric cameras, but in recent years some users have elected to work with less expensive, small format cameras, for which the internal orientation of the camera assumes a higher significance. Various procedures have been introduced and studies carried out to study the use of nonmetric cameras in mapping. One conclusion reached by virtually all investigations is that careful calibration of the camera at frequent intervals is essential. This thesis reports on a study carried out at the Geosensing Systems Engineering Research Center at the University of Florida, which explores the use of LIDAR data in the calibration of digital cameras. Two different cases are studied, one using a terrestrial mapping system and another using an airborne mapping system. The calibration is performed with the aid of LIDAR data and the results are examined by evaluating the accuracy in positions of the control points obtained from georeferenced images.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Abhinav Singhania.
Thesis: Thesis (M.S.)--University of Florida, 2007.
Local: Adviser: Shrestha, Ramesh L.
Electronic Access: RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2009-12-31

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0021836:00001




Full Text

PAGE 1

LIDAR AIDED CAMERA CALIBRATION IN HYBRID IMAGING AND MAPPING SYSTEMS

By

ABHINAV SINGHANIA

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007

PAGE 2

Copyright 2007 by Abhinav Singhania

PAGE 3

Dedicated to my mother, father and brother

PAGE 4

ACKNOWLEDGMENTS

I would like to thank my professors: William Carter, Ramesh Shrestha and Clint Slatton for their support and encouragement throughout my study. Dr Ramesh Shrestha provided the much needed guidance and support towards the completion of this work. The direction, insight and critique provided by Dr William Carter were quintessential and appreciated. I would also like to express my gratitude towards Sidney Schofield and Scott Miller for flying and collecting the data required for the study. Finally I would also like to thank Donald Moe from USGS for his inputs.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS ..... 4
LIST OF TABLES ..... 7
LIST OF FIGURES ..... 8
ABSTRACT ..... 10

CHAPTER

1 INTRODUCTION ..... 12
   Background ..... 12
      Aerial Mapping ..... 12
      Ground Based Terrestrial Mapping ..... 14
   Objective ..... 15

2 IMAGE GEOREFERENCING ..... 17
   Indirect Georeferencing as Applied to Terrestrial Mapping System ..... 17
   Direct Georeferencing as Applied to Airborne Mapping System ..... 19

3 CAMERA CALIBRATION ..... 24
   Introduction ..... 24
   Camera Model ..... 25
      Projective Collineation Camera Model ..... 25
      Perspective Collineation Camera Model ..... 26
   Calibration Parameters ..... 27
      Exterior Orientation Parameters ..... 27
      Interior Orientation Parameters ..... 28
      Additional Parameters ..... 29
   Calibration Procedure ..... 29
      Network Geometry for Self Calibrating Bundle Adjustment ..... 30
      Least Squares Analysis ..... 31

4 TERRESTRIAL CAMERA CALIBRATION ..... 35
   Calibration Model ..... 35
   Calibration Data ..... 37
      LIDAR Data ..... 37
      Initial Orientation Parameters ..... 37
      Control Points ..... 38
   Results ..... 38

PAGE 6

5 AIRBORNE CAMERA CALIBRATION ..... 49
   Study Area ..... 49
   Calibration Data ..... 49
      Initial Exterior and Interior Orientation ..... 49
      LIDAR Data ..... 50
      Tie Points and Ground Control ..... 50
   Ground Truth Using GPS ..... 51
   Calibration Results ..... 52

6 CONCLUSION ..... 64

APPENDIX

A CAMERA MODELS ..... 66
B IMAGING AND MAPPING SENSORS AT UF ..... 70
C PRINCIPAL AND NODAL POINTS ..... 75

LIST OF REFERENCES ..... 77
BIOGRAPHICAL SKETCH ..... 80

PAGE 7

LIST OF TABLES

4.1 Initial value of calibration parameters ..... 40
4.2 Control points for scan 1 ..... 40
4.3 Control points for scan 2 ..... 41
4.4 Camera model parameters ..... 41
4.5 Camera orientation in degrees ..... 41
4.6 Control point residuals for scan 1 (in pixels) ..... 42
4.7 Control point residuals for scan 2 (in pixels) ..... 42
4.8 RMSE values for control point residuals ..... 43
5.1 Flight line information ..... 54
5.2 Lever arms between the laser and the camera ..... 54
5.3 Initial image data as obtained from trajectory file ..... 54
5.4 Ground control points ..... 55
5.5 Initial coordinates for each tie point ..... 55
5.6 Control points obtained from GPS survey ..... 56
5.7 Orientation parameters from self calibrating bundle adjustment ..... 57
5.8 Average geodetic position derived from georeferenced imagery for each tie point with standard deviations ..... 57
5.9 Residuals obtained from comparison of derived geodetic coordinates with GPS surveyed ground truths ..... 58
B.1 ALTM 1233 specifications ..... 71
B.2 ILRIS3D specifications ..... 72
B.3 Specification for MS4100 multispectral camera ..... 73
B.4 Specification for Nikon D80 SLR camera ..... 74

PAGE 8

LIST OF FIGURES

2.1 The coordinate transformations involved between these frames ..... 18
2.2 Direct Georeferencing ..... 22
2.3 Coordinate Systems in Direct Referencing ..... 23
2.4 Coordinate transformation in Direct Sensor Orientation ..... 23
3.1 Exterior orientation elements for an airborne mapping system ..... 33
3.2 Exterior orientation parameters in a Terrestrial mapping system ..... 33
3.3 Interior orientation parameters (Fraser, 2001) ..... 34
4.1 Terrestrial Mapping System consisting of the laser scanner and the digital camera ..... 44
4.2 Point of origin for the laser data (ILRIS Product Manual) ..... 44
4.3 Point cloud with points color coded with intensity ..... 45
4.4 Scan 1 image showing tie points ..... 46
4.5 Laser point cloud color coded by RGB values obtained from the internal camera of the ILRIS ..... 47
4.6 Laser point cloud color coded by RGB values obtained from external camera (scan 1) ..... 47
4.7 Laser point cloud color coded by RGB values obtained from external camera (scan 2) ..... 48
5.1 Airborne Mapping System showing the laser head and the camera ..... 59
5.2 Location of the study area ..... 59
5.3 Study area showing flight lines location and orientation ..... 60
5.5 Intensity image from LIDAR data ..... 61
5.6 Tie points (blue circle) and ground control points (red circle) ..... 62
5.7 Intensity image with ground control points ..... 63
5.8 Georeferenced imagery overlaid with point cloud color coded with elevation ..... 63
A.1 Projective camera model using standard perspective projection ..... 66

PAGE 9

A.2 Image coordinate system parallel to mapping coordinate system ..... 68
A.3 Image coordinates transformation in the tilted photo plane ..... 68
B.1 ALTM 1233 Airborne laser mapping system ..... 70
B.2 ILRIS 3D ..... 71
B.3 MS4100 Multispectral camera ..... 72
B.4 Nikon D80 digital SLR camera ..... 73

PAGE 10

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

LIDAR AIDED CAMERA CALIBRATION IN HYBRID IMAGING AND MAPPING SYSTEMS

By

Abhinav Singhania

December 2007

Chair: Ramesh Shrestha
Major: Civil Engineering

Advancement in the fields of position and navigation systems has made possible the tracking of a moving platform with an accuracy of a few centimeters. This in turn has led to the development of the concept of direct georeferencing with the aid of Inertial Measurement Unit (IMU) and Differential Global Positioning System (DGPS). The aerial mapping systems have expanded their scope from aerial photography to include laser scanning systems, popularly known as LIDAR, which use laser pulses to map the earth surface resulting in high resolution surface models.

The present century has seen the successful introduction and use of hybrid imaging and mapping systems which typically consist of a laser scanner and a digital camera mounted together on a platform. The strength of such systems lies in the use of two sensors that complement each other well. The laser scanning system provides high resolution and accurate three-dimensional positions, whereas the digital camera provides textural and spectral information.

The successful co-registration of such systems depends on the accurate calibration of the mapping systems. Camera calibration is one such important component of this mapping process. Calibration includes the relative position and orientation of the camera with respect to the other

PAGE 11

sensors in the mapping system, as well as the internal geometry/orientation of the camera. Previously, aerial mapping was limited to the use of metric cameras, but in recent years some users have elected to work with less expensive, small format cameras, for which the internal orientation of the camera assumes a higher significance. Various procedures have been introduced and studies carried out to study the use of nonmetric cameras in mapping. One conclusion reached by virtually all investigations is that careful calibration of the camera at frequent intervals is essential.

This thesis reports on a study carried out at the Geosensing Systems Engineering Research Center at the University of Florida, which explores the use of LIDAR data in the calibration of digital cameras. Two different cases are studied, one using a terrestrial mapping system and another using an airborne mapping system. The calibration is performed with the aid of LIDAR data and the results are examined by evaluating the accuracy in positions of the control points obtained from georeferenced images.

PAGE 12

CHAPTER 1
INTRODUCTION

Background

Aerial Mapping

Aerial mapping has evolved into a relatively efficient and accurate method for producing topographic maps. Within the past decade, film based cameras have been phased out and replaced by the widespread use of digital imaging sensors. Moreover, with the use of position (Differential Global Positioning System, DGPS) and navigation sensors (Inertial Measurement Unit, IMU) in recent years, combined with the array of digital imaging sensors available, such as digital cameras, hyperspectral/multispectral cameras, LIDAR (Light Detection and Ranging) imaging sensors and SAR (Synthetic Aperture Radar), a marked transformation has taken place in the approach to the problem of image georeferencing.

The traditional procedure for aerial mapping involved collecting images with at least 60% forward overlap and 30% side overlap, which were used to form contiguous stereo models covering the area of interest (Wolf and Dewitt, 2000). Traditional surveying methods were used to establish Ground Control Points (GCPs). The GCPs and their corresponding image pixel coordinates were used to perform aerotriangulation using the principle of space resection to estimate the position and the orientation of the aerial camera sensor. A mathematical model, derived from collinearity equations using the orientation and position parameters, was used to georeference the image. This procedure is known as space intersection. The position and orientation parameters together constitute the exterior orientation parameters (EOP) of the imaging sensor (Wolf and Dewitt, 2000).

Advances in the field of GPS and navigation sensors and their use in aerial mapping have realized the concept of Direct Image Georeferencing (Dorota A, 2001; Cramer, 2001; Bumker

PAGE 13

et al. 2002). These sensors provide direct measurements for EOP. Direct georeferencing has proven advantageous because it eliminates the need for using a large number of ground control points (GCP), greatly reducing the costs associated with establishing them. Moreover, it makes possible the aerial mapping of otherwise inaccessible remote areas (Cramer, 2001). GPS and IMUs have also made possible the use of aerial imaging sensors such as LIDAR and SAR. Moreover, different imaging sensors can be used simultaneously on the same aircraft, complementing each other. For example, LIDAR provides accurate three dimensional (3D) position data in the form of X, Y, Z point clouds or a Digital Elevation Model (DEM), while Charge Coupled Device (CCD) sensor based multispectral cameras can provide spectral data over a wide range of wavelengths.

Today, a typical aerial mapping system may consist of a digital multispectral camera, a laser scanning system, a high accuracy GPS receiver, and an IMU. Each of the instruments must be calibrated and their relative positions must be accurately determined. The various geometrical aspects that must be considered for calibration include:

- Lever arm offsets between the camera sensor and the GPS antenna reference point
- Angular misalignment between the coordinate systems of the imaging sensor and the IMU system, also known as boresight angles

Apart from these, if a camera is used the internal orientation parameters also need to be calibrated (principal point coordinates and focal length; Cramer, 2001).

The lever arm offsets are generally determined using total station or other ground surveying instruments. In the case of an IMU system integrated into the imaging sensor, the angular misalignment calibration may be done in the lab once and then checked at regular intervals. However, if the IMU sensors are mounted separately from the imaging sensor, the calibration needs to be checked each time either of the sensors is removed from the aircraft.

PAGE 14

The internal orientation parameters of a nonmetric camera are also generally determined in a laboratory using a calibration test field. However, these cameras have an unstable interior geometry. Potential error sources include the movement of the CCD sensor and the lens mounts with respect to the camera body (Fraser, 1997; Ruzgien 2005). These movements, though minute in nature, are significant enough to affect the interior orientation parameters. Also, these parameters are determined in laboratories under constant and homogenous temperature and pressure conditions. In actual flight conditions, these conditions vary and can cause non-negligible lens deformations and distortions (Karsten et al, 2005). Hence, the accuracy and stability of the interior orientation parameters is always questionable.

Ground Based Terrestrial Mapping

In the past, ground based data collection using close range photogrammetric techniques was largely limited to such fields as architecture, heritage data collection, and geology. Data collection was done using cameras. The information obtained was mainly 2D. Later, 3D information was extracted using multiple images and stereoscopic principles. The start of the present century saw the use of ground based laser scanners becoming popular. Also known as terrestrial laser scanners, they provide high density point clouds (position information) together with monochrome intensity data, reaching high accuracy levels. Their applications have expanded to the fields of civil engineering, forestry, beach erosion studies and archeology, to name a few (Drake, 2002; Fernandez, 2007). Similar to the airborne mapping systems, recent years have seen a shift towards hybrid terrestrial imaging systems that simultaneously acquire data using a digital camera as well as a laser scanner (Ulrich et al, 2003; Jansa et al, 2004). The laser data not only enhances the data acquired but can also assist in the calibration. With the millimeter level positional accuracy of terrestrial laser systems (Optech product specifications), it provides a 3D calibration field for external as well as internal calibration of the camera.

PAGE 15

The basic photogrammetric principles for the data fusion in these hybrid systems remain the same as for the airborne systems. Here too, the importance of the calibration of the relative geometry of the two sensors cannot be overemphasized. It involves the determination of the parameters for the registration of the image to the coordinate system of the laser scanner. These parameters are:

- Relative linear position of the origins of the coordinate systems of the laser scanner and the CCD sensor
- Angular misalignment between the two coordinate systems
- Interior orientation parameters for the camera

Objective

The integration of LIDAR and digital camera provides beneficial prospects not only for the purpose of data fusion but also for camera calibration. The DEM (Digital Elevation Model) as well as the intensity image produced from airborne LIDAR data provides useful information for calibration. The former can be used to obtain the actual ground elevation to establish an accurate scale, and the latter can provide ground control points for calibration. In the case of terrestrial LIDAR systems, where the point density reaches much higher than the airborne systems (about 10^4 points per m^2), the points themselves can be used to provide control for calibration.

The GEM (Geosensing Engineering and Mapping) research center at the University of Florida owns an airborne laser mapping system, ALTM 1233, and a terrestrial laser mapping system, ILRIS3D, both manufactured by Optech Incorporated, a Canadian company. The research center also acquired a 4-band multispectral non-metric camera, Redlake MS4100, for combined airborne laser and digital photography mapping. Although the ILRIS3D has a built-in digital camera, the full advantage of its hybrid capability is not realized because of the low quality of the images (low resolution and limited colour balance). So, UF recently purchased a non-metric

PAGE 16

10.2 megapixel digital SLR camera, a Nikon D80 equipped with a 20 mm focal length lens, to use with the ILRIS3D.

This thesis explains a study performed to demonstrate the use of LIDAR data as an aid for performing on-the-job calibration of non-metric digital cameras used for aerial as well as terrestrial mapping in hybrid imaging systems. Various issues concerned with this application, such as parameters affecting the strength of calibration, the effect of LIDAR accuracy on the calibration, the accuracy of the georeferenced imagery, and the practical usefulness of the procedure, are also discussed.

Chapter 2 introduces the concepts behind image georeferencing. The camera models and calibration fundamentals are explained in Chapter 3. In Chapters 4 and 5, the study that was carried out is presented. Finally, Chapter 6 summarizes and concludes the thesis.

PAGE 17

CHAPTER 2
IMAGE GEOREFERENCING

Image georeferencing implies the transformation of image coordinates with respect to the image coordinate system to the 3D coordinates in the mapping reference frame or the geodetic reference frame. This is carried out by using the extended collinearity equation given below (Wolf & Dewitt, 2002; Pinto et al, 2002):

$X_m = X_c + \lambda \, r_i^m \, x_i$

where
- $x_i$ = 2D image coordinates in the imaging frame
- $X_m$ = corresponding 3D coordinates in the mapping frame
- $X_c$ = 3D coordinates of the sensor in the mapping frame, the perspective center of the camera
- $\lambda$ = scale factor
- $r_i^m$ = rotation matrix for conversion from the image frame to the mapping frame

As the equation suggests, the position and orientation of the camera at the time of exposure are the two parameters required for georeferencing, which need to be determined. These parameters can be obtained in two ways:

- Indirect Georeferencing using control points: Also known as the space resection principle, it involves using control points whose coordinates on the image as well as in the mapping frame are known. A minimum of 4 points are required to obtain a solution for the 7 parameters, i.e. the X, Y, Z position of the camera, the three rotation angles for transformation and the scale factor. If 5 or more control points are used, a least squares solution can be obtained. This procedure is used for georeferencing the terrestrial mapping system.
- Direct Georeferencing / Direct Sensor Orientation: This involves the use of position (DGPS) and inertial (IMU) sensors to obtain the two sets of parameters. The collinearity equation needs to be modified to include the information given by the position and navigation sensors. This is used in the case of the airborne mapping system.

Indirect Georeferencing as Applied to Terrestrial Mapping System

The reference frames involved are as follows (figure 2.1):

PAGE 18

- Object mapping frame, representing the reference frame in which the point cloud is represented
- Intermediate frame
- Camera frame: frame misaligned with the intermediate frame by misalignment angles $\omega$, $\phi$ and $\kappa$ about the X, Y and Z axes respectively

The coordinate transformations involved between these frames are given by:

1. Object mapping frame to Intermediate frame:

$C_m^t = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$

2. Intermediate frame to Camera frame, $C_t^c$ (Rogers, 2003):

$C_t^c = \begin{bmatrix} \cos\phi\cos\kappa & \sin\omega\sin\phi\cos\kappa + \cos\omega\sin\kappa & -\cos\omega\sin\phi\cos\kappa + \sin\omega\sin\kappa \\ -\cos\phi\sin\kappa & -\sin\omega\sin\phi\sin\kappa + \cos\omega\cos\kappa & \cos\omega\sin\phi\sin\kappa + \sin\omega\cos\kappa \\ \sin\phi & -\sin\omega\cos\phi & \cos\omega\cos\phi \end{bmatrix}$

where $\omega$, $\phi$ and $\kappa$ are rotations about the X, Y and Z axes respectively, applied in that order.

Therefore the rotation matrix for transforming from the object mapping frame to the camera frame is

$C_m^c = C_t^c \, C_m^t$

Once the orientation parameters are known, the extended collinearity equation is then used to georeference the image:

$\begin{bmatrix} X_i - X_L \\ Y_i - Y_L \\ Z_i - Z_L \end{bmatrix} = \lambda \, C_c^m \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$

where
- $x_i, y_i, z_i$ : image pixel coordinates in the camera frame
- $X_L, Y_L, Z_L$ : coordinates of the camera in the mapping frame
- $X_i, Y_i, Z_i$ : coordinates of the image pixel in the mapping frame
- $\lambda$ : scale factor
- $C_c^m = (C_m^c)^T$ : rotation matrix from the camera frame to the object mapping frame
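To make the chain of rotations concrete, the following NumPy sketch (not part of the thesis) builds $C_m^c$ from the fixed axis permutation and the three misalignment angles, then projects a laser point into the image by inverting the extended collinearity relation. The angle values, camera offset, point coordinates and the passive rotation sign convention are illustrative assumptions only, not values or conventions taken from the study.

```python
import numpy as np

def rot_x(w):
    c, s = np.cos(w), np.sin(w)
    return np.array([[1., 0., 0.], [0., c, s], [0., -s, c]])

def rot_y(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0., -s], [0., 1., 0.], [s, 0., c]])

def rot_z(k):
    c, s = np.cos(k), np.sin(k)
    return np.array([[c, s, 0.], [-s, c, 0.], [0., 0., 1.]])

# Object mapping frame -> intermediate frame (fixed axis permutation)
C_m_t = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])

# Intermediate frame -> camera frame (sequential rotations about X, Y, Z)
omega, phi, kappa = np.radians([0.5, -0.3, 1.2])   # illustrative misalignments
C_t_c = rot_z(kappa) @ rot_y(phi) @ rot_x(omega)

C_m_c = C_t_c @ C_m_t                              # object mapping -> camera frame

# Project one laser point into the image (inverse of the georeferencing relation)
X_L = np.array([0.05, 0.02, 0.30])                 # camera position in the scanner frame (m)
X_i = np.array([50.0, 30.0, 12.0])                 # laser point in the scanner frame (m)
f = 0.020                                          # focal length (m)

p_cam = C_m_c @ (X_i - X_L)                        # point expressed in the camera frame
x_img = -f * p_cam[0] / p_cam[2]                   # collinearity with z_i = -f
y_img = -f * p_cam[1] / p_cam[2]
print(x_img, y_img)
```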

PAGE 19

Direct Georeferencing as Applied to Airborne Mapping System

DGPS gives the position of the airplane in the mapping frame. The IMU, however, gives the orientation in terms of roll, yaw and pitch in the inertial reference frame (figure 2.2). These need to be transformed from the inertial frame to the mapping frame. Thus the collinearity equations are modified to include the coordinate transformations related to the navigation information. The coordinate frames involved in the transformations (figure 2.3) are (Bumker et al, 2001):

1. Mapping reference frame, representing the geodetic mapping frame.
2. Navigation frame, representing the north east down (NED) frame. The orientation of the airplane in terms of yaw (y), pitch (p) and roll (r) is given with respect to this frame.
3. Body frame, representing the actual orientation given by the IMU. Ideally, the IMU would be strapped to the sensor itself so as to give the orientation of the sensor directly, which is generally the case in single imaging sensor systems. The accelerations accx, accy, accz are measured by the IMU in this frame.
4. Imaging Sensor/Camera frame, representing the orientation of the camera; it is misaligned with the body frame by the boresight angles, represented by misalignments in pitch (pm), roll (rm) and yaw (ym).
5. Image frame, in which the pixel coordinates are given.

The coordinate transformations, and the order in which they are applied to go from the mapping frame to the image frame, are shown in figure 2.4. The rotation transformation matrix from the mapping frame to the image frame is given by (Rogers, 2003)

$C_m^i = C_c^i \, C_b^c \, C_n^b \, C_m^n$

where

PAGE 20

$C_m^n = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$

$C_n^b = \begin{bmatrix} \cos p \cos y & \cos p \sin y & -\sin p \\ \sin r \sin p \cos y - \cos r \sin y & \sin r \sin p \sin y + \cos r \cos y & \sin r \cos p \\ \cos r \sin p \cos y + \sin r \sin y & \cos r \sin p \sin y - \sin r \cos y & \cos r \cos p \end{bmatrix}, \qquad C_b^n = \left(C_n^b\right)^T$

$C_b^c = \begin{bmatrix} \cos p_m \cos y_m & \cos p_m \sin y_m & -\sin p_m \\ \sin r_m \sin p_m \cos y_m - \cos r_m \sin y_m & \sin r_m \sin p_m \sin y_m + \cos r_m \cos y_m & \sin r_m \cos p_m \\ \cos r_m \sin p_m \cos y_m + \sin r_m \sin y_m & \cos r_m \sin p_m \sin y_m - \sin r_m \cos y_m & \cos r_m \cos p_m \end{bmatrix}, \qquad C_c^b = \left(C_b^c\right)^T$

$C_c^i = \left(C_m^n\right)^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$

For ease of writing, the elements of $C_m^i$ will be represented as

$C_m^i = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$

The rotation from the image frame to the mapping frame is given by $C_i^m = \left(C_m^i\right)^T$.

Once all the EOP and the boresight angles are known, the following equation may be used to georeference the images:

$\begin{bmatrix} X_i - X_p \\ Y_i - Y_p \\ Z_i - Z_p \end{bmatrix} = \lambda \, C_i^m \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$

where

PAGE 21

21 Xp, Yp, Zp = imaging sensor positi on coordinates in the mapping frame = scale factor xi, yi = image pixel coordinates zi = -f(focal length) m iC = rotation matrix for coordinate tran sform from image frame to mapping frame

PAGE 22

Figure 2.1 Coordinate frames for terrestrial system

Figure 2.2 Direct Georeferencing

PAGE 23

Figure 2.3 Coordinate Systems in Direct Referencing

Figure 2.4 Coordinate transformation in Direct Sensor Orientation

PAGE 24

CHAPTER 3
CAMERA CALIBRATION

Introduction

Calibration is the determination of the orientation of the camera, external as well as internal, so that the image coordinates can be transformed to real world mapping coordinates to derive object space information (Gruen, 2001).

The issue of calibration was discussed by early photogrammetrists as a problem of determining camera orientation. In 1859 Aimé Laussedat in France used a theodolite to orient a camera to take images for mapping the streets of Paris. The latter half of the 19th century and the early 20th century saw mathematicians laying down the foundations of photogrammetry which still serve as the basic principles for solving the calibration problem. In 1899 Sebastian Finsterwalder described the principles of modern double-image photogrammetry and the methodology of relative and absolute orientation. Otto von Gruber in 1924 derived the projective equations and their differentials, which are still fundamental to analytical photogrammetry.

Recent years have seen the subject of camera calibration being tackled as a computer vision problem. This can be attributed to the increase in complexity of the problem. It has extended from just the determination of the orientation of the camera to ascertaining the parameters of its internal geometry and the errors associated with digital sensors. Understanding the camera model and calibrating it has become a complex task.

Over the years various calibration procedures have evolved with the involvement of the computer vision community. However, the bundle adjustment (Brown, 1971) technique remains superior as far as photogrammetry is concerned. This point has been well discussed by Remondino and Fraser (2006). They compare the self calibrating bundle adjustment procedure using corrections for distortion (Fraser, 1997; Gruen et al, 2001) to other techniques such as

PAGE 25

DLT/linear techniques (Abdel-Aziz et al, 1971) and linear/nonlinear combination techniques (Tsai, 1987; Heikkila & Silven 1997). Better results were obtained by the bundle adjustment technique as compared to the other methods, the basis of comparison being the root mean square error (RMSE) values of calculated object point coordinates against their true observed values. Hence the motivation behind the use of bundle adjustment within this research as the chosen technique to carry out camera calibration.

Camera Model

Before undertaking the task of calibration, it is important to understand the camera model so as to identify the parameters that need to be determined. The camera model can be either projective or perspective collineation. The bundle adjustment uses the perspective model. However, both models are discussed briefly below (for more information, refer to Appendix A).

Projective Collineation Camera Model

A general projective model maps an object point X to an image point x according to x = P X. P is a 3x4 matrix which can be decomposed as P = K I M (Mohr et al, 2001; Remondino & Borlin, 2004), where

$K = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}$

is an upper triangular matrix with the interior parameters of the camera: $f_x$ and $f_y$ are the focal lengths along the x and y axes, $s$ is the skew factor, and $(x_0, y_0)$ is the principal point position;

PAGE 26

$I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$

and M = [R | T] gives the position and orientation of the camera, where R is a rotation matrix and T is a translation vector.

If the camera is fixed and undergoes only rotations (co-centric images or negligible eccentricity), we can eliminate the vector T and express the mapping of X onto x as x = K I R X, with P = K I R. The coordinates (x, X) are defined as homogeneous coordinates, explained in Appendix A (Mohr et al, 2001).

The most important benefit of the projective model is the linear relationship in the equations. Moreover, it can also handle variable focus and zoom optics of the camera. However, it has drawbacks such as less stability in the equations, the requirement of a large number of parameters and, most importantly, the complexities involved in dealing with non-linear lens distortions (Fraser 2001; Remondino & Fraser, 2006). Also, it does not define the coordinates in a 3D metric space (Fraser, 2001). All these render it unusable for high accuracy photogrammetric mapping. Still, there are other areas which give more priority to other criteria, such as short processing time, and do not require object point positions in 3D metric space. It is widely used in applications such as autonomous vehicle navigation, robotic vision, medical imaging etc.
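A small NumPy sketch of the decomposition x = P X with P = K [I | 0] M is given below. The focal lengths, principal point, pose and object point are invented purely to show the mechanics of the homogeneous projection; they are not the parameters of any camera discussed in this thesis.

```python
import numpy as np

# Interior parameters (illustrative): focal lengths, skew and principal point in pixels
fx, fy, s, x0, y0 = 3300.0, 3300.0, 0.0, 1296.0, 1936.0
K = np.array([[fx, s,  x0],
              [0., fy, y0],
              [0., 0., 1.]])

I34 = np.hstack([np.eye(3), np.zeros((3, 1))])        # the 3x4 matrix [I | 0]

# Exterior orientation M = [R | T] as a 4x4 homogeneous transform (illustrative pose)
R = np.eye(3)
T = np.array([[0.0], [0.0], [10.0]])
M = np.vstack([np.hstack([R, T]), [0., 0., 0., 1.]])

P = K @ I34 @ M                                       # 3x4 projection matrix

X = np.array([1.0, 2.0, 40.0, 1.0])                   # object point, homogeneous coordinates
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]                       # de-homogenize to pixel coordinates
print(u, v)
```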

PAGE 27

Perspective Collineation Camera Model

The perspective collineation model is the most widely used camera model for photogrammetric mapping purposes. It is also derived from the basic collinearity equations. Collinearity is a condition wherein the exposure station, the object point and its image all lie on the same straight line. This condition can be exploited to arrive at the camera model as described in Appendix A (Wolf and Dewitt, 2002). The model itself is given below:

$x_i - x_0 = -f \, \frac{r_{11}(X_o - X_L) + r_{12}(Y_o - Y_L) + r_{13}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)}$

$y_i - y_0 = -f \, \frac{r_{21}(X_o - X_L) + r_{22}(Y_o - Y_L) + r_{23}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)}$

where
- $f$ : focal length of the camera
- $X_L, Y_L, Z_L$ : perspective center coordinates in the object mapping frame
- $X_o, Y_o, Z_o$ : object coordinates in the mapping frame
- $x_i, y_i, z_i$ : image coordinates in the image frame
- $x_0, y_0$ : displacement of the principal point position with respect to the center of the image plane

Calibration Parameters

The calibration parameters consist of the exterior orientation parameters and the interior orientation parameters.

Exterior Orientation Parameters

The exterior orientation parameters are basically the 3 translations along the coordinate axes and the 3 rotational misalignments about the coordinate axes between the imaging sensor coordinate system and the object mapping coordinate system. However, the exact parameters which are determined differ in the case of airborne and terrestrial mapping systems.

In the case of an airborne mapping system the position is provided by the GPS sensor. The translational elements of the orientation are therefore given in terms of displacement of the camera along the X, Y and Z axes from the phase center of the GPS antenna. The orientation is

PAGE 28

provided by an IMU in terms of roll, pitch and yaw in an inertial frame of reference. However, there always exists an angular misalignment between the orientations of the imaging sensor (digital camera) reference frame and the IMU. These are calculated as misalignments in roll (rm), pitch (pm) and yaw (ym) and constitute the exterior orientation parameters for airborne mapping systems. Mathematically they form the transformation matrix from the body frame to the camera frame ($C_b^c$) that was presented in chapter 2. They are further introduced into the camera model embedded in the elements of the rotation matrix $C_m^i$ (mapping to image frame).

In a terrestrial mapping system the point clouds are collected in a scanner coordinate system defined by the orientation of the laser scanner and an origin which is a manufacturer specified point of reference on the instrument. The translational orientation parameters are given by the position of the camera as defined in this coordinate system. The angular orientation parameters are given as misalignments ($\omega$ along X, $\phi$ along Y and $\kappa$ along Z axes) between the camera and the laser scanner as represented in the same scanner coordinate system.

Interior Orientation Parameters

They define the relationship between the perspective center of the imaging sensor and the image coordinates. There are basically three parameters of interior orientation:

- The principal distance or focal length, f: the perpendicular distance from the perspective center to the projection plane.
- Principal point coordinates: the x and y coordinates of the point where the perpendicular from the perspective center intersects the projection plane. The principal point should ideally be coincident with the origin of the image coordinate system defined by the center of the image (the central row and column for a CCD image), which is not the case in most cameras.

The interior orientation parameters can also be specified as the coordinates (x, y, f) of the perspective center in the image coordinate system.

PAGE 29

Additional Parameters

These parameters account for the distortion effects in the non-metric camera and are included as additional displacements $\Delta x$ and $\Delta y$ of the image pixel along the x and y axes in the image coordinate system. The two distortion effects taken into consideration are (Fraser, 2001):

- Radial lens distortion: It is represented as an odd numbered polynomial series (as a result of Seidel aberrations) given as $\Delta r = K_1 r^3 + K_2 r^5 + K_3 r^7 + \ldots$, where the $K_i$ are the distortion coefficients and r is the radial distance from the principal point ($r^2 = x^2 + y^2$). The correction to the image coordinates is given as $\Delta x_r = x \, \Delta r / r$ and $\Delta y_r = y \, \Delta r / r$. Generally, for medium accuracy applications $K_1$ is sufficient; for high accuracy applications $K_2$ and $K_3$ may be used but are not necessarily significant for every case.
- Decentering distortion: This is a result of the lack of centering of the lens elements along the optical axis. It causes both radial and tangential image displacements. It is modeled by the correction equations given by Brown: $\Delta x_d = P_1 (r^2 + 2x^2) + 2 P_2 x y$ and $\Delta y_d = P_2 (r^2 + 2y^2) + 2 P_1 x y$, where $P_1$ and $P_2$ are distortion coefficients.

$K_1$, $K_2$, $K_3$, $P_1$ and $P_2$ form the additional parameters which need to be calculated. Other forms of distortion effects, like those due to image plane unflatness and in-plane image distortions, are negligible in magnitude and have no metric effect (Fraser, 1997).
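The sketch below evaluates these two corrections for a single image measurement. It is only a numerical illustration of the formulas above; the coefficient values are made up, and x and y are assumed to be measured relative to the principal point.

```python
import numpy as np

def distortion_correction(x, y, K1, K2, K3, P1, P2):
    """Radial plus decentering corrections for image coordinates x, y given
    relative to the principal point (assumes r > 0)."""
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    dr = K1 * r**3 + K2 * r**5 + K3 * r**7          # radial distortion profile
    dx_rad, dy_rad = x * dr / r, y * dr / r          # radial components
    dx_dec = P1 * (r2 + 2 * x**2) + 2 * P2 * x * y   # decentering components
    dy_dec = P2 * (r2 + 2 * y**2) + 2 * P1 * x * y
    return dx_rad + dx_dec, dy_rad + dy_dec

# Correct one measurement, in millimetres from the principal point
dx, dy = distortion_correction(3.1, -2.4, K1=2.5e-4, K2=0.0, K3=0.0, P1=1.0e-5, P2=-2.0e-5)
print(dx, dy)
```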

PAGE 30

Calibration Procedure

The collinearity equations derived above for the perspective camera model are given as (Gruen & Beyer, 2001):

$x_i - x_0 - \Delta x = -f \, \frac{r_{11}(X_o - X_L) + r_{12}(Y_o - Y_L) + r_{13}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)}$

$y_i - y_0 - \Delta y = -f \, \frac{r_{21}(X_o - X_L) + r_{22}(Y_o - Y_L) + r_{23}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)}$

Rearranging,

$x_i = -f \, \frac{r_{11}(X_o - X_L) + r_{12}(Y_o - Y_L) + r_{13}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)} + x_0 + \Delta x \qquad (3.1)$

$y_i = -f \, \frac{r_{21}(X_o - X_L) + r_{22}(Y_o - Y_L) + r_{23}(Z_o - Z_L)}{r_{31}(X_o - X_L) + r_{32}(Y_o - Y_L) + r_{33}(Z_o - Z_L)} + y_0 + \Delta y \qquad (3.2)$

These act as the observation equations for the least squares analysis for calibration. They result in the following cases depending on which parameters are treated either as unknowns or known a priori (Gruen & Beyer, 2001):

- General bundle method: all parameters on the right hand side are unknown (interior orientation, exterior orientation, object point coordinates).
- Bundle method for metric camera: the interior orientation ($x_0$, $y_0$ and f) is known and all others need to be determined.
- Spatial resection: the object point coordinates are known; interior and exterior orientation parameters need to be determined.
- Spatial intersection: the interior and exterior orientations are known and the object point coordinates need to be determined, the case for achieving georeferencing of images.

The cases for the airborne mapping system and the terrestrial mapping system differ. For the airborne system, the position of the imaging sensor ($X_L$, $Y_L$, $Z_L$) is known and tie points between the images are used together with a minimal number of control points (points with known object point position). The analysis therefore is a combination of a partial general bundle method and spatial resection, also known as Self Calibrating Bundle Adjustment. In the case of a terrestrial mapping system, only control points are used for solving the orientation parameters. Therefore the case is that of a spatial resection, also known as Test Range Calibration.

Network Geometry for Self Calibrating Bundle Adjustment

In airborne photography, the procedure for calibration, as discussed above, involves bundle adjustment of photographs. The self calibrating bundle adjustment approach does not require control points to be well distributed in three dimensions nor does it require any ground control

PAGE 31

(Fraser, 2001), though the use of a minimal ground constraint can only improve the quality of calibration. Highly convergent network geometry, however, is extremely significant. It influences the accuracy of the camera calibration by decoupling the interior and exterior calibration parameters. A few characteristics of the network geometry for more accurate determinability of the orientation parameters are as follows (Fraser, 2001; Remondino & Fraser, 2006):

- Variations in roll angle are required to diminish the correlation between the exterior orientation elements and the principal point location ($x_0$, $y_0$), which is achieved in airborne mapping by crossing and oblique flight lines.
- Variation in roll angles is also required to decouple the projective coupling between decentering distortion ($P_1$ and $P_2$) and $x_0$, $y_0$, although it might still exist to some extent.
- The coupling between the camera position and the principal distance/focal length is broken through the introduction of scale variations. This translates to acquiring imagery at different altitudes so as to have different scale factors in the transformation from the image coordinate system to the object mapping coordinate system.

Least Squares Analysis

For the least squares bundle adjustment, equations 3.1 and 3.2 are linearized using Taylor's expansion and put in the form (Gruen and Beyer, 2001)

$v = A_1 x_1 + A_2 x_2 + A_3 x_3 - L$

where
- $v$ = vector of errors in the image coordinates
- $x_1$ = parameter vector containing the orientation elements
- $x_2$ = parameter vector containing the object point coordinates for tie points
- $x_3$ = parameter vector containing the interior orientation and additional parameters
- $A_1$, $A_2$ and $A_3$ are the respective Jacobians for $x_1$, $x_2$ and $x_3$
- $L$ = observed image coordinate values minus the calculated values obtained from the Taylor expansion

The unknowns may then be estimated as:

PAGE 32

$\begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \\ \hat{x}_3 \end{bmatrix} = \begin{bmatrix} A_1^T A_1 & A_1^T A_2 & A_1^T A_3 \\ A_2^T A_1 & A_2^T A_2 & A_2^T A_3 \\ A_3^T A_1 & A_3^T A_2 & A_3^T A_3 \end{bmatrix}^{-1} \begin{bmatrix} A_1^T L \\ A_2^T L \\ A_3^T L \end{bmatrix}$
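The following sketch performs one Gauss-Newton step of this estimation with NumPy: the three Jacobian blocks are stacked, the normal equations are formed and solved, and the solution is split back into the three parameter groups. The synthetic matrices are placeholders only; a real self calibrating bundle adjustment exploits the sparse block structure of the normal matrix and iterates until the corrections become negligible.

```python
import numpy as np

def bundle_step(A1, A2, A3, L):
    """Solve the stacked normal equations for the corrections x1, x2, x3."""
    A = np.hstack([A1, A2, A3])                  # combined design matrix
    N = A.T @ A                                  # normal matrix (block form above)
    t = A.T @ L                                  # right-hand side
    x = np.linalg.solve(N, t)
    n1, n2 = A1.shape[1], A2.shape[1]
    return x[:n1], x[n1:n1 + n2], x[n1 + n2:]

# Tiny synthetic example: 10 image observations, 3 + 4 + 2 unknowns
rng = np.random.default_rng(0)
A1 = rng.normal(size=(10, 3))    # Jacobian w.r.t. orientation elements
A2 = rng.normal(size=(10, 4))    # Jacobian w.r.t. tie point coordinates
A3 = rng.normal(size=(10, 2))    # Jacobian w.r.t. interior/additional parameters
L = rng.normal(size=10)          # observed minus computed image coordinates
x1, x2, x3 = bundle_step(A1, A2, A3, L)
print(x1, x2, x3)
```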

PAGE 33

Figure 3.1 Exterior orientation elements for an airborne mapping system

Figure 3.2 Exterior orientation parameters in a Terrestrial mapping system

PAGE 34

Figure 3.3 Interior orientation parameters (Fraser, 2001)

PAGE 35

CHAPTER 4
TERRESTRIAL CAMERA CALIBRATION

The terrestrial mapping system at UF consists of the ILRIS laser imaging system and a Nikon digital SLR camera mounted on its top (Figure 4.1). The data collected by the ILRIS is referenced to the mounting hole on the underside of the system (Figure 4.2). The calibration technique followed here is that of using a 3D calibration field provided by the laser data. A modified calibration as well as distortion model is used to carry out the bundle adjustment, as discussed below.

Calibration Model

The calibration model used here was originally developed by Yakimovsky and Cunningham (1978) at the JPL robotics laboratory. It was later modified and improved to include distortion parameters by Grennery (2001). The model as described by Grennery (2001) is given below.

The basic model consists of 4 vectors expressed in the object coordinate frame. These vectors are:

- c: the position of the entrance pupil of the camera, representing the positional offset from the point of reference
- a: the unit vector perpendicular to the image sensor plane, pointing outward through the exit pupil of the camera, essentially representing the orientation of the camera
- h and v: mathematical quantities expressed as $h = h' + x_c a$ and $v = v' + y_c a$, where $h'$ and $v'$ are the perpendiculars to the y and x image axes respectively in the image sensor plane (such that $h'$, $v'$ and a form an orthogonal right handed system), and their magnitude equals the change in image coordinate caused by a unit change in the tangent of the angle from the vector a to the point viewed, p, subtended at the entrance pupil. For cameras with a fixed focal length and a non-telephoto lens (as in the present case), it is equivalent to the distance

PAGE 36

between the second nodal point and the sensor plane, defined in horizontal and vertical pixels. $x_c$ and $y_c$ are the image coordinates of the principal point. The terms principal points, principal planes and nodal points are described in Appendix C.

The vector a would also represent the optical axis if the image sensor plane were perfectly perpendicular to it (the optical axis). However, to allow for the possibility that such is not the case, a separate vector o represents the optical axis.

If a vector p describes the position of a point in 3D object space, its equivalent pixel coordinates are given by

$x = \frac{(p - c) \cdot h}{(p - c) \cdot a} \qquad \text{and} \qquad y = \frac{(p - c) \cdot v}{(p - c) \cdot a}$

To include the effects of radial distortion, the apparent position of the point p is given as $p' = p + \mu \lambda$, where

- $\mu = \rho_0 + \rho_1 \tau + \rho_2 \tau^2$; $\rho_0$, $\rho_1$ and $\rho_2$ are the coefficients of distortion
- $\tau$ : square of the tangent of the angle from the optical axis to the point p
- $\lambda = (p - c) - \left[(p - c) \cdot o\right] o$ : the orthogonal vector from the optical axis to the point p, where o is the vector representing the optical axis
- $(p - c) \cdot o$ : component of the vector from the principal point to p along the optical axis

Therefore the image coordinates given for a point p are

$x = \frac{(p' - c) \cdot h}{(p' - c) \cdot a} \qquad \text{and} \qquad y = \frac{(p' - c) \cdot v}{(p' - c) \cdot a}$

The unknowns for which the adjustment is carried out consist of the vectors c, a, h, v, o and the distortion coefficients $\rho_0$, $\rho_1$ and $\rho_2$.
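A compact sketch of this vector camera model is given below. The symbol names (rho, tau, mu, lam) follow the reconstruction above rather than the thesis notation, the camera vectors are invented, and the pixel scale is only loosely modelled on the Nikon D80 setup described later; it is an illustration of the projection mechanics, not the calibration code used in the study.

```python
import numpy as np

def project(p, c, a, h, v, o, rho):
    """Project object point p to pixel coordinates with the c, a, h, v, o model,
    applying the radial distortion shift p' = p + mu * lam first."""
    d = p - c
    zeta = d @ o                          # component of d along the optical axis
    lam = d - zeta * o                    # part of d orthogonal to the optical axis
    tau = (lam @ lam) / zeta**2           # tan^2 of the angle off the optical axis
    mu = rho[0] + rho[1] * tau + rho[2] * tau**2
    d = (p + mu * lam) - c                # vector to the apparent position of the point
    return (d @ h) / (d @ a), (d @ v) / (d @ a)

# Illustrative vectors: camera at the origin looking along +Y, square pixels
c = np.zeros(3)
a = np.array([0.0, 1.0, 0.0])
o = a.copy()
h = 3333.0 * np.array([1.0, 0.0, 0.0]) + 1296.0 * a    # h' + xc * a (pixels)
v = 3333.0 * np.array([0.0, 0.0, -1.0]) + 1936.0 * a   # v' + yc * a (pixels)
rho = (0.0, 0.0, 0.0)                                  # no distortion in this example

print(project(np.array([5.0, 80.0, -3.0]), c, a, h, v, o, rho))
```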

For a complete description and understanding of the model as well as the bundle adjustment algorithm, see Yakimovsky and Cunningham (1978) and Grennery (2001).

Calibration Data

LIDAR Data

Because the LIDAR data was to be used as a three-dimensional calibration field, the site was chosen so as to provide points distributed in three dimensions, containing edges and intersections which could provide easily distinguishable control points. A section of the University of Florida football stadium met these requirements and was mapped with the laser system. Two scans were taken, the first on July 17, 2007 (scan 1) and the other on August 6, 2007 (scan 2). Scan 1 consisted of 1,911,371 points at an average point spacing of about 2.5 cm and scan 2 consisted of 1,444,665 points at the same point spacing. Each point had an intensity value to aid tie-point selection (Figure 4.3).

Initial Orientation Parameters

The initial values for the positional offsets of the camera (c0) with respect to the point of origin of the 3D object coordinate system were approximately measured using a measuring tape. To calculate the starting values for the other unknowns, a point in the object coordinate system close to the image centre was chosen (p0). The camera model vectors were then calculated as follows:

a0 = unit(p0 − c0)
h0 = (f / phv) unit(a0 × u) + (sh / 2) a0
v0 = (f / phv) unit(a0 × h0) + (sv / 2) a0
o0 = a0

where

u: a vector in the object coordinate system pointing upwards, i.e. [0, 0, 1]
f: focal length of the lens = 20 mm
phv: pixel size in the same units as f = 0.006 mm
sh and sv: number of rows and columns in the image respectively = 2592 x 3872

The initial values for the distortion coefficients were assigned as zero. The values of the parameters calculated as above are given in Table 4.1.

Control Points

Polyworks software was used to manually select control points in the point clouds, and their corresponding image pixels were marked in Matlab. 20 points in scan 1 (Figure 4.4) and 17 points in scan 2 were distributed over the image. Their pixel coordinates and object point coordinates are given in Tables 4.2 and 4.3.

Results

The camera model parameters obtained after the bundle adjustments are given in Table 4.4. Table 4.5 gives the orientation of the camera in degrees, with respect to the mapping reference frame of the laser scanner, calculated from the a vector given above for each scan.

It is evident from the results that the external orientation parameter values, i.e. the offsets (c) and the angular misalignments (a), vary and are not really repeatable. This is because the camera has a single screw hole to attach it to an external mount; moreover, the curved shape of the camera on the sides makes it difficult to mount it at exactly the same place and orientation. Thus the mount is not as rigid as it should be to obtain a repeatable mounting position and orientation.

The pixel values were extracted for the laser points and their residuals calculated for the control points (Table 4.6 and Table 4.7). To ascertain the effect of the distortion coefficients, the pixel values for both scans were calculated using both sets of distortion parameters (referred to as D1 for the parameters obtained from scan 1 and D2 for the ones obtained from scan 2).

The RMS values for the residuals are given in Table 4.8.

As evident from the residuals as well as the RMS values, the second scan yields better results. However, the RMS values for both scans are within the acceptable tolerance (3 pixels). The maximum RMS for scan 1 is 2.202 pixels, which is approximately 6.5 cm on the ground (scaling at an average scan range of 105 m), and for scan 2 it is 1.5 pixels, representing about 3.2 cm on the ground (scaling at an average scan range of 70 m). The high (above 3 pixels) residuals in a few cases for scan 1 could probably be attributed to the following reasons:

1. Ranging and position accuracy is a function of range and could be a factor of error, e.g. in the case of point 18 (range = 126 m).
2. Human error is involved in choosing tie points, as the point cloud has some noise at the edges of the building structures.

The effect of using the distortion parameters obtained from the two scans, although not negligible, is not highly pronounced. Only in one case (the RMS residual for xp in scan 2) does the value jump by almost 45% (from 0.998 to 1.440) when the distortion parameters obtained from the other scan are used. This suggests that constant values for the distortion parameters may be used, but they need to be checked consistently.
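The pixel-to-ground scaling quoted above can be checked roughly from the nominal focal length (20 mm) and pixel size (0.006 mm) given earlier; the short sketch below is only an illustrative back-of-the-envelope calculation, not part of the calibration procedure itself.

```python
# Back-of-the-envelope check of the pixel-to-ground scaling quoted above,
# using the nominal values given earlier: f = 20 mm, pixel size = 0.006 mm.
f_mm, pixel_mm = 20.0, 0.006

def ground_per_pixel(range_m):
    """Approximate ground footprint of one pixel (m) at a given scan range (m)."""
    return range_m * pixel_mm / f_mm

print(2.202 * ground_per_pixel(105))  # scan 1: ~0.069 m, close to the ~6.5 cm quoted
print(1.5 * ground_per_pixel(70))     # scan 2: ~0.032 m, matching the ~3.2 cm quoted
```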

Table 4.1 Initial value of calibration parameters
Parameter   Value (scan 1)                  Value (scan 2)
c0 (m)      [0.01, 0, 0.375]                [0.01, 0, 0.375]
p0          [-0.677, 67.217, -0.218]        [-0.298, 36.617, -0.851]
a0          [-0.0102, 0.9999, -0.0088]      [-0.0084, 0.9994, -0.0334]
h0          [3319.914, 1329.948, -11.432]   [3322.321, 1323.265, -43.366]
v0          [-19.484, 1906.420, -3350.282]  [-15.336, 1823.312, -3396.248]

Table 4.2 Control points for scan 1
        Image coordinates (pixels)   Object coordinates (m)
Point   xp      yp                   X         Y         Z
1       1709    1911                 6.197     55.411    0.112
2       336     968                  -11.274   37.953    11.045
3       1957    1901                 10.952    58.715    0.274
4       2258    1451                 30.302    109.073   14.756
5       1880    886                  12.559    75.554    23.134
6       1878    1255                 13.467    82.047    16.014
7       2094    1892                 20.239    88.995    0.453
8       2348    1228                 36.021    117.893   23.711
9       2003    1676                 17.44     86.979    5.987
10      1903    1775                 14.953    87.52     3.493
11      2009    1352                 17.681    87.002    14.354
12      2041    1405                 19.171    90.207    13.501
13      943     1538                 -7.457    65.239    7.447
14      1171    761                  -2.852    63.292    21.943
15      801     1603                 -8.94     57.076    5.412
16      1619    1500                 6.303     72.514    8.854
17      1286    1105                 -0.796    66.667    16.077
18      2325    799                  34.677    115.341   38.26
19      1543    2415                 3.829     60.543    -8.888
20      2254    2338                 19.36     70.209    -8.937

Table 4.3 Control points for scan 2
        Image coordinates (pixels)   Object coordinates (m)
Point   xp      yp                   X         Y        Z
1       2413    1325                 16.461    50.417   7.254
2       1315    1049                 0.024     38.501   8.739
3       1414    2026                 0.792     28.075   -1.639
4       697     689                  -12.455   67.778   22.612
5       555     1086                 -14.389   63.679   13.756
6       2413    1874                 15.935    48.704   -0.961
7       2297    2113                 13.311    45.442   -4.105
8       346     1904                 -14.638   49.969   -1.276
9       586     1803                 -9.348    42.873   0.174
10      909     2033                 -7.432    60.348   -3.998
11      2129    1546                 9.501     39.357   3.099
12      2331    1835                 13.9      45.964   -0.354
13      810     1730                 -6.186    41.045   1.094
14      1298    1459                 -0.217    37.534   3.978
15      2130    1153                 9.726     40.301   7.893
16      1107    1754                 -2.688    43.035   0.773
17      1400    1608                 0.682     28.003   1.826

Table 4.4 Camera model parameters
Parameter    Value (scan 1)                        Value (scan 2)
c (m)        [-0.00805042, 0.799552, 0.3854579]    [-0.0013308, 0.8497070, 0.300866]
a (radians)  [0.0258054, 0.9993874, 0.0236395]     [0.000442, 0.999691, -0.024829]
h            [3451.5738, 1324.2397, 18.701148]     [3291.7308, 1316.9811, -46.0230]
v            [24.378744, 1897.1058, -3372.8594]    [-12.8117, 1791.2551, -3337.6421]
o            [0.0249488, 0.9994308, 0.0227091]     [0.0004532, 0.9996869, -0.0249463]
ρ0           -0.0002776                            -0.000271148
ρ1           -0.0164255                            -0.005075091
ρ2           -0.0030774                            -0.001171036

Table 4.5 Camera orientation in degrees
Orientation axis   Scan 1        Scan 2
X                  88.52129534   89.97467526
Y                  2.005618374   1.424387172
Z                  88.64543024   91.42274312

Table 4.6 Control point residuals for scan 1 (in pixels)
        Scan 1 (distortion set 1)   Scan 1 (distortion set 2)
Point   dxp      dyp                dxp      dyp
1       3.18     -1.07              -3.073   1.114
2       -0.19    1.59               -2.144   -3.457
3       1.41     0.95               -1.105   -0.910
4       -0.13    1.67               1.085    -2.088
5       -0.18    -2.46              0.822    1.204
6       1.43     -0.28              -1.065   -0.167
7       0.32     1.89               0.180    -1.843
8       0.09     -0.31              1.308    -0.571
9       1.21     -0.71              -0.840   0.620
10      -0.32    0.57               0.583    -0.597
11      1.02     -2.61              -0.522   2.228
12      1.44     -0.03              -0.916   -0.310
13      1.06     2.59               -1.328   -2.757
14      -3.43    2.03               3.094    -3.576
15      0.95     -1.65              -1.378   1.505
16      -0.76    -3.21              0.849    3.084
17      -0.18    -3.00              0.082    2.436
18      -1.00    3.15               2.998    -5.390
19      -2.20    0.78               2.295    -0.408
20      -3.73    0.12               4.806    0.530

Table 4.7 Control point residuals for scan 2 (in pixels)
        Scan 2 (distortion set 2)   Scan 2 (distortion set 1)
Point   dxp      dyp                dxp      dyp
1       1.25     0.95               -3.282   0.066
2       -1.61    -0.68              1.617    1.490
3       -0.09    0.29               0.065    -0.338
4       1.28     0.96               0.084    1.627
5       -1.98    -0.11              3.150    1.324
6       0.45     -0.07              -2.122   0.073
7       -1.48    0.73               0.179    -1.049
8       0.40     0.78               -0.454   -0.720
9       0.61     -1.79              -0.006   1.856
10      1.04     1.91               -0.857   -1.989
11      -0.87    -2.79              0.019    3.138
12      0.014    -0.43              -1.374   0.482
13      -0.37    1.11               0.650    -1.032
14      1.35     -1.90              -1.342   2.087
15      0.12     1.04               -1.350   0.040
16      -0.51    -1.80              0.583    1.839
17      0.40     1.79               -0.429   -1.711

Table 4.8 RMSE values for control point residuals
RMSE parameter      Scan 1 (D1)   Scan 1 (D2)   Scan 2 (D2)   Scan 2 (D1)
RMSE dxp (pixels)   1.635         1.917         0.998         1.440
RMSE dyp (pixels)   1.851         2.202         1.349         1.487

Figure 4.1 Terrestrial mapping system consisting of the laser scanner and the digital camera
Figure 4.2 Point of origin for the laser data (ILRIS Product Manual); all data is referenced to this point

Figure 4.3 Point clouds with points color coded by intensity (scan 1 and scan 2)

Figure 4.4 Scan 1 image showing tie points

Figure 4.5 Laser point cloud color coded by RGB values obtained from the internal camera of the ILRIS
Figure 4.6 Laser point cloud color coded by RGB values obtained from the external camera (scan 1)

Figure 4.7 Laser point cloud color coded by RGB values obtained from the external camera (scan 2)

CHAPTER 5
AIRBORNE CAMERA CALIBRATION

The airborne mapping system at UF consists of the ALTM 1233 laser scanning system and the MS4100 camera mounted on a twin-engine Cessna airplane (Figure 5.1).

Study Area

Data was collected over the Oaks Mall, located in the west of the city of Gainesville, Florida (Figure 5.2), on February 19, 2007. The flight was conducted over the parking lot so as to have clear road markings from which to obtain good tie points. The calibration field was about 375 m x 350 m in dimensions. There were a total of 4 flight lines and 25 images (Table 5.1). The location of the study area showing the flight lines is shown in Figure 5.3. The flight lines were designed keeping in mind the requirements for convergent network geometry.

Calibration Data

The data obtained from the camera consisted of the images and the timestamp of each image in GPS time, to a precision of the nearest millisecond. The images were obtained at an interval of one second and the overlap between the images ranges from 45% to 80%. The trajectory information, consisting of the position and the orientation of the airplane, was obtained by post-processing the GPS data to obtain differential GPS (DGPS) data. This was then integrated with the IMU data and smoothed through a Kalman filter. POSPac software (Applanix, USA) was used to carry out these processes.

Initial Exterior and Interior Orientation

The exterior orientation elements required for calibration are the positional offsets between the reference points of the camera and the laser system. These are required because the trajectory obtained represents the position and orientation information in reference to the LIDAR system.

They were measured with the position of the camera as origin, with positive X being forward (the direction of flight), positive Z being skyward and positive Y to the right, completing the right-handed system. The values of these lever-arm offsets (Table 5.2) are kept constant throughout the calibration process.

The initial values for the angular misalignment between the camera and the IMU were assumed to be zero. The interior orientation parameters had been obtained in 2005 from a camera calibration carried out by USGS. However, when these values were used, the results obtained were inconsistent and outside acceptable limits. The reason for this was not clear and hence they were discarded. The interior orientation parameters were also determined with the self-calibrating bundle adjustment.

The image positions and orientations were extracted from the trajectory using the timestamps (Table 5.3). This was done by importing the image information and the trajectory into the TerraPhoto image processing software (developed and distributed by Terrasolid, Finland), where the position and orientation information for each image was interpolated from the trajectory file.

LIDAR Data

LIDAR data in the form of a point cloud consisting of 746,223 points was collected simultaneously with the aerial images. The point density was about 5.6 points per square meter.

Tie Points and Ground Control

21 tie points and 5 ground control points were used for the calibration. Each image had at least 8 tie points and the average point count per image was 10.4. The distribution and location of the tie points (Table 5.5) and the ground control points (Table 5.4) are shown in Figures 5.6 and 5.7. The ground control points were obtained from the intensity image (Figure 5.5).

Using the initial orientation information of each image, ground coordinates were calculated for each tie point in each image. The average value of X and Y for each tie point was used as the initial value in the bundle adjustment.

The Z coordinate for each point was obtained from the digital surface model created using the LIDAR point cloud.

Ground Truth Using GPS

The geodetic coordinates of the 21 tie points and the 5 control points were determined by carrying out a GPS survey over the study area. These positions were used as ground truth for comparison with the coordinates derived from the georeferenced images. The procedure followed was that of a stop-and-go kinematic survey. The main reference base station was set up in the study area and collected data for 3 hours. Its purpose was to obtain short baselines (< 250 m) with respect to the rover stations. A second base station, a CORS (Continuously Operating Reference Station) located at the Gainesville Regional Airport, was also used. The antenna used for the base station was a dual-frequency (L1/L2) ASHTECH Chokering 700936 Rev D antenna. For the roving station, the antenna used was a dual-frequency ASHTECH ASH700700.C antenna. The receiver make and model used for both the base and the rover stations was an ASHTECH Z-Xtreme receiver, a dual-frequency (L1/L2), carrier-phase, 12-channel, geodetic-quality receiver.

The data was post-processed for differential positioning using carrier-phase tracking in the Ashtech Office Suite (AOS) software. The software reduces the GPS observation files, performing a series of least-squares vector distance calculations between the stations to derive the final station coordinates. The positions obtained are given in Table 5.6. The expected accuracy for the survey practice followed and the carrier-phase differential post-processing is less than 5 cm (USACE, 2003).
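Before moving to the results, the tie-point initialisation described in the previous section can be sketched as follows. This is an illustration only: it assumes NumPy, and the dsm_sample callable is a hypothetical stand-in for querying the LIDAR-derived surface model rather than part of any software used in this study.

```python
import numpy as np

def initial_tie_point(ground_xy_per_image, dsm_sample):
    """Initial object coordinates of one tie point.

    ground_xy_per_image : (n, 2) array of X, Y ground intersections, one row per
                          image in which the tie point was measured, computed from
                          that image's initial orientation
    dsm_sample          : callable (x, y) -> z, a hypothetical interface to the
                          digital surface model built from the LIDAR point cloud
    """
    xy = np.asarray(ground_xy_per_image, dtype=float)
    x0, y0 = xy.mean(axis=0)                 # average X, Y over all images
    std_x, std_y = xy.std(axis=0, ddof=1)    # spread, analogous to Table 5.5
    z0 = dsm_sample(x0, y0)                  # elevation taken from the LIDAR surface
    return (x0, y0, z0), (std_x, std_y)
```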

Calibration Results

The self-calibrating bundle adjustment was carried out in TerraPhoto using the above data. The external misalignments as well as the internal orientation parameters were determined and are presented in Table 5.7. To assess the accuracy, the images were georeferenced in X and Y using the obtained calibration parameters, and the positions of the tie points as well as the control points were calculated (Table 5.8) and compared to the GPS-surveyed ground truths (Table 5.6). The LIDAR surface was overlaid with the georeferenced images and the Z coordinate values were obtained for the control points.

The position of each tie point was calculated for each image and then averaged. The standard deviation for each tie point was also calculated as an indication of the mismatch between the images. The standard deviations varied between 10 and 40 cm. Residuals were calculated for these mean coordinates from the ground truth values (Table 5.9). The maximum residual in X was -0.359 m and in Y was -0.366 m. The various sources of error are:

1. Image pixel resolution: The images were taken at different altitudes to have a convergent geometry for better determinability of the focal length. The image resolution therefore varied from 15 cm for images taken at 650 m to 35 cm for images taken at 900 m altitude. This results in higher error in the chosen tie points for the lower resolution images, as the error involved in the geodetic coordinate of a pixel is proportional to the image pixel resolution.

2. Error in control points chosen from the LIDAR intensity image: The LIDAR intensity image was used for providing the few control points, so any error in the LIDAR data is also introduced into the calibration.

3. Human error in choosing tie points accurately: Misplacement of a tie point by a couple of pixels translates to an error ranging from 30 cm to 70 cm (image pixel resolutions ranging from 15 cm to 35 cm).

4. Uncertainty in the coordinates of the ground truths because of the various sources of error in GPS, as discussed in the USACE (United States Army Corps of Engineers) NAVSTAR GPS ground surveying engineering manual.

The values of the residuals are equivalent to about 1.5 pixels on the image, scaled at an average flying height of 750 m. The root mean square error (RMSE) values were calculated for the residuals and lie between 20 and 30 cm. This is also equivalent to about 1-2 image pixels (scaled at 750 m). These results indicate that the errors are within acceptable limits for the application of overlaying the georeferenced imagery and the LIDAR point cloud (or the digital elevation model obtained from the point cloud). Figure 5.8 shows the georeferenced imagery overlaid with the point cloud color coded by elevation. Near the edge of the building, the good match between the imagery and the point cloud is evident.
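A short sketch of the residual and RMSE bookkeeping summarised in Table 5.9 is shown below; it assumes NumPy arrays holding the derived and surveyed horizontal coordinates and is only an illustration, not the TerraPhoto output.

```python
import numpy as np

def horizontal_residual_stats(derived_xy, surveyed_xy):
    """Per-point residuals, per-axis RMSE and mean offset between
    image-derived and GPS-surveyed coordinates (columns are X, Y)."""
    d = np.asarray(derived_xy, dtype=float) - np.asarray(surveyed_xy, dtype=float)
    rmse = np.sqrt(np.mean(d**2, axis=0))   # per-axis RMSE, as reported in Table 5.9
    mean = d.mean(axis=0)                   # systematic offset per axis
    return d, rmse, mean
```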

Table 5.1 Flight line information
Line No   Direction   Elevation (m)   No. of images
1         N-S         610             8
2         E-W         620             6
3         NW-SE       780             5
4         SE-NW       906             6

Table 5.2 Lever arms between the laser and the camera
X lever arm (m)   -0.184
Y lever arm (m)    0.019
Z lever arm (m)    0.075

Table 5.3 Initial image data as obtained from the trajectory file
Image       Easting (m)   Northing (m)   Elevation (m)   Heading (°)   Roll (°)   Pitch (°)
20071.tif   363539.239    3281072.783    610.258         1.51164       1.32967    3.70937
20072.tif   363539.711    3281118.359    610.833         1.62677       0.9302     3.52244
20073.tif   363540.22     3281163.873    611.498         1.78747       0.88235    3.36686
20074.tif   363540.765    3281209.313    612.254         1.72788       0.87308    3.16959
20075.tif   363541.331    3281254.667    613.062         1.63111       0.67809    2.92838
20076.tif   363541.879    3281299.932    613.904         1.64153       0.47138    2.7199
20077.tif   363542.372    3281345.105    614.765         1.60359       0.21121    2.4738
20078.tif   363542.779    3281390.185    615.618         1.70767       0.13646    2.22879
40052.tif   363673.16     3281180.328    620.369         -85.04473     0.59574    1.85774
40053.tif   363614.636    3281178.969    620.266         -84.85696     0.16935    1.42818
40054.tif   363556.006    3281177.578    620.155         -84.79359     0.00001    1.06588
40055.tif   363497.266    3281176.14     620.061         -84.76166     0.06468    0.65269
40056.tif   363438.419    3281174.72     619.948         -83.46022     0.95124    0.2977
40057.tif   363379.487    3281173.58     619.841         -82.97238     1.41948    0.01577
50056.tif   363465.92     3281388.218    772.047         131.35963     0.33675    6.28305
50057.tif   363500.294    3281343.97     774.356         131.6376      1.27795    6.00698
50058.tif   363534.856    3281299.671    776.674         131.67739     1.89909    5.71967
50059.tif   363569.53     3281255.249    779.009         131.79372     1.83026    5.4413
50060.tif   363604.318    3281210.713    781.352         132.05705     1.87282    5.15806
60070.tif   363662.081    3281143.366    906.768         -32.83432     1.03541    1.33601
60071.tif   363627.495    3281187.821    906.591         -32.68246     1.17407    1.29418
60072.tif   363593.063    3281232.391    906.607         -32.55669     1.16715    1.26089
60073.tif   363558.799    3281277.04     906.763         -32.29035     0.68233    0.8148
60074.tif   363524.644    3281321.756    906.705         -32.17591     0.47473    0.28121
60075.tif   363490.538    3281366.562    906.366         -32.26497     0.05857    -0.02744

Table 5.4 Ground control points
Point No.   X (m)         Y (m)          Z (m)
22          363434.2570   3281124.2750   -1.4000
23          363473.9710   3281158.7450   -2.5700
24          363534.1710   3281154.8450   -3.4000
25          363572.9050   3281140.8650   -4.0500
26          363591.2770   3281107.5760   -4.3000

Table 5.5 Initial coordinates for each tie point
Tie point   X (m)        Std X (m)   Y (m)         Std Y (m)   Z (m, from LIDAR surface)
1           363531.804   1.817       3281194.443   6.049       -3.284
2           363440.286   6.355       3281127.506   8.626       -1.615
3           363369.863   6.580       3281285.467   9.486       -3.216
4           363449.257   5.909       3281215.742   8.984       -2.701
5           363417.374   5.961       3281243.104   8.999       -3.449
6           363635.068   1.577       3281098.324   7.071       -4.887
7           363429.035   6.212       3281159.818   8.629       -1.291
8           363475.455   3.744       3281180.688   6.991       -2.414
9           363572.574   1.880       3281373.231   5.165       2.737
10          363589.406   1.920       3281326.842   5.642       -0.717
11          363578.609   1.879       3281256.823   5.631       -2.29
12          363491.439   1.725       3281353.426   3.919       1.974
13          363580.878   1.632       3281459.991   3.713       1.395
14          363375.933   6.529       3281313.127   9.096       -3.667
15          363581.268   0.712       3281516.290   3.080       0.91
16          363653.528   2.371       3281339.257   5.473       -3.96
17          363573.191   1.759       3281125.164   6.802       -3.985
18          363641.140   2.149       3281233.807   6.781       -4.75
19          363534.868   1.802       3281122.679   6.196       -3.405
20          363645.286   3.195       3281195.712   7.074       -5.083
21          363676.255   2.028       3281419.934   6.207       -3.03
Mean                     3.225                     6.648

Table 5.6 Control points obtained from GPS survey
Easting (m)   Northing (m)   Elevation (m)
363530.057    3281182.043    -3.718
363440.115    3281110.729    -1.792
363368.851    3281272.928    -3.375
363449.464    3281201.388    -2.352
363417.082    3281229.519    -3.17
363634.394    3281083.505    -5.163
363428.708    3281143.614    -1.569
363474.181    3281167.159    -2.859
363571.622    3281364.48     0.132
363588.096    3281315.181    -1.401
363577.316    3281243.937    -2.958
363489.713    3281344.937    1.159
363579.25     3281452.49     1.291
363374.711    3281300.603    -3.993
363580.341    3281510.422    0.632
363652.841    3281328.017    -4.328
363571.852    3281110.708    -4.329
363640.521    3281221.841    -5.218
363532.961    3281108.765    -3.706
363643.531    3281183.567    -5.37
363674.834    3281408.853    -3.461
363434.283    3281124.537    -1.633
363474.033    3281159.04     -2.893
363534.152    3281155.076    -3.623
363572.696    3281140.656    -4.404
363591.055    3281107.576    -4.612

Table 5.7 Orientation parameters from the self-calibrating bundle adjustment
Parameter                                   Value
Heading (degrees)                           -0.1439
Roll (degrees)                              -0.9812
Pitch (degrees)                             -1.7004
Focal length (mm)                           24.7124
Principal point coordinates: xp (mm)        0.4347
Principal point coordinates: yp (mm)        0.5131
Radial distortion parameters: K1            -7.02502 x 10^-5
Radial distortion parameters: K2            -6.71944 x 10^-6
Radial distortion parameters: K3            7.25511 x 10^-8
Decentering distortion parameters: P1       -4.065754 x 10^-4
Decentering distortion parameters: P2       -4.618851 x 10^-4

Table 5.8 Average geodetic position derived from georeferenced imagery for each tie point with standard deviations
Point No.   Easting X (m)   σx (m)   Northing Y (m)   σy (m)
1           363529.972      0.138    3281181.904      0.147
2           363440.102      0.108    3281110.497      0.161
3           363368.517      0.168    3281272.562      0.191
4           363449.314      0.134    3281201.08       0.124
5           363416.769      0.131    3281229.187      0.153
6           363634.611      0.133    3281083.623      0.142
7           363428.582      0.165    3281143.307      0.143
8           363474.074      0.128    3281166.911      0.119
9           363571.263      0.175    3281364.567      0.167
10          363587.792      0.141    3281315.236      0.184
11          363577.021      0.294    3281244.738      0.198
12          363489.459      0.165    3281344.683      0.216
13          363579.473      0.141    3281452.646      0.176
14          363374.574      0.154    3281300.283      0.157
15          363580.21       0.175    3281510.55       0.183
16          363652.948      0.193    3281328.329      0.243
17          363572.016      0.103    3281110.682      0.127
18          363640.335      0.155    3281221.574      0.118
19          363533.101      0.114    3281108.572      0.132
20          363643.738      0.165    3281183.352      0.221
21          363675.135      0.452    3281409.07       0.278
22          363434.257      0.125    3281124.275      0.153
23          363473.971      0.116    3281158.745      0.107
24          363534.171      0.201    3281154.845      0.262
25          363572.905      0.244    3281140.865      0.281
26          363591.277      0.102    3281107.576      0.113
Mean                        0.166                     0.173

Table 5.9 Residuals obtained from comparison of derived geodetic coordinates with GPS-surveyed ground truths
Point No.   dX (m)    dY (m)
1           -0.085    -0.139
2           -0.013    -0.232
3           -0.334    -0.366
4           -0.15     -0.308
5           -0.313    -0.332
6           0.217     0.118
7           -0.126    -0.307
8           -0.107    -0.248
9           -0.359    0.087
10          -0.304    0.055
11          -0.295    0.801
12          -0.254    -0.254
13          0.223     0.156
14          -0.137    -0.32
15          -0.131    0.128
16          0.107     0.312
17          0.164     -0.026
18          -0.186    -0.267
19          0.14      -0.193
20          0.207     -0.215
21          0.301     0.217
22          -0.026    -0.262
23          -0.062    -0.295
24          0.019     -0.231
25          0.209     0.209
26          0.222     0
RMSE        0.205     0.277
MEAN        -0.0412   -0.0635

Figure 5.1 Airborne mapping system showing the laser head and the camera
Figure 5.2 Location of the study area (Oaks Mall)

Figure 5.3 Study area showing flight line locations and orientations
Figure 5.4 Image footprints colour coded by flight line

Figure 5.5 Intensity image from LIDAR data

Figure 5.6 Tie points (blue circles) and ground control points (red circles)

Figure 5.7 Intensity image with ground control points
Figure 5.8 Georeferenced imagery overlaid with the point cloud color coded by elevation

CHAPTER 6
CONCLUSION

An effort was made here to demonstrate the procedure for calibrating cameras using LIDAR data for both terrestrial and aerial applications. The perspective collineation camera model provided the basic mathematical model for carrying out the calibration.

For the terrestrial camera, LIDAR provided a three-dimensional calibration field and the calibration procedure was based on the space resection principle. The root mean square error for the control point residuals lay in the range of 1-2 pixels, the maximum being 2.2. Good agreement between the laser data and the referenced images indicated successful calibration of the camera. Two scans were done on different days to assess the stability and repeatability of the parameters. Although the interior distortion parameters were consistent, the external parameters varied by a few centimeters in the translational offsets and by a couple of degrees in angular orientation. This is attributed to the fact that the mount for the camera is not rigid enough to ensure a repeatable relative orientation between the camera and the laser system. Therefore the present system would need to be calibrated every time the camera is taken off the laser system. An extra day for processing calibration data would be needed for each day of survey work, as the camera would have to be taken off for storing and transporting the laser system safely. The mount is one aspect that needs to be improved so as to provide more repeatable values for the external parameters, and hence render the system much more practically useful.

For the aerial camera, LIDAR data provided an intensity image for selecting ground control points (without any external setup) and also a surface model, which was used to give a constant scaling factor for the transformation as well as elevation values for the georeferenced imagery. A self-calibrating bundle adjustment was carried out for the determination of the orientation parameters. The residual RMSE values for the tie points were 20 cm in X and 28 cm in Y.

These values indicate good agreement between the georeferenced images and the ground truths. Moreover, with the improved accuracy of airborne LIDAR data and pulse frequencies reaching high numbers like 166 kHz for the latest Optech GEMINI system, results obtained from LIDAR-aided camera calibration can be expected to improve. The laser data used for the calibration in this study was collected using a 33 kHz pulse frequency laser system. The point density obtained after multiple overlaps was about 5 points per square meter. As seen in Figure 5.5, the paint markings to be used as ground control are visible but faintly defined, which affects the accuracy with which they can be marked. With higher pulse frequency systems, better LIDAR density would lead to better control in choosing the GCPs and hence improved calibration.

APPENDIX A
CAMERA MODELS

Projective Model

Figure A.1 Projective camera model using standard perspective projection

Let (x, y) be the 2D image coordinates of p and (X, Y, Z) the 3D coordinates of P. From the figure it can be seen that

x = fX / Z
y = fY / Z

f = 1 can be assumed, as different values of f just correspond to different scalings of the image, such that (x, y, 1) corresponds to (X, Y, Z). In homogeneous coordinates, this can be written as:

(x, y, 1)ᵀ ~ (X, Y, Z)ᵀ = [1 0 0 0; 0 1 0 0; 0 0 1 0] (X, Y, Z, 1)ᵀ

In real images, the origin of the image coordinates is not the principal point and the scaling along each image axis is different, so the image coordinates undergo a transformation described by the matrix K (interior camera parameters) as described above. Also, the world coordinate system does not usually coincide with the perspective reference frame, so a coordinate transformation is required, given by a matrix M. Finally we get:

(x, y, 1)ᵀ ~ K [1 0 0 0; 0 1 0 0; 0 0 1 0] M (X, Y, Z, 1)ᵀ

Perspective Model

Consider Figure A.2 shown below; the image coordinate system is parallel to the object mapping coordinate system. L is the exposure station, or perspective center, with coordinates XL, YL and ZL in the object mapping frame; Xo, Yo and Zo are the object coordinates in the mapping frame; xa, ya, za are the image coordinates in the image frame. From similar triangles, the collinearity condition equations are as follows (Wolf & Dewitt, 2001):

xa / (Xo − XL) = ya / (Yo − YL) = za / (Zo − ZL)

or

xa = za (Xo − XL) / (Zo − ZL);   ya = za (Yo − YL) / (Zo − ZL);   and   za = za (Zo − ZL) / (Zo − ZL)

Figure A.2 Image coordinate system parallel to the mapping coordinate system
Figure A.3 Image coordinate transformation in the tilted photo plane

Now consider Figure A.3, showing a tilted image. The relationship between the actual coordinates of the tilted image (xi, yi) and the image coordinates for the un-tilted image can be expressed as:

xi = r11 xa + r12 ya + r13 za
yi = r21 xa + r22 ya + r23 za
zi = r31 xa + r32 ya + r33 za

where r11, r12, ..., rij are the elements (row i, column j) of the rotation matrix defining the rotation from the mapping coordinate system to the tilted coordinate system of the exposure station. Substituting xa, ya and za we get

xi = r11 za (Xo − XL)/(Zo − ZL) + r12 za (Yo − YL)/(Zo − ZL) + r13 za (Zo − ZL)/(Zo − ZL)   (1)
yi = r21 za (Xo − XL)/(Zo − ZL) + r22 za (Yo − YL)/(Zo − ZL) + r23 za (Zo − ZL)/(Zo − ZL)   (2)
zi = r31 za (Xo − XL)/(Zo − ZL) + r32 za (Yo − YL)/(Zo − ZL) + r33 za (Zo − ZL)/(Zo − ZL)   (3)

Now factor out za / (Zo − ZL), divide equations (1) and (2) by equation (3), and replace zi by (−f) to obtain the following collinearity equations:

xi = −f [r11 (Xo − XL) + r12 (Yo − YL) + r13 (Zo − ZL)] / [r31 (Xo − XL) + r32 (Yo − YL) + r33 (Zo − ZL)]

yi = −f [r21 (Xo − XL) + r22 (Yo − YL) + r23 (Zo − ZL)] / [r31 (Xo − XL) + r32 (Yo − YL) + r33 (Zo − ZL)]

f is the focal length of the camera.
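As an illustration only, the collinearity projection derived above can be written compactly as follows; the function name and the NumPy-based interface are assumptions for this sketch and are not part of the original text.

```python
import numpy as np

def collinearity_project(obj_xyz, exposure_xyz, R, f):
    """Image coordinates (xi, yi) of an object point via the collinearity equations.

    obj_xyz      : (Xo, Yo, Zo) of the object point in the mapping frame
    exposure_xyz : (XL, YL, ZL) of the exposure station (perspective center)
    R            : 3x3 rotation matrix from the mapping frame to the tilted
                   image frame (elements r11..r33 in the equations above)
    f            : focal length, in the same units as the returned coordinates
    """
    d = np.asarray(obj_xyz, dtype=float) - np.asarray(exposure_xyz, dtype=float)
    num_x, num_y, den = R @ d          # the three bracketed sums in the equations
    xi = -f * num_x / den
    yi = -f * num_y / den
    return xi, yi
```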

APPENDIX B
IMAGING AND MAPPING SENSORS AT UF

Laser Systems

The University of Florida acquired its first airborne laser mapping system, the Optech ALTM 1233, in 1998, in collaboration with Florida International University (FIU), and created an Airborne Laser Swath Mapping (ALSM) research center. The system is operated on a Cessna 337 twin-engine aircraft at an approximate altitude of about 600 m and a speed of 60 m/sec.

Figure B.1 ALTM 1233 airborne laser mapping system

Important specifications of the ALTM are given below:

Table B.1 ALTM 1233 specifications
Laser                                        Nd:YAG, 1.064 micrometers
Pulse frequency                              33,000 pps
Operating altitude                           330 to 2000 m
Range accuracy                               2 cm (single-shot)
Options                                      Intensity data, typically 8-bit grey tones
Number of returns recorded per laser pulse   2
Scanner design                               Oscillating mirror
Scanner range                                20 degrees from nadir
Scanner frequency                            up to 30 Hz
Data recording                               Hard disk

The center has recently upgraded its capability by acquiring the latest Optech Gemini laser system, capable of collecting data at a frequency of 167 kHz and registering 4 returns per laser pulse.

UF purchased a ground-based laser scanning and imaging system, the ILRIS 3D, in 2002. It scans at a speed of 2 kHz (2000 points per second) over a range of 3 m to 1500 m. It generates XYZ point clouds, together with intensity and RGB texture provided by a built-in 3.5 megapixel digital camera. ILRIS 3D operates in a static mode and therefore does not use any position or navigation sensors.

Figure B.2 ILRIS 3D

Some of its specifications are provided below:

Table B.2 ILRIS 3D specifications
Laser                   Nd:YAG laser, 1.55 micrometers, pulsed
Scanner                 2-axis beam steering scanner
Data rate               2000 points per sec
Scanner field of view   Horizontal as well as vertical
Range                   3 m to 1500 m for a target with 80% reflectivity; 3 m to 350 m for a target with 4% reflectivity
Angular accuracy        0.0024° horizontal and vertical
Range resolution        1 mm
Digital camera          3.5 megapixel CMOS sensor (1789 x 1789)
Physical dimensions     320 x 320 x 220 mm, weighing 13 kg
Data storage            Removable USB memory stick, computer hard drive through LAN network
Operating temperature   0°C to 40°C

Digital Cameras

UF houses a DuncanTech MS4100 (now distributed by Geospatial Systems Inc.) multispectral camera. It is a 3-CCD camera and can acquire images in the red, blue, green and infrared bands. This camera is used in combination with the ALTM system for aerial photography.

Figure B.3 MS4100 multispectral camera

Its important specifications are mentioned below:

Table B.3 Specifications for the MS4100 multispectral camera
Sensor                   3 CCDs with a colour-separating prism
Pixel size               0.0074 mm x 0.0074 mm
Image resolution         1924 x 1075
Focal length             28 mm
Frame rate               10 frames per second
Pixel clock rate         25 MHz
Signal/noise             60 dB
Digital image output     8 bits x 4 taps or 10 bits x 3 taps
Programmable functions   Gain, exposure time, multiplexing, trigger modes, custom processing
Electronic shutter       Range: 1/10,000 to 1/10 sec., controlled via RS-232 input
Operating temperature    0 to 50 °C

The camera can be configured to acquire RGB images, CIR images, or RGB images simultaneously with monochrome IR images. It can be triggered externally through a pulse using a BNC connector and has three different operating modes for trigger input.

Recently the center bought a digital SLR camera, the Nikon D80, to fully functionalize the hybrid capability of the terrestrial laser scanner, ILRIS 3D.

The camera already integrated in the laser system gives low-quality images and hence is not suited for obtaining RGB texture information for the scans. Since the application is static, a high-quality still camera with the ability to be controlled externally sufficed. Some of its specifications are given below:

Table B.4 Specifications for the Nikon D80 SLR camera
Sensor                       CCD; 23.6 x 15.88 mm
Image resolution (pixels)    Large: 3872 x 2592 (10 megapixels); Medium: 2896 x 1944 (5.6 megapixels); Small: 1936 x 1296 (2.5 megapixels)
Focal length                 20 mm
Digital image output         NEF (RAW): compressed 12 bit; JPEG
ISO sensitivity              100 to 1600
External interface           USB 2.0 high speed
Physical dimensions          132 x 103 x 77 mm; weighing 585 g
Operating temperature        0 to 40 °C

It is controlled externally using Nikon's Camera Control Pro software. The shutter speed, aperture size and exposure compensation are a few of its important parameters which can be controlled externally.

APPENDIX C
PRINCIPAL AND NODAL POINTS

Principal Points in a Lens

Consider a ray traveling parallel to the optical axis of the lens. It undergoes refraction at the two surfaces of the thick lens. The point where the extended rays intersect is known as the rear principal point. The front principal point is obtained for the case when a parallel ray travels from the reverse side. The plane passing through the set of rear principal points is known as the rear principal plane. Ideally it is perpendicular to the optical axis. Similarly, the plane passing through the front principal points is known as the front principal plane.

Figure C.1 Principal points and principal planes

Nodal Points

Consider a ray traveling at an angle to the lens, striking the front face of the lens and exiting from the rear face at a different angle. Now if we move the ray, keeping its angle with the optical axis constant, we will find an arrival path such that the outgoing ray exits parallel to the incoming ray. If we extend the two rays in the lens at the points of refraction, they will intersect the optical axis at the nodal points N1 (the first nodal point) and N2 (the second nodal point).

N1 and N2 are also called the front nodal point and the rear nodal point, respectively.

Figure C.2 Nodal points

LIST OF REFERENCES

Abdel-Aziz, Y.I. & Karara, H.M., 1971. Direct linear transformation from comparator coordinates into object-space coordinates. ASP/UI Symposium on Close Range Photogrammetry, Falls Church, Virginia, pp 1-18

Al Khalil, O., 2002. Solutions for exterior orientation in photogrammetry: A review. Photogrammetric Record, 17(100), pp 615-634

Bäumker, M. & Heimes, F.J., 2001. New calibration and computing method for direct georeferencing of image and scanner data using the position and angular data of a hybrid inertial navigation system. Proceedings of the OEEPE Workshop Integrated Sensor Orientation, Hannover, Germany

Cardenal, J., Mata, E., Castro, P., Delgado, J., Hernandez, M.A., Perez, J.L., Ramos, M. & Torres, M., 2004. Evaluation of a digital camera (Canon D30) for the photogrammetric recording of historical buildings. XXth ISPRS Congress, 12-23 July, Istanbul, Turkey

Carter, W.E., Shrestha, R., Tuell, G., Bloomquist, D. & Sartori, M., 2001. Airborne Laser Swath Mapping shines new light on Earth's topography. American Geophysical Union EOS Transactions, 82(46), pp 549, 550, 555

Chandler, J.H., Fryer, J.G. & Jack, A., 2005. Metric capabilities of low-cost digital cameras for close range surface measurement. Photogrammetric Record, 20(109), pp 12

Cramer, M. & Stallman, D., 2001. On the use of GPS/inertial exterior orientation parameters in airborne photogrammetry. Proceedings of the OEEPE Workshop Integrated Sensor Orientation, Hannover, Germany

Cramer, M. & Stallman, D., 2002. System calibration for direct georeferencing. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(3A), pp 79

Jechev, D., 2004. Close-range photogrammetry with an amateur camera. Commission V, WG V/4, XXth ISPRS Congress, 12-23 July, Istanbul, Turkey

Drake, D.R., 2002. Applications of Laser Scanning and Imaging Systems. Unpublished Master's Thesis, University of Florida

Fernandez, J.C., 2007. Scientific Applications of the Mobile Terrestrial Laser Scanner (M-TLS) System. Unpublished Master's Thesis, University of Florida

Fraser, C.S., 1982. On the use of non-metric cameras in analytical non-metric photogrammetry. International Archives of Photogrammetry and Remote Sensing, 24(5), pp 156-166

Fraser, C.S., 1997. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing, 52, pp 149-159

Fraser, C.S., 2001. Photogrammetric camera component calibration: A review of analytical techniques. In Calibration and Orientation of Cameras in Computer Vision, Gruen & Huang (Eds.), Springer Series in Information Sciences 34, New York, pp 95-121

Grennery, D.B., 2001. Least squares camera calibration including lens distortion and automatic editing of calibration points. In Calibration and Orientation of Cameras in Computer Vision, Gruen & Huang (Eds.), Springer Series in Information Sciences 34, New York, pp 163-193

Grejner-Brzezinska, D.A. Direct sensor orientation in airborne and land-based mapping applications. http://www.ceegs.ohio-state.edu/gsreports/reports/report_461.pdf, last accessed: August 6, 2007

Gruen, A. & Beyer, H.A., 2001. System calibration through self-calibration. In Calibration and Orientation of Cameras in Computer Vision, Gruen & Huang (Eds.), Springer Series in Information Sciences 34, New York, pp 163-193

Heikkilä, J. & Silvén, O., 1997. A four-step camera calibration procedure with implicit image correction. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp 1106-1112

History of Photogrammetry, 2007. The Center for Photogrammetric Training, Surveying Engineering Department, Ferris State University, Big Rapids, Michigan. http://www.ferris.edu/faculty/burtchr/sure340/notes/History.pdf, last accessed: August 7, 2007

Jansa, J., Studnicka, N., Forkert, G., Haring, A. & Kager, H., 2004. Terrestrial laser scanning and photogrammetry: Acquisition techniques complementing one another. Commission III, WG 7, XXth ISPRS Congress, Istanbul, Turkey

Jacobsen, K., 2000. Potential and limitation of direct sensor orientation. International Archives of Photogrammetry and Remote Sensing, 33(B3/1), pp 429

Jacobsen, K., 2002. Calibration aspects in direct georeferencing of frame imagery. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(1), pp 82

Kolbel, O.R., 1976. Metric or nonmetric camera. Photogrammetric Engineering and Remote Sensing, 42(1), pp 103-113

Linder, W., 2006. Digital Photogrammetry: A Practical Course, Second Edition, Springer

Mohr, R. & Triggs, B., 1996. Projective geometry for image analysis: Tutorial on projective geometry. XVIII ISPRS Congress, Vienna, Austria

Nagai, M., Shibasaki, R., Manandhar, D. & Zhao, H., 2004. Development of digital surface model and feature extraction by integrating laser scanner and CCD sensor with IMU. XXth ISPRS Congress, Istanbul, Turkey

Naci, Y. & Karsten, J., 2005. Influence of system calibration on direct sensor orientation. Photogrammetric Engineering and Remote Sensing, 71(5), pp 629

Pinto, L. & Forlani, G., 2002. A single step calibration procedure for IMU/GPS in aerial photogrammetry. Proceedings of the ISPRS Technical Commission III Symposium, Graz, Austria

Remondino, F. & Borlin, N., 2004. Photogrammetric calibration of image sequences acquired with a rotating camera. Proceedings of the ISPRS Working Group V/1 Panoramic Photogrammetry Workshop, Dresden, Germany

Remondino, F. & Fraser, C.S., 2006. Digital camera calibration methods: considerations and comparisons. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVI, part 5, pp 266-272, ISPRS Commission V Symposium, Dresden, Germany

Rogers, R.M., 2003. Applied Mathematics in Integrated Navigation Systems, Second Edition

Ruzgienė, B., 2005. Performance evaluation of a non-metric digital camera for photogrammetric application. Geodesy and Cartography, 31(1), pp 23-27

Tsai, R., 1987. A versatile camera calibration technique for high accuracy 3-D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4), pp 323-344

Ullrich, A., Schwarz, R. & Kager, H., 2003. Using hybrid multi-station adjustment for an integrated camera laser-scanner system. Optical 3-D Measurement Techniques, 6(1), pp 298-305

US Army Corps of Engineers, 2003. NAVSTAR Global Positioning System Surveying, Engineer Manual

Wackrow, R., Chandler, J.H. & Bryan, P., 2007. Geometric consistency and stability of consumer-grade digital cameras for accurate spatial measurement. The Photogrammetric Record, 22(118), pp 121-134

Wolf, P. & Dewitt, B., 2002. Elements of Photogrammetry with Applications in GIS, Third Edition

Yakimovsky, Y. & Cunningham, R.T., 1978. A system for extracting three-dimensional measurements from a stereo pair of TV cameras. Computer Graphics and Image Processing, 7, pp 195-210

BIOGRAPHICAL SKETCH

Abhinav Singhania was born in Patna, India, on December 27, 1982. He graduated with a Bachelor of Engineering degree in civil engineering from Punjab Engineering College, Chandigarh, India, in May 2005. His interest in remote sensing led him to the University of Florida, where he worked with the Geosensing Systems Engineering group and graduated with a Master of Science degree in civil engineering in December 2007.