
The Use of Area Features and Single Camera Perspective for Determining Camera Position

Permanent Link: http://ufdc.ufl.edu/UFE0044730/00001

Material Information

Title: The Use of Area Features and Single Camera Perspective for Determining Camera Position
Physical Description: 1 online resource (59 p.)
Language: english
Creator: Osentowski, Joseph P
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2012

Subjects

Subjects / Keywords: analysis -- computer -- determination -- location -- navigation -- photogrammetry -- robot -- scene -- vision
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Mechanical Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: In this project, a computer algorithm was developed to determine the position of a camera based solely on area features extracted from the image of a single target. This was investigated as an alternative to the method used to solve the perspective-three-point problem for camera-in-hand configurations, which requires the use of additional sensor information to determine a unique solution. The performance of the algorithm was tested experimentally. The target used was an octahedron with six of the faces painted different matte colors. This target was imaged from ninety-six different camera positions, and the area information extracted from each image was used to calculate a unique position of the camera. These calculated positions were then compared to the known positions from which each image was taken. From this comparative data it was shown that the use of three continuous area features was sufficient to generate a unique solution to the location determination problem. At the same time, it was determined that an octahedron was not an optimal geometry for the target, as the amount of symmetry present in the image of the target had a pronounced effect on the accuracy of the results.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Joseph P Osentowski.
Thesis: Thesis (M.S.)--University of Florida, 2012.
Local: Adviser: Crane, Carl D.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2012
System ID: UFE0044730:00001


THE USE OF AREA FEATURES AND SINGLE CAMERA PERSPECTIVE FOR DETERMINING CAMERA POSITION

By

JOSEPH P. OSENTOWSKI

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2012

© 2012 Joseph P. Osentowski

To those who never gave up on me, and to those who refused to allow me to give up on myself

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Carl Crane, for his encouragement and input with regard to this project, and my education in general. I thank Dr. John Schueller and Dr. Warren Dixon for graciously agreeing to serve on my review committee. I give many thanks to my parents for supporting my decision to return to college to pursue a graduate degree. I offer my thanks to Dr. A. Antonio Arroyo for being always ready with a kind word and an encouraging thought. To Darsan Patel and Sujin Jang I offer thanks for sharing their office space with me, and tolerating my talking to myself while I worked on this project. I wish to thank Mary and Keith Reese for providing me with a place to stay during my studies. Lastly, I would like to thank Josh Weaver, Phillip Reiss, and William Mueller for helping me adjust to being back in college after being away for so long, and encouraging me to finish my degree. I offer my sincere apologies to anyone that I may have overlooked in compiling this list.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION AND LITERATURE REVIEW
   Purpose
   Background

2 EXPERIMENTAL DESIGN
   Description of the Experiment
   Controlled Variables in the Experiment
   Description of the Test Apparatus
   Description of the Target

3 ALGORITHM DEVELOPMENT
   Premise
      Use of a Single Target
      Using Three Projected Areas to Graphically Determine Camera Position
   The Color Map and Image Indexing
   Camera Position as a System of Three Equations
   Development of the Area Equations

4 DATA AND DATA ANALYSIS
   Recorded Data
   Quantitative Analysis
   Generating a Unique Solution
   The Use of Three Continuous Sides
   Target Symmetry
   Real-Time Application

5 CONCLUSIONS AND FUTURE WORK
   Conclusions
   Improving Results

APPENDIX

A AREA EQUATIONS USED TO CALCULATE CAMERA POSITION

B EXPERIMENTAL DATA

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Area measurements extracted from the binary images in Figure 3-4
3-2 The three-dimensional coordinates of the target vertices
3-3 The projected coordinates of the target vertices
4-1 Differences between the calculated camera position and the known camera position
B-1 Experimental data showing the differences between the known and calculated camera position

LIST OF FIGURES

1-1 Spherical coordinates
1-2 Layout of the perspective-three-point (P3P) problem
2-1 Orientation of the target in the reference coordinate system
2-2 Orientation of the camera coordinate system
2-3 Illustration of the test apparatus
2-4 Octahedral target
2-5 Ninety-degree azimuth view of the octahedral target
3-1 Potential camera positions represented as a continuous surface
3-2 Representation of the camera position as the intersection of three surfaces at a single point
3-3 The color map to which all images were indexed
3-4 A set of six binary images created from the color image of the target
3-5 The labeled vertices and faces of the octahedral target
3-6 The y coordinate in the image plane determined by orthogonal projection of the real x and z coordinates of a target vertex
3-7 Location of the midpoint Pface of a target face relative to the origin O of the reference coordinate system
3-8 Top view of the target, indicating the face rotation angle for each colored side
4-1 Correlation between calculated angles and image symmetry
5-1 Top view of the proposed tetradecahedral target
5-2 Isometric view of the proposed tetradecahedral target

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

THE USE OF AREA FEATURES AND SINGLE CAMERA PERSPECTIVE FOR DETERMINING CAMERA POSITION

By

Joseph P. Osentowski

August 2012

Chair: Carl D. Crane III
Major: Mechanical Engineering

In this project, a computer algorithm was developed to determine the position of a camera based solely on area features extracted from the image of a single target. This was investigated as an alternative to the method used to solve the perspective-three-point problem for camera-in-hand configurations, which requires the use of additional sensor information to determine a unique solution. The performance of the algorithm was tested experimentally. The target used was an octahedron with six of the faces painted different matte colors. This target was imaged from ninety-six different camera positions, and the area information extracted from each image was used to calculate a unique position of the camera. These calculated positions were then compared to the known positions from which each image was taken. From this comparative data it was shown that the use of three continuous area features was sufficient to generate a unique solution to the location determination problem. At the same time, it was determined that an octahedron was not an optimal geometry for the target, as the amount of symmetry present in the image of the target had a pronounced effect on the accuracy of the results.

CHAPTER 1
INTRODUCTION AND LITERATURE REVIEW

Purpose

The purpose of my project was to devise and demonstrate an alternative method of solving the location determination problem using the area features of a specially designed target. The camera position was represented using the spherical coordinates theta (azimuthal angle), phi (zenith angle), and rho (radial distance from the center of the target) (Figure 1-1).

Background

The location determination problem is, in essence, the determination of the position of a camera based solely on the features present in an image taken by that camera. It has been formally stated: given a set of landmarks whose locations are known in some coordinate frame, determine the location (relative to the coordinate frame of the landmarks) of that point in space from which an image of the landmarks was obtained [1], where a landmark, or control point, is most often defined as the coordinates of a particular feature point, or the centroid of a region [2]. When the relationship between the three-dimensional coordinates of the control points and their projections onto a two-dimensional image is known, the location determination problem is alternatively referred to as the perspective-n-point problem (PnP), where n represents the number of control points. The scenario using three control points is presented in Figure 1-2, where the system numbered one is the reference coordinate system and the system numbered two is the coordinate frame attached to the camera. The coordinates of the points P1, P2, and P3 are known in the reference coordinate system. The camera is aligned with coordinate system two, and the viewing angles for each point correspond to pixel locations in the image. The goal in this scenario is to determine the transformation matrix that relates the pose of the coordinate system aligned with the camera to the reference coordinate system.
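For context, the algebraic origin of the ambiguity discussed below can be written down directly (this standard formulation is supplied here for reference; it does not appear in the thesis). Let si denote the unknown distance from the camera center to Pi, let dij denote the known distance between Pi and Pj, and let θij denote the angle between the viewing rays of Pi and Pj, which follows from the pixel locations. The law of cosines applied to each pair of control points gives

   si^2 + sj^2 − 2 si sj cos(θij) = dij^2,   for (i, j) = (1, 2), (1, 3), (2, 3)

Eliminating two of the three unknowns from this system of three quadratics leaves a single quartic equation, which is why only a small, finite number of solutions is possible.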

It has been shown that three is the minimum number of control points required to determine the position of the camera. However, the solution is not unique in this scenario. The solution to the perspective-three-point problem takes the form of a quartic equation in terms of one of the distance variables squared. While this form alone would suggest as many as eight roots, it has been shown that there is a maximum of four solutions to the P3P problem [1]. Explanations and examples are given in [3].

To isolate the true position of the camera from the four possible positions, two approaches dominate the literature. The first is to use four control points instead of three. The P4P problem yields a unique solution, provided that critical configurations are avoided. This adds significant complexity to the calculation of the camera pose, as it is necessary to solve a system of six fourth-degree polynomials and then to find the common solution [4]. However, iterative techniques such as those used in [5] have successfully demonstrated a globally optimal algorithm to solve the P4P problem. Even without utilizing such algorithms, the P4P scenario has been successfully integrated into adaptive controller designs for visual servoing of a multiple-link robotic manipulator, as described in [6].

The other approach often used to ensure a unique result for the P3P problem is to incorporate additional sensor data. This has proven to be a very practical solution in visual servoing applications, especially those in which the camera is assumed to be uncalibrated.

This technique was explained in [7] to develop an adaptive controller for a three-link robot manipulator, utilizing a video camera and optical encoders installed at the joints of the manipulator arm. The position in the joint space obtained from the encoders was compared to the position in the camera space obtained from the camera, and the differences between the two were used to accurately determine the camera pose.

Generally speaking, the use of the two-dimensional projection of feature points has the disadvantage that information regarding depth is lost. Here, depth is understood to be the measure from the camera to the feature point. This information must be recovered through knowledge of the geometric relationships between the control points in three-dimensional space [2]. Rather than using individual feature points, my project used the area features of the image of an octahedral target to calculate the position of the camera. By examining the area features as they changed with respect to changes in the camera position, including changes in depth, no information was lost in the analysis of the projected image.

Figure 1-1. Spherical coordinates

Figure 1-2. Layout of the perspective-three-point (P3P) problem

CHAPTER 2
EXPERIMENTAL DESIGN

Description of the Experiment

The reliability of using area features, as opposed to control points, to determine camera position was tested experimentally. Using an adjustable apparatus designed specifically for this experiment, three images of an octahedral target were taken by a consumer-quality camera from each of ninety-six different positions. Each of these images was used to generate a set of binary images of the target, from which the area measurement of each face was extracted. These area measurements were then fed into a computer algorithm written in MATLAB to calculate the position of the camera. The calculated positions were then compared to the known camera coordinates to evaluate the quality of the results and the overall method.

Controlled Variables in the Experiment

The resolution and magnification of the camera were known quantities, as were the geometric dimensions and the pose of the octahedral target with respect to a reference coordinate system. The origin of the reference coordinate system was located at the center of the target, with the positive x axis of the reference coordinate system directed normal to the yellow face of the target (Figure 2-1). The camera orientation and the lighting of the target were controlled factors in the experiment. The camera used in my project was a Creative Live Cam Notebook Pro web camera. The coordinate system attached to the camera is shown in Figure 2-2. The orientation of the camera was such that the y axis of the camera was always parallel to the ground and the x axis was always pointed toward the center of the target.

The lighting consisted of two full-spectrum light bulbs in two adjustable lamps. This was done so that the lighting could be adjusted to minimize the effects of glare, while simultaneously lighting the target from both sides. This step was necessitated by the fact that the software to operate the camera included a light compensation feature that could not be disabled. All other software features for image enhancement were disabled so as to not adversely influence the results of the experiment.

Description of the Test Apparatus

A test apparatus was constructed with specific design considerations. The angles of the camera relative to the target had to be easily adjustable, and the angle measurements read directly from the test device. It had to allow the camera to move along a straight line while maintaining the condition that the x axis of the camera coordinate system was always directed toward the center of the target. This eliminated the need to track the target, making data acquisition relatively simple. The size had to be such that the device allowed the target to be viewed from as many positions as possible, with sufficient adjustment to the radial distance from the camera to the target, but with a minimal footprint. The device also needed to be designed so that it could be disassembled and reassembled as easily as possible, as it was necessary to move the entire device during the course of the project. Lastly, the test apparatus was designed so that it could be automated, though this was not used for data acquisition.

The apparatus used to take all of the data measurements in the experiment was a wooden construction, as illustrated in Figure 2-3. There were four major components to this device: the table, the wheel, the slide mechanism, and the overhead support. The device was constructed such that the line along which the camera travelled, the line acting through the center of the pivot of the wheel, and the vertical axis through the center of the table intersected at a point coincident with the center of the target. This ensured that the camera was always pointed at the center of the target, thus eliminating any need to track the target as the camera position changed.

The table base had a circular area laid at its center with degree measurements indicated along the circumference. The target was placed at the center of the circular area, and the azimuth angle of the target relative to the camera position was read directly. This was geometrically equivalent to building a device that allowed the camera to rotate in a plane about the target, while at the same time reducing the overall size of the apparatus.

The wheel was a piece of wood cut into a 115° arc, with a radius of 96 cm. This piece determined the zenith angle of the camera path relative to the horizontal plane passing through the center of the target as the wheel was rotated about a pivot at its center. The zenith angle measurement was read by means of a dial marked in one-degree increments, aligned with the lower straight edge of the wheel. A needle pointer allowed the zenith angle to be read directly. The wheel was held in position by means of a spring clamp set at an appropriate position on the wheel, bracing the wheel against the top of the support structure. This allowed the angle to be accurately set, and easily changed from one position to the next.

The support structure served two functions. The first was the aforementioned brace for the spring clamp, to maintain the zenith angle setting. The second function was to keep the wheel from deviating from the vertical plane. This was achieved by attaching a set of roller guides to the underside of the top of the support. The wheel was free to pass through the guide rollers, which kept the wheel perpendicular to the table.

The last major component of the test apparatus was the slide mechanism attached to the inside of the wheel, along its flat edge. It consisted of two parallel lightweight aluminum rails, attached at each end to a wooden block. Between the two rails, another block was installed with holes that were over-drilled to allow the block to slide easily along the rails. This sliding block acted as a base to attach a short arm, oriented perpendicular to the wheel. The camera was mounted to the end of this arm. This allowed the line of action passing through the center of the camera lens to always intersect the center of the target as the camera moved along the rails, and simultaneously maintained the camera orientation such that the bottom edge of the image was always parallel to the horizontal plane. The distance from the camera to the center of the target was adjustable from 17 cm to 82 cm. The alignment of the camera to the center of the target was verified by using a laser level mounted to a tripod. Once the tripod was set level to the table, the laser was oriented vertically and the alignment of the camera and the target checked.

Description of the Target

The target took the form of a triangular antiprism. Its top surface was an equilateral triangle with edges 10.16 cm long. The base of the target was an equilateral triangle with edges 20.32 cm long, rotated 30° relative to the top (Figure 2-4). The overall height of the target was 10.16 cm. This configuration was not merely an aesthetic choice. It was noted that the Plücker coordinates of the lines connecting the vertices of the top and bottom faces were as far from linearly dependent as possible. By extension, the faces of the target were as far from linearly dependent as possible [8]. It was theorized that this target would prove optimal in the sense that from any camera position, an appropriate number of sides would be readily viewable. However, it was also noted that at positions where the line of sight of the camera ran parallel to any vertical face and the zenith angle remained low, only two sides were discernible (Figure 2-5). It was recognized that this condition could prove problematic in obtaining results, since the goal was to use three faces to construct a system of three equations with three unknown variables. The extent to which this adversely affected the determination of the camera position was examined as part of the experimental results.

Figure 2-1. Orientation of the target in the reference coordinate system

Figure 2-2. Orientation of the camera coordinate system

Figure 2-3. Illustration of the test apparatus

Figure 2-4. Octahedral target

Figure 2-5. Ninety-degree azimuth view of the octahedral target

CHAPTER 3
ALGORITHM DEVELOPMENT

Premise

Use of a Single Target

Rather than establishing the positions of three distinct targets in a reference coordinate system as depicted in Figure 1-2, my project utilized a single target with a known geometry. This target took the form of a triangular antiprism, as shown in Figure 2-4. It was theorized that this geometric configuration would provide a sufficient number of viewable sides to accurately determine camera position over the entire hemispherical region surrounding the target without sacrificing resolution. The proposed method relied on the projected area of the visible triangular faces to determine the position of the camera. Each of the six side faces of the target was painted a different matte color so that it could be indexed to a predetermined color map. The three angled faces were painted red, green, and blue, representing the primary colors of light. The vertical faces were painted cyan, yellow, and magenta, being the secondary colors of light. These colors were chosen as they are as far separated from each other as possible in the color spectrum, making them easier to distinguish from one another.

Using Three Projected Areas to Graphically Determine Camera Position

The projected area of any one face changed as the face was rotated about either a horizontal or vertical axis. Furthermore, the area of the image appeared smaller as the distance between the camera and the target face increased. When presented with a scalar value representing the area of a single face, the possible positions for the camera could be represented as a continuous surface, as shown in Figure 3-1.
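A surface like the one in Figure 3-1 can be visualized directly. The sketch below uses the simplified centroid-rotation area model discussed in the following paragraphs (projected area proportional to the product of the two viewing-angle cosines and the inverse square of distance); all numeric values are hypothetical and chosen only for illustration, and this is not the plotting code used in the thesis.

% Hedged sketch: the locus of camera poses consistent with one measured
% face area, plotted in spherical-coordinate axes (cf. Figure 3-1).
A0 = 100;          % true face area (cm^2), hypothetical
Rref = 25.4;       % reference distance (cm), from the text
Ameas = 30;        % measured projected area (cm^2), hypothetical
[th, ph, r] = meshgrid(linspace(-80, 80, 60), ...   % azimuth (deg)
                       linspace(0, 75, 60), ...     % zenith (deg)
                       linspace(20, 80, 60));       % distance (cm)
A = A0 .* cosd(th) .* cosd(ph) .* (Rref ./ r).^2;   % modeled image area
p = patch(isosurface(th, ph, r, A, Ameas));         % constant-area surface
set(p, 'FaceColor', [0.3 0.6 0.9], 'EdgeColor', 'none');
view(3); camlight; grid on
xlabel('\theta (deg)'); ylabel('\phi (deg)'); zlabel('\rho (cm)');

Plotting three such surfaces, one per visible face, reproduces the single-point intersection shown in Figure 3-2.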

If three such surfaces were plotted for three adjacent sides of the target, it was shown that the three surfaces intersected at a single point, as shown in Figure 3-2. This was an extension of the laws governing the behavior of points, lines, and planes: the intersection of two nonparallel planes defines a line, and the intersection of three nonparallel planes defines a point [3]. This indicated that the use of three area features, as opposed to three points, would return a unique solution for the position of the camera.

This example illustrated the concept that formed the basis of this research. However, the equations used to generate these figures were somewhat simplified. These equations were in a form that assumed that each face was rotated about its centroid. The faces of the target actually moved along a circular path centered about a vertical axis. This simplified case was used for illustration of the conceptual approach to the use of area features, as the equations that more accurately describe the changes in the projections of the target faces contained nonlinearities that prevented them from being reduced to a form that could be plotted. It remained possible to use the more complex equations to construct a solvable system of three equations with three unknown variables.

The Color Map and Image Indexing

Information about the physical target was obtained using photogrammetric techniques. Color histograms were generated based on images of the target from positions normal to each face of the target. These histograms were then used to create a color map, representing the colors of each face under varying light levels. This allowed faces with varying reflectivity to be viewed as a single entity [9]. The pixels in the image were indexed to one of sixty-four colors in the color map. The map contained ten shades of gray, including white and black. The remaining indices were separated into six groups of nine indices each, as shown in Figure 3-3. Each group corresponded to a range of possible hues for each colored face. This allowed each face to be viewed under varying lighting conditions, and with minimal concern for inconsistencies in coloration.

From the indexed image, a set of six binary images was created, one for each face of the target, as in Figure 3-4. By counting the number of white pixels in each binary image, the visible area of each face was calculated by a simple conversion:

   area (in^2) = (number of white pixels) / 6400     (3-1)

The conversion factor of 6400 pixels per square inch was determined by establishing that, when the camera was at its closest position to the target, the bottom edge of the target corresponded to the 640-pixel edge of the 480x640 image. This gave a value of 80 pixels per inch, or 6400 pixels per square inch. Table 3-1 shows the areas of the binary images in Figure 3-4 in terms of number of pixels and square centimeters.

The minimum allowable distance between the camera and the target face was determined empirically to be 25.4 cm, measured from the center of a vertical face of the target in a direction normal to the face. This value was necessary as a reference for the area calculations; if the camera was any closer to the target, then the entire target could not be seen in a single image, and the camera position could not be determined.
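A minimal sketch of this indexing-and-counting step is given below. It assumes a precomputed 64-row colormap cmap whose row groupings follow Figure 3-3; the row range assigned to the yellow face and the file name are hypothetical, and the exact grouping used in the thesis is not reproduced here.

% Hedged sketch of the image-indexing step. cmap is a 64x3 double colormap
% with values in [0,1]; rows 11 through 19 are assumed to be yellow hues.
rgb = imread('target_view.png');              % 480x640 color image
idx = rgb2ind(rgb, cmap, 'nodither');         % index each pixel to the map
% rgb2ind returns zero-based indices, so map rows 11:19 become idx 10:18.
yellowMask = idx >= 10 & idx <= 18;           % binary image for one face
nWhite = nnz(yellowMask);                     % white-pixel count
areaSqIn = nWhite / 6400;                     % Equation 3-1
areaSqCm = areaSqIn * 2.54^2;                 % convert to cm^2

Repeating the mask-and-count step for each of the six index groups yields the six binary images of Figure 3-4 and the six area measurements of Table 3-1.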

Camera Position as a System of Three Equations

To utilize the areas of the binary images, a formula for the area of the image of each face had to be written in terms of the three variables defining the position of the camera. By using three faces of the target, a system of three equations written in terms of these three variables was developed. To solve this system, MATLAB was used. A function was written, taking the area measurements of all six faces as input. With this input, the areas were sorted by size, and the three most suitable quantities were used to solve the system with MATLAB's numeric solver.

From this function, one of two possible values was found. The first possible value was the actual position of the camera. The second possibility was a point with coordinates opposite in sign to those of the actual position of the camera, referred to as its conjugate. While both solutions numerically satisfy the system, only one of these points represented a physically realizable position. To ensure that only the actual position was returned by the function, the values generated were checked against a set of conditions based on the known geometry and pose of the target. Regardless of whether the real point or its conjugate was returned as a solution to the system, only the real coordinates were reported.
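The thesis does not name the specific MATLAB routine used; the sketch below shows the shape of this step with fsolve (Optimization Toolbox), assuming function handles areaY, areaC, and areaM that implement the area equations of Appendix A for the three selected faces, and measured areas aY, aC, and aM. All of these names and the initial guess are hypothetical.

% Hedged sketch of the solution step: three area equations, three unknowns.
% p = [theta; phi; rho] (azimuth and zenith in radians, distance in cm).
F = @(p) [areaY(p(1), p(2), p(3)) - aY;
          areaC(p(1), p(2), p(3)) - aC;
          areaM(p(1), p(2), p(3)) - aM];
p0 = [0; pi/6; 40];                            % hypothetical initial guess
opts = optimoptions('fsolve', 'Display', 'off');
p = fsolve(F, p0, opts);                       % one numeric root is returned

The conjugate check that decides between the returned root and its negation is described in Chapter 4, where it is also sketched.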

Development of the Area Equations

To calculate the area of the binary image of each face, the coordinates of the six points that make up the vertices of the triangular faces first had to be established. These points were labeled as shown in Figure 3-5. These coordinates were then converted to the two-dimensional coordinates as seen in the image of each face, taking into account the apparent change in the image as a function of changing camera position. The three-dimensional coordinates of each of the vertices of the target are listed in Table 3-2, and the two-dimensional projections of these coordinates are presented in Table 3-3. The image of the real y coordinate of any of the vertices of the target was affected only by the rotation about the vertical axis of the target, represented by the azimuthal angle. This was the x coordinate of the projected image in the image plane. The image of both the real x and real z coordinates of any vertex of the target was affected by the zenith angle of the camera relative to the center of the target. The orthogonal projection of these two coordinates corresponded to the y coordinate of the projected image of any vertex in the image plane, as illustrated in Figure 3-6:

(3-2)
(3-3)

where i was the number of the point in question. These projected coordinates of the vertices of any given triangular face were then substituted into a matrix determinant:

   A = (1/2) |det [ xi  yi  1 ; xj  yj  1 ; xk  yk  1 ]|     (3-4)

where i, j, and k were the numbers of the points that make up the vertices of any one triangular face [10].
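Equation 3-4 is the standard half-determinant (shoelace) formula for the area of a triangle. A direct implementation is sketched below; the sample coordinates in the usage line are placeholders, since Table 3-3 is not reproduced here.

% Area of a projected triangular face from image-plane vertex coordinates
% (Equation 3-4). P is 3x2, one [x y] row per vertex i, j, k.
function A = faceArea(P)
    D = [P(1,:) 1;
         P(2,:) 1;
         P(3,:) 1];
    A = abs(det(D)) / 2;    % half the absolute value of the determinant
end

For instance, faceArea([120 40; 260 40; 190 160]) returns the enclosed area for three hypothetical vertex projections.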

For example, the area of the yellow face used points one, four, and five (Figure 3-5). Plugging the corresponding values from Table 3-3 into the determinant gave

(3-5)

Evaluation of the determinant yielded the equation

(3-6)

Intuitively, this made sense, as basic geometry tells us that the length of the base and the height of the triangle will change according to the cosine of the angle from which it is viewed. While this formula for the area of the yellow face of the triangle appeared trivial, the formulas for the areas of the remaining faces proved less so.

The resulting formula (Equation 3-6) illustrated how the area of each face of the target changed as the face rotated about its centroid. The faces of the target, however, travelled along a circular path about a vertical axis through the center of the target. This required that the angles be adjusted to accommodate the offset of the face from the center of the target. These modified angles were determined in terms of the original angles of rotation and inclination and the unknown distance between the target and the camera, as shown in Figure 3-7. By basic trigonometry, the following relationships for the modified angles and for the distance R were determined:

(3-7)
(3-8)
(3-9)
(3-10)
(3-11)

These relationships are written in terms of the angle of rotation of each face of the target relative to the x axis of the reference coordinate system, which is shown for each colored side in Figure 3-8.

Having determined these angles, the formulas for the areas of each face, as they changed with the rotations of the target, were determined by replacing the original azimuth and zenith angles with their modified counterparts. Using the yellow side as an example once again, the equation for the area of the face, taking into account not only the rotation of the face but also the fact that the center of the face is not the same as the center of the target, became

(3-12)
(3-13)

Substituting Equation 3-7, Equation 3-9, and Equation 3-10 into Equation 3-13 and simplifying yielded

(3-14)

However, this was still incomplete for the purpose of determining the position of the camera. The image of the target not only changed as a function of angle, but also as a function of the distance between the camera and the target. Basic geometrical optics tells us that as the camera moves farther away from the target, the image of the target will appear smaller. The height of the target varies inversely with the distance between the target and the lens, due to the magnification equation [11]

   m = f / (f − s0)     (3-15)

where m is magnification, f is the focal length of the camera lens, and s0 is the distance from the object to the camera lens. From this relationship, it can be safely stated that the area of the binary image generated will vary according to the proportionality

   A ∝ 1 / R^2     (3-16)

Therefore,

   A = Aref (Rref / R)^2     (3-17)

In other words, the area will vary as a function of the distance from the camera to the target face, scaled by some reference distance Rref. This reference distance was empirically determined to be 25.4 cm. Using the proportionality (Equation 3-17), in combination with Equation 3-14 and Equation 3-11, the formulation for the area of the yellow face was determined to be

(3-18)

The same method was used to generate the formula for the area of each face of the target. These results are listed in Appendix A.
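Although Equation 3-18 itself is not reproduced here, the structure of the model can be illustrated with the simplified centroid-rotation form: the two cosine factors behind Equation 3-6 combined with the inverse-square distance scaling of Equation 3-17. The sketch below encodes that simplified model only; the true-area value A0 is hypothetical, and the corrected per-face equations of Appendix A additionally account for the offset of each face from the target center.

% Simplified projected-area model for one vertical face (illustration only).
A0   = 103.2;                          % true face area (cm^2), hypothetical
Rref = 25.4;                           % reference distance (cm)
areaModel = @(theta, phi, R) A0 .* cos(theta) .* cos(phi) .* (Rref ./ R).^2;
a = areaModel(deg2rad(15), deg2rad(30), 36);   % modeled area at one pose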

Figure 3-1. Potential camera positions represented as a continuous surface

Figure 3-2. Representation of the camera position as the intersection of three surfaces at a single point

Figure 3-3. The color map to which all images were indexed

Figure 3-4. A set of six binary images created from the color image of the target

Table 3-1. Area measurements extracted from the binary images in Figure 3-4
Color of Face    Area (pixels)    Area (cm²)
Red              5797.5           23.3770
Green            5340.5           21.5342
Blue             0                0
Cyan             0                0
Yellow           13863            55.8991
Magenta          39.134           0.1578

Figure 3-5. The labeled vertices and faces of the octahedral target

Table 3-2. The three-dimensional coordinates of the target vertices (real coordinates in three-dimensional space)

Table 3-3. The projected coordinates of the target vertices (projected image coordinates)

Figure 3-6. The y coordinate in the image plane determined by orthogonal projection of the real x and z coordinates of a target vertex

Figure 3-7. Pface is the midpoint of the line segment formed from the intersection of the horizontal plane of the reference coordinate system and the face of the target, and O is the origin of the coordinate system, at the center of the target

Figure 3-8. Top view of the target, indicating the face rotation angle for each colored side

CHAPTER 4
DATA AND DATA ANALYSIS

Recorded Data

Area measurements were taken at forty-eight positions in equal increments about the target, at a radius of 36 cm. Half of these angular positions were then repeated at an increased radius of 48 cm, and then again at a radius of 60 cm. This gave a total of ninety-six camera positions with which to test the performance of the algorithm. Three sets of area measurements were taken at each position, for a total of two hundred eighty-eight data points. The area measurements of the faces extracted from the images at each of these data points were fed into a computer algorithm written in MATLAB to calculate the position of the camera.

Not every data set returned a result. Of the original two hundred eighty-eight data points, two hundred forty-eight returned results. The other forty points, not included in the data listing, did not produce an explicit solution when fed into the computer algorithm. These points occurred mainly at positions where the azimuth angle was either 90° or 270°. Attempts to retake measurements at these points did not yield calculable results. This indicated a problem unique to these camera positions.

All of the positions that gave results are listed in Appendix B. The data table is grouped by azimuth angle, for ease of reading. The calculated Cartesian coordinates are given for each camera position, along with the corresponding spherical coordinates. The differences between the calculated spherical coordinates and the known initial values are then given in the last three columns. These values were used to judge the accuracy of the overall method.
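The summary statistics reported in Table 4-1 can be reproduced from the Appendix B listing with a grouping computation along the following lines; the matrix name and column layout are assumptions that mirror the appendix table.

% Sketch: per-group average and standard deviation of the position errors.
% D is assumed to hold one row per data point with columns
% [rho theta phi x y z rhoCalc thetaCalc phiCalc dRho dTheta dPhi].
rows = D(:,1) == 36 & D(:,2) == 0;      % e.g., rho = 36 cm, theta = 0 deg
devs = D(rows, 10:12);                  % [dRho dTheta dPhi] for the group
avgDev = mean(devs);                    % row of three averages
stdDev = std(devs);                     % row of three standard deviations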

Upon initial inspection, the experimental results listed in Appendix B did not show much promise. It was not until the data was examined in smaller subsets that the conditions under which the algorithm was successful were determined.

Quantitative Analysis

When the data was viewed in its entirety, the radial distance deviated on average by 2.97 cm, the azimuth angle by 4.04°, and the zenith angle by 3.75°. These values alone did not provide a basis for evaluating the performance of the algorithm, as the angle measurements were discovered to vary radically as the camera position was changed. When the experimental results were evaluated based on the target orientation, a pattern emerged to explain the large fluctuations in the calculated results. By analyzing the data in smaller subsets, the behavior of the algorithm became much clearer.

When the data measurements at an azimuth of 0° and a radial distance of 36 cm were analyzed, the average deviations were only 1.22 cm in distance, 1.55° in azimuth, and 2.45° in zenith. This was an example of acceptable performance by the algorithm. When the data measurements at an azimuth of 270° and the same radial distance were analyzed, the average deviations were 3.17 cm in distance, 13.64° in azimuth, and 9.63° in zenith. It was theorized early in the experiment that the images taken from this angle might not give accurate results, but it was somewhat surprising that the algorithm would show deviations this large. Table 4-1 lists the average difference and standard deviation from the known camera positions for blocks of data corresponding to a particular radial distance and azimuth angle.

It was determined that the data with the smallest deviations were those taken at positions where the image exhibited more left-to-right symmetry. As the image became less symmetric, the differences between the calculated camera positions and the known positions became more pronounced. When the data sets taken at radial distances of 48 cm and 60 cm were analyzed, the deviations appeared to follow the same pattern. As the camera position changed from a position normal to a vertical face to a position parallel to a vertical face, the errors in calculation increased.

Generating a Unique Solution

Whereas the perspective-three-point problem can have as many as four realizable solutions, the use of area features produced only two numerically valid solutions when three continuous faces were used in the calculation of the camera position. One was the actual position of the camera. The other was a negation of the actual camera coordinates, or its conjugate. The conjugate coordinates can be thought of as a point of view as if the camera were on the opposite side of the target, and usually below it. To accommodate this possibility in the computer algorithm, the coordinates were first converted from spherical coordinates to Cartesian coordinates. The x and y values were then compared to the known orientation of the target in the reference coordinate system. For example, if the yellow face of the target was visible, then the camera position had to have an azimuth angle between 270° and 90°. This indicated a positive x coordinate for the camera. If the calculated x coordinate was negative, then the conjugate of the actual camera coordinates had been found. The computer algorithm corrected this by negating the conjugate coordinates before recalculating the spherical coordinates. Conditions were established for all possible scenarios to check the x and y coordinates to ensure that they fell within the proper quadrant of the horizontal plane relative to the target. The z coordinate was not used, because at low angles it was possible that the computer algorithm could return a coordinate value that was close to the actual camera position, but below the horizontal plane passing through the center of the target. Therefore, simply negating results that returned a negative value for the z coordinate could have potentially taken a reasonably accurate answer and changed it into a physically unrealizable result based on the target orientation.
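A sketch of this conjugate check is given below. The visibility flag and the mapping of faces to quadrants are hypothetical stand-ins for the full set of conditions described above; phi is treated as elevation above the horizontal plane, matching the test geometry.

% Hedged sketch of the conjugate disambiguation. theta, phi in radians.
[x, y, z] = sph2cart(theta, phi, rho);
% Yellow face visible => azimuth between 270 and 90 deg => x must be > 0.
if visibleYellow && x < 0            % the conjugate root was returned
    x = -x; y = -y; z = -z;          % negate all three coordinates
    [theta, phi, rho] = cart2sph(x, y, z);
end
% Analogous sign conditions on x and y cover the remaining faces; z is
% deliberately not tested, for the reason given in the text.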

The Use of Three Continuous Sides

It was determined that the three sides used to calculate the position of the camera had to be continuous. At low angles, the three dominant areas form a continuous region in the camera image. At higher zenith angles, it was possible for the camera image to include the angled faces on the side of the target opposite the camera. When the red, green, and blue faces of the target began to dominate the visible areas, the algorithm generated incorrect results. This was due to the fact that the three regions representing the possible positions of the camera for the three faces overlapped at two points. One of these points was the actual position of the camera. The other was a point lying inside the target itself. This point was the closest to the origin of the reference coordinate system, and was therefore found first. Due to the nature of the numeric solver in MATLAB, the first solution found is the only one given. In this circumstance, the computer algorithm was adjusted to use the largest two areas and the fourth largest visible area. In a properly taken image, the fourth largest visible face was the face between the largest two. This condition proved sufficient to generate a unique result. If the fourth largest face did not fall between the largest two, then the region used to calculate the camera position remained discontinuous. In this instance, the program terminated without finding a solution, and the resulting error stated simply that an explicit solution could not be found. Visually interpreted, Figure 3-2 is an example of mapping all of the possible camera positions for three continuous faces of the target. If the three faces were not continuous, the two outer surfaces would not intersect; therefore, a single point of intersection of all three surfaces representing the position of the camera would not exist.

This condition had a detrimental effect on obtaining results for those camera positions with an azimuthal angle of 180° and an increasingly steep zenith angle. At these positions, the blue face of the target dominated the image, with the magenta and cyan faces on either side. As the zenith angle increased, the magenta and cyan faces became less visible, as these sides are vertical. At a steep zenith angle, a suitable camera image was unattainable, and therefore no numerical results could be generated.
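One way to encode this selection rule is sketched below. The adjacency test is a hypothetical lookup that would be built from the face layout of Figure 3-5; the thesis does not give its implementation.

% Hedged sketch of the face-selection rule. areas is a 1x6 vector of the
% measured face areas; adjacent(a,b) is a hypothetical lookup that is true
% when faces a and b share an edge on the target.
[~, order] = sort(areas, 'descend');
pick = order(1:3);                        % default: three largest areas
if ~adjacent(order(3), order(1)) && ~adjacent(order(3), order(2))
    % Third face is not continuous with the two largest: try the fourth.
    if adjacent(order(4), order(1)) && adjacent(order(4), order(2))
        pick = [order(1) order(2) order(4)];
    else
        error('No continuous set of three faces: no explicit solution.');
    end
end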

Target Symmetry

These trends indicated that the results generated by the computer algorithm were more accurate when the image of the target exhibited greater left-to-right symmetry. The symmetry of the target image was important, as it was found to be related to the condition that the region used to calculate the camera position needed to be continuous. If the image taken was asymmetrical, then the center of the region used, referred to as the center of perspective [1], did not fall in line with the vertical axis of the target. The computer algorithm assumed that the center of perspective and the center of the target were the same point (or at least very close to the same). Where the image exhibited little left-to-right symmetry, this led to an incorrect calculation of the azimuth angle, which in turn affected the calculation of the zenith angle and the radial distance.

For example, when the target was viewed from an azimuth of 180°, the blue side was dominant and the image was symmetrical from left to right. The symmetry of the target led to accurate results, until the zenith angle was increased to a point where a suitable image could no longer be obtained. As the azimuth angle approached 90° or 270°, the accuracy of the measurements was considerably lower, as the image taken of the target showed the least amount of symmetry. At low zenith angles, only two faces of the target were visible, as in Figure 2-5. These were still used to calculate a position, but the asymmetry of the image led to highly inaccurate results, with deviations in azimuth values in excess of fifteen degrees. At steeper zenith angles, even with clear images, the computer algorithm was not always able to calculate the camera position. At the outset of this project, it was suspected that these azimuth angles might not generate accurate results at low angles, but the extent to which these angles affected the results was not fully realized until the data was analyzed. It was concluded that any position that placed the line of sight of the camera parallel to a vertical face of the target would exhibit the same asymmetry, and would not yield accurate results.

Figure 4-1 is a graph illustrating the correlation between the symmetry of the target image and the calculated angles for those points taken at a radial distance of 48 cm. It can be seen that the calculated results deviate further from the known camera positions as the image becomes less symmetric. At a 0° azimuthal angle the symmetry is high, and the calculated results fall very close to the known angle values. At a 135° azimuthal angle there was less left-to-right symmetry, followed by the image at 45°. At 90° the image was at its least symmetric, which led to the largest deviations in calculated results.

Real-Time Application

The computer algorithm did accurately calculate the position of the camera when given suitable conditions. However, the calculation relied heavily on computer resources, and was found to take as long as eight minutes to produce an answer. The fastest calculations were those in which the camera was close to the target and the image was highly symmetrical, as this presented the smallest search space in which to find a solution. Even under these conditions, the calculation often took as long as a full minute. This being the case, it was determined that the use of area features to determine camera position is not suitable for a real-time application. However, it would be possible to use this method to generate a table of values that cross-references the area values of each face with the position of the camera. From this data, an algorithm could be developed that first correlated the proportions of the areas to each other, as viewed by the camera, to determine the azimuth and zenith angles, and then used the actual area values to calculate the distance from the camera to the target. While this would require an extremely large table of values to determine position with high resolution, it would likely be less taxing on computational resources.
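A sketch of that table-driven alternative is given below. areaModelAll is a hypothetical function returning the modeled areas of all six faces for a given pose, and the grid spacing is arbitrary; the thesis proposes the idea but does not implement it.

% Hedged sketch: precompute an area-signature table, then match a
% measurement by nearest neighbor instead of solving equations online.
[TH, PH, R] = ndgrid(0:2:358, 0:2:75, 20:2:80);   % deg, deg, cm
poses = [TH(:) PH(:) R(:)];
T = zeros(size(poses, 1), 6);
for k = 1:size(poses, 1)
    T(k, :) = areaModelAll(poses(k, 1), poses(k, 2), poses(k, 3));
end
% At run time (fast): find the stored pose closest to the measured areas.
d = sum((T - measuredAreas).^2, 2);               % measuredAreas is 1x6
[~, best] = min(d);
poseEstimate = poses(best, :);                    % [theta phi rho]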

Table 4-1. Differences between the calculated camera position and the known camera position

Known Position           ΔRho (cm)   ΔTheta (°)   ΔPhi (°)
rho = 36 cm, theta = 0°
  Average                1.2203222   1.5549577    2.4508424
  Standard Deviation     1.6240556   1.7082754    3.0563279
rho = 36 cm, theta = 45°
  Average                2.6397294   3.1201859    3.8080933
  Standard Deviation     2.6852609   3.8822934    4.4786806
rho = 36 cm, theta = 90°
  Average                3.9333667   10.3999554   6.5778079
  Standard Deviation     4.1477263   10.5091635   8.7108283
rho = 36 cm, theta = 135°
  Average                1.9666588   3.700479     2.736535
  Standard Deviation     2.3206093   3.9987532    3.2060988
rho = 36 cm, theta = 180°*
  Average                2.29885     0.7150299    5.5655955
  Standard Deviation     2.3797067   0.9101665    5.8514053
rho = 36 cm, theta = 225°
  Average                1.0535778   3.4153505    3.6367662
  Standard Deviation     1.2994299   4.7454785    4.1399939
rho = 36 cm, theta = 270°
  Average                3.1731154   13.6369807   9.6325336
  Standard Deviation     3.8550048   14.0570483   11.7707347
rho = 36 cm, theta = 315°
  Average                2.2911944   2.924257     3.2048007
  Standard Deviation     2.4730788   3.5989299    4.2653136
rho = 48 cm, theta = 0°
  Average                3.6063778   1.4580162    2.0659694
  Standard Deviation     3.7851168   2.211019     2.2833984
rho = 48 cm, theta = 45°
  Average                4.0537176   2.7678519    3.8336338
  Standard Deviation     4.2247261   3.1000073    4.8937589
rho = 48 cm, theta = 90°
  Average                4.62485     9.6804336    2.7772613
  Standard Deviation     4.6721269   10.2245532   3.0894952
rho = 48 cm, theta = 135°
  Average                2.5389444   1.3137132    2.4512426
  Standard Deviation     2.8740663   1.9354358    2.8556876
rho = 60 cm, theta = 0°
  Average                3.1366111   1.7089015    1.2643051
  Standard Deviation     3.3549385   2.2974668    1.4503842
rho = 60 cm, theta = 45°
  Average                5.2494059   3.7799709    5.1900413
  Standard Deviation     5.3491025   4.6715518    6.1226228
rho = 60 cm, theta = 90°
  Average                5.4481125   11.4650605   2.5356993
  Standard Deviation     5.5017783   12.6452809   2.8754669
rho = 60 cm, theta = 135°
  Average                2.7917389   1.7949527    4.2373943
  Standard Deviation     3.131582    2.3339517    4.9633699

*The point at phi = 75° was omitted, as it contained a gross error.

Figure 4-1. Correlation between calculated angles and image symmetry (calculated phi, 0° to 75°, plotted against theta for data series at theta = 0°, 45°, 90°, and 135°)

CHAPTER 5
CONCLUSIONS AND FUTURE WORK

Conclusions

Using only area features extracted from the image of three continuous sides of an octahedral target, an algorithm was developed that successfully determined a unique solution to the location determination problem. The camera position relative to the coordinate system attached to the target was found, provided that the image possessed left-to-right symmetry. Given that symmetry played such a significant role in obtaining accurate results, it was determined that an octahedron was not an ideal candidate for the shape of the target.

Improving Results

It was determined that the symmetry of the target image had a profound effect on the accuracy of the results. Therefore, to obtain better results at all locations, the target must be redesigned to exhibit a greater degree of left-to-right symmetry. The faces of the target should remain triangular if the same methodology is to be used. Though not critical, the colors used to distinguish the faces should remain the same if the same color map is to be used. The target could be modified to a tetradecahedral form by dividing each of the octahedral faces into two angled faces, as shown in Figure 5-1 and Figure 5-2. These faces would then be painted in pairs, thus requiring only the area equations themselves to be modified. A target in this shape would exhibit a higher degree of symmetry, requiring only minimal modifications to the methods used in this project. In theory, this would generate accurate results over the entire hemispherical region surrounding the target, with the possible exception of those points falling very close to the vertical axis passing through the center of the target. In fact, using this modified target, it may prove possible to accurately determine camera positions below the horizontal plane that passes through the center of the target.

Figure 5-1. Top view of the proposed tetradecahedral target

Figure 5-2. Isometric view of the proposed tetradecahedral target

PAGE 47

APPENDIX A
AREA EQUATIONS USED TO CALCULATE CAMERA POSITION


APPENDIX B
EXPERIMENTAL DATA


Table B-1. Experimental data showing the differences between the known and calculated camera position.

Rho (cm)  Theta (°)  Phi (°)  X (cm)  Y (cm)  Z (cm)  Rho* (cm)  Theta* (°)  Phi* (°)  ΔRho (cm)  ΔTheta (°)  ΔPhi (°)
36  0  0  35.1689  0.0743  3.7691  35.1689  0.121046  6.117106  0.8311  0.121046  6.117106
36  0  0  35.1663  0.2535  3.7526  35.3668  0.413016  6.090833  0.6332  0.413016  6.090833
36  0  0  35.1711  0.3871  3.7163  35.369  0.630583  6.031326  0.631  0.630583  6.031326
36  0  15  34.9959  1.0261  8.0753  35.9302  1.679464  12.98817  0.0698  1.679464  2.011828
36  0  15  35.0595  0.8496  8.085  35.9897  1.388182  12.98216  0.0103  1.388182  2.017838
36  0  15  35.0576  0.8591  8.0831  35.9876  1.403774  12.97981  0.0124  1.403774  2.020189
36  0  30  30.7264  1.0214  18.355  35.8059  1.903912  30.83886  0.1941  1.903912  0.83886
36  0  30  30.7259  0.9774  18.3928  35.8236  1.821981  30.89235  0.1764  1.821981  0.892351
36  0  30  30.8185  1.111  18.302  35.8606  2.064606  30.68822  0.1394  2.064606  0.688223
36  0  45  23.2929  0.6619  25.83  34.7877  1.627701  47.94509  1.2123  1.627701  2.945085
36  0  45  23.3766  0.7205  25.8443  34.8556  1.765378  47.8566  1.1444  1.765378  2.8566
36  0  45  23.289  0.7153  25.8581  34.807  1.759234  47.97891  1.193  1.759234  2.978905
36  0  60  15.5319  0.2623  29.8336  33.6356  0.967509  62.49437  2.3644  0.967509  2.494371
36  0  60  15.5699  0.234  29.8262  33.6464  0.861033  62.43185  2.3536  0.861033  2.431845
36  0  60  15.7819  0.7417  29.7912  33.7214  2.690743  62.06142  2.2786  2.690743  2.061423
36  0  75  8.3186  0.3445  32.0504  33.1142  2.371448  75.43813  2.8858  2.371448  0.438132
36  0  75  8.3392  0.3946  32.0884  33.1566  2.70914  75.41651  2.8434  2.70914  0.416507
36  0  75  8.102  0.2561  31.9966  33.0074  1.810487  75.78374  2.9926  1.810487  0.783741
36  45  0  24.7117  23.895  1.6919  34.4165  44.03739  2.817764  1.5835  0.962605  2.817764
36  45  0  24.2893  23.66  0.5693  33.913  44.24808  0.961874  2.087  0.751923  0.961874
36  45  0  24.335  23.7253  0.3964  33.9889  44.27318  0.668236  2.0111  0.726824  0.668236
36  45  15  22.8375  21.9804  11.9249  33.8658  43.90441  20.61717  2.1342  1.095594  5.617171
36  45  15  22.7203  21.9593  11.8598  33.7502  44.02421  20.57292  2.2498  0.975789  5.572916
36  45  15  22.5584  21.7537  11.9119  33.5261  43.95963  20.81198  2.4739  1.040367  5.811983
36  45  30  18.4834  17.6413  20.1487  32.5395  43.66462  38.25822  3.4605  1.335376  8.258224
36  45  30  18.909  17.9225  20.0301  32.8629  43.46575  37.5537  3.1371  1.534251  7.553695
36  45  30  18.8271  17.8327  20.0058  32.752  43.44623  37.64924  3.248  1.553772  7.649241
36  45  45  15.7394  19.3396  22.0072  33.2576  50.85982  41.43119  2.7424  5.859818  3.56881
36  45  45  15.7579  19.3116  22.2453  33.4081  50.78621  41.74872  2.5919  5.786206  3.251279
36  45  60  10.9952  13.814  28.1135  33.1977  51.4821  57.87067  2.8023  6.4821  2.129325
36  45  60  10.8124  13.6984  27.8271  32.8466  51.71532  57.90654  3.1534  6.715316  2.093459
36  45  60  10.8153  13.7134  27.8127  32.8416  51.73834  57.87313  3.1584  6.738337  2.126866
36  45  75  6.4977  7.367  31.8218  33.3034  48.58768  72.84506  2.6966  3.58768  2.154939
36  45  75  6.5163  7.45  31.7927  33.2978  48.82475  72.70758  2.7022  3.824748  2.292421
36  45  75  6.4653  7.4565  31.8635  33.3569  49.07246  72.79062  2.6431  4.072455  2.209381
36  90  0  5.8706  28.5619  7.5846  30.1293  78.38521  14.58023  5.8707  11.61479  14.58023
36  90  0  5.8863  28.1947  7.6759  29.8079  78.20755  14.92252  6.1921  11.79245  14.92252
36  90  0  5.9019  28.0356  7.8534  29.707  78.11199  15.32907  6.293  11.88801  15.32907
36  90  15  5.8722  32.4446  3.6532  33.1734  79.741  6.322467  2.8266  10.259  8.677533
36  90  15  5.8722  32.4205  3.7092  33.1562  79.73354  6.423163  2.8438  10.26646  8.576837
36  90  15  5.8743  32.4105  3.7213  33.1481  79.72685  6.445777  2.8519  10.27315  8.554223
36  90  30  4.3679  28.357  16.9474  33.3229  81.24341  30.56942  2.6771  8.756593  0.56942
36  90  45  2.596  21.7166  24.5392  32.8713  83.18322  48.29013  3.1287  6.816784  3.290132
36  90  60  2.2806  14.6931  28.3536  32.0158  81.1772  62.32683  3.9842  8.822802  2.326832
36  90  75  1.4903  7.7065  31.6321  32.5918  79.05512  76.06395  3.4082  10.94488  1.063953
36  90  75  1.7141  8.461  31.3058  32.4742  78.54754  74.58328  3.5258  11.45246  0.416718
36  90  75  1.8015  8.5398  31.2041  32.4017  78.08791  74.37376  3.5983  11.91209  0.62624
36  135  0  24.0587  26.051  1.4274  35.4896  132.7232  2.30507  0.5104  2.27681  2.30507
36  135  0  23.8878  26.4009  1.1605  35.6228  132.1391  1.866883  0.3772  2.860888  1.866883
36  135  0  23.9043  26.3954  1.0144  35.6253  132.1647  1.631669  0.3747  2.835265  1.631669
36  135  15  25.131  23.5245  9.1199  35.6109  136.8911  14.83866  0.3891  1.891096  0.161344
36  135  15  25.1313  23.4851  9.1516  35.5933  136.9394  14.89899  0.4067  1.939352  0.101013
36  135  30  21.5585  19.2293  19.6437  34.9344  138.2683  34.21522  1.0656  3.268344  4.215219
36  135  30  21.1495  19.4328  19.4705  34.6992  137.4223  34.13344  1.3008  2.422264  4.133443
36  135  30  21.2493  19.507  19.3575  34.7386  137.4479  33.8647  1.2614  2.447859  3.864695
36  135  45  15.9879  14.0702  25.6446  33.3351  138.6505  50.29082  2.6649  3.65051  5.29082
36  135  45  16.0648  14.194  25.491  33.3068  138.5379  49.93731  2.6932  3.537902  4.937315
36  135  45  15.9573  14.2335  25.513  33.2888  138.2679  50.03304  2.7112  3.267861  5.033041
36  135  60  11.1215  9.572  29.1727  32.6551  139.2822  63.29822  3.3449  4.282236  3.298216
36  135  60  11.1099  9.5559  29.1764  32.6497  139.3004  63.33138  3.3503  4.30036  3.331381
36  135  60  11.2223  9.5699  29.1938  32.7078  139.5439  63.19717  3.2922  4.543866  3.197167
36  135  75  6.2741  5.0184  31.8351  32.8333  141.3451  75.83603  3.1667  6.345092  0.836034
36  135  75  6.1974  5.0693  31.7926  32.7853  140.7178  75.86467  3.2147  5.717793  0.864667
36  135  75  6.0605  4.6806  31.7813  32.6908  142.3206  76.45312  3.3092  7.320642  1.453118
36  180  0  33.6842  0.6062  3.0722  33.8295  181.031  5.21046  2.1705  1.031016  5.21046
36  180  0  33.6538  0.5873  3.0941  33.8009  180.9998  5.25216  2.1991  0.99978  5.25216
36  180  0  33.6064  0.6046  3.0872  33.7533  181.0307  5.247813  2.2467  1.030675  5.247813
36  180  15  32.8353  0.8608  10.8074  34.5789  181.5017  18.21258  1.4211  1.501704  3.212582
36  180  15  32.8301  0.8588  10.8975  34.6022  181.4985  18.35702  1.3978  1.498454  3.357017
36  180  15  32.8493  0.834  10.7343  34.5687  178.5456  18.0906  1.4313  1.454351  3.090601
36  180  30  26.1837  0.0783  21.17  33.6714  180.1713  38.95605  2.3286  0.171337  8.956047
36  180  30  26.44  0.0963  20.5838  33.5079  179.7913  37.90092  2.4921  0.208682  7.900924
36  180  30  26.4612  0.0953  20.5293  33.5034  179.7937  37.80508  2.4966  0.20635  7.805078
36  180  45  20.6752  0.0837  25.5448  32.8635  179.768  51.01409  3.1365  0.231951  6.014093
36  180  45  21.0013  0.0377  25.2175  32.8173  180.1029  50.2122  3.1827  0.102853  5.2122
36  180  45  20.9251  0.0523  25.4097  32.9168  180.1432  50.52817  3.0832  0.143204  5.528172
36  180  75  4.8657  4.0562  30.7675  31.4128  140.1844  78.36607  4.5872  39.81565  3.366071
36  225  0  23.4283  25.6067  4.799  35.0374  227.5437  7.872433  0.9626  2.543716  7.872433
36  225  0  23.5612  25.7611  4.7525  35.2328  227.5538  7.752172  0.7672  2.553845  7.752172
36  225  0  23.4997  25.725  4.8959  35.185  227.5884  7.99852  0.815  2.588407  7.99852
36  225  15  24.6224  25.7364  7.6106  36.4218  226.2672  12.06126  0.4218  1.267248  2.938741
36  225  15  24.5668  25.8049  7.6334  36.4375  226.408  12.09265  0.4375  1.408005  2.90735
36  225  15  24.614  25.7118  7.7014  36.4178  226.2496  12.20871  0.4178  1.249644  2.791287
36  225  30  21.1492  22.0787  19.2782  36.1443  226.2318  32.23333  0.1443  1.231802  2.233329
36  225  30  21.1892  22.111  19.2668  36.1813  226.2196  32.17499  0.1813  1.219562  2.174986
36  225  30  21.2421  22.0176  19.3153  36.1812  226.027  32.26585  0.1812  1.027009  2.265849
36  225  45  16.4391  16.5144  26.1634  35.0356  225.1309  48.31103  0.9644  0.130923  3.311032
36  225  45  16.352  16.4915  26.1319  34.9605  225.2434  48.37171  1.0395  0.243358  3.371709
36  225  45  16.457  16.6388  25.9914  34.9748  225.3147  48.00018  1.0252  0.314731  3.000175
36  225  60  9.7916  11.8487  31.0524  34.6485  230.4301  63.66452  1.3515  5.430149  3.664523
36  225  60  9.8108  11.9574  31.0466  34.686  230.6318  63.51798  1.314  5.631828  3.51798
36  225  60  9.7866  11.953  30.9831  34.6208  230.6909  63.49888  1.3792  5.690865  3.498884
36  225  75  4.4009  6.5341  32.5185  33.4592  236.0386  76.38183  2.5408  11.03863  1.381833
36  225  75  4.3324  5.8438  32.7416  33.54  233.448  77.47339  2.46  8.447985  2.473392
36  225  75  4.2708  5.9783  32.6218  33.4389  234.4586  77.3076  2.5611  9.458605  2.307597
36  270  0  5.9237  26.528  9.8645  28.916  282.5876  19.94657  7.084  12.58764  19.94657
36  270  0  5.9231  26.547  9.8553  28.9301  282.5777  19.917  7.0699  12.57768  19.917
36  270  0  5.9107  26.6638  9.8179  29.0222  282.4989  19.77271  6.9778  12.49893  19.77271
36  270  15  5.9094  32.7737  1.1955  33.3236  280.2211  2.055951  2.6764  10.22113  12.94405
36  270  15  5.903  32.7666  1.1738  33.3147  280.2125  2.019156  2.6853  10.21246  12.98084
36  270  15  5.9046  32.7624  1.3061  33.3159  280.2165  2.246775  2.6841  10.21645  12.75322
36  270  30  5.3902  32.0471  13.7203  35.2748  279.5476  22.88942  0.7252  9.547568  7.110584
36  270  45  5.8001  25.3781  22.8185  34.6175  282.8737  41.23587  1.3825  12.8737  3.764133
36  270  45  5.8003  25.3031  22.8643  34.5929  282.911  41.37268  1.4071  12.91101  3.627318
36  270  60  5.2673  17.3424  28.4824  33.7602  286.8948  57.52953  2.2398  16.89479  2.470471
36  270  75  3.3201  10.0439  32.2899  33.9785  288.2918  71.8608  2.0215  18.29176  3.139196
36  270  75  3.4832  10.1032  32.1838  33.9117  289.0223  71.63101  2.0883  19.02225  3.368991
36  270  75  3.5502  10.0671  32.0375  33.7914  289.4254  71.57215  2.2086  19.42539  3.42785
36  315  0  25.2183  24.5105  0.1873  35.1676  315.8154  0.305154  0.8324  0.815447  0.305154
36  315  0  25.2484  24.373  1.2378  35.1149  316.0107  2.020094  0.8851  1.010683  2.020094
36  315  0  25.1826  24.3786  1.0575  35.0656  315.9294  1.728173  0.9344  0.929392  1.728173
36  315  15  23.4195  22.0825  13.1785  34.7819  316.6831  22.2649  1.2181  1.683056  7.264898
36  315  15  23.1984  21.9396  13.3568  34.6109  316.5974  22.70042  1.3891  1.59744  7.700424
36  315  15  23.1939  21.8683  13.4781  34.6099  316.685  22.91904  1.3901  1.684991  7.919042
36  315  30  19.2906  19.1316  20.1924  33.8509  315.2371  36.62044  2.1491  0.237102  6.620445
36  315  30  19.24  19.2231  20.0346  33.7801  315.0252  36.3766  2.2199  0.025175  6.376597
36  315  30  19.3754  19.21  20.1479  33.9171  315.2456  36.44381  2.0829  0.245602  6.443805
36  315  45  15.16  17.7846  22.8248  32.6663  310.445  44.32485  3.3337  4.554967  0.675154
36  315  45  15.0185  17.3342  23.0645  32.5269  310.9059  45.16088  3.4731  4.094062  0.160876
36  315  45  15.0381  17.322  23.0662  32.5307  310.9629  45.15846  3.4693  4.037107  0.15846
36  315  60  11.0354  13.0878  28.0479  32.8596  310.137  58.6018  3.1404  4.863009  1.398203
36  315  60  10.9372  13.0469  28.0532  32.815  309.9731  58.74746  3.185  5.026938  1.252544
36  315  60  11.0103  13.1071  28.328  33.0983  310.0311  58.85647  2.9017  4.96888  1.143528
36  315  75  7.5598  6.1668  31.7033  33.1704  320.7946  72.89535  2.8296  5.794641  2.104648
36  315  75  7.5517  6.2282  31.5925  33.0742  320.4862  72.7848  2.9258  5.486184  2.215204
36  315  75  7.5653  6.2182  31.6372  33.1182  320.5819  72.80084  2.8818  5.581949  2.199163
48  0  0  44.7054  0.8201  0.799  44.7201  1.050947  1.023741  3.2799  1.050947  1.023741
48  0  0  44.6848  0.7937  1.0268  44.7037  1.017592  1.316145  3.2963  1.017592  1.316145
48  0  0  44.7051  0.6778  1.0132  44.7217  0.868628  1.298185  3.2783  0.868628  1.298185
48  0  15  44.5625  0.6563  12.6647  46.3319  0.84377  15.86356  1.6681  0.84377  0.863561
48  0  15  44.4268  0.9613  13.204  46.3574  1.23956  16.54871  1.6426  1.239563  1.548708
48  0  15  44.6221  0.3733  12.2638  46.2782  0.47931  15.36705  1.7218  0.479314  0.367051
48  0  30  37.5866  0.2331  24.5505  44.8947  0.355325  33.15092  3.1053  0.355325  3.150925
48  0  30  37.4468  0.0899  24.6576  44.836  0.137552  33.36364  3.164  0.137552  3.363635
48  0  30  37.6257  0.1347  24.5468  44.925  0.205118  33.12004  3.075  0.205118  3.120041
48  0  45  29.2397  0.045  32.5228  43.7343  0.088178  48.04277  4.2657  0.088178  3.042766
48  0  45  29.2762  0.2113  32.5116  43.751  0.413523  47.9967  4.249  0.413523  2.996704
48  0  45  28.9032  0.0021  32.7834  43.7052  0.00416  48.59927  4.2948  0.004163  3.599267
48  0  60  19.8082  0.3635  37.5363  42.4437  1.05132  62.17506  5.5563  1.051316  2.175056
48  0  60  19.614  0.6748  37.8333  42.6207  1.97043  62.58247  5.3793  1.970427  2.582474
48  0  60  19.5931  0.5291  37.8527  42.6262  1.54686  62.62474  5.3738  1.546862  2.624742
48  0  75  12.4012  1.0394  42.3637  44.1537  4.791018  73.62942  3.8463  4.791018  1.370581
48  0  75  12.4271  1.0593  42.3441  44.1427  4.872179  73.58802  3.8573  4.872179  1.411978
48  0  75  12.3587  1.1484  42.358  44.139  5.308816  73.66811  3.861  5.308816  1.331889
48  45  0  33.0585  30.9159  3.9597  45.435  43.08179  4.999723  2.565  1.918206  4.999723
48  45  0  34.2812  30.9394  5.4821  46.5027  42.06682  6.770217  1.4973  2.93318  6.770217
48  45  0  34.507  31.0036  4.9541  46.6529  41.93883  6.095758  1.3471  3.061169  6.095758
48  45  15  29.5314  27.1906  18.5564  44.2241  42.63686  24.80928  3.7759  2.363138  9.809282
48  45  15  29.96  27.8539  17.8627  44.6376  42.9137  23.58889  3.3624  2.086303  8.588894
48  45  15  29.713  27.5258  18.1897  44.4004  42.81169  24.18436  3.5996  2.188313  9.184361
48  45  30  25.766  28.2262  20.5433  43.3893  47.60893  28.25942  4.6107  2.608926  1.740577
48  45  30  25.7608  28.2713  20.3952  43.3457  47.66023  28.06833  4.6543  2.660228  1.93167
48  45  45  20.9829  24.2981  29.2691  43.4438  49.18738  42.35514  4.5562  4.187378  2.644863
48  45  45  20.2685  24.5237  29.6212  43.47  50.42674  42.9545  4.53  5.426738  2.045502
48  45  45  21.1909  24.8508  28.7439  43.5066  49.54494  41.35163  4.4934  4.544941  3.648371
48  45  60  14.5157  16.0712  36.718  42.6287  47.91127  59.46804  5.3713  2.911275  0.531962
48  45  60  14.4334  15.9532  36.7518  42.5854  47.86329  59.65654  5.4146  2.863289  0.343458
48  45  60  14.0552  16.8492  36.8733  42.9079  50.16594  59.2448  5.0921  5.165943  0.755197
48  45  75  8.8677  9.2038  41.5236  43.446  46.06548  72.89198  4.554  1.065484  2.10802
48  45  75  8.758  8.7314  41.3654  43.1745  44.91286  73.35507  4.8255  0.087142  1.644927
48  45  75  8.9696  9.2824  41.3691  43.3362  45.98183  72.67101  4.6638  0.98183  2.328993
48  90  0  5.9822  41.7902  3.825  42.3891  81.85354  5.17715  5.6109  8.146455  5.17715
48  90  0  5.905  43.1248  2.9118  43.6245  82.20308  3.827162  4.3755  7.796919  3.827162
48  90  0  5.9876  41.9426  3.1581  42.4853  81.87552  4.262946  5.5147  8.12448  4.262946
48  90  15  5.8965  42.6388  10.2821  44.2556  82.12653  13.43455  3.7444  7.87347  1.565449
48  90  15  5.8915  42.7217  9.9308  44.2546  82.1482  12.96767  3.7454  7.851802  2.032327
48  90  45  5.0338  29.187  31.7988  43.4556  80.21462  47.03371  4.5444  9.785378  2.03371
48  90  45  4.9922  29.1541  31.9029  43.5049  80.2832  47.16519  4.4951  9.7168  2.165191
48  90  75  3.7291  11.3768  41.3326  43.0316  71.85183  73.84585  4.9684  18.14817  1.154155
48  135  0  30.8145  35.3348  0.9356  46.893  131.0908  1.143229  1.107  3.909219  1.143229
48  135  0  30.7029  35.8815  0.5548  47.2277  130.5528  0.673088  0.7723  4.447225  0.673088
48  135  0  30.6861  35.7911  0.7009  47.1501  130.6087  0.85175  0.8499  4.39131  0.85175
48  135  15  32.3954  31.6708  12.4799  46.992  135.648  15.40112  1.008  0.647998  0.401117
48  135  15  32.7053  31.5283  12.5212  47.1216  136.0498  15.40978  0.8784  1.049756  0.409781
48  135  15  32.5472  31.5959  12.542  47.063  135.8497  15.45575  0.937  0.849686  0.455747
48  135  30  27.5509  27.0145  25.1205  46.042  135.5632  33.06557  1.958  0.563223  3.065573
48  135  30  27.6421  27.0362  25.1243  46.1115  135.6349  33.01506  1.8885  0.634879  3.015063
48  135  30  27.5696  26.9134  25.1779  46.0254  135.69  33.1644  1.9746  0.690044  3.164398
48  135  45  20.4059  20.4016  34.1127  44.68  135.006  49.77278  3.32  0.006037  4.772777
48  135  45  20.715  20.3879  34.0233  44.7478  135.456  49.49375  3.2522  0.455955  4.493753
48  135  45  20.6858  20.3115  33.9848  44.6702  135.5231  49.53422  3.3298  0.523088  4.534219
48  135  60  13.6652  13.4832  39.4448  43.8683  135.3841  64.04844  4.1317  0.384099  4.048441
48  135  60  14.3612  14.0021  39.2761  44.1012  135.7254  62.94754  3.8988  0.725367  2.947542
48  135  60  13.8757  13.981  39.2787  43.9411  134.7834  63.36678  4.0589  0.21658  3.366778
48  135  75  6.8981  6.7761  42.8558  43.9331  135.5112  77.28533  4.0669  0.511174  2.285327
48  135  75  6.9674  6.7424  42.8683  43.9511  135.9402  77.25573  4.0489  0.940232  2.255732
48  135  75  7.1531  6.5086  42.6984  43.7799  137.701  77.23805  4.2201  2.700965  2.238049
60  0  0  57.5148  0.2315  2.0963  57.5535  0.23062  2.087376  2.4465  0.230617  2.087376
60  0  0  57.5381  0.3732  1.9464  57.5722  0.37162  1.937423  2.4278  0.371623  1.937423
60  0  0  57.543  0.2438  1.9734  57.5774  0.24275  1.964134  2.4226  0.242751  1.964134
60  0  15  56.8814  0.3471  15.3658  58.9213  0.34962  15.11665  1.0787  0.349624  0.11665
60  0  15  56.7855  0.2165  15.6305  58.8978  0.21844  15.38973  1.1022  0.218444  0.389734
60  0  15  56.7514  0.8846  14.3649  58.5478  0.893013  14.2027  1.4522  0.893013  0.797302
60  0  30  49.3933  0.4854  28.9514  57.2548  0.563041  30.37508  2.7452  0.563041  0.375076
60  0  30  49.5047  0.7446  28.8279  57.2916  0.861721  30.21056  2.7084  0.861721  0.21056
60  0  30  49.8205  0.7652  28.5839  57.4431  0.879945  29.8416  2.5569  0.879945  0.158397
60  0  45  38.469  1.2763  41.2411  56.4121  1.900226  46.97607  3.5879  1.900226  1.976068
60  0  45  38.7097  1.3832  41.1926  56.5436  2.046459  46.76161  3.4564  2.046459  1.761605
60  0  45  38.4946  1.4318  41.2341  56.4281  2.130125  46.94817  3.5719  2.130125  1.948168
60  0  60  26.3466  0.8934  49.1691  55.7902  1.942127  61.80226  4.2098  1.942127  1.80226
60  0  60  26.4193  0.9859  49.3004  55.9418  2.137139  61.79725  4.0582  2.137139  1.79725
60  0  60  26.4253  0.767  49.2222  55.8722  1.662556  61.76049  4.1278  1.662556  1.760491
60  0  75  15.3297  1.2279  52.9653  55.1528  4.579581  73.80902  4.8472  4.579581  1.190978
60  0  75  15.3979  1.3023  52.9868  55.1941  4.83437  73.7414  4.8059  4.83437  1.258601
60  0  75  15.3522  1.3207  52.9501  55.1466  4.916865  73.77458  4.8534  4.916865  1.225419
60  45  0  40.2692  39.3483  4.6046  56.4898  44.33731  4.675483  3.5102  0.662685  4.675483
60  45  0  40.3573  39.364  5.0044  56.5976  44.28615  5.072765  3.4024  0.713849  5.072765
60  45  0  40.3752  39.2876  4.1776  56.49  44.21782  4.241058  3.51  0.782185  4.241058
60  45  15  36.6458  34.764  22.3598  55.2396  43.49048  23.87725  4.7604  1.509516  8.877255
60  45  15  36.5183  34.4676  22.8211  55.1579  43.34525  24.44003  4.8421  1.654748  9.440029
60  45  15  37.0321  34.5696  23.0213  55.6455  43.03028  24.43836  4.3545  1.96972  9.438364
60  45  30  30.7021  27.6865  34.0527  53.5607  42.04347  39.4777  6.4393  2.956532  9.477699
60  45  30  30.7084  27.7255  34.0569  53.5871  42.07773  39.46023  6.4129  2.922265  9.460225
60  45  30  30.681  27.6291  34.1238  53.5642  42.00392  39.57319  6.4358  2.996078  9.573194
60  45  45  25.4739  30.1739  36.8688  54.0249  49.82773  43.03466  5.9751  4.82773  1.965345
60  45  45  25.3964  30.3445  36.7305  53.9898  50.07283  42.86891  6.0102  5.072829  2.131087
60  45  45  25.2637  30.0783  36.8572  53.8647  49.97205  43.177  6.1353  4.972055  1.822996
60  45  60  18.8115  21.0529  46.4356  54.3448  48.2181  58.70034  5.6552  3.218102  1.299659
60  45  60  18.6937  21.7926  45.9439  54.1776  49.377  57.99736  5.8224  4.376996  2.002638
60  45  60  18.2221  22.2712  45.6767  53.9853  50.71027  57.78947  6.0147  5.710271  2.210532
60  45  75  9.863  14.1906  52.1909  54.9777  55.19925  71.67916  5.0223  10.19925  3.320836
60  45  75  9.9458  14.0546  52.3018  55.0629  54.7147  71.77846  4.9371  9.714697  3.221537
60  90  0  5.8947  54.0448  2.8641  54.4407  83.77532  3.015696  5.5593  6.22468  3.015696
60  90  0  5.8936  53.2542  4.3051  53.752  83.68482  4.593848  6.248  6.31518  4.593848
60  90  0  5.8853  54.2003  2.3385  54.569  83.80286  2.456105  5.431  6.197141  2.456105
60  90  30  7.636  46.7057  29.2065  55.6125  80.71476  31.68027  4.3875  9.285243  1.680271
60  90  45  9.0296  37.1584  39.4664  54.9534  76.34169  45.90436  5.0466  13.65831  0.904363
60  90  45  7.328  35.3561  42.4019  55.6927  78.29052  49.5838  4.3073  11.70948  4.583798
60  90  60  7.0137  24.157  47.4628  53.7166  73.80997  62.07708  6.2834  16.19003  2.077079
60  90  75  5.5675  13.6834  51.6054  53.6782  67.85958  74.02557  6.3218  22.14042  0.974435
60  135  0  39.6082  41.4802  5  57.5709  133.6775  4.982377  2.4291  1.322491  4.982377
60  135  0  41.0999  41.4891  5.341  58.6436  134.73  5.225486  1.3564  0.270004  5.225486
60  135  0  39.591  41.6021  4.9689  57.6444  133.5811  4.94499  2.3556  1.418889  4.94499
60  135  15  39.0436  41.0172  15.1006  58.6075  133.5879  14.93104  1.3925  1.412127  0.068956
60  135  15  38.9677  41.0474  15.123  58.584  133.5111  14.95987  1.416  1.488859  0.040127
60  135  15  38.9568  40.6999  13.8346  58.013  133.7464  13.7965  1.987  1.253584  1.203504
60  135  30  33.3565  35.1222  32.7066  58.4461  133.523  34.02833  1.5539  1.477025  4.028331
60  135  30  34.0873  35.6747  30.9214  58.2302  133.6965  32.07437  1.7698  1.303512  2.074371
60  135  30  33.9898  35.5566  31.1317  58.2131  133.7094  32.32959  1.7869  1.290589  2.329594
60  135  45  25.0797  25.1334  44.7868  57.1536  134.9387  51.59335  2.8464  0.061274  6.593351
60  135  45  25.0243  25.15  44.8765  57.207  134.8565  51.67062  2.793  0.143541  6.670623
60  135  45  25.6738  23.6315  45.7425  57.5324  137.3719  52.66232  2.4676  2.371922  7.662322
60  135  60  15.9931  15.2068  51.4007  55.938  136.4437  66.76394  4.062  1.443663  6.763945
60  135  60  16.4435  14.2462  51.8143  56.1967  139.0952  67.22282  3.8033  4.095236  7.222822
60  135  60  17.2508  16.8972  50.7011  56.1579  135.5933  64.53279  3.8421  0.593272  4.53279
60  135  75  6.6356  7.3981  54.4681  55.3673  131.89  79.65985  4.6327  3.110022  4.659854
60  135  75  7.6625  6.4636  54.1159  55.0366  139.8511  79.50534  4.9634  4.85114  4.505341
60  135  75  7.6138  8.8838  53.9523  55.2064  130.598  77.76431  4.7936  4.401999  2.764314

*Indicates a value generated by the algorithm, to be compared with the known camera position.
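The starred columns in Table B-1 are the spherical coordinates recovered from the calculated Cartesian position (X, Y, Z): rho is the distance from the target origin, theta the azimuth measured in the horizontal plane, and phi the elevation above that plane, which is consistent with the tabulated values. A minimal sketch of that conversion follows; this is illustrative code, not the thesis's implementation.

    import math

    def cartesian_to_spherical(x, y, z):
        """Convert a camera position to (rho, theta, phi) in cm and degrees.

        rho: distance from the target origin; theta: azimuth in the
        horizontal (x-y) plane; phi: elevation above that plane.
        """
        rho = math.sqrt(x * x + y * y + z * z)
        # Wrap the azimuth into [0, 360) to match the table's convention.
        theta = math.degrees(math.atan2(y, x)) % 360.0
        phi = math.degrees(math.asin(z / rho))
        return rho, theta, phi

    # Example from a Table B-1 trial at the known position (36 cm, 0, 15):
    # X = 34.9959, Y = 1.0261, Z = 8.0753.
    print(cartesian_to_spherical(34.9959, 1.0261, 8.0753))
    # -> approximately (35.93, 1.68, 12.99), matching Rho*, Theta*, Phi*.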


BIOGRAPHICAL SKETCH

Joseph P. Osentowski was born in Jacksonville, Florida. In June of 1996, he received his bachelor's degree from Jacksonville University with majors in philosophy and physics and a minor in mathematics. Having worked for many years as a union stagehand and sound engineer, as a high school teacher in Duval County, and for the Jacksonville Museum of Contemporary Art, he enrolled at the University of Florida, receiving his master's degree in mechanical engineering in the summer of 2012. He is currently pursuing a career in the area of robotics, with a focus on biomimetic designs.