
Image Segmentation and Object Tracking for Micro Air Vehicles


IMAGE SEGMENTATION AND OBJECT TRACKING FOR A MICRO AIR VEHICLE

By

TED L. BELSER II

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2006


Copyright 2005 by Ted L. Belser II

ACKNOWLEDGMENTS

I thank Dr. Dapeng Wu for his role as my supervisory committee chair. I thank Drs. Michael Nechyba and Eric Schwartz for the semesters of challenging coursework, which no doubt increases the value of my degree. I thank the AVCAAF team for giving me access to their data and research papers. I thank the Intel Corp. for making available the Open Source Computer Vision (OpenCV) library. I thank my family and friends for their support while I attended the University of Florida. I thank Aime Baum for being my best friend and for putting up with the late nights of work required to finish this thesis.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES

CHAPTER

1 INTRODUCTION
    Problem Definition
    Approach
        Three Processes to Achieve Object Tracking
        Assumptions

2 A GRAPH-BASED SEGMENTATION ALGORITHM
    Functional and Performance Requirements for Segmentation
    Graph-Based Segmentation
        The Comparison Predicate
        The Segmentation Algorithm
        Qualitative Analysis
        Parameters of the Segmentation Algorithm

3 THE LUCAS KANADE FEATURE TRACKING ALGORITHM
    The Lucas Kanade Correspondence Algorithm
    The Pyramidal Implementation of the Lucas Kanade Correspondence Algorithm
        The Residual Function
        Functional and Performance Requirements
        The Pyramid Representation
        Pyramidal Feature Tracking
        Parameters of the Pyramidal Feature-tracking Algorithm

4 SYSTEM INTEGRATION
    System Overview
    The Mediator Design Pattern

    System Timing and the Observer Design Pattern
    System Interaction

5 SYSTEM PERFORMANCE ANALYSIS
    System Performance Requirements
    Computational Complexity of Each Algorithm
    Analysis and Discussion
        Description of the Test Video Sequence
        Preprocessing
    Method
        Environment
        The Segmentation Algorithm
        Pyramidal Lucas-Kanade Feature Tracker
        Coupling of Algorithms

6 CONCLUSION AND FUTURE WORK
    Future Work for the Pyramidal Implementation of the Lucas Kanade Feature Tracker
    Future Work for the Segmentation Algorithm

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

5-1 CPU Time for the Graph-Based Segmentation Algorithm
5-2 CPU Time for the Pyramidal Implementation of the Lucas Kanade Feature-tracking Algorithm

LIST OF FIGURES

2-1 Graph-based Image Segmentation Results
4-1 The System Overview
5-1 Graph-based Segmentation Output
5-2 Segmentation Algorithm Performance
5-3 Results from the Pyramidal Lucas-Kanade Feature Tracker
5-4 The Performance of the Coupled Segmentation and Tracking Algorithms

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

IMAGE SEGMENTATION AND OBJECT TRACKING FOR A MICRO AIR VEHICLE

By

Ted L. Belser II

May 2006

Chair: Dapeng Oliver Wu
Major Department: Electrical and Computer Engineering

This thesis describes a system that can perform object tracking in video produced by a camera mounted on a micro air vehicle (MAV). The goal of the system is to identify and track an object in full motion video while running in real time on modest hardware (in this case a Pentium III running at 800 MHz with 512 MB RAM). To achieve this goal, two vision processing algorithms are coupled. A graph-based segmentation algorithm is used to identify individual objects in the image by discriminating between regions of similar color and texture. A pyramidal implementation of the Lucas-Kanade feature tracker is used to track features in the video. Running at a lower frequency than the tracking algorithm, the segmentation algorithm labels the features according to the corresponding object. By tracking the labeled features, the Lucas-Kanade feature tracker tracks the objects in the video.

Analysis and experimentation show that the pyramidal implementation of the Lucas-Kanade tracker is both efficient and robust. The system performance, however, is dominated by the performance of the segmentation algorithm.

The segmentation algorithm, while capable of meeting the functional requirements of the system, requires two to three times more processing power than the feature tracking algorithm requires.

The system described in this thesis is capable of meeting the requirements for object tracking on a MAV platform. The analysis suggests that the pyramidal implementation of the Lucas-Kanade tracker is an essential component of the MAV platform due to its efficiency and robust performance. The analysis also suggests a direction for improvement. While the segmentation algorithm was able to fulfill the requirements, it did so at a high computational cost. One possible direction for future work is to improve the performance of the segmentation process.

CHAPTER 1
INTRODUCTION

Problem Definition

In its mission statement, the AVCAAF (Active Vision for Control of Agile Autonomous Flight) group at the University of Florida's Machine Intelligence Laboratory describes the potential missions of a MAV (Micro Air Vehicle). The potential missions include search and rescue, moving-target tracking, immediate bomb damage assessment, and identification and localization of interesting ground structures. To achieve this mission statement the MAV platform utilizes a number of instruments. As is evident in the group's name, the AVCAAF is focused on vision processing systems for flight vehicle control. The primary instrument for vision-based control is the video camera. The video camera, coupled with a computer running sophisticated vision processing algorithms, forms a versatile system capable of performing functions such as automatic attitude control, object recognition, and object tracking. Examples of attitude control and object recognition applications are discussed in AVCAAF's research papers [1, 2, 3, 4]. Object tracking, however, is not discussed in these papers and is the focus of this thesis.

To track an object it must first be identified. Object identification solutions are not trivial and can arguably be called a central problem in computer vision. The problem of what makes an object an object was once a question for philosophers, but in computer vision it is a question for engineers. In many ways, engineers have not answered this question. Our systems are capable of identifying specific, narrowly defined classes of objects, but there is no general solution to object identification.

This paper does not attempt to solve this problem; however, understanding it helps to put the problem of object identification and tracking into context.

In image processing, segmentation is the partitioning of a digital image into multiple regions according to a criterion. Segmentation can be applied to identify objects in a scene if a suitable segmentation criterion can be defined to identify objects of interest. In their paper "Intelligent Missions for MAVs: Visual Contexts for Control, Tracking and Recognition" [2], Todorovic and Nechyba discuss an object segmentation algorithm. While this algorithm performs object segmentation efficiently, it is computationally excessive to use the same algorithm for object tracking. Once an object is acquired, the tracking problem has constraints, such as spatial and temporal locality, that reduce its complexity. Furthermore, an object can be expected to maintain its shape and appearance in a sequence of frames. Given these constraints, the problem is to design and implement an object tracking system that meets the following requirements:

1. The system should locate objects of interest.
2. The system should track the object(s) of interest through sequential frames in video.
3. The system should run in real time on standard hardware (Pentium III 800 MHz).

Approach

Three Processes to Achieve Object Tracking

The approach taken to solve this problem is to divide the tracking task into three processes. The first process identifies and enumerates objects in the image. The second process identifies significant features on each of the objects. The third process is the correspondence of each feature between adjacent frames in a video sequence. Object tracking is possible by defining which objects own which features and then tracking those features over a sequence of frames.

This method is described in the remaining chapters. First, each of the processes used is described in detail. Chapter 2 describes the segmentation process. Chapter 3 describes the feature extraction process and the feature tracking process. The system organization and implementation are discussed in Chapter 4. Chapter 5 is an evaluation of the system's performance, including limitations of the system and how the parameters of the individual processes govern the performance of the system as a whole. Finally, in Chapter 6 suggestions for future research are proposed. A conceptual sketch of this three-process decomposition follows the assumptions below.

Assumptions

The segmentation, feature extraction and feature tracking processes do not need to run with equal effort. The feature tracking process should run frequently in order to accurately track features from frame to frame, while the segmentation and feature extraction processes may run less frequently. By making reasonable assumptions, the segmentation and feature extraction processes can occur at a frequency significantly less than the frame rate of the video sequence.

The essential mechanism of the feature tracking process is correspondence. Correspondence identifies the offset between a specific feature in two frames. These two frames may represent images from cameras that differ in time and/or space. In the case of a moving camera, the two frames differ in time and space. By using a correspondence mechanism, feature tracking is possible. By associating features with objects it is therefore possible to track an entire object. A few assumptions, however, must be made:

1. An object to be tracked has identifiable features.
2. These features can be tracked by a correspondence mechanism over a sequence of frames.
3. Each feature instance belongs to only one object during the period of time in which the tracking occurs.
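The sketch below illustrates the decomposition in code form. It is purely conceptual: every type and function name in it (Frame, Feature, segmentObjects, extractFeatures, trackFeatures, runPipeline) is an invented placeholder rather than part of the system described in later chapters.

```cpp
#include <cstddef>
#include <vector>

// Illustrative placeholders only; the real components are described in Chapters 2-4.
struct Frame {};
struct Feature { float x = 0, y = 0; int objectId = -1; };

static std::vector<int> segmentObjects(const Frame&) { return {}; }                                // Chapter 2
static std::vector<Feature> extractFeatures(const Frame&, const std::vector<int>&) { return {}; }  // Chapter 3
static void trackFeatures(const Frame&, const Frame&, std::vector<Feature>&) {}                    // Chapter 3

void runPipeline(const std::vector<Frame>& video, std::size_t segmentEvery) {
    std::vector<Feature> features;
    for (std::size_t i = 1; i < video.size(); ++i) {
        // Correspondence runs every frame so that inter-frame displacements stay small.
        trackFeatures(video[i - 1], video[i], features);

        // Segmentation and feature extraction run at a much lower rate; the object
        // labels assigned here are carried forward by the tracked features.
        if (i % segmentEvery == 0) {
            std::vector<int> labels = segmentObjects(video[i]);
            features = extractFeatures(video[i], labels);
        }
    }
}
```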

CHAPTER 2
A GRAPH-BASED SEGMENTATION ALGORITHM

Functional and Performance Requirements for Segmentation

In [5] a graph-based image segmentation algorithm is defined. The segmentation algorithm is designed to meet the following requirements:

1. The algorithm should capture perceptually different regions of an image.
2. The algorithm should account for local as well as global information while partitioning the image regions.
3. The algorithm should be efficient, running in time nearly linear in the number of image pixels.

Graph-Based Segmentation

In Felzenszwalb and Huttenlocher [5] a graph-based approach is presented. An image is represented as an undirected graph G = (V, E), where V represents the vertices of the graph and corresponds one-to-one with the pixels in the image, and E represents the edges between pixels. In the actual implementation, E contains an edge for every pair of pixels in the four-connected adjacency of the image. Each edge in E is given a non-negative weight w((v_i, v_j)) that is a measure of the dissimilarity between the pixels belonging to vertices v_i and v_j. In the implementation, this dissimilarity is the distance between the color values of v_i and v_j in the RGB color space. The objective is to produce a segmentation S composed of components C. Each component is induced by a set of edges E' \subseteq E between vertices representing pixels of low dissimilarity, as defined by a comparison predicate.
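As an illustration of this graph construction, the sketch below builds the four-connected edge set for a row-major RGB image and assigns each edge the Euclidean distance between the two pixels in RGB space. It is a simplified reading of the description above, not the published implementation of [5]; the Rgb and Edge types and the function names are assumptions made for the example.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// One RGB pixel and one weighted edge between two pixel indices.
struct Rgb  { std::uint8_t r, g, b; };
struct Edge { int a, b; float w; };

// Dissimilarity: Euclidean distance in RGB space, as described in the text.
static float weight(const Rgb& p, const Rgb& q) {
    float dr = float(p.r) - float(q.r);
    float dg = float(p.g) - float(q.g);
    float db = float(p.b) - float(q.b);
    return std::sqrt(dr * dr + dg * dg + db * db);
}

// Build E for a width x height image stored row-major: one edge to the right
// neighbour and one to the neighbour below gives the four-connected adjacency.
std::vector<Edge> buildEdges(const std::vector<Rgb>& img, int width, int height) {
    std::vector<Edge> edges;
    edges.reserve(2 * img.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            if (x + 1 < width)  edges.push_back({i, i + 1,     weight(img[i], img[i + 1])});
            if (y + 1 < height) edges.push_back({i, i + width, weight(img[i], img[i + width])});
        }
    }
    return edges;
}
```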

The Comparison Predicate

Requirement 2 above describes a need to take into account global and local information when forming a component in the segmentation. The comparison predicate described by Felzenszwalb and Huttenlocher [5] is designed to meet this requirement. The predicate is designed to measure dissimilarity between components relative to the internal dissimilarity within each component. This technique is capable of identifying regions that have little internal dissimilarity while also identifying regions where there is great internal dissimilarity.

The predicate is defined in terms of two quantities: the minimum internal difference of two components C_1 and C_2, and the difference between components C_1 and C_2. The minimum internal difference captures the global information of the two components and forms the basis on which the predicate decides if the two components are actually different. The minimum internal difference is defined as

    MInt(C_1, C_2) = \min( Int(C_1) + \tau(C_1), Int(C_2) + \tau(C_2) ),

where the internal difference Int is the largest weight in the minimum spanning tree MST(C, E) of the component,

    Int(C) = \max_{e \in MST(C, E)} w(e).

The difference between the components C_1 and C_2 is the minimum weight among the edges connecting the two components,

    Diff(C_1, C_2) = \min_{v_i \in C_1, v_j \in C_2, (v_i, v_j) \in E} w((v_i, v_j)).

By comparing the difference between the components with their minimum internal difference, a predicate D(C_1, C_2) can be defined. If D(C_1, C_2) is true, the edge (v_i, v_j), with v_i \in C_1 and v_j \in C_2, forms a boundary between components C_1 and C_2; otherwise C_1 and C_2 belong to the same component. Formally,

    D(C_1, C_2) = True if Diff(C_1, C_2) > MInt(C_1, C_2); False otherwise.

This formulation has only one parameter, k. The parameter k is coupled with the system by way of the threshold function \tau. The threshold function \tau(C) defines by how much the difference between the components must exceed the lesser of the two component internal differences. A higher value decreases the likelihood that two components will be declared different and therefore encourages larger components. The function also serves to minimize error due to small component sizes in the early stages of the computation. Specifically, it scales a constant k by the inverse of the component size:

    \tau(C) = k / |C|.

The parameter k determines the granularity of the segmentation. Larger values of k produce larger components and therefore fewer components per image. Smaller values of k produce smaller components and therefore more components per image.

The Segmentation Algorithm

Felzenszwalb and Huttenlocher [5] apply the predicate using the following algorithm:

1. Calculate the weight for all edges in E.
2. Sort E into non-decreasing order by edge weight, resulting in the sequence of edges o_1, ..., o_m.
3. Assign all vertices in V one-to-one to components C_1, ..., C_r, so that each vertex belongs to its own component.
4. For q = 1 to m, do the following:

    a. Let o_q = (v_i, v_j) be the q-th edge in the ordering, where v_i belongs to component C_i and v_j belongs to component C_j.
    b. If C_i and C_j are different components and D(C_i, C_j) is false, then merge C_i and C_j; otherwise do nothing.
5. Return S = (C_1, ..., C_n), where n is the number of components remaining.

This algorithm runs in O(m log m) time, where m denotes the number of edges in E.

Qualitative Analysis

Figure 2-1 shows an image and its segmentation using this technique. Regions of little dissimilarity, such as the dark region to the left of the butterfly, are properly segmented. More interesting is the left side of the leaf on which the butterfly sits. This region shows dissimilarity in the form of dark and light ridges formed by the veins in the leaf. This region is segmented as a single component despite large internal dissimilarities.

Figure 2-1. Graph-based Image Segmentation Results. A) Original image. B) Segmented image labeled in randomly chosen colors.

A close inspection of the upper wing reveals much smaller speckles of white between the larger dots of white. The dark speckled background of the wing is segmented as a single component. Also evident from this picture is that the algorithm does not favor an orientation; it is capable of identifying regions of any orientation. This is an important requirement for the MAV platform, where the image orientation changes with the pitch and roll of the aircraft.

Parameters of the Segmentation Algorithm

The author of [5] also published C++ code implementing this segmentation algorithm. This implementation accepts the following parameters:

I_size - This quantity represents the size of the input image. As discussed above, the algorithm runs in O(I_size log I_size) time. This parameter is the only parameter that affects performance.

k - This quantity represents the threshold used to perform the segmentation. It affects the size of the resulting components and therefore the total number of components. Larger values of k produce larger components and therefore fewer components. Smaller values of k produce smaller components and therefore more components. This parameter does not affect the algorithm's performance.

C_min-size - This quantity defines the minimum size of the components produced by the segmentation. Any component smaller than the minimum size is merged with an adjacent component. This process is performed after the segmentation is completed. This parameter can improve performance if its value is 1, in which case the merging step can be skipped.
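To tie the pieces of this chapter together, the following is a simplified reconstruction of the merge loop: a union-find structure tracks the components, Int(C) and |C| are maintained per component, and the edges are processed in non-decreasing weight order using the predicate with τ(C) = k/|C|. It is a sketch based on the description above and on [5], not the published code; the Edge and Components types are illustrative, and the optional minimum-size post-processing is omitted.

```cpp
#include <algorithm>
#include <vector>

struct Edge { int a, b; float w; };   // endpoints (pixel indices) and weight

// Union-find with per-component size |C| and internal difference Int(C).
class Components {
public:
    explicit Components(int n) : parent(n), compSizes(n, 1), internal(n, 0.0f) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) { while (parent[x] != x) x = parent[x] = parent[parent[x]]; return x; }
    void merge(int a, int b, float w) {           // w becomes Int of the merged component
        a = find(a); b = find(b);
        if (compSizes[a] < compSizes[b]) std::swap(a, b);
        parent[b] = a;
        compSizes[a] += compSizes[b];
        internal[a] = w;                           // valid because edges arrive in non-decreasing order
    }
    int   compSize(int x) { return compSizes[find(x)]; }
    float compInt (int x) { return internal[find(x)]; }
private:
    std::vector<int> parent, compSizes;
    std::vector<float> internal;
};

// Segment a graph of numPixels vertices; k is the granularity parameter.
Components segment(std::vector<Edge> edges, int numPixels, float k) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& e1, const Edge& e2) { return e1.w < e2.w; });
    Components comps(numPixels);
    for (const Edge& e : edges) {
        int c1 = comps.find(e.a), c2 = comps.find(e.b);
        if (c1 == c2) continue;
        // MInt(C1, C2) = min(Int(C1) + tau(C1), Int(C2) + tau(C2)) with tau(C) = k / |C|.
        float mint = std::min(comps.compInt(c1) + k / comps.compSize(c1),
                              comps.compInt(c2) + k / comps.compSize(c2));
        // D(C1, C2) is false (no boundary), so the components are merged.
        if (e.w <= mint) comps.merge(c1, c2, e.w);
    }
    return comps;
}
```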

CHAPTER 3
THE LUCAS KANADE FEATURE TRACKING ALGORITHM

The Lucas Kanade Correspondence Algorithm

Lucas and Kanade [6] describe an algorithm for registering like features in a pair of stereo images. The registration algorithm attempts to locate a feature identified in one image with the corresponding feature in another image. Despite stereo vision being the motivation for the algorithm, there are other applications for this technique. Tracking motion between frames of a single motionless camera is simply the correspondence of features through time. In the case of a moving camera, feature tracking is the correspondence of features in images differing in time and space. The two situations are analogous to the stereo application in that the problem is finding the offset of a feature in one image to the same feature in the second image.

The algorithm solves the following problem. Two images that are separated by space or time or both (within a small amount of time or space) have corresponding features. These features exist somewhere in space relative to the camera. As the camera moves, its position relative to these features changes. This change in position is reflected as a movement of the feature on the image plane of the camera. This algorithm identifies the movement (the offset) of a feature between two sequential images.

Lucas and Kanade describe this more formally [6]: define the position of the feature of interest in image A as the vector x, and define the position of the same feature in image B as x + h. The problem is to find the vector h. The algorithm works by searching image B for a best match with the feature in image A. The best match is defined as the feature in image B that differs the least from the feature in image A.

An exhaustive search of image B for the feature is impractical and would fail to exploit the locality constraint that is likely to exist. Lucas and Kanade identify two aspects of the searching algorithm:

1. The method used to search for the minimal difference.
2. The algorithm used to calculate the value of the difference.

Lucas and Kanade point out that these two aspects are loosely coupled, so an implementation can be realized through any combination of searching and differencing algorithms.

Lucas and Kanade's approach uses the spatial intensity gradient of the image to find the value of h. The process is iterative and resembles the Newton-Raphson method, in which accuracy increases with each iteration. If the algorithm converges, it converges in O(M^2 log N) time, where N^2 is the size of the image and M^2 is the size of the region of possible values of h.

Lucas and Kanade first describe their solution in the one-dimensional case. Let image A be represented by the function F(x) and image B be represented as G(x) = F(x + h). Lucas and Kanade's solution depends on a linear approximation of F(x) in the neighborhood of x. For small h,

    G(x) = F(x + h) \approx F(x) + h F'(x),                                (1)

    h \approx [G(x) - F(x)] / F'(x).                                       (2)

In other words, by knowing the rate of change of intensity F'(x) around x and the difference in intensity between F(x) and G(x), the offset h can be determined. This approach assumes linearity and will only work for small distances where there are no local minima in the error function.

Lucas and Kanade suggest that by smoothing the image, minima produced by noise in the image can be eliminated.

Equation (2) is exact if G(x) = F(x + h). To find the value of h for which this is true, the possible values of h need to be explored and a best match determined. To identify this best match the following error function is defined:

    E = \sum_x [F(x + h) - G(x)]^2.                                        (3)

The value of h can be determined by minimizing (3):

    0 = \partial E / \partial h
      \approx \partial / \partial h \sum_x [F(x) + h F'(x) - G(x)]^2
      = \sum_x 2 F'(x) [F(x) + h F'(x) - G(x)],                            (4)

    h \approx \sum_x F'(x) [G(x) - F(x)] / \sum_x F'(x)^2.                 (5)

These approximations rely on the linearity of F(x) around x. To reduce the effects of non-linearity, Lucas and Kanade propose weighting the error function more strongly where there is linearity and less strongly where there is not. In other words, where the second derivative F''(x) is high, the contribution of that term to the sum should be smaller. The following expression approximates F''(x):

    F''(x) \approx [G'(x) - F'(x)] / h.                                    (6)

Recognizing that this weight will be used in an average, the constant factor 1/h can be dropped, and the weighting function can be

    w(x) = 1 / |G'(x) - F'(x)|.                                            (7)

Including this weighting function (7), the iterative form of (5) becomes

    h_0 = 0,
    h_{k+1} = h_k + \sum_x w(x) F'(x + h_k) [G(x) - F(x + h_k)]
                    / \sum_x w(x) F'(x + h_k)^2.                           (8)

Equation (8) describes the iterative process where each new value of h adds onto the previous value. The weighting function serves to increase the accuracy of the approximation by filtering out the cases where the linearity assumption is invalid. This in turn speeds up the convergence. Iterations continue until the value of the error function falls below a threshold.
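Before the one-dimensional case is generalized below, the iteration of equation (8) can be written out directly. The following sketch is illustrative only: it uses central differences for F', nearest-neighbour sampling, a small constant to guard against division by zero, and a fixed iteration count in place of an error threshold; none of these choices come from the original paper.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Central-difference estimate of the derivative of a sampled 1-D signal.
static double deriv(const std::vector<double>& s, double x) {
    if (x < 1.0) return 0.0;
    std::size_t i = static_cast<std::size_t>(x);
    if (i + 1 >= s.size()) return 0.0;
    return 0.5 * (s[i + 1] - s[i - 1]);
}

// Nearest-neighbour sampling keeps the sketch short; interpolation would be usual.
static double sample(const std::vector<double>& s, double x) {
    if (x < 0.0) return 0.0;
    std::size_t i = static_cast<std::size_t>(x + 0.5);
    return i < s.size() ? s[i] : 0.0;
}

// Estimate h such that G(x) is approximately F(x + h) over [x0, x1), per equation (8).
double estimateShift(const std::vector<double>& F, const std::vector<double>& G,
                     std::size_t x0, std::size_t x1, int iterations = 10) {
    double h = 0.0;                                              // h_0 = 0
    for (int k = 0; k < iterations; ++k) {
        double num = 0.0, den = 0.0;
        for (std::size_t x = x0; x < x1; ++x) {
            double xd = static_cast<double>(x);
            double fp = deriv(F, xd + h);                        // F'(x + h_k)
            double w  = 1.0 / (std::fabs(deriv(G, xd) - deriv(F, xd)) + 1e-6);  // equation (7)
            num += w * fp * (G[x] - sample(F, xd + h));
            den += w * fp * fp;
        }
        if (den < 1e-12) break;
        h += num / den;                                          // equation (8)
    }
    return h;
}
```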

Lucas and Kanade describe how the one-dimensional case can be generalized to an n-dimensional case. Similar to the one-dimensional case, the objective is to minimize the error function

    E = \sum_{x \in R} [F(x + h) - G(x)]^2,                                (9)

where x and h are n-dimensional row vectors. The one-dimensional linear approximation in equation (1) becomes

    G(x) \approx F(x) + h (\partial F / \partial x),                       (10)

where \partial / \partial x is the gradient operator with respect to x. Using this multi-dimensional approximation, Lucas and Kanade minimize E:

    0 = \partial E / \partial h
      \approx \partial / \partial h \sum_x [F(x) + h (\partial F / \partial x) - G(x)]^2
      = \sum_x 2 (\partial F / \partial x)^T [F(x) + h (\partial F / \partial x) - G(x)].   (11)

Solving for h produces

    h = [ \sum_x (\partial F / \partial x)^T (\partial F / \partial x) ]^{-1}
        \sum_x (\partial F / \partial x)^T [G(x) - F(x)].                  (12)

The Lucas Kanade method described above works for a translation of a feature. Recognizing this, they generalize their algorithm even further by accounting for an arbitrary linear transformation such as rotation, shear and scaling. This is achieved by inserting a linear transformation matrix A into the equations. Equation (1) becomes G(x) = F(Ax + h), and the error functions (3) and (9) become

    E = \sum_x [F(Ax + h) - G(x)]^2,                                       (13)

resulting in a system of linear equations to be solved simultaneously.

Because of the linearity assumption, tracking large displacements is difficult. Smoothing the image removes the high frequency components, which makes the linearity assumption more valid and allows for a larger range of convergence. Smoothing the image, however, removes information from the image. Another method for tracking large displacements is the pyramidal method. This method, described by Bouguet [7], makes use of a coarse-to-fine pyramid to refine the search for h. Details of how it works are explained in the next section. The pyramidal approach makes use of a pyramid of images, each containing the information from the source image represented at a graduated degree of resolution, from coarse to fine.

The Pyramidal Implementation of the Lucas Kanade Correspondence Algorithm

The Open Source Computer Vision Library (OpenCV), sponsored by Intel Corporation, is a library written in the C programming language that contains a Lucas-Kanade feature-tracking algorithm. The OpenCV implementation makes use of the pyramid method suggested by Lucas and Kanade in their original paper.

The Residual Function

The OpenCV mathematical formalization differs slightly from the Lucas Kanade formalization. Described by Bouguet [7], the formulation is as follows. Let A(x, y) and B(x, y) represent the images between which the feature correspondence should be determined. OpenCV defines the residual function, analogous to the error function (9), as

    \epsilon(d) = \epsilon(d_x, d_y)
                = \sum_{x = u_x - w_x}^{u_x + w_x} \sum_{y = u_y - w_y}^{u_y + w_y}
                  [A(x, y) - B(x + d_x, y + d_y)]^2.                       (14)

This equation defines a neighborhood of size (2 w_x + 1) \times (2 w_y + 1). While the Lucas-Kanade algorithm defines a region of interest over which the error function should be summed, the OpenCV implementation is more specific and defines a small integration window in terms of w_x and w_y.

Functional and Performance Requirements

The pyramidal algorithm is designed to meet two important requirements for a practical feature tracker:

1. The algorithm should be accurate. The object of a tracking algorithm is to find the displacement of a feature between two different images. An inaccurate algorithm would defeat the purpose of the algorithm in the first place.

2. The algorithm should be robust. It should be insensitive to variables that are likely to change in real-world situations, such as variation in lighting, the speed of image motion, and patches of the image moving at different velocities.

In addition to these requirements, in practice, the algorithm should meet a performance requirement:

3. The algorithm should be computationally inexpensive. The purpose of tracking is to identify the motion of features from frame to frame, so the algorithm generally will run at a frequency equal to the frame rate. Most vision systems perform a series of processing functions to meet specific goals, and functions that run at a frequency equal to the frame rate of the source video need to use as little of the system resources as possible.

In the basic Lucas Kanade algorithm there is a tradeoff between the accuracy and robustness requirements. In order to have accuracy, a small integration window ensures that the details in the image are not smoothed out. Preventing the loss of detail is especially important for boundaries in the image that demarcate occluding regions; the regions are potentially moving at different velocities. For the MAV application this is clearly an important requirement due to the velocity of the camera and the potential difference in velocity with objects to be tracked. On the other hand, to have a robust algorithm, by definition, it must work under many different conditions. Conditions where there are large displacements between images are common for the MAV platform and warrant a larger integration window. This apparent conflict defines a zero-sum tradeoff between accuracy and robustness. The solution to meeting each requirement without a counterproductive compromise is to define a mechanism that decouples one requirement from the other. The method that succeeds in doing this is the pyramidal approach.

The Pyramid Representation

The pyramid representation of an image A(x, y) is a collection of images recursively derived from A(x, y). The images are organized into pyramid levels L_1, ..., L_m, where the original image is level L_0. Each image in the pyramid, increasing in level, is a downsampling of the previous level: L_0 is downsampled to produce L_1, L_1 is downsampled to produce L_2, and so on up to L_m. For example, an image of size 360x240 with L_m = 3 produces a pyramid of three derived images with dimensions 180x120, 90x60 and 45x30 pixels at levels L_1, L_2 and L_3 respectively.

Pyramidal Feature Tracking

The goal of feature tracking is to identify the displacement of a feature from one image to another. In the pyramidal approach, this displacement vector is computed for each level of the pyramid [7]. Computing the displacement vector d^L = [d_x^L, d_y^L]^T is a matter of minimizing the following residual function:

    \epsilon(d^L) = \epsilon(d_x^L, d_y^L)
                  = \sum_{x = u_x^L - w_x}^{u_x^L + w_x} \sum_{y = u_y^L - w_y}^{u_y^L + w_y}
                    [A^L(x, y) - B^L(x + g_x^L + d_x^L, y + g_y^L + d_y^L)]^2.   (15)

Note that this residual function is similar to equation (14) but differs by the term g^L = [g_x^L, g_y^L]^T. This term represents the initial guess used to seed the iterative function. The calculation starts at the highest level of the pyramid with g^{L_m} = [0, 0]^T. Using this guess, the displacement vector d^{L_m} is found by minimizing equation (15). This d is then used to find the next g using the expression

    g^{L-1} = 2 (g^L + d^L).                                               (16)

The final displacement, found by minimizing the residual function (15) for each level in the pyramid, is

    d = \sum_{L = 0}^{L_m} 2^L d^L.                                        (17)

The advantage of the pyramid implementation is that a small integration window can be used to meet the accuracy requirement while the pyramidal approach provides a robust method to track large displacements. The size of the maximum detectable displacement depends on the number of levels in the pyramid. The gain over the maximum detectable displacement of the underlying Lucas Kanade algorithm is

    Gain_{d_max} = 2^{L_m + 1} - 1.                                        (18)

For a three-level pyramid, this produces a gain of 15 times the largest displacement detectable by the underlying Lucas Kanade step.

Parameters of the Pyramidal Feature-tracking Algorithm

Applying the pyramidal algorithm using the OpenCV C-language library is a matter of choosing the best values of the following parameters for the particular application:

I_size - This quantity represents the source image size, specifically the size of level L_0 in the pyramid representation. Larger images have greater detail and therefore can produce more accurate results. Larger images also require more pyramid levels to track feature displacement. These extra pyramid levels can contribute to more CPU time. Each pyramid level represents a standard Lucas Kanade calculation.

Q_F - This quantity represents which features are selected for tracking. In terms of tracking quality, the upper (1 - Q_F) x 100% of features are selected for tracking. The method for determining tracking quality is defined by Bouguet, Shi and Tomasi [7, 8].

N_F - This quantity represents the number of features to track. If the number of features in an image meeting the constraint defined by Q_F is greater than N_F, only the best N_F features are selected for tracking [8].

W - This quantity is equivalent to the values w_x and w_y in equation (14). It controls the size of the integration window and therefore determines the accuracy of the tracking algorithm.

L_m - This quantity represents the number of pyramid levels used for the image. As described above, this value determines the maximum detectable displacement of a feature. It also determines how many times the Lucas Kanade algorithm is performed for each feature.
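A minimal sketch of how these parameters map onto OpenCV calls is given below. It uses the modern C++ interface (cv::goodFeaturesToTrack and cv::calcOpticalFlowPyrLK) rather than the C API that was current when this work was done, so the argument names and the example values are assumptions taken from later OpenCV documentation. The rough correspondence is: maxCorners to N_F, qualityLevel to Q_F, winSize to the (2w+1)-sized integration window, and maxLevel to L_m.

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <cstddef>
#include <vector>

// Track features from the previous grayscale frame into the current one.
void trackFrame(const cv::Mat& prevGray, const cv::Mat& gray,
                std::vector<cv::Point2f>& points)
{
    if (points.empty()) {
        // (Re)select features to track, as cvGoodFeaturesToTrack did in the C API.
        cv::goodFeaturesToTrack(prevGray, points,
                                /*maxCorners=*/200,      // ~ N_F
                                /*qualityLevel=*/0.01,   // ~ Q_F
                                /*minDistance=*/5.0);
    }

    std::vector<cv::Point2f> next;
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, gray, points, next, status, err,
                             /*winSize=*/cv::Size(7, 7), // small window for accuracy (~ 2w+1)
                             /*maxLevel=*/3);            // pyramid depth for large motion (~ L_m)

    // Keep only the points that were successfully tracked.
    std::vector<cv::Point2f> kept;
    for (std::size_t i = 0; i < next.size(); ++i)
        if (status[i]) kept.push_back(next[i]);
    points.swap(kept);
}
```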

CHAPTER 4
SYSTEM INTEGRATION

System Overview

The following describes an object-oriented system framework in which vision processing components are easily interconnected to form a vision-processing pipeline. Typical vision-processing pipelines involve a number of transformations, filters and samplers connected in a meaningful way to perform a task.

Figure 4-1. The System Overview.

The system described by this thesis is formed from the following components:

1. Video source (a camera or file) - The OpenCV library provides functionality to capture video from a file or a live camera.
2. Feature extraction component - The feature extraction component is described in Chapter 3 as part of the feature tracking system.
3. Image segmentation component - The segmentation component is described in Chapter 2.
4. Optical flow component - The optical flow component is described in Chapter 3.

5. Video player / video recorder - The OpenCV library provides functionality to write a video stream to a file or to display it on the computer monitor.

The Mediator Design Pattern

By its nature, video processing is a demanding application for which efficient data management is essential. Furthermore, the design and experimentation processes require that any vision system be made of reusable, decoupled, extensible and manageable components. To meet these requirements, the system described in this thesis utilizes the mediator design pattern [9].

The mediator design pattern defines a mediator class that serves as a means to decouple a number of colleague classes. The mediator encapsulates the interaction of the colleague classes to the extent that they no longer interact directly. All interaction happens through the mediator. The primary role of the mediator in this implementation is to distribute video frames from the output of one component to the inputs of other components. Each component inherits from a colleague class, which implements an interface that allows the mediator to interact with the individual component. When a component is initialized, it can subscribe to the output of another component. Any frames produced by the source component will automatically be communicated to the subscribing component.

The following describes the typical message exchange between a colleague and the mediator. When the colleague changes its internal state in a way that other colleagues should know about, it communicates that change in state to the mediator class. In this particular implementation, the colleague class notifies the mediator whenever the colleague has produced a frame of video. The mediator then grabs the frames from the output of the source colleague class and distributes them to the inputs of the colleagues subscribing to the output of the source class.

This mediator architecture is designed for expansion. For example, the mediator class could implement a frame cache for an asynchronous system. Another possible improvement is the addition of a configuration file for runtime configuration of the vision system. The system would simply parse the configuration file and, based on its contents, connect the components in the correct order, all without writing a line of code or recompiling.

System Timing and the Observer Design Pattern

A central clock controls the timing of the entire system. The clock is implemented using the observer design pattern. The purpose of this design pattern is to allow a number of objects to observe another object, the subject. Whenever the state of the subject changes, all subscribing objects, the observers, are notified. In this implementation, the subject is an alarm clock. Any class inheriting the observer class CWatch can subscribe to the clock. As a parameter of the subscribe method call, the observing class passes a time period in milliseconds. The clock will then notify the observer periodically according to the time period. In this implementation, the clock is useful when capturing data from a live camera. The clock also makes possible multiple frequencies within the system. For example, a feature-tracking algorithm can run at a rate of 30 Hz while an image segmentation algorithm can run at a different rate, such as 1 Hz.

System Interaction

Figure 4-1 illustrates how the components of the system are organized. The feature-tracking algorithm runs in parallel with the serial combination of the segmentation algorithm and the feature-extracting algorithm.

This organization is necessary to meet the performance requirements of the system. Segmenting an image at the full 30 Hz frame rate is both computationally expensive and unnecessary. Instead, the segmentation can happen at a much lower frequency while the feature tracking algorithm runs at the full 30 Hz. The mediator pattern described earlier makes implementing a parallel architecture practical and easy. Furthermore, the system timing functionality provided by the clock allows for multiple frequencies within the system.

The current implementation is not multi-threaded, so this configuration does not currently make full use of the benefits provided by the architecture. For example, the segmentation algorithm may run for one second. In this second, the processor never passes control to the clock object, and therefore the more frequent feature tracking updates are never made. In a multi-threaded environment, however, this blocking would not occur. The low frequency segmentation algorithm could run concurrently with the higher frequency feature-tracking algorithm for the minor cost of thread management. The analysis in Chapter 5, System Performance, assumes that the system runs in a multi-threaded environment.
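The following sketch illustrates the wiring described in this chapter: components register with a mediator that forwards produced frames to their subscribers, and a clock notifies observers once their requested periods have elapsed, so the tracker can run at 30 Hz while the segmenter runs at 1 Hz. Apart from the CWatch name taken from the text, every class and method name is invented for illustration and does not reproduce the thesis code.

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Frame {};   // stand-in for an image buffer

// Colleague interface: the mediator pushes frames produced by one component
// into the inputs of the components subscribed to it.
class Colleague {
public:
    virtual ~Colleague() = default;
    virtual void receiveFrame(const Frame& f) = 0;
};

class Mediator {
public:
    void subscribe(Colleague* source, Colleague* sink) { subs[source].push_back(sink); }
    // Called by a colleague when it has produced a new frame.
    void framePublished(Colleague* source, const Frame& f) {
        for (Colleague* sink : subs[source]) sink->receiveFrame(f);
    }
private:
    std::map<Colleague*, std::vector<Colleague*>> subs;
};

// Observer interface for the clock, named CWatch as in the text.
class CWatch {
public:
    virtual ~CWatch() = default;
    virtual void tick(std::uint64_t nowMs) = 0;
};

// Central clock: each observer is notified when its own period has elapsed,
// allowing different components to run at different frequencies.
class Clock {
public:
    void subscribe(CWatch* obs, std::uint64_t periodMs) { entries.push_back({obs, periodMs, 0}); }
    void advanceTo(std::uint64_t nowMs) {
        for (Entry& e : entries)
            if (nowMs - e.lastMs >= e.periodMs) { e.lastMs = nowMs; e.observer->tick(nowMs); }
    }
private:
    struct Entry { CWatch* observer; std::uint64_t periodMs, lastMs; };
    std::vector<Entry> entries;
};
```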

CHAPTER 5
SYSTEM PERFORMANCE ANALYSIS

System Performance Requirements

A MAV such as the one developed by the AVCAAF group has a limited performance envelope. As a fixed-wing aircraft, it can neither hover nor slow down in order to complete a calculation before making a decision. This characteristic translates into strict performance requirements for the underlying control systems. While the object tracking system described by this paper is not a critical control system, it does run concurrently with the other vision systems and therefore indirectly affects overall MAV performance. While an object tracking system may not provide control-critical services, it would likely provide mission-critical services for missions such as surveillance or search and rescue, or even function as part of a landmark-based navigation system. Another possible role critical to the survival of the MAV is collision avoidance. An object tracking system should meet the following performance requirements:

1. The system should be computationally efficient - It should not require a significant share of the available processing power. Other more critical systems such as flight stability will run concurrently with the tracking system and should not be adversely affected by an inefficient tracking algorithm.

2. The system should run at a rate useful for the particular mission - An update rate of one update every 5 seconds may be sufficient for a navigation-by-landmark mission, but totally unsuitable for navigating a forest of skyscrapers.

Computational Complexity of Each Algorithm

The dominant processes in this system are the feature tracking algorithm and the image segmentation algorithm. This section describes the coupling of these components and how their parameters affect the performance of the system overall. Chapters 2 and 3 describe the complexity of each component. The complexity of the segmentation algorithm is O(I_size log I_size), while the complexity of the tracking algorithm is O(L_m N_F w^2 log I_size). While the tracking algorithm has the w^2 term, experimentation shows that the segmentation algorithm is the most computationally costly. The reason for the disparity in actual CPU time is that L_m N_F w^2 is typically much smaller than the size of the image I_size.

Analysis and Discussion

Description of the Test Video Sequence

The experiment was conducted on a video sequence that represents a typical MAV flight. In this sequence the MAV is flying at an approximate altitude of 50 ft and an average speed of 25 mph. The conditions are a sky with scattered cumulus clouds and unlimited visibility (no haze). The MAV is flying over a green field with regular patches of brown. Tethered to a fence in the field are two red, helium-filled 8 ft diameter balloons. The balloons are partially covered on the bottom with a black tarp. The presence of the tarp produces a black wedge in the circular image of each balloon.

The video sequence captures the relative motion of the two balloons: balloon A is on the left and balloon B is on the right. The first frame of the video sequence is the first frame in which balloon A enters the picture. The last frame of the video sequence corresponds to the last frame in which balloon B is present.

In the initial frames of the video sequence, both balloons A and B are present; balloon A is on the left and balloon B is on the right. As the MAV flies toward the balloons, both balloons translate from right to left until only balloon B is within the image. The MAV then turns left toward balloon B and flies directly toward it. After leaving the frame, balloon A never reenters it. To make the turns the MAV must roll, and therefore each turn is marked by a rotation of the entire picture. Also, the MAV must make slight altitude changes that are reflected as vertical motion in the picture. The video sequence is 108 frames long and was captured at 30 frames per second. Each frame in the sequence has a resolution of 720 by 480 pixels and is interlaced.

Preprocessing

The video sequence was preprocessed before running the experiments described in this chapter. First the video was de-interlaced, and then it was downsampled to 50% of the original size. De-interlacing was achieved by duplicating one of the fields for each frame in the interlaced sequence. This reduced the vertical resolution by 50%, but prevented the introduction of ghost artifacts in the video. The video was downsampled by applying a radius-1 Gaussian blur and then a point-sampled resize by half. This method avoids artifacts caused by aliasing, but at the cost of a blurrier output. Bilinear or bicubic filters may be a better choice to downsample the video and maintain quality without producing artifacts, but the result of the Gaussian blur was sufficient.
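The preprocessing just described can be expressed with a few OpenCV operations. The sketch below is a plausible reconstruction rather than the code used for the experiments: de-interlacing is done by copying the even field over the odd rows, and the downsampling uses a 3x3 Gaussian blur followed by a nearest-neighbour (point-sampled) half-size resize; the specific function choices are assumptions.

```cpp
#include <opencv2/imgproc.hpp>

// De-interlace by duplicating one field: every odd row is overwritten with the
// row above it. This halves vertical detail but avoids ghosting artifacts.
cv::Mat deinterlace(const cv::Mat& frame) {
    cv::Mat out = frame.clone();
    for (int y = 1; y < out.rows; y += 2)
        frame.row(y - 1).copyTo(out.row(y));
    return out;
}

// Downsample to 50%: radius-1 Gaussian blur (3x3 kernel here), then a
// point-sampled (nearest-neighbour) resize by half, as described in the text.
cv::Mat downsampleHalf(const cv::Mat& frame) {
    cv::Mat blurred, half;
    cv::GaussianBlur(frame, blurred, cv::Size(3, 3), 0);
    cv::resize(blurred, half, cv::Size(frame.cols / 2, frame.rows / 2),
               0, 0, cv::INTER_NEAREST);
    return half;
}
```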

Method

The segmentation and tracking algorithms are the primary subjects of the experiment. Each algorithm has a set of parameters, as described earlier in its respective chapter. The objective of this experiment is to search the parameter space of these algorithms to find an optimal configuration. The parameter space is huge, and therefore an exhaustive search of the parameter space is not feasible. Furthermore, because of the nature of the problem, quantitative experimental results cannot be used to determine the success of each experiment. Instead, a human must evaluate each result set. The human observer must classify the experiment as successful or unsuccessful. To search the parameter space, the parameters were adjusted toward increasing computational efficiency until the algorithm no longer met its requirements.

Each of the algorithms was tested individually and compared against its requirements. Neither algorithm depends on the other to meet its requirements. Each algorithm, however, will run on the same hardware and is therefore constrained by the availability of CPU time. This analysis first describes how each algorithm behaves independently of the other. The analysis then describes the performance of a system containing these two algorithms, specifically the frequency at which each algorithm can run in real time given a specific hardware configuration.

The computational performance of each experiment was measured using a code profiler. The profiler measured the total CPU time used by each algorithm.

Environment

The experiments described in this chapter were performed on a Pentium III running at 800 MHz with 512 MB RAM.

The Segmentation Algorithm

Each experiment to measure the performance of the segmentation algorithm was performed by running the algorithm on all 108 frames in the video sequence. To verify that the parameter k does not affect the computational performance, Table 5-1 shows the results for k = 100, 1000, 5000 and 10000. The different values of k did not change the time of computation, but did change the quality of the output. As expected, lower values of k produced smaller components and therefore more components, while larger values of k produced the opposite. By qualitative observation, k = 5000 produced the best segmentation of the test video. This value of k was used to perform the remaining experiments.

Figure 5-1. Graph-based Segmentation Output. A) Original image, B) k = 100, C) k = 5,000, D) k = 10,000.

Table 5-1. CPU Time for the Graph-Based Segmentation Algorithm

Experiment   k       C_min-size [pixels]   I_size [pixels]   Total CPU Time [sec]   Average CPU Time per Frame [sec]
1            100     50                    360x240           452.7                  4.2
2            1000    50                    360x240           406.1                  3.8
3            5000    50                    360x240           443.0                  4.1
4            10000   50                    360x240           428.0                  4.0
5            5000    50                    270x180 (75%)     224.2                  2.1
6            5000    50                    180x120 (50%)     107.4                  1.0
7            5000    50                    90x60 (25%)       27.0                   0.25

The computational complexity of the segmentation algorithm dictates that the algorithm's time of execution is dependent on the size of the image. Table 5-1 shows the CPU time for the algorithm running on an image scaled by 1.0, 0.75, 0.5 and 0.25. The total CPU time values accurately reflect the algorithm complexity O(I_size log I_size).

Figure 5-2. Segmentation Algorithm Performance. CPU load versus segmentation period for image sizes 360x240, 270x180 and 180x120.

The algorithm meets its requirements for scale values 1.0, 0.75 and 0.5, but not for 0.25. There is a particular set of frames (frames 68 to 90) where the algorithm fails at every scale.

In these frames the algorithm confuses a brown patch of dirt and grass in the field with the red balloon in the foreground.

Pyramidal Lucas-Kanade Feature Tracker

The dominant parameters in the feature-tracking algorithm are L_m, w and N_F. Table 5-2 shows results for combinations of L_m and w. The algorithm requires a set of features to track as input. The OpenCV library contains a function, cvGoodFeaturesToTrack, that finds the best features suitable for the tracking algorithm [7, 8]. This function requires values for the parameters Q_F and N_F. These parameters were described in Chapter 3.

Figure 5-3. Results from the Pyramidal Lucas-Kanade Feature Tracker. A) N_F = 200, B) N_F = 1,000.

To improve performance, the 30 FPS video was sampled at the lower frequencies of 15, 10 and 5 FPS. Running at lower frequencies requires that the algorithm work for larger displacements of the features between frames. This scenario is precisely what the pyramidal Lucas-Kanade tracker was designed to fulfill. The algorithm maintained a good track for the 15 and 10 Hz runs. The algorithm started losing points during tracking for the 5 Hz run, even with the maximum number of pyramid levels L_m = 4.

Table 5-2. CPU Time for the Pyramidal Implementation of the Lucas Kanade Feature-tracking Algorithm

Experiment   Frames per Second   L_m   w   N_F    Average CPU Time per Frame [μsec]   CPU Load
8            30                  1     2   100    6,677.45                            20.03%
9            30                  2     2   100    8,495.98                            25.49%
10           30                  3     2   100    7,287.76                            21.86%
11           30                  1     3   100    6,909.44                            20.73%
12           30                  2     3   100    7,950.88                            23.85%
13           30                  3     3   100    8,758.49                            26.28%
14           30                  3     3   50     5,617.95                            16.85%
15           30                  3     3   200    15,172.56                           45.52%
16           30                  3     3   1000   73,582.15                           220.75%
17           15                  3     3   200    13,276.56                           19.91%
18           10                  3     3   200    14,025.36                           14.03%
19           5                   4     3   200    13,326.06                           6.66%

In experiment 16, the feature extractor quality parameter had to be set to 0.005 to get enough points to meet N_F = 1000.

Coupling of Algorithms

The segmentation algorithm requires significantly more computation than the feature tracker. This is easy to understand by recalling the computational complexity of the algorithms. The segmentation algorithm was highly dependent on the size of the image, I_size, while the feature tracking algorithm was highly dependent on the number of feature points to be tracked, N_F. This difference is intuitive: the segmentation algorithm must run on every pixel in the image while the feature tracking algorithm must only run on a small set of pixels. Also, the tracking algorithm exhibited robust performance at the low frequencies of 15 and 10 Hz. As with most algorithms, there is a tradeoff between accuracy and performance. Figure 5-4 charts the performance of the coupled algorithms against the period of the segmentation algorithm.

Figure 5-4. The Performance of the Coupled Segmentation and Tracking Algorithms. CPU load versus segmentation period, plotted for images with sizes 360x240, 270x180 and 180x120 at tracking frequencies of 30, 15 and 10 FPS.

The AVCAAF MAV flies at a speed of 36.7 ft/sec (25 MPH). Assuming that the MAV is flying in a straight line in a static world, in order to detect an object before it is within 100 ft, the MAV must refresh its knowledge of objects once every 2.73 seconds (the time to cover 100 ft at 25 MPH). Figure 5-4 shows that this can be achieved using the 50% scaled image (180x120) and a tracking frequency between 10 and 30 FPS without loading the CPU any more than 65%. Tracking at 10 FPS puts a 50% load on the processor, while tracking at 30 FPS loads the processor approximately 64%. Using the 75% scaled image, the system would exceed the available CPU power at a 30 FPS tracking frequency and use 100% of the processor for tracking frequencies of 10 and 15 FPS. Using 100% of the processor for object tracking is impractical because there are likely other systems more critical to MAV flight requiring CPU resources.

CHAPTER 6
CONCLUSION AND FUTURE WORK

The results from Chapter 5 show that the pyramidal implementation of the Lucas Kanade feature tracker is a robust and computationally efficient algorithm. It is capable of meeting the functional and performance requirements over a range of configurations. The graph-based segmentation algorithm, while capable of marginally meeting the functional and performance requirements, did not perform in a robust or computationally efficient manner.

Future Work for the Pyramidal Implementation of the Lucas Kanade Feature Tracker

The Lucas Kanade algorithm was designed as an image registration algorithm for the purposes of stereo vision, but it has many applications beyond stereo vision. This paper shows how the algorithm can be used to track moving objects from a moving camera. Structure from motion and image mosaic registration are other applications that could be useful in MAV missions. The Lucas Kanade algorithm is a unifying framework [10]. The pyramidal implementation allows for efficient computation of optical flow for a specific set of points in the image; an arbitrary set of points, or even the entire image, can be processed. Using the architecture described in Chapter 4, the optical flow processing component can be shared by other video components in the system. This reduces redundant calculations.

Future Work for the Segmentation Algorithm

A possible optimization for the segmentation algorithm is to account for distance in the image (the greater the distance, the less detail the image contains). The segmentation algorithm functions at a fixed level of detail. This fixed level of detail is optimized for a specific distance in the image, but not for all distances in the image. The AVCAAF MAV uses a vision system for estimating the location of the horizon in the image [3]. One improvement would be to use the horizon estimate as a way to identify the distance to a pixel in 3D space. The segmentation algorithm could then be adjusted to discriminate at finer detail for pixels farther away in the image.

LIST OF REFERENCES

[1] S. Todorovic and M.C. Nechyba, "Detection of Artificial Structures in Natural-Scene Images Using Dynamic Trees," Proc. 17th Intl. Conf. Pattern Recognition, Cambridge, UK, Intl. Assoc. Pattern Recognition, 2004, pp. 35-39.

[2] S. Todorovic and M.C. Nechyba, "Intelligent Missions for MAVs: Visual Contexts for Control, Tracking and Recognition," Proc. 2004 IEEE Intl. Conf. on Robotics and Automation, New Orleans, LA, Apr. 2004, pp. 1640-1645.

[3] S. Todorovic, M.C. Nechyba, and P.G. Ifju, "Sky/Ground Modeling for Autonomous MAV Flight," Proc. IEEE Intl. Conf. Robotics and Automation, Taipei, Taiwan, vol. 1, 2003, pp. 1422-1427.

[4] S. Todorovic and M.C. Nechyba, "Towards Intelligent Mission Profiles of Micro Air Vehicles: Multiscale Viterbi Classification," Lecture Notes in Computer Science, Computer Vision - ECCV 2004: 8th European Conf. on Computer Vision, Prague, Czech Republic, vol. 3022, May 2004, pp. 178-189.

[5] P.F. Felzenszwalb and D.P. Huttenlocher, "Efficient Graph-Based Image Segmentation," Intl. J. Computer Vision, vol. 59, no. 2, September 2004, pp. 167-181.

[6] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proc. Image Understanding Workshop, Washington, D.C., 1981, pp. 121-130.

[7] J. Bouguet, "Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm," OpenCV Documentation, Intel Corporation, Microprocessor Research Labs, Santa Clara, CA, 2000.

[8] J. Shi and C. Tomasi, "Good Features to Track," Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, Seattle, WA, 1994, pp. 593-600.

[9] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Boston, MA, 1995.

[10] S. Baker and I. Matthews, "Lucas-Kanade 20 Years On: A Unifying Framework," Intl. J. Computer Vision, vol. 56, no. 3, February 2004, pp. 221-255.

PAGE 44

BIOGRAPHICAL SKETCH

Ted Belser (Von) was born in Gainesville, Florida, in 1978. Von participated in the International Baccalaureate Program at Eastside High School in Gainesville. He continued his education at the University of Florida, where he earned Bachelor of Science degrees in electrical engineering and computer engineering. While participating in a software development startup company, he attended graduate school at the University of Florida and earned a Master of Science degree in electrical engineering.





























































Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

IMAGE SEGMENTATION AND OBJECT TRACKING FOR
A MICRO AIR VEHICLE

By

Ted L. Belser II

May 2006

Chair: Dapeng Oliver Wu
Major Department: Electrical and Computer Engineering

This thesis describes a system that can perform object tracking in video produced

by a camera mounted on a micro air vehicle (MAV). The goal of the system is to identify

and track an object in full motion video while running in real-time on modest hardware

(in this case, a Pentium III running at 800 MHz with 512 MB of RAM).

To achieve this goal, two vision processing algorithms are coupled. A graph-based

segmentation algorithm is used to identify individual objects in the image by

discriminating between regions of similar color and texture. A pyramidal implementation

of the Lucas-Kanade feature tracker is used to track features in the video. Running at a

lower frequency than the tracking algorithm, the segmentation algorithm labels the

features according to the corresponding object. By tracking the labeled features, the

Lucas-Kanade feature tracker tracks the objects in the video.

Analysis and experimentation show that the pyramidal implementation of the

Lucas-Kanade feature tracker is both efficient and robust. The system performance, however, is









dominated by the performance of the segmentation algorithm. The segmentation

algorithm, while capable of meeting the functional requirements of the system, requires

two to three times more processing power than the feature tracking algorithm requires.

The system described in this thesis is capable of meeting the requirements for

object tracking on a MAV platform. The analysis suggests that the pyramidal

implementation of the Lucas-Kanade tracker is an essential component of the MAV platform due

to its efficiency and robust performance. The analysis also suggests a direction for

improvement. While the segmentation algorithm was able to fulfill the requirements, it

did so at a high computational cost. One possible direction for future work is to improve

the performance of the segmentation process.














CHAPTER 1
INTRODUCTION

Problem Definition

In its mission statement, the AVCAAF (Active Vision for Control of Agile

Autonomous Flight) group at the University of Florida's Machine Intelligence Laboratory

describes the potential missions of a MAV (Micro Air Vehicle). The potential missions

include search and rescue, moving-target tracking, immediate bomb damage assessment,

and identification and localization of interesting ground structures. To achieve this

mission, the MAV platform utilizes a number of instruments. As is evident from

the group's name, the AVCAAF is focused on vision processing systems for flight

vehicle control. The primary instrument for vision-based control is the video camera.

The video camera coupled with a computer running sophisticated vision processing

algorithms forms a versatile system capable of performing functions such as automatic

attitude control, object recognition and object tracking. Examples of attitude control and

object recognition applications are discussed in AVCAAF's research papers [1, 2, 3 and

4]. Object tracking however is not discussed in these papers and is the focus of this

thesis.

To track an object it must first be identified. Object identification solutions are not

trivial and can arguably be called a central problem in computer vision. The problem of

what makes an object an object was once a question for philosophers; but in computer

vision, it is a question for engineers. In many ways, engineers have not answered this

question. Our systems are capable of identifying specific, narrowly defined classes of









objects, but there is no general solution to object identification. This thesis does not

attempt to solve this problem; however, understanding this problem helps to put the

problem of object identification and tracking into context.

In image processing, segmentation is the partitioning of a digital image into

multiple regions according to a criterion. Segmentation can be applied to identify objects

in a scene if a suitable segmentation criterion can be defined to identify objects of

interest. In their paper "Intelligent Missions for MAVs: Visual Contexts for Control,

Tracking and Recognition" [2] Todorovic and Nechyba discuss an object segmentation

algorithm. While this algorithm performs object segmentation efficiently, it is

computationally excessive to use the same algorithm for object tracking. Once an object

is acquired, the tracking problem has constraints such as a spatial and temporal locality

that reduce the complexity of the tracking problem. Furthermore, an object can be

expected to maintain its shape and appearance in a sequence of frames. Given these

constraints the problem is to design and implement an object tracking system that meets

the following requirements:

* The system should locate objects of interest
* The system should track the object(s) of interest through sequential frames of video
* The system should run in real-time on standard hardware (Pentium III 800 MHz)

Approach

Three Processes to Achieve Object Tracking

The approach taken to solve this problem is to divide the tracking task into three

processes. The first process identifies and enumerates objects in the image. The second

process identifies significant features on each of the objects. The third process is

correspondence of each feature between adjacent frames in a video sequence. Object

tracking is possible by defining which objects own which features and then tracking those









features over a sequence of frames. This method is described in the remaining chapters.

First, each of the processes used is described in detail. Chapter 2 describes the

segmentation process. Chapter 3 describes the feature extraction process and the feature

tracking process. The system organization and implementation is discussed in Chapter 4.

Chapter 5 is an evaluation of the system's performance including limitations of the

system and how the parameters of the individual processes govern the performance of the

system as a whole. Finally, in Chapter 6 suggestions for future research are proposed.

Assumptions

The segmentation, feature extraction and feature tracking processes do not need to

run with equal effort. The feature tracking process should run frequently in order to

accurately track features from frame to frame, while the segmentation and feature

extraction processes may run less frequently. By making reasonable assumptions, the

segmentation and feature extraction processes can occur at a frequency significantly less

than the frame rate of the video sequence.

The essential mechanism of the feature tracking process is correspondence.

Correspondence identifies the offset between a specific feature in two frames. These two

frames may represent images from cameras that differ in time and/or space. In the case

of a moving camera, the two frames differ in time and space. By using a correspondence

mechanism, feature tracking is possible. By associating features with objects it is

therefore possible to track an entire object. A few assumptions however must be made.

1. An object to be tracked has identifiable features.

2. These features can be tracked by a correspondence mechanism over a sequence of
frames.

3. Each feature instance belongs to only one object during the period of time in which
the tracking occurs.
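A minimal sketch of the bookkeeping implied by the three processes and by assumption 3 is shown below. The type and field names are hypothetical and are not taken from the system described in Chapter 4:

    // Sketch of the object/feature ownership implied by the three processes.
    // Hypothetical types; the actual system components are described in Chapter 4.
    #include <opencv2/core.hpp>
    #include <vector>

    struct TrackedObject {
        int label;                           // component label from the segmentation process
        std::vector<cv::Point2f> features;   // features owned by this object (assumption 3)
    };

    struct TrackerState {
        std::vector<TrackedObject> objects;  // refreshed at the slower segmentation rate
        cv::Mat previousFrameGray;           // needed for frame-to-frame correspondence
    };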














CHAPTER 2
A GRAPH-BASED SEGMENTATION ALGORITHM

Functional and Performance Requirements for Segmentation

In [5] a graph-based image segmentation algorithm is defined. The segmentation

algorithm is designed to meet the following requirements:

1. The algorithm should capture perceptually different regions of an image.
2. The algorithm should account for local as well as global information while
partitioning the image regions.
3. The algorithm should be efficient, running in time nearly linear in the number of
image pixels.

Graph-Based Segmentation

In Felzenszwalb and Huttenlocher [5] a graph-based approach is presented. An

image is represented as an undirected graph G = (V, E). V represents the vertices of the

graph and corresponds one-to-one with the pixels in the image. E represents the edges

between the pixels. In the actual implementation, E contains an edge for each pair in the

four-connected adjacency of every pixel. Each edge in E is given a non-negative weight

w((v_i, v_j)) that is a measure of the dissimilarity between the pixels belonging to vertices

v_i and v_j. In the implementation, this difference is the distance between the color values

of v_i and v_j in the RGB color space. The objective is to produce a segmentation S

composed of components C. Each component is defined by a set of edges E' ⊆ E

between vertices representing pixels of low dissimilarity as defined by a comparison

predicate.
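The following is a minimal sketch of this graph construction, assuming an 8-bit, three-channel OpenCV image; the Edge structure and the Euclidean RGB distance follow the description above, but the data layout of the published implementation may differ:

    // Sketch of the graph construction described above: one vertex per pixel and a
    // weighted edge to each four-connected neighbour, with the weight given by the
    // Euclidean distance between RGB values. Illustrative only.
    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    struct Edge {
        int a, b;     // vertex (pixel) indices: index = y * width + x
        float w;      // dissimilarity weight w((v_i, v_j))
    };

    static float rgbDistance(const cv::Vec3b& p, const cv::Vec3b& q) {
        float db = float(p[0]) - float(q[0]);   // OpenCV stores channels as B, G, R
        float dg = float(p[1]) - float(q[1]);
        float dr = float(p[2]) - float(q[2]);
        return std::sqrt(dr * dr + dg * dg + db * db);
    }

    std::vector<Edge> buildGraph(const cv::Mat& image) {   // expects CV_8UC3
        std::vector<Edge> edges;
        for (int y = 0; y < image.rows; ++y) {
            for (int x = 0; x < image.cols; ++x) {
                const cv::Vec3b& p = image.at<cv::Vec3b>(y, x);
                int idx = y * image.cols + x;
                if (x + 1 < image.cols)          // edge to the right neighbour
                    edges.push_back({idx, idx + 1,
                                     rgbDistance(p, image.at<cv::Vec3b>(y, x + 1))});
                if (y + 1 < image.rows)          // edge to the bottom neighbour
                    edges.push_back({idx, idx + image.cols,
                                     rgbDistance(p, image.at<cv::Vec3b>(y + 1, x))});
            }
        }
        return edges;
    }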









The Comparison Predicate

Requirement 2 above describes a need to take into account global and local

information when forming a component in the segmentation. The comparison predicate

described by Felzenszwalb and Huttenlocher [5] is designed to meet this requirement.

The predicate is designed to measure dissimilarity between components relative to the

internal dissimilarity within each component. This technique is capable of identifying

regions that have little internal dissimilarity while also identifying regions where there is

great internal dissimilarity.

The predicate is defined in terms of two quantities: the minimum internal difference

of two components C1 and C2 and the difference between components C1 and C2. The

minimum internal difference captures the global information of the two components and

forms the basis on which the predicate decides if the two components are actually

different. The minimum internal difference is defined as

Int_min(C1, C2) = min( Int(C1) + τ(C1), Int(C2) + τ(C2) ),

where the internal difference Int is defined as the largest weight in the minimum spanning

tree of the component, MST(C, E),

Int(C) = max { w(e) : e ∈ MST(C, E) }.

The difference between the components C1 and C2 is the minimum weight in the edges

connecting the two components,

Diff(C1, C2) = min { w((v_i, v_j)) : v_i ∈ C1, v_j ∈ C2, (v_i, v_j) ∈ E }.

By comparing the difference between the two components with their minimum internal

difference, a predicate D(C1, C2) can be defined. If the predicate is true, the edge









(v_i, v_j), with v_i ∈ C1 and v_j ∈ C2, forms a boundary between components C1 and C2; otherwise C1

and C2 are the same component.

D(C1, C2) = true if Diff(C1, C2) > Int_min(C1, C2); false otherwise.

This formulation has only one parameter, k. The parameter k is coupled with the system

by way of the threshold function τ. The threshold function τ(C) defines by how much

the differences between the components must exceed the lesser of the two component

internal differences. A higher value decreases the likelihood that two components will be

declared different and therefore encourages larger components. The function τ also

serves to minimize error due to small component sizes in the early stages of the

computation. Specifically, it scales a constant k by the inverse of the component size:

τ(C) = k / |C|.


The parameter k determines granularity of the segmentation. Larger values of k produce

larger components and therefore fewer components per image. Smaller values of k

produce smaller components and therefore more components per image.

The Segmentation Algorithm

Felzenszwalb and Huttenlocher [5] apply the predicate function using the following

algorithm:

1. Calculate the weight for all edges in E.

2. Sort E into non-decreasing order by edge weight, resulting in the sequence
(o_1, ..., o_m).

3. Assign all vertices in V one-to-one with components (C_1, ..., C_n) so that each
vertex belongs to its own component.

4. For i = 1 to m, do the following:








a. Let o, =(v, v,),v, eC,,v, GC,

b. if C, and C, are different components AND
D(C, C,) is false, then merge C, and C,; otherwise do nothing.

8. Return S = (C1,..., C) where n is the number of components remaining.

This algorithm runs in O(m log m) time where m denotes the number of edges in E.
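To make the merge criterion concrete, the following minimal C++ sketch shows one way the loop above can be implemented: edges are sorted by weight and processed in order, while a disjoint-set (union-find) structure tracks each component together with its current internal difference Int(C) and size |C|. The type names and the DisjointSet helper are illustrative assumptions, not the published implementation.

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // Hypothetical edge type: two vertex indices and a dissimilarity weight.
    struct Edge { int a, b; float w; };

    // Minimal union-find that records, for each component root, the largest edge
    // weight merged into it so far (Int(C)) and the component size (|C|).
    struct DisjointSet {
        std::vector<int> parent, size;
        std::vector<float> internalDiff;
        explicit DisjointSet(int n) : parent(n), size(n, 1), internalDiff(n, 0.0f) {
            std::iota(parent.begin(), parent.end(), 0);
        }
        int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
        void merge(int ra, int rb, float w) {
            parent[rb] = ra;
            size[ra] += size[rb];
            internalDiff[ra] = w;   // valid because edges arrive in non-decreasing order
        }
    };

    // Sketch of the graph-based segmentation; k is the granularity constant
    // used by the threshold function tau(C) = k / |C|.
    std::vector<int> segmentGraph(int numVertices, std::vector<Edge> edges, float k) {
        std::sort(edges.begin(), edges.end(),
                  [](const Edge& x, const Edge& y) { return x.w < y.w; });
        DisjointSet ds(numVertices);
        for (const Edge& e : edges) {
            int c1 = ds.find(e.a), c2 = ds.find(e.b);
            if (c1 == c2) continue;
            // Int_min(C1, C2) = min(Int(C1) + tau(C1), Int(C2) + tau(C2))
            float intMin = std::min(ds.internalDiff[c1] + k / ds.size[c1],
                                    ds.internalDiff[c2] + k / ds.size[c2]);
            // The predicate D is false (no boundary) when the connecting weight
            // does not exceed Int_min, so the two components are merged.
            if (e.w <= intMin) ds.merge(c1, c2, e.w);
        }
        // Report, for every vertex, the label of the component containing it.
        std::vector<int> labels(numVertices);
        for (int v = 0; v < numVertices; ++v) labels[v] = ds.find(v);
        return labels;
    }

Because the edges are visited in non-decreasing order, the weight of the merging edge is always the largest weight inside the newly merged component, which is why the sketch can update Int(C) with a single assignment.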

Qualitative Analysis
Figure 2-1 shows an image and its segmentation using this technique. Regions of

little dissimilarity, such as the dark region to the left of the butterfly, are properly

segmented. More interesting is the left side of the leaf on which the butterfly sits. This

region shows dissimilarity in the form of dark and light ridges formed by the veins in the

leaf. This region is segmented as a single component despite large internal

dissimilarities.


Figure 2-1. Graph-based Image Segmentation Results. A) Original image. B) Segmented image labeled in randomly chosen colors.

A close inspection of the upper wing reveals much smaller white speckles between the larger white dots. The dark, speckled background of the wing is segmented as a single component. Also evident from this picture is that the algorithm

does not favor an orientation; it is capable of identifying regions of any orientation. This









is an important requirement for the MAV platform where the image orientation changes

with the pitch and roll of the aircraft.

Parameters of the Segmentation Algorithm

The authors of [5] also published C++ code implementing this segmentation algorithm. The implementation accepts the following parameters (a hedged invocation sketch follows the list):

* I_size - This quantity represents the size of the input image. As discussed above, the algorithm runs in O(I_size log I_size) time. This parameter is the only parameter that affects performance.

* k - This quantity represents the threshold constant used to perform the segmentation. It affects the size of the resulting components and therefore the total number of components. Larger values of k produce larger components and therefore fewer components. Smaller values of k produce smaller components and therefore more components. This parameter does not affect the algorithm's performance.

* C_min-size - This quantity defines the minimum size of the components produced by the segmentation. Any component smaller than the minimum size is merged with an adjacent component. This process is performed after the segmentation is completed. This parameter can improve performance if its value is 1, in which case the post-processing merge step can be skipped.
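As a rough illustration of how these parameters are supplied, the sketch below drives the segmentation code released with [5]. The header names, the image<rgb> type, the segment_image signature, and the additional Gaussian smoothing parameter sigma are recollections of that distribution and should be treated as assumptions rather than a documented interface.

    // Hedged sketch of calling the published segmentation code from [5].
    // Header names and the segment_image signature are assumed, not verified.
    #include "image.h"
    #include "segment-image.h"

    image<rgb>* segmentFrame(image<rgb>* frame) {
        const float sigma   = 0.5f;     // Gaussian pre-smoothing (assumed parameter)
        const float k       = 5000.0f;  // granularity constant (value chosen in Chapter 5)
        const int   minSize = 50;       // C_min-size: post-segmentation merge threshold
        int numComponents = 0;
        // Returns an image in which each component is painted a random color,
        // as in Figure 2-1 B; numComponents receives the component count.
        return segment_image(frame, sigma, k, minSize, &numComponents);
    }

The image size I_size enters only implicitly, through the dimensions of the frame passed in.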














CHAPTER 3
THE LUCAS KANADE FEATURE TRACKING ALGORITHM

The Lucas Kanade Correspondence Algorithm

Lucas and Kanade [6] describe an algorithm for registering like features in a pair of

stereo images. The registration algorithm attempts to locate a feature identified in one

image with the corresponding feature in another image. Despite stereo vision being the

motivation for the algorithm, there are other applications for this technique. Tracking

motion between frames of a single motionless camera is simply the correspondence of

features through time. In the case of a moving camera, feature tracking is the

correspondence of features in images differing in time and space. The two situations are

analogous with the stereo application in that the problem is finding the offset of a feature

in one image to the feature in the second image.

The algorithm solves the following problem. Two images that are separated by

space or time or both (within a small amount of time or space) have corresponding

features. These features exist somewhere in space relative to the camera. As the camera

moves, its position relative to these features changes. This change in position is reflected

as a movement of the feature on the image plane of the camera. This algorithm identifies

the movement (the offset) of a feature in two sequential images.

Lucas and Kanade describe this more formally [6]: define the feature of interest in image A as the vector x, and define the same feature in image B as x + h. The problem is to find the vector h. The algorithm works by searching image B for a best

match with the feature in image A. The best match is defined as the feature in image B









that differs the least from the feature in image A. An exhaustive search of image B for the feature is impractical and would fail to exploit the important constraint of locality that is

likely to exist. Lucas and Kanade identify two aspects of the searching algorithm:

1. The method used to search for the minimal difference.
2. The algorithm to calculate the value of the difference.

Lucas and Kanade point out that these two aspects are loosely coupled, so an

implementation can be realized through any combination of searching and differencing

algorithms.

Lucas and Kanade's approach uses the spatial intensity gradient of the image to

find the value of h. The process is iterative and resembles the Newton-Raphson method, in which accuracy increases with each iteration. If the algorithm converges, it converges in O(M^2 log N) time, where N^2 is the size of the image and M^2 is the size of the region of possible values of h.

Lucas and Kanade first describe their solution in the one-dimensional case. Let

image A be represented by the function F(x) and image B be represented as G(x) = F(x + h).

Lucas and Kanade's solution depends on a linear approximation of F(x) in the

neighborhood of x. For small h,

$$G(x) = F(x + h) \approx F(x) + hF'(x), \tag{1}$$

$$h \approx \frac{G(x) - F(x)}{F'(x)}. \tag{2}$$

In other words, by knowing the rate of change of intensity around x in F(x) and the

difference in intensity between F(x) and G(x), the offset h can be determined. This

approach assumes linearity and will only work for small distances where there are no








local minima in the error function. Lucas and Kanade suggest that by smoothing the

image, minima produced by noise in the image can be eliminated.

Equation 2 is correct if G(x) = F(x + h). To find the value of h where this is true,

the possible values of h need to be explored and a best match determined. To identify

this best match the following error function is defined:

$$E = \sum_{x} \big[ F(x + h) - G(x) \big]^2 \tag{3}$$

The value of h can be determined by minimizing (3):

$$0 = \frac{\partial E}{\partial h} \approx \frac{\partial}{\partial h} \sum_{x} \big[ F(x) + hF'(x) - G(x) \big]^2 = \sum_{x} 2F'(x)\big[ F(x) + hF'(x) - G(x) \big], \tag{4}$$

$$h \approx \frac{\sum_{x} F'(x)\big[ G(x) - F(x) \big]}{\sum_{x} F'(x)^2}. \tag{5}$$

These approximations rely on the linearity of F(x) around x. To reduce the effects

of non-linearity, Lucas and Kanade propose weighting the error function more strongly

where there is linearity and less strongly where there is not. In other words, where F''(x) is large, the contribution of that term to the sum should be smaller. The

following equation approximates F''(x):

$$F''(x) \approx \frac{G'(x) - F'(x)}{h}. \tag{6}$$

Recognizing that this weight will be used in an average, the constant factor 1/h can be dropped, and the weighting function becomes









$$w(x) = \frac{1}{\big| G'(x) - F'(x) \big|}. \tag{7}$$

Including this weighting function (7), the iterative form of (5) becomes

$$h_0 = 0, \qquad h_{k+1} = h_k + \frac{\sum_{x} w(x)\, F'(x + h_k)\big[ G(x) - F(x + h_k) \big]}{\sum_{x} w(x)\, F'(x + h_k)^2}. \tag{8}$$

Equation (8) describes the iterative process where each new value of h adds on to

the previous value. The weighting function serves to increase the accuracy of the

approximation by filtering out the cases where the linearity assumption is invalid. This in

turn speeds up convergence. Iterations continue until the error function value falls below a threshold.
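As an illustration of equation (8), the following minimal C++ sketch runs the weighted one-dimensional iteration on two sampled signals F and G. The central-difference derivative estimate, the linear interpolation used to evaluate F(x + h_k), the small regularizing constant in the weight, and the stopping rule are assumptions made for the sketch, not part of the original formulation.

    #include <cmath>
    #include <vector>

    // Central-difference derivative of a sampled 1-D signal (assumed estimator).
    static double deriv(const std::vector<double>& f, int i) {
        return 0.5 * (f[i + 1] - f[i - 1]);
    }

    // Sample f at a non-integer position by linear interpolation.
    static double sampleAt(const std::vector<double>& f, double x) {
        int i = static_cast<int>(std::floor(x));
        double t = x - i;
        return (1.0 - t) * f[i] + t * f[i + 1];
    }

    // One-dimensional Lucas Kanade iteration of equation (8): estimate the offset
    // h such that G(x) is approximately F(x + h) over the window [lo, hi].
    double estimateOffset1D(const std::vector<double>& F, const std::vector<double>& G,
                            int lo, int hi, int maxIters = 20, double tol = 1e-4) {
        double h = 0.0;                                   // h_0 = 0
        for (int k = 0; k < maxIters; ++k) {
            double num = 0.0, den = 0.0;
            for (int x = lo; x <= hi; ++x) {
                double Fp = deriv(F, x);                  // F'(x + h_k), approximated at x
                double Gp = deriv(G, x);
                double w  = 1.0 / (std::fabs(Gp - Fp) + 1e-6);  // weighting of equation (7)
                double Fx = sampleAt(F, x + h);           // F(x + h_k)
                num += w * Fp * (G[x] - Fx);
                den += w * Fp * Fp;
            }
            double step = num / den;                      // increment from equation (8)
            h += step;
            if (std::fabs(step) < tol) break;             // stop once the update is tiny
        }
        return h;
    }

The caller is responsible for choosing a window [lo, hi] that stays inside both signals for the displacements of interest; the smoothing that Lucas and Kanade recommend would be applied to F and G before this routine is invoked.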

Lucas and Kanade describe how the one-dimensional case can be generalized into

an n-dimensional case. Similar to the one-dimensional case, the objective is to minimize

the error function:

$$E = \sum_{\vec{x}} \big[ F(\vec{x} + \vec{h}) - G(\vec{x}) \big]^2, \tag{9}$$

where x and h are n-dimensional row vectors. The one-dimensional linear

approximation in equation 1 becomes

$$G(\vec{x}) = F(\vec{x} + \vec{h}) \approx F(\vec{x}) + \vec{h}\, \frac{\partial}{\partial \vec{x}} F(\vec{x}), \tag{10}$$


where ∂/∂x̄ is the gradient operator with respect to x̄. Using this multi-dimensional approximation, Lucas and Kanade minimize E:









$$0 = \frac{\partial E}{\partial \vec{h}} \approx \frac{\partial}{\partial \vec{h}} \sum_{\vec{x}} \left[ F(\vec{x}) + \vec{h}\,\frac{\partial F}{\partial \vec{x}} - G(\vec{x}) \right]^2 = \sum_{\vec{x}} 2\,\frac{\partial F}{\partial \vec{x}} \left[ F(\vec{x}) + \vec{h}\,\frac{\partial F}{\partial \vec{x}} - G(\vec{x}) \right].$$

Solving for h produces

$$\vec{h} \approx \left[ \sum_{\vec{x}} \left( \frac{\partial F}{\partial \vec{x}} \right)^{T} \frac{\partial F}{\partial \vec{x}} \right]^{-1} \sum_{\vec{x}} \left( \frac{\partial F}{\partial \vec{x}} \right)^{T} \big[ G(\vec{x}) - F(\vec{x}) \big]. \tag{12}$$


The Lucas Kanade method described above works for a translation of a feature.

Recognizing this, they generalize their algorithm even further by accounting for an

arbitrary linear transformation such as rotation, shear and scaling. This is achieved by

inserting a linear transformation matrix A into the equations. Equation (1) becomes

$$G(\vec{x}) = F(\vec{x}A + \vec{h}),$$

and the error functions 3 and 9 become

$$E = \sum_{\vec{x}} \big[ F(\vec{x}A + \vec{h}) - G(\vec{x}) \big]^2, \tag{13}$$

resulting in a system of linear equations to be solved simultaneously.

Because of the linearity assumption, tracking large displacements is difficult.

Smoothing the image removes high-frequency components, which makes the linearity assumption more valid and allows for a larger range of convergence. Smoothing the

image however, removes information from the image. Another method for tracking large

displacements is the pyramidal method. This method described by Bouguet [7] makes

use of the pyramidal approach to refine the search for h. Details of how it works are

explained in the next section. The pyramidal approach makes use of a pyramid of images









each containing the information from the source image represented in a graduated degree

of resolution from coarse to fine.

The Pyramidal Implementation of the Lucas Kanade Correspondence Algorithm

The Open Source Computer Vision Library (OpenCV) sponsored by Intel

Corporation is a library written in the C programming language that contains a Lucas-

Kanade feature-tracking algorithm. The OpenCV implementation makes use of the

pyramid method suggested by Lucas and Kanade in their original paper.

The Residual Function

The OpenCV mathematical formalization differs slightly from the Lucas Kanade

formalization. As described by Bouguet [7], the formulation is as follows. Let A(x, y) and

B(x, y) represent the images between which the feature correspondence should be

determined. OpenCV defines the residual function, analogous to the error function (9), as


$$\epsilon(\vec{d}) = \epsilon(d_x, d_y) = \sum_{x=u_x-w_x}^{u_x+w_x} \;\sum_{y=u_y-w_y}^{u_y+w_y} \big( A(x, y) - B(x + d_x,\, y + d_y) \big)^2. \tag{14}$$

This equation defines a neighborhood of size (2w_x + 1) x (2w_y + 1). While the Lucas-Kanade algorithm defines a region of interest over which the error function should be summed, the OpenCV implementation is more specific and defines a small integration window in terms of w_x and w_y.
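For illustration, the following minimal C++ sketch evaluates the residual of equation (14) for one candidate displacement (d_x, d_y) over an integration window centered at (u_x, u_y). The grayscale image representation (a plain width-by-height float buffer) and the function names are assumptions made for the sketch.

    #include <vector>

    // Hypothetical grayscale image: row-major float intensities.
    struct GrayImage {
        int width = 0, height = 0;
        std::vector<float> data;
        float at(int x, int y) const { return data[y * width + x]; }
    };

    // Residual of equation (14): sum of squared differences between A and the
    // displaced B over a (2*wx + 1) x (2*wy + 1) window centered at (ux, uy).
    double residual(const GrayImage& A, const GrayImage& B,
                    int ux, int uy, int dx, int dy, int wx, int wy) {
        double sum = 0.0;
        for (int x = ux - wx; x <= ux + wx; ++x) {
            for (int y = uy - wy; y <= uy + wy; ++y) {
                double diff = A.at(x, y) - B.at(x + dx, y + dy);
                sum += diff * diff;
            }
        }
        return sum;
    }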

Functional and Performance Requirements

The pyramidal algorithm is designed to meet two important requirements for a

practical feature tracker:

1. The algorithm should be accurate. The object of a tracking algorithm is to find the
displacement of a feature in two different images. An inaccurate algorithm would
defeat the purpose of the algorithm in the first place.









2. The algorithm should be robust. It should be insensitive to variables that are likely to change in real-world situations, such as variation in lighting, the speed of image motion, and patches of the image moving at different velocities.


In addition to these requirements, in practice, the algorithm should meet a performance

requirement:

3. The algorithm should be computationally inexpensive. The purpose of tracking is to identify the motion of features from frame to frame, so the algorithm generally runs at a frequency equal to the frame rate. Most vision systems perform a series of processing functions to meet specific goals, and functions that run at the frame rate of the source video need to use as little of the system resources as possible.

In the basic Lucas Kanade algorithm there is a tradeoff between the accuracy and

robustness requirements. In order to have accuracy, a small integration window ensures

that the details in the image are not smoothed out. Preventing the loss of detail is

especially important for boundaries in the image that demarcate occluding regions. The

regions are potentially moving at different velocities. For the MAV application this is

clearly an important requirement due to the velocity of the camera and the potential

difference in velocity with objects to be tracked. On the other hand, to have a robust

algorithm, by definition, it must work under many different conditions. Conditions

where there are large displacements between images are common for the

MAV platform and warrant a larger integration window. This apparent conflict defines a

zero-sum tradeoff between accuracy and robustness. The solution to meeting each

requirement without a likely counterproductive compromise is to define a mechanism

that decouples one requirement from the other. The method that succeeds in doing this is

the pyramidal approach.









The Pyramid Representation

The pyramid representation of image A(x, y) is a collection of images recursively

derived from A(x, y). The images are organized into pyramid levels, where the original image A(x, y) is level L0 and the derived images occupy levels L = 1, ..., Lm. Each image in the pyramid, increasing in level, is a down-sampling of the previous level: L0 is down-sampled to produce L1, L1 is down-sampled to produce L2, and so on up to Lm. An image of size 360x240 with Lm = 3 produces a pyramid of three derived images with dimensions 180x120, 90x60 and 45x30 pixels at levels L1, L2 and L3 respectively.
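A minimal sketch of building such a pyramid with the legacy OpenCV C interface used by the thesis system is shown below; it assumes the IplImage/cvPyrDown API and source dimensions that remain divisible by two at every level.

    #include <vector>
    #include "cv.h"

    // Build pyramid levels L1..Lm from the level-0 image by repeated halving.
    // Each cvPyrDown call applies a Gaussian filter and then drops every other
    // row and column, producing the next (coarser) level.
    std::vector<IplImage*> buildPyramid(IplImage* level0, int Lm) {
        std::vector<IplImage*> levels;
        IplImage* prev = level0;
        for (int L = 1; L <= Lm; ++L) {
            CvSize half = cvSize(prev->width / 2, prev->height / 2);
            IplImage* next = cvCreateImage(half, prev->depth, prev->nChannels);
            cvPyrDown(prev, next, CV_GAUSSIAN_5x5);
            levels.push_back(next);   // levels[0] holds L1, levels[Lm - 1] holds Lm
            prev = next;
        }
        return levels;
    }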

Pyramidal Feature Tracking

The goal of feature tracking is to identify the displacement of a feature from one

image to another. In the pyramidal approach, this displacement vector is computed for

each level of the pyramid [7]. Computing the displacement vector d^L = [d_x^L  d_y^L]^T is a matter of minimizing the following residual function:

$$\epsilon^L(\vec{d}^{\,L}) = \epsilon^L(d_x^L, d_y^L) = \sum_{x=u_x^L-w_x}^{u_x^L+w_x} \;\sum_{y=u_y^L-w_y}^{u_y^L+w_y} \Big( A^L(x, y) - B^L\big(x + g_x^L + d_x^L,\; y + g_y^L + d_y^L\big) \Big)^2. \tag{15}$$

Note that this residual function is similar to equation (14) but differs by the term g^L = [g_x^L  g_y^L]^T. This term represents the initial guess used to seed the iterative function. The calculation starts at the highest level of the pyramid with g^{Lm} = [0  0]^T. Using this guess, the displacement vector d^{Lm} is found by minimizing equation (15). This d^L is then used to find the next guess using the expression

$$\vec{g}^{\,L-1} = 2\big( \vec{g}^{\,L} + \vec{d}^{\,L} \big). \tag{16}$$









The final displacement found by minimizing the residual function (15) for each level in

the pyramid is

$$\vec{d} = \sum_{L=0}^{L_m} 2^L \vec{d}^{\,L}. \tag{17}$$

The advantage of the pyramid implementation is that a small integration window

can be used to meet the accuracy requirement while the pyramidal approach provides a

robust method to track large displacements. The size of the maximum detectable

displacement is dependent on the number of levels in the pyramid. The gain over the

maximum detectable displacement in the underlying Lucas Kanade algorithm is

$$\mathrm{gain}_{\max} = 2^{L_m + 1} - 1. \tag{18}$$

For a three-level pyramid, this produces a gain of 15 times the largest displacement

detectable by the underlying Lucas Kanade step.
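The coarse-to-fine recursion of equations (15) through (17) can be summarized in the short sketch below. The per-level solver lucasKanadeStep, which would minimize (15) around the feature position for a given guess, is a hypothetical placeholder; the loop itself only shows how the guess is propagated with equation (16) and how the final displacement of equation (17) accumulates.

    #include <vector>

    struct Vec2 { double x = 0.0, y = 0.0; };

    // Hypothetical per-level solver: minimizes the residual (15) on pyramid
    // level L around feature position u, starting from guess g, and returns d^L.
    Vec2 lucasKanadeStep(int level, Vec2 u, Vec2 g);

    // Coarse-to-fine tracking of one feature through pyramid levels Lm..0.
    // u0 is the feature position in the level-0 (full-resolution) image.
    Vec2 trackFeature(Vec2 u0, int Lm) {
        Vec2 g{0.0, 0.0};                                    // g^{Lm} = [0 0]^T
        Vec2 d{0.0, 0.0};
        for (int L = Lm; L >= 0; --L) {
            double scale = 1.0 / double(1 << L);             // level-L coordinates
            Vec2 uL{u0.x * scale, u0.y * scale};
            d = lucasKanadeStep(L, uL, g);                   // minimize equation (15)
            if (L > 0) {                                     // propagate the guess, equation (16)
                g.x = 2.0 * (g.x + d.x);
                g.y = 2.0 * (g.y + d.y);
            }
        }
        // At level 0 the accumulated guess plus the final refinement equals the
        // sum of equation (17).
        return Vec2{g.x + d.x, g.y + d.y};
    }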

Parameters of the Pyramidal Feature-tracking Algorithm

Applying the pyramidal algorithm using the OpenCV C-language library is a matter of choosing the best values for the following parameters for the particular application (a hedged example of the corresponding library call is sketched after the list).

* I_size - This quantity represents the source image size, specifically the size of level L = L0 in the pyramid representation. Larger images have greater detail and therefore can produce more accurate results. Larger images also require more pyramid levels to track feature displacement, and these extra pyramid levels contribute additional CPU time because each pyramid level represents a standard Lucas Kanade calculation.

* QF - This quantity represents which features are selected for tracking. In terms of tracking quality, the upper (1 - QF) x 100% of features are selected for tracking. The method for determining tracking quality is defined by Bouguet, Shi and Tomasi [7, 8].

* NF - This quantity represents the number of features to track. If the number of features in an image meeting the constraint defined by QF is greater than NF, only the best NF features are selected for tracking [8].








* W - This quantity is equivalent to the values w_x and w_y in equation (14). It controls the size of the integration window and therefore determines the accuracy of the tracking algorithm.

* Lm - This quantity represents the number of pyramid levels in the image. As described above, this value determines the maximum detectable displacement of a feature. It also determines how many times the Lucas Kanade algorithm is performed for each feature.
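A minimal sketch of how these parameters could be handed to the OpenCV C interface is shown below. It assumes the legacy OpenCV 1.x functions cvGoodFeaturesToTrack and cvCalcOpticalFlowPyrLK, grayscale 8-bit input frames, and that NULL pyramid buffers cause the library to allocate its own scratch pyramids; argument order and defaults should be checked against the library documentation rather than taken from this sketch.

    #include <vector>
    #include "cv.h"

    // Track up to NF features from frame A to frame B with the pyramidal LK step.
    // QF is the feature quality level, W the half-size of the integration window,
    // and Lm the number of pyramid levels.
    void trackFrame(IplImage* A, IplImage* B, double QF, int NF, int W, int Lm) {
        std::vector<CvPoint2D32f> prevPts(NF), currPts(NF);
        std::vector<char> status(NF);
        int count = NF;

        // Scratch images required by cvGoodFeaturesToTrack.
        IplImage* eig  = cvCreateImage(cvGetSize(A), IPL_DEPTH_32F, 1);
        IplImage* temp = cvCreateImage(cvGetSize(A), IPL_DEPTH_32F, 1);
        cvGoodFeaturesToTrack(A, eig, temp, prevPts.data(), &count,
                              QF, /*min_distance=*/5.0);

        // Pyramidal Lucas Kanade correspondence of the selected features.
        cvCalcOpticalFlowPyrLK(A, B, NULL, NULL,                 // NULL: assumed internal pyramids
                               prevPts.data(), currPts.data(), count,
                               cvSize(2 * W + 1, 2 * W + 1), Lm,
                               status.data(), NULL,
                               cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03),
                               0);

        cvReleaseImage(&eig);
        cvReleaseImage(&temp);
        // currPts[i] now holds the tracked location of prevPts[i] when status[i] != 0.
    }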















CHAPTER 4
SYSTEM INTEGRATION

System Overview

The following describes an object-oriented system framework in which vision

processing components are easily interconnected to form a vision-processing pipeline.

Typical vision-processing pipelines involve a number of transformations, filters and

samplers connected in a meaningful way to perform a task.




Figure 4-1. The System Overview. (Block diagram connecting the video source, feature tracking algorithm, segmentation algorithm, feature extraction algorithm, and video player / video recorder.)

The system described by this thesis is formed from the following components:

1. Video source (a camera or file) - The OpenCV library provides functionality to capture video from a file or a live camera.

2. Feature extraction component - The feature extraction component is described in Chapter 3 as part of the feature tracking system.

3. Image segmentation component - The segmentation component is described in Chapter 2.

4. Optical flow component - The optical flow component is described in Chapter 3.

5. Video player / video recorder - The OpenCV library provides functionality to write a video stream to a file or to display it on the computer monitor.

The Mediator Design Pattern

By its nature, video processing is a demanding application for which careful data management is essential to high performance. Furthermore, the design and experimentation processes require that any vision system be made of

reusable, decoupled, extensible and manageable components. To meet these

requirements, the system described in this thesis utilizes the mediator design pattern [9].

The mediator design pattern defines a mediator class that serves as a means to

decouple a number of colleague classes. The mediator encapsulates the interaction of the

colleague classes to the extent that they no longer interact directly. All interaction

happens through the mediator.

The primary role of the mediator in this implementation is to distribute video

frames from the output of one component to the inputs of other components. Each

component inherits from a colleague class, which implements an interface that allows the

mediator to interact with the individual component. When a component is initialized, it

can subscribe to the output of another component. Any frames produced by the source

component will automatically be communicated to the subscribing component.

The following describes the typical message exchange between a colleague and the

mediator. When the colleague changes its internal state in a way that other colleagues

should know about, it communicates that change in state to the mediator class. In this

particular implementation, the colleague class notifies the mediator whenever the

colleague has produced a frame of video. The mediator then grabs the frames from the









output of the source colleague class and distributes the frames to the input of colleagues

subscribing to the output of the source class.
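The following minimal C++ sketch shows one way the colleague/mediator relationship described above could be structured. The class and method names (VisionMediator, Colleague, subscribe, frameReady) are illustrative assumptions, not the names used in the thesis code.

    #include <map>
    #include <vector>

    // Placeholder for a video frame (the real system passes OpenCV images).
    struct Frame {};

    class VisionMediator;

    // Base class for every processing component: it knows only its mediator and
    // notifies it whenever a new output frame has been produced.
    class Colleague {
    public:
        explicit Colleague(VisionMediator* m) : mediator(m) {}
        virtual ~Colleague() = default;
        virtual void onFrame(const Frame& f) = 0;   // input delivered by the mediator
    protected:
        void produced(const Frame& f);              // defined after VisionMediator
        VisionMediator* mediator;
    };

    // The mediator distributes frames from a source colleague to every colleague
    // that subscribed to that source, so components never reference each other.
    class VisionMediator {
    public:
        void subscribe(Colleague* source, Colleague* sink) {
            subscribers[source].push_back(sink);
        }
        void frameReady(Colleague* source, const Frame& f) {
            for (Colleague* sink : subscribers[source]) sink->onFrame(f);
        }
    private:
        std::map<Colleague*, std::vector<Colleague*>> subscribers;
    };

    inline void Colleague::produced(const Frame& f) { mediator->frameReady(this, f); }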

This mediator architecture is designed for expansion. For example, the mediator

class could implement a frame cache for an asynchronous system. Another possible

improvement is the addition of a configuration file for runtime configuration of the vision

system. The system would simply parse the configuration file and, based on its contents, connect the components in the correct order without writing a line of code or recompiling.

System Timing and the Observer Design Pattern

A central clock controls the timing of the entire system. The clock is implemented

using the observer design pattern. The purpose of this design pattern is to allow a

number of objects to observe another object, the subject. Whenever the state of the

subject changes, all other subscribing objects, the observers, are notified. In this

implementation, the subject is an alarm clock. Any class inheriting the observer class

CWatch can subscribe to the clock. As a parameter of the subscribe method call, the

observing class passes a time period in milliseconds. The clock will then notify the

observer periodically according to the time period.

In this implementation, the clock is useful when capturing data from a live camera.

The clock also makes possible multiple frequencies within the system. For example, a feature-tracking algorithm can run at a rate of 30 Hz while an image segmentation algorithm runs at a different rate, such as 1 Hz.
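A minimal sketch of the clock/observer arrangement is given below. The thesis names the observer base class CWatch and describes a subscribe call that takes a period in milliseconds; the clock class name, the polling update loop, and the exact method signatures here are assumptions used only to illustrate the pattern.

    #include <vector>

    // Observer base class (named CWatch in the thesis); derived components
    // override onTick to perform their periodic work.
    class CWatch {
    public:
        virtual ~CWatch() = default;
        virtual void onTick() = 0;
    };

    // Hypothetical subject: notifies each subscribed observer whenever its
    // period (in milliseconds) has elapsed since the observer's last update.
    class AlarmClock {
    public:
        void subscribe(CWatch* observer, long periodMs) {
            entries.push_back({observer, periodMs, 0});
        }
        // Called from the main loop with the current time in milliseconds.
        void update(long nowMs) {
            for (Entry& e : entries) {
                if (nowMs - e.lastMs >= e.periodMs) {
                    e.lastMs = nowMs;
                    e.observer->onTick();   // e.g. tracker every 33 ms, segmenter every 1000 ms
                }
            }
        }
    private:
        struct Entry { CWatch* observer; long periodMs; long lastMs; };
        std::vector<Entry> entries;
    };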

System Interaction

Figure 4-1 illustrates how the components of the system are organized. The

feature-tracking algorithm runs in parallel with the serial combination of the









segmentation algorithm and the feature-extracting algorithm. This organization is

necessary to meet the performance requirements of the system. Segmenting an image at

the full 30Hz frame rate is both computationally expensive and unnecessary. Instead, the

segmentation can happen at a much lower frequency while the feature tracking algorithm

runs at a full 30Hz. The mediator pattern described earlier makes implementing a

parallel architecture practical and easy. Furthermore, the system timing functionality

provided by the clock allows for multiple frequencies within the system.

The current implementation is not multi-threaded so this configuration does not

currently make full use of the benefits provided by the architecture. For example, the

segmentation algorithm may run for one second. In this second, the processor never

passes control to the clock object and therefore the more frequent feature tracking

updates are never made. In a multi-threaded environment however, this blocking would

not occur. The low frequency segmentation algorithm could run concurrently with the

higher frequency feature-tracking algorithm for the minor cost of thread management.

The analysis in Chapter 5, System Performance, assumes that the system runs in a multi-

threaded environment.














CHAPTER 5
SYSTEM PERFORMANCE ANALYSIS

System Performance Requirements

A MAV such as the one developed by the AVCAAF group has a limited

performance envelope. As a fixed wing aircraft, it cannot hover nor slow down in order

to complete a calculation before making a decision. This characteristic translates into

strict performance requirements for the underlying control systems. While the object

tracking system described by this paper is not a critical control system, it does run

concurrently with the other vision systems and therefore indirectly affects overall MAV

performance. While an object tracking system may not provide control critical services,

it would likely provide mission critical services for missions such as surveillance, search

and rescue or even function as part of a landmark-based navigation system. Another

possible role critical to the survival of the MAV is collision avoidance. An object

tracking system should meet the following performance requirements.

1. The system should be computationally efficient. It should not require a significant share of the available processing power. Other, more critical systems such as flight stability will run concurrently with the tracking system and should not be adversely affected by an inefficient tracking algorithm.

2. The system should run at a rate useful for the particular mission. An update rate of one update every 5 seconds may be sufficient for a navigation-by-landmark mission, but is totally unsuitable for navigating a forest of skyscrapers.









Computational Complexity of Each Algorithm

The dominant processes in this system are the feature tracking algorithm and the

image segmentation algorithm. This section describes the coupling of these components

and how the parameters of the components affect the performance of the system overall.

Chapters 2 and 3 describe the complexity of each component. The complexity of

the segmentation algorithm is O(I_size log I_size), while the complexity of the tracking algorithm is O(L_m N_F w^2 log I_size). Although the tracking algorithm has the w^2 term, experimentation shows that the segmentation algorithm is the more computationally costly. The reason for the disparity in actual CPU time is that L_m N_F w^2 is typically much smaller than the size of the image, I_size.

Analysis and Discussion

Description of the Test Video Sequence

The experiment was conducted on a video sequence that represents a typical MAV

flight. In this sequence the MAV is flying at an approximate altitude of 50 ft and an

average speed of 25 mph. The conditions are a sky with scattered cumulus clouds and

unlimited visibility (no haze). The MAV is flying over a green field with regular patches

of brown. Tethered to a fence in the field are two red, helium-filled, 8-ft-diameter

balloons. The balloons are partially covered on the bottom with a black tarp. The

presence of the tarp produces a black wedge in the circular image of each balloon.

The video sequence captures the relative motion of two balloons. Balloon A is on

the left and balloon B is on the right. The video sequence begins with the first frame in which balloon A enters the picture and ends with the last frame in which balloon B is present.









In the initial frames of the video sequence, both balloons A and B are present.

Balloon A is on the left and balloon B is on the right. As the MAV flies toward the

balloons, both balloons translate from right to left until only balloon B is within the

image. The MAV then turns left toward balloon B and flies directly toward it. After leaving the image, balloon A never re-enters the frame.

To make the turns the MAV must roll and therefore each turn is marked by a

rotation of the entire picture. Also, the MAV must make slight altitude changes that are

reflected as vertical motion in the picture.

The video sequence is 108 frames long and was captured at 30 frames per second.

Each frame in the sequence has a resolution of 720 by 480 pixels and is interlaced.

Preprocessing

The video sequence was preprocessed before running the experiments described in

this chapter. First the video was de-interlaced and then it was down sampled to 50% of

the original size.

De-interlacing was achieved by duplicating one of the fields for each frame in the

interlaced sequence. This reduced the vertical resolution by 50%, but this method

prevented the introduction of ghost artifacts in the video.

The video was down sampled by applying a radius-1 Gaussian blur and then a

point-sampled resize by half. This method avoids artifacts caused by aliasing, but at the

cost of a blurrier output. Bilinear or bicubic filters may be a better choice to down sample

the video and maintain quality without producing artifacts, but the result of the Gaussian

blur was sufficient.
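The preprocessing described above could be expressed with the OpenCV C interface roughly as follows. The field-duplication loop, the 3x3 blur kernel, and the nearest-neighbor resize after the blur are assumptions intended to mirror the description, not the exact code used for the experiments.

    #include <cstring>
    #include "cv.h"

    // De-interlace by duplicating one field: copy each even row over the
    // following odd row, which halves the effective vertical resolution.
    void deinterlace(IplImage* frame) {
        for (int y = 0; y + 1 < frame->height; y += 2) {
            std::memcpy(frame->imageData + (y + 1) * frame->widthStep,
                        frame->imageData + y * frame->widthStep,
                        frame->widthStep);
        }
    }

    // Down-sample to 50%: small Gaussian blur to suppress aliasing, then a
    // point-sampled (nearest-neighbor) resize to half the original size.
    IplImage* downsampleHalf(const IplImage* frame) {
        IplImage* blurred = cvCloneImage(frame);
        cvSmooth(frame, blurred, CV_GAUSSIAN, 3, 3);
        IplImage* half = cvCreateImage(cvSize(frame->width / 2, frame->height / 2),
                                       frame->depth, frame->nChannels);
        cvResize(blurred, half, CV_INTER_NN);
        cvReleaseImage(&blurred);
        return half;
    }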









Method

The segmentation and tracking algorithms are the primary subjects of the

experiment. Each algorithm has a set of parameters as described earlier in their

respective chapters. The objective of this experiment is to search the parameter space of

these algorithms to find an optimal configuration. The parameter space is huge and

therefore an exhaustive search of the parameter space is not feasible. Furthermore,

because of the nature of the problem, quantitative experimental results cannot be used to

determine the success of each experiment. Instead, a human must evaluate each result

set. The human observer must classify the experiment as successful or unsuccessful.

To search the parameter space, the parameters were adjusted in the direction of increasing computational efficiency until the algorithm no longer met its requirements. Each of the

algorithms was tested individually and compared against its requirements.

Neither algorithm depends on the other to meet its requirements. Each algorithm

however will run on the same hardware and is therefore constrained by the availability of

CPU time. This analysis first describes how each algorithm behaves independent of the

other. The analysis then describes the performance of a system containing these two

algorithms, specifically the frequency at which each algorithm can run in real-time given

a specific hardware configuration.

The computational performance of each experiment was measured using a code

profiler. The profiler measured the total CPU time used by each algorithm.

Environment

The experiments described in this chapter were performed on a Pentium III running

at 800MHz with 512 MB RAM.









The Segmentation Algorithm

Each experiment to measure the performance of the segmentation algorithm was

performed by running the algorithm on all 108 frames in the video sequence. To verify

that the parameter k does not affect the computational performance, Table 5-1 shows the results for k = 100, 1000, 5000 and 10000. The different values of k did not change the

time of computation, but did change the quality of the output. As expected, lower values

of k produced smaller components and therefore more components while larger values of

k produced the opposite. By qualitative observation, k = 5000 produced the best

segmentation of the test video. This value of k was used to perform the remaining

experiments.












Figure 5-1. Graph-based Segmentation Output. A) Original image, B) k = 100, C) k = 5,000, D) k = 10,000.










Table 5-1. CPU Time for the Graph-Based Segmentation Algorithm

Experiment   k       C_min-size [pixels]   I_size [pixels]   Total CPU Time [sec]   Average CPU Time per Frame [sec]
1            100     50                    360x240           452.7                  4.2
2            1000    50                    360x240           406.1                  3.8
3            5000    50                    360x240           443.0                  4.1
4            10000   50                    360x240           428.0                  4.0
5            5000    50                    270x180 (75%)     224.2                  2.1
6            5000    50                    180x120 (50%)     107.4                  1.0
7            5000    50                    90x60 (25%)       27.0                   0.25

The computational complexity of the segmentation algorithm dictates that the

algorithm's time of execution is dependent on the size of the image. Table 5-1 shows the

CPU time for the algorithm running on an image scaled by 1.0, 0.75, 0.5 and 0.25. The

total CPU time values accurately reflect the algorithm complexity O(I_size log I_size).



Figure 5-2. Segmentation Algorithm Performance. CPU load plotted against segmentation period for image sizes 360x240, 270x180 and 180x120.

The algorithm meets its requirements for scale values 1.0, 0.75 and 0.5 but not for

0.25. There is a particular set of frames (frames 68 to 90) where the algorithm fails at









every scale. In these frames the algorithm confuses a brown patch of dirt and grass in a

field with the red balloon in the foreground.

Pyramidal Lucas-Kanade Feature Tracker

The dominant parameters in the feature-tracking algorithm are Lm, w, and NF.

Table 5-2 shows results for combinations of Lm and w. The algorithm requires a set of

features to track as input. The OpenCV library contains a function

cvGoodFeaturesToTrack that finds the best features suitable for the tracking algorithm [7, 8]. This function requires values for the parameters QF and NF. These parameters were described in Chapter 3.













Figure 5-3. Results from the Pyramidal Lucas-Kanade Feature Tracker. A) NF = 200, B)
NF = 1,000

To improve performance the 30 FPS video was sampled at the lower frequencies of

15, 10 and 5 FPS. Running at lower frequencies requires that the algorithm work for

larger displacements of the features between frames. This scenario is precisely what the pyramidal L-K tracker was designed to handle. The algorithm maintained a good track for the 15 and 10 Hz runs. The algorithm started losing points during tracking for the 5 Hz run, even with the maximum number of pyramid levels, Lm = 4.









Table 5-2. CPU Time for the Pyramidal Implementation of the Lucas Kanade Feature-tracking Algorithm

Experiment   Frames per Second   Lm   w   NF     Average CPU Time per Frame [µsec]   CPU Load
8            30                  1    2   100    6,677.45                            20.03%
9            30                  2    2   100    8,495.98                            25.49%
10           30                  3    2   100    7,287.76                            21.86%
11           30                  1    3   100    6,909.44                            20.73%
12           30                  2    3   100    7,950.88                            23.85%
13           30                  3    3   100    8,758.49                            26.28%
14           30                  3    3   50     5,617.95                            16.85%
15           30                  3    3   200    15,172.56                           45.52%
16           30                  3    3   1000   73,582.15                           220.75%
17           15                  3    3   200    13,276.56                           19.91%
18           10                  3    3   200    14,025.36                           14.03%
19           5                   4    3   200    13,326.06                           6.66%

Note: In experiment 16, the feature extractor quality parameter had to be set to 0.005 to obtain enough points to meet NF = 1000.

Coupling of Algorithms

The segmentation algorithm requires significantly more computation than the

feature tracker. This is easy to understand by recalling the computational complexity of

the algorithms. The segmentation algorithm was highly dependent on the size of the image, I_size, while the feature-tracking algorithm was highly dependent on the number of feature points to be tracked, NF. This difference is intuitive: the segmentation algorithm

must run on every pixel in the image while the feature tracking algorithm must only run

on a small set of pixels. Also, the tracking algorithm exhibited robust performance at the lower frequencies of 15 and 10 Hz. As with most algorithms, there is a tradeoff between accuracy

and performance. Figure 5-4 charts the performance of the coupled algorithms against

the period of the segmentation algorithm.










Figure 5-4. The Performance of the Coupled Segmentation and Tracking Algorithms.
Results plotted for images with sizes 360x240, 270x180 and 180x120 at
tracking frequencies of 30, 15 and 10 FPS.

The AVCAAF MAV flies at a speed of 36.7 ft/sec (25 MPH). Assuming that the

MAV is flying in a straight line in a static world, in order to detect an object before it is

within 100 ft, the MAV must refresh its knowledge of objects once every 2.73 seconds (the time to cover 100 ft at 25 mph). Figure 5-4 shows that this can be achieved using the 50% scaled image (180x120) and a tracking frequency between 10 and 30 FPS without

loading the CPU any more than 65%. Tracking at 10 FPS puts a 50% load on the

processor while tracking at 30 FPS loads the processor approximately 64%. Using the

75% scaled image, the system would exceed the available CPU power at a 30 FPS

tracking frequency and use 100% of the processor for tracking frequencies of 10 and 15

FPS. Using 100% of the processor for object tracking is impractical because there are

likely other systems more critical to MAV flight requiring CPU resources.














CHAPTER 6
CONCLUSION AND FUTURE WORK

The results from Chapter 5 show that the pyramidal implementation of the Lucas Kanade feature tracker is a robust and computationally efficient algorithm. It is capable of

meeting functional and performance requirements over a range of configurations. The

graph-based segmentation algorithm, while capable of meeting the functional and

performance requirements marginally, did not perform in a robust or computationally

efficient manner.

Future Work for the Pyramidal Implementation of the Lucas Kanade Feature
Tracker

The Lucas Kanade algorithm was designed as an image registration algorithm for the

purposes of stereo vision, but it has many applications beyond stereo vision. This paper

shows how the algorithm can be used to track moving objects from a moving camera.

Structure from motion and image mosaic registration are other applications that could be

useful in MAV missions. The Lucas Kanade algorithm serves as a unifying framework for such image alignment tasks [10].

The pyramidal implementation allows for efficient computation of optical flow for a specific set of points in the image; optical flow can be computed for an arbitrary set of points or even for the entire image. Using the architecture described in Chapter 4, the optical flow

processing component can be shared by other video components in the system. This

reduces redundant calculations.









Future Work for the Segmentation Algorithm

A possible optimization for the segmentation algorithm is to account for distance in the image (the greater the distance, the less detail the image contains). The segmentation

algorithm functions at a fixed level of detail. This fixed level of detail is optimized for a

specific distance in the image, but not for all distances in the image. The AVCAAF

MAV uses a vision system for estimating the location of the horizon in the image [3].

One improvement would be to use the horizon estimate as a way to identify the distance

to a pixel in 3D space. The segmentation algorithm could then be adjusted to discriminate at a finer level of detail for pixels farther away in the image.















LIST OF REFERENCES


[1] S. Todorovic and M.C. Nechyba, "Detection of Artificial Structures in Natural-Scene Images Using Dynamic Trees," Proc. 17th Int'l. Conf. on Pattern Recognition, Cambridge, UK, Int'l. Assoc. for Pattern Recognition, 2004, pp. 35-39.

[2] S. Todorovic and M.C. Nechyba, "Intelligent Missions for MAVs: Visual Contexts for Control, Tracking and Recognition," Proc. 2004 IEEE Int'l. Conf. on Robotics and Automation, New Orleans, LA, Apr. 2004, pp. 1640-1645.

[3] S. Todorovic, M.C. Nechyba, and P.G. Ifju, "Sky/Ground Modeling for Autonomous MAV Flight," Proc. IEEE Int'l. Conf. on Robotics and Automation, Taipei, Taiwan, vol. 1, 2003, pp. 1422-1427.

[4] S. Todorovic and M.C. Nechyba, "Towards Intelligent Mission Profiles of Micro Air Vehicles: Multiscale Viterbi Classification," Lecture Notes in Computer Science, Computer Vision - ECCV 2004, 8th European Conf. on Computer Vision, Prague, Czech Republic, vol. 3022, May 2004, pp. 178-189.

[5] P.F. Felzenszwalb and D.P. Huttenlocher, "Efficient Graph-Based Image Segmentation," Int'l J. Computer Vision, vol. 59, no. 2, September 2004, pp. 167-181.

[6] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proc. Image Understanding Workshop, Washington, D.C., 1981, pp. 121-130.

[7] J. Bouguet, "Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm," OpenCV Documentation, Intel Corporation, Microprocessor Research Labs, Santa Clara, CA, 2000.

[8] J. Shi and C. Tomasi, "Good Features to Track," Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Seattle, WA, 1994, pp. 593-600.

[9] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Boston, MA, 1995.

[10] S. Baker and I. Matthews, "Lucas-Kanade 20 Years On: A Unifying Framework," Int'l J. Computer Vision, vol. 56, no. 3, February 2004, pp. 221-255.















BIOGRAPHICAL SKETCH

Ted Belser (Von) was born in Gainesville, Florida, in 1978. Von participated in the

International Baccalaureate Program at Eastside High School in Gainesville. He

continued his education at the University of Florida where he earned his Bachelor of

Science degrees in electrical engineering and computer engineering. While participating

in a software development startup company, he attended graduate school at the University

of Florida and earned a Master of Science degree in electrical engineering.