
STRATEGIC LEARNING

By

FIDAN BOYLU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006


Copyright 2006 by Fidan Boylu


To my mother, Leyla Boylu


ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Gary J. Koehler, for giving me the great honor and pleasure to work with him. He has been beyond an advisor to me and I have always admired and looked up to him. He has become the second most important person in my life after my mom over the years I spent in Gainesville. He has an extraordinary power of making research enjoyable with his surprising ideas, his unique way of thinking and amazing style of attacking a problem. He has given me the most valuable skills, tools and lessons that will last a lifetime. I would like to thank him especially for his endless help and support at times of stress.

I also wish to thank my cochair, Dr. Haldun Aytug, for always encouraging and challenging me throughout the process. It would not have been so much fun without his brilliance and smart ideas. I also would like to thank Dr. David Sappington and Dr. Praveen Pathak for their time and interest.

Special thanks go to my friend Bilge Gokhan Celik for making Gainesville a more bearable place by always having the time to listen to my pointless and endless monologues and also for making me feel better by proving that there is at least one other person around that could worry about anything and everything more than I could. I also wish to thank Arzu Erenguc for being the best friend ever.

Last but certainly not least, I am forever indebted to my mother, Leyla Boylu, for her influence and support in realizing my career goals and for teaching me how to be tough and not to give up no matter how difficult things get.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION

2 STRATEGIC LEARNING
   Machine Learning and Data Mining Paradigm: Supervised Learning
   Economic Machine Learning: Utility-Based Data Mining
      Cost Sensitive Learning
      Data Acquisition Costs
   Strategic Learning
      Adversarial Classification
      Multi-Agent Reinforcement Learning
      Learning in the Presence of Self-Interested Agents
      Strategic Learning with Support Vector Machines
      Strategic Learning: Future Research

3 LEARNING IN THE PRESENCE OF SELF-INTERESTED AGENTS
   Introduction
   Related Literature
   Illustration
   Research Areas
   Summary

4 DISCRIMINATION WITH STRATEGIC BEHAVIOR
   Introduction and Preliminaries
      Strategic Learning
      Related Literature
   Linear Discriminant Functions
      Linear Discriminant Methods
      Statistical Learning Theory
      Support Vector Machines
   Learning while Anticipating Strategic Behavior: The Base Case
      The Agent Problem
      The Base Case
   Learning while Anticipating Strategic Behavior: The General Case
      Properties of P3
      Strategic Learning Model
   Sample Application
   Stochastic Versions
   Conclusion and Future Research

5 USING GENETIC ALGORITHMS TO SOLVE THE STRATEGIC LEARNING PROBLEM
   An Unconstrained Formulation for Strategic Learning
   A Genetic Algorithm Formulation for Strategic Learning
   Experimental Results
   Discussion and Future Research

6 STRATEGIC LEARNING WITH CONSTRAINED AGENTS
   Introduction and Preliminaries
   Model
   Application to Spam Filtering
   Conclusion

7 CONCLUSION

APPENDIX: PROOF OF THEOREM 1
   Lemma 1
   Theorem 1

LIST OF REFERENCES
BIOGRAPHICAL SKETCH


LIST OF TABLES

4-1 Possible cases depending on $z_i$
4-2 Different regions of costs
4-3 Negative cases
4-4 Positive cases
4-5 Converted German credit data
4-6 1-norm strategic SVM solutions (P4)
4-7 2-norm strategic SVM solutions (P4)
4-8 Strategic SVM solutions (P4) for 1000 instances
5-1 Different regions of costs
5-2 GA sketch
5-3 2-norm strategic SVM solutions (P4) versus GA for 100 instances


LIST OF FIGURES

2-1 Strategic Learning framework
2-2 Wider margins
4-1 Theorem 1
4-2 Multi-round non-strategic SVM
4-3 Positive case with $r_i = 1$ and 0.5
4-4 Positive case with $r_i = 1$ and 0
4-5 Negative case with $r_i = 6$
4-6 Negative case with $r_i = 1$
4-7 A typical graph of $f(b \mid w)$
4-8 Possible cases for points of discontinuity of $f(b \mid w)$
6-1 Spam email with $r_i = 2$
6-2 Non-spam email without strategic behavior


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

STRATEGIC LEARNING

By

Fidan Boylu

August 2006

Chair: Gary J. Koehler
Cochair: Haldun Aytug
Major Department: Decision and Information Sciences

It is reasonable to anticipate that rational agents who are subject to classification by a principal using a discriminant function might attempt to alter their true attribute values so as to achieve a positive classification. In this study, we explore this potential strategic gaming and develop inference methods for the principal to determine discriminant functions in the presence of strategic behavior by agents, and we show that this strategic behavior results in an alteration of the usual learning rule.

Although induction methods differ from each other in various aspects, one essential issue is common to all: the assumption that there is no strategic behavior inherent in the sample data generation process. In that respect, the main purpose of this study is to research the question, "What if the observed attributes are deliberately modified by the acts of some self-interested agents who will gain a preferred classification by engaging in such behavior?" Hence, we investigate the need for anticipating this kind of strategic behavior and incorporating it into the learning process.


Since classical learning approaches do not consider the existence of such behavior, we aim to contribute by using rational expectations theory to determine optimal classifiers that correctly classify instances when those instances are strategic decision-making agents. We carry out our analysis for a powerful induction method known as support vector machines.

First, we define the framework of Strategic Learning. For separable data sets, we characterize an optimal strategy for the principal that fully anticipates agent behavior in a setting where agents have fixed reservation costs. For non-separable data sets, we provide an MIP formulation and apply it to a credit-risk evaluation setting. Then, we modify our framework by considering a setting where agent costs and reservations are both unknown to the principal. Later, we develop a Genetic Algorithm for Strategic Learning to solve larger versions of the problem. Finally, we investigate the situation where there is a need to enforce constraints on agent behavior in the context of Strategic Learning, and thus we extend the concept of Strategic Learning to constrained agents.


CHAPTER 1
INTRODUCTION

In many situations a principal gathers a data sample containing positive and negative examples of a concept to induce a classification rule using a machine learning algorithm. Although learning algorithms differ from each other in various aspects, one essential issue is common to all: the assumption that there is no strategic behavior inherent in the sample data generation process. In that respect, we ask the question, "What if the observed attributes are deliberately modified by the acts of some self-interested agents who will gain a preferred classification by engaging in such behavior?" For such cases there is a need to anticipate this kind of strategic behavior and incorporate it into the learning process. Classical learning approaches do not consider the existence of such behavior. In this dissertation we study this paradigm.

This dissertation is organized as a collection of articles, each of which covers one of several aspects of the entire study. Each chapter corresponds to an article that is complete within itself. Because of this self-contained style of preparation, we allow for some redundancy across chapters. This chapter is intended to give an outline of the dissertation.

In both Chapters 2 and 3, we provide an overview of Strategic Learning, with each chapter intended to reach a different type of reader. In Chapter 4, we give a comprehensive and detailed study of Strategic Learning and provide the details of the model. In Chapter 5, we develop a Genetic Algorithm for Strategic Learning to solve larger versions of the problem. In Chapter 6, we extend the Strategic Learning model to handle more complex agent behaviors, and in Chapter 7 we conclude with a discussion of results and future work.


CHAPTER 2
STRATEGIC LEARNING

Machine Learning and Data Mining Paradigm: Supervised Learning

Today's highly computerized environment makes it possible for researchers and practitioners to collect and store any kind or amount of information easily in electronic form. As a result, an enormous amount of data in many different formats is available for analysis. This increase in the availability and easy access of data enables many companies to constantly look for ways to make use of their vast data collections to create competitive advantage and keep pace with the rapidly changing needs of their customers. This strong demand for utilizing the available data has created a recent interest in applying machine learning algorithms to analyze large amounts of corporate and scientific data, a practice which is called "data mining." Here we use the terms data mining and machine learning interchangeably.

An important type of machine learning commonly used in data mining tasks is called supervised learning, which is performed by making use of the information collected from a set of examples called the training set. The training set usually takes the form $S = \{(x_1, y_1), \ldots, (x_\ell, y_\ell)\}$ where $\ell$ is the total number of available examples. Each example (also called an instance) is denoted by $x_i = (x_{i1}, \ldots, x_{in})$, the vector of $n$ attributes for the example. The label of each example is denoted by $y_i$ and is assumed to be known for each instance, which is why supervised learning is sometimes referred to as learning with a teacher. Given this setting, we are interested in choosing a hypothesis that will be able to discriminate between classes of instances. A wide range of algorithms has been developed for this task, including decision trees (Quinlan 1986), neural networks (Bishop 1995), association rules (Agrawal et al. 1993), discriminant functions (Fisher 1936), and support vector machines (Cristianini and Shawe-Taylor 2000). Throughout this chapter, we focus on binary classification where $y_i \in \{-1, +1\}$. Informally, we are interested in classification of two classes of instances, which we call the negative class and the positive class, respectively.

We choose a collection of candidate functions as our hypothesis space. For example, if we are interested in a linear classifier, then the hypothesis space consists of functions of the form $w'x + b$. Under these circumstances, the goal is to learn a linear function $f: X \subseteq \mathbb{R}^n \to \mathbb{R}$ such that $f(x) \geq 0$ if $x \in X$ belongs to the positive class and $f(x) < 0$ if it belongs to the negative class.

Recently, a new paradigm has evolved for the binary classification problem. We call it Strategic Learning. One typical aspect common to all data mining methods is that they use training data without questioning the future usage of the learned function. More specifically, none of these algorithms take into account the possibility that any future observed attributes might be deliberately modified by their source when the source is a human or a collection of humans. They fail to anticipate that people (and collections of people) might game the system and alter their attributes to attain a positive classification. As an example, consider the credit card approval scenario where certain data (such as age, marital status, checking account balance, number of existing credit cards, etc.) are collected from each applicant in order to be able to make an approval decision. There are hundreds of websites that purport to help applicants increase their credit score by offering legal ways to manipulate their information prior to the credit application. Also, the case of a terrorist trying to get through airline security is another vivid example of how certain individuals might try to act proactively in order to stay undetected under classification systems in which a decision maker determines functions such as $f$ to classify between classes of individuals. Throughout this chapter, we speak collectively of these yet-to-be-classified individuals as agents and of the decision maker as the principal.

Until now most researchers have assumed that the observed data are not strategic, which implicitly assumes that the attributes of the agents are not subject to modification in response to the eventual decision rule. However, this type of strategic behavior is observed in many real-world settings, as suggested by the above examples. Thus, it is reasonable to think that individuals or companies might try to game systems, and Strategic Learning aims to develop a model for this specific type of classification setting where the instances that are subject to classification are known to be self-interested, utility-maximizing, intelligent decision-making units.

In the Strategic Learning setting, each agent $i$ has a utility function, a true vector of attributes $x_i$ and a true group membership (i.e., label) $y_i$. For linear utility, an agent has a vector of costs for modifying attributes, $c_i$, and, for a given task, a reservation cost $r_i$. For the task of achieving a particular classification, the reservation cost can be viewed as the maximum effort that an agent is willing to exert in order to be classified as a positive example, which we assume is desirable. On the principal's side, $C_{y_i}$ is assumed to be the penalty associated with misclassification of a true type $y_i$. In that respect, we develop a model within utility theory combined with cost-sensitive learning.
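To make this notation concrete, the short sketch below (not from the dissertation; all parameter values are hypothetical) represents a single agent and the principal's linear decision rule.

```python
import numpy as np

# Illustrative sketch of the Strategic Learning setting.  The weight vector,
# threshold, attribute values, costs and reservation cost are made-up numbers
# used only to fix the notation.
w = np.array([0.9, -0.4])      # principal's weight vector
b = -0.2                       # principal's threshold
x_i = np.array([0.1, 0.8])     # agent i's true attribute vector
y_i = -1                       # agent i's true label
c_i = np.array([2.0, 1.5])     # agent i's cost of changing each attribute by one unit
r_i = 4.0                      # agent i's reservation cost for this task

def label(x, w, b):
    """Principal's decision rule: +1 if w'x + b >= 0, else -1."""
    return 1 if w @ x + b >= 0 else -1

print(label(x_i, w, b))        # classification of the unmodified attributes (-1 here)
```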


Before going into the details of Strategic Learning we look at a related area of machine learning where the discovery process considers economic issues such as cost and utility. In utility-based data mining the principal is a utility-maximizing entity who considers not just classification accuracy in learning but also various associated costs. We briefly review this area before discussing Strategic Learning.

Economic Machine Learning: Utility-Based Data Mining

Utility-based data mining (Provost 2005) is closely related to the problem of Strategic Learning. This area explores the notion of economic utility and its maximization for data mining problems, as there has been a growing interest in addressing economic issues that arise throughout the data mining process. It has often been assumed that training data sets were freely available, and thus many researchers focused on objectives like predictive accuracy. However, economic issues come into play in data mining since over time data acquisition may become costly. Utility-based methods for knowledge induction incorporate data acquisition costs and trade these off against predictive accuracy so as to maximize the overall principal utility. Hence these methods become more meaningful and reflective of real-world usage.

Utility-based data mining is a broad topic that covers and incorporates aspects of economic utility in data mining and includes areas such as cost-sensitive learning, pattern extraction algorithms that incorporate economic utility, effects of misclassification costs on data purchase, and types of economic factors. Simply put, all machine learning applications that take into account the principal's utility considerations fall into this category of data mining research. The researchers in this area are focused primarily on two main streams. One stream focuses on cost-sensitive learning (i.e., cost assigned to misclassifications) (Arnt and Zilberstein 2005, Ciraco et al. 2005, Crone et al. 2005, Holte and Drummond 2005, McCarthy et al. 2005, and Zadrozny 2005). The other stream focuses on the costs associated with the collection of data (i.e., data acquisition cost) (Kapoor and Greiner 2005, Melville et al. 2005, Morrison and Cohen 2005). In the following sections we look at these two streams of research.

Cost Sensitive Learning

The first type of utility-based data mining explores the problem of optimal learning when different misclassification errors incur different penalties. This area has been revisited many times (Elkan 2001). Cost-sensitive classification has been a growing area of research and aims to minimize the expected cost incurred in misclassifying future instances rather than focusing on improving the predictive accuracy, which is usually measured by the number of correctly classified instances. This shift of focus from predictive accuracy to cost of misclassifications is achieved by assigning penalties for misclassified instances based on the actual label of the instance. For example, in medical diagnosis domains, identifying a sick patient as healthy is usually more costly than labeling a healthy patient as sick. Likewise, in the spam filtering domain, misclassification of a non-spam email is significantly more costly than misclassifying a spam email.

Arnt and Zilberstein (2005) examine a previously unexplored dimension of cost-sensitive learning by pointing to the fact that it is impractical to measure all possible attributes for each instance when the final result has time-dependent utility; they call this problem time and cost sensitive classification.

Holte and Drummond (2005) review the classic technique of classifier performance visualization, the ROC (receiver operating characteristic) curve, which is a two-dimensional plot of the false positive rate versus the true positive rate, and argue that it is inadequate for the needs of researchers and practitioners, as it does not allow questions to be answered such as what a classifier's performance is in terms of expected cost, or for which misclassification costs a classifier outperforms others. They demonstrate the shortcomings of ROC curves and argue that cost curves overcome these problems. In that respect, the authors point to the fact that cost-sensitive measurement of classifier performance should be utilized, since misclassification costs should be an important part of classifier performance evaluation.

Data Acquisition Costs

The second area of utility-based data mining, cost of data acquisition, is an important area which has potential implications for real-world applications and thus is a topic that receives attention from industry as well as academia. For example, for large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them (Weiss and Provost 2003). In many classification tasks, training data have missing values that can be acquired at a cost (Melville et al. 2005). For example, in the medical diagnosis domain, some of a patient's attributes may require an expensive test to be measured. To be able to build accurate predictive models, it is important to acquire these missing values. However, acquiring all the missing values may not always be possible due to economic or other types of constraints. A quick solution would be to acquire a random subset of values, but this approach may not be the most effective. Melville et al. (2005) propose a method called active feature-value acquisition which incrementally selects the feature values that are most cost-effective for improving the model's accuracy. They present two policies, Sampled Expected Utility and Expected Utility, that acquire feature values for inducing a classification model based on an estimate of the expected improvement in model accuracy per unit cost. Other researchers investigate the same problem under a scenario where the number of feature values that can be purchased is limited by a budget (Kapoor and Greiner 2005).

Whereas utility-based data mining incorporates a principal's utility, Strategic Learning additionally considers the possibility that the objects of classification are self-interested, utility-maximizing, intelligent decision-making units. We believe that Strategic Learning considerations make a substantial contribution to utility-based data mining. However, it should be emphasized that Strategic Learning should be considered a new stream of utility-based data mining. Strategic Learning looks at problems where different classes of instances with different misclassification costs and utility structures can act strategically when they are subject to discrimination. In the next section, we cover the details of Strategic Learning.

Strategic Learning

As was mentioned before, we look into a certain class of problems in which a decision maker needs to discover a classification rule to classify intelligent agents. The main aspect of this problem that distinguishes it from standard data mining problems is that we acknowledge the fact that the agents may engage in strategic behavior and try to alter their characteristics for a favorable classification. We call this set of learning problems Strategic Learning. In this type of data mining, the key point is to anticipate agent strategic behavior in the induction process. This has not been addressed by any of the standard learning approaches.

Depending on the type of application, the agent can be thought of as any type of intelligent decision-making unit which is capable of acting strategically to maximize its individual utility function. The following are some examples of strategic agents and corresponding principals in different real-world settings:

- A credit card company (the principal) decides which people (agents) get credit cards.
- An admission board at a university (the principal) decides which applicants (agents) get admitted.
- An auditing group (the principal) tries to spot fraudulent or soon-to-be-bankrupt companies (agents).
- An anti-spam package (the principal is the package creator) tries to correctly label and then screen spam (which is agent created).
- Airport security guards (the principal) try to distinguish terrorists from normal passengers (agents).

In each of these settings, and in many others not mentioned here, if agents know or have some notion of the decision rule that the principal uses, they can try to modify their attributes to attain a positive classification by the principal. In most cases, the attributes used by a principal for classification are obvious and many people can discern which might be changed for their benefit. In the credit approval case, it is likely that an increase in one's checking account balance or getting a job will be beneficial. Thus, it is reasonable to anticipate that agents will attempt to manipulate their attributes (whether through deceit or not) whenever doing so is in their best interest. This gaming situation between the agents and the principal leads to a need for anticipating this kind of strategic behavior and incorporating it into the standard learning approaches. Furthermore, if one uses classical learning methods to classify individuals when they are strategic decision-making units, then it might be possible for some agents to eventually game the system.


To date, very few learning methods fall in the Strategic Learning paradigm. The closest is called adversarial classification, which we review below. Another related area is reinforcement learning, which has some aspects of Strategic Learning that we also discuss below.

Adversarial Classification

Dalvi et al. (2004) acknowledge the fact that classification should be viewed as a game between the classifier (which we call the principal) and the adversary (which we call the agent) for all the reasons that we have discussed so far. They emphasize that the problem is observed in many domains such as spam detection, intrusion detection, fraud detection, surveillance and counter-terrorism. In their setting, the adversary actively manipulates data to find ways to make the classifier produce a false decision. They argue that the adversary can learn ways to defeat the classifier, which would result in a degrading of its performance, as the classifier needs to modify its decision rule every time agents react by manipulating their behaviors. Clearly, this leads to an arms race between the classifier and the adversary, resulting in a never-ending game of modifications on both sides, since the adversary will react to the classifier's strategy in every period and the classifier will need to adjust accordingly in the next period. This poses an economic problem as well, since in every period more human effort and cost are incurred to modify the classifier to adapt to the latest strategy of the adversary.

They approach the Strategic Learning problem from a micro perspective by focusing on a single-shot version of the classification game where only one move by each player is considered. They start by assuming that the classifier initially decides on a classification rule when the data are not modified by the adversary. But, knowing that the adversary will deploy an optimal plan against this classification rule, the classifier instead uses an optimal decision rule which takes into account the adversary's optimal modifications. They focus on a Bayesian classification method. Although their approach is quite explanatory, it is an initial effort, since their goal was to explain only one round of the game. However, as they discuss, the ultimate solution is the one that solves the repeated version of this game.

However, by viewing the problem as an infinite game played between two parties, they tend to encourage modifications rather than prevent them. That leads to a key question that needs to be answered: is there an optimal strategy for the classifier which will prevent an adversary from evolving against the classifier round after round when this strategic gaming is pursued infinitely? Or is it possible to prevent an agent's actions by anticipating them before the fact and taking corrective action rather than reacting to the outcome after the game is played? These points are intensively investigated in Chapters 4 and 5, where we formulate the problem as the well-known principal-agent problem in which the principal anticipates the actions of agents and uses that information to discover a fool-proof classifier that takes into account the possible strategic behavior. In that sense, this approach is more of a preventive one than a reactive one. Also, this model involves many strategic agents acting towards one principal, as opposed to a two-player game setting.

Multi-Agent Reinforcement Learning

Another related area of research is reinforcement learning, which involves learning through interactions (Samuel 1959 and Kaelbling et al. 1996). In this section, we briefly discuss this area and point out its similarities to and differences from Strategic Learning. Reinforcement learning is a field of machine learning in which agents learn by using the reward signals provided by the environment. Essentially, an agent understands and updates its performance according to its interactions with its environment. In reinforcement learning theory an agent is characterized by four main elements: a policy, a reward function, a value function, and a model of the environment. The agent learns by considering every unique configuration of the environment. An agent acts according to a policy, which is essentially a function that tells the agent how to behave by taking in information sensed from the environment and outputting an action to perform. Depending on the action performed, the agent can go from one state to another, and the reward function assigns a value to each state the agent can be in. Reinforcement learning agents are fundamentally reward-driven, and the ultimate goal of any reinforcement learning agent is to maximize its accumulated reward over time. A reward function outputs the immediate reward of a state, while the value function specifies the long-run reward of that state after taking into account the states that are likely to follow and the rewards available in those states. This is generally achieved through a particular sequence of actions that reach the states in the environment offering the highest reward (Sutton and Barto 1998).

In that respect, a Strategic Learning agent is quite similar to a reinforcement learning agent, as the former also aims to maximize its utility while attempting to reach a preferred classification state. Also, the environment can be thought of as analogous to the principal, who essentially decides on the reward function (the classifier). However, the main difference between reinforcement learning and Strategic Learning is that in the former learning is realized over time through interactions between the environment and agents, something that Strategic Learning essentially aims to avoid. The ultimate goal in Strategic Learning is to anticipate the possible actions of the agents and take preventive action. Strategic Learning attempts to avoid those interactions for the principal's sake by anticipating the agents' behavior, since otherwise the principal is forced to modify its classifier after every interaction with the agents, which is time consuming and costly in many data mining problems and, more importantly, causes degradation of the classifier's performance. In that respect, by using the Strategic Learning model, a principal takes a more sophisticated approach and plans for his/her future course of action as opposed to doing simple reaction-based, trial-and-error exploration as in reinforcement learning. In addition, in reinforcement learning the interaction between the principal (the environment) and the agent is cooperative, unlike in Strategic Learning where it is assumed adversarial.

A more specialized type of reinforcement learning with more insight into Strategic Learning is reinforcement learning in leader-follower multi-agent systems (Bhattacharyya and Tharakunnel 2005, Littman 1994). In leader-follower systems, which have a number of applications such as monitoring and control of energy markets (Keyhani 2003), e-business supply chain contracting and coordination (Fan et al. 2003), and modeling public policy formulation in pollution control, taxation, etc. (Ehtamo et al. 2002), a leader decides on and announces an incentive to induce the followers to act in a way that maximizes the leader's utility, while the followers maximize their own utilities under the announced incentive scheme. This is similar in many ways to our principal-agent terminology, as the leader acts as the principal and tries to identify and announce the ultimate decision rule that maximizes his/her own objective, while the agents act as followers who seek to maximize their own utilities.


Bhattacharyya and Tharakunnel (2005) apply this kind of sequential approach to propose a reinforcement-based learning algorithm for repeated-game leader-follower multi-agent systems. One of the interesting contributions of their work is the introduction of non-symmetric agents with varying roles to the existing multi-agent reinforcement learning research. This is analogous to our asymmetric setting where both the principal and agents are self-interested, utility-maximizing units, with the exception that the principal has a leading role in setting the classifier to which the agents react. This is similar to the interaction between leader and followers in their work.

However, a key difference between the two is the sequential nature of the decisions made by the leader and the followers. In the leader-follower setting, the learning takes place progressively from period to period as leader and followers interact with each other according to the ideas of trial-and-error learning. Essentially, the leader announces his/her decision at specific points in time based on the aggregate information gained from the earlier rounds, and the followers make their decisions according to the incentive announced at that period. In this scenario, the leader aims to learn an optimal incentive based on the cumulative information from the earlier periods, while the followers try to learn optimal actions based on the announced incentive. Learning is achieved over successive rounds of decisions, with information being carried from one round to the next. Even though the leader-follower approach has similarities with Strategic Learning, there are fundamental differences. First, Strategic Learning is an anticipatory approach. In other words, in Strategic Learning, learning is achieved by anticipating strategic behavior by agents and incorporating this anticipation in the learning process rather than following an after-the-fact reactive approach. Second, Strategic Learning does not involve periods, based on the principles of principal-agent theory. Strategic Learning results show that a sequential approach will often yield suboptimal results.

Learning in the Presence of Self-Interested Agents

We examine Strategic Learning more extensively in Chapters 3 and 4. In particular, in Chapter 3 we explore the problem under the name of Learning in the Presence of Self-Interested Agents and propose the framework for this type of learning, which we briefly discuss in this section.

Refreshing the previously developed notation, let $X \subseteq \mathbb{R}^n$ be the instance space, which can be partitioned into two sets: a set consisting of positive cases consistent with some underlying but unknown concept, and the remaining negative cases. For example, in Abdel Khalik and El-Sheshai (1980) the underlying concept is described by firms that won't default or go bankrupt. Attributes such as the firm's ratio of retained earnings to total tangible assets, etc. were used in this study. One forms a training set by randomly drawing a sample of instances of size $\ell$ from $X$, and then determining the true label (-1 or 1) of each such instance. Using this sample, a machine learning algorithm infers a hypothesis. A key consideration is the choice of sample size. Whether it is large enough to control generalization error is of key importance.

Under our setting, the principal's goal is to determine the classification function $f$ to select individuals (for example, select good credit applicants) or to spot negative cases (such as terrorists who try to get through a security line). However, each strategic agent's goal is to achieve positive classification (e.g., admission to university) regardless of their true label. For that, they act strategically and take actions to alter their true attributes, which may lead to an effectively altered instance space. In some cases it is possible for the agents to infer their own rules about how the principal is making decisions under $f$ and spot attributes that are likely to help produce positive classifications. Most classical learning methods operate using a sample from $X$, but this space may be altered (call it $\hat{X}$) due to the ability of agents to infer their own rules about the classification rule. This possible change from $X$ to $\hat{X}$ needs to be anticipated. Strategic Learning does this by incorporating rational expectations theory (Muth 1961) into classical learning theory. Figure 2-1 outlines the Strategic Learning framework: when strategic behavior is applied to the sample space $X$, it causes it to change from $X$ to $\hat{X}$ (reflecting strategic behavior by the agents). Also, classical learning theory operates on $X$ while Strategic Learning operates on all anticipated $\hat{X}$s.

Figure 2-1. Strategic Learning framework.

Strategic Learning with Support Vector Machines

In Chapter 4 we consider Strategic Learning while inducing linear discriminant functions using Support Vector Machines (SVMs). The SVM is an algorithm based on Statistical Learning Theory (Vapnik 1998). In this section, we discuss Chapter 4 and some of our results.

SVMs are a computationally efficient way of learning linear discriminant functions, as they can be applied easily to enormous sample data sets. In essence, SVMs are motivated to achieve better generalization by trading off empirical error with generalization error. This translates to the simple goal of maximizing the margin of the decision boundary of the separating hyperplane with parameters $(w, b)$. Thus, the problem reduces to minimizing the norm of the weight vector $w$ while penalizing for any misclassification errors (Cristianini and Shawe-Taylor 2000). An optimal SVM classifier is called a maximum margin hyperplane. There are several SVM models; the first is called the hard margin classifier, which is applicable when the training set is linearly separable. This model determines linear discriminant functions by solving

$$\min_{w,b} \; w'w \quad \text{s.t.} \quad y_i(w'x_i + b) \geq 1, \quad i = 1, \ldots, \ell \qquad (1)$$

The above formulation produces a maximal margin hyperplane when no strategic behavior is present.
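As a concrete illustration of formulation (1), the following sketch (not part of the dissertation) solves the hard-margin problem on a small, linearly separable toy sample; it assumes the cvxpy modeling library is available, and the data are made up.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of the hard-margin SVM (1) on a hypothetical separable sample.
X = np.array([[2.0, 2.0], [3.0, 3.0],      # positive examples
              [0.0, 0.0], [-1.0, 0.5]])    # negative examples
y = np.array([1.0, 1.0, -1.0, -1.0])

n = X.shape[1]
w = cp.Variable(n)
b = cp.Variable()

# min w'w  s.t.  y_i (w'x_i + b) >= 1 for all i
problem = cp.Problem(cp.Minimize(cp.sum_squares(w)),
                     [cp.multiply(y, X @ w + b) >= 1])
problem.solve()                            # default QP-capable solver

print("w* =", w.value, " b* =", b.value)
```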

To illustrate how strategic behavior alters the above model, we briefly look at the approach used in Chapter 4 of this dissertation to incorporate strategic behavior into the model. We start by introducing the agent's strategic move problem, which shows how rational agents would alter their true attributes if they knew the principal's SVM classifier. If the principal's classification function $f(x_i) = w'x_i + b$ were known by rational agents, then each agent would solve the strategic move problem to determine how to achieve (or maintain) a positive classification while exerting minimal effort in terms of cost. This is captured in the objective function (cost minimization). Thus agent $i$'s problem can be modeled as

$$\min_{d_i} \; c_i'd_i \quad \text{s.t.} \quad w'\left(x_i + D(w)\,d_i\right) + b \geq 1, \quad d_i \geq 0$$

where $D(w)$ is a diagonal matrix defined by

$$D(w)_{jj} = \begin{cases} 1 & w_j > 0 \\ -1 & w_j < 0 \end{cases}$$

This problem finds a minimal cost change of attributes, $D(w)\,d_i$, if one is feasible. This is the amount of modification that agent $i$ needs to make to his/her attributes to be classified as a positive case, and it would be undertaken only if its cost does not exceed the agent's reservation cost, $r_i$. Let $d_i^*(w,b)$ be an optimal solution of the agent's strategic move problem ($d_i^*(w,b)$ is zero if the strategic move problem is infeasible or if the agent lacks enough reservation). Then the principal's strategic problem becomes

$$\min_{w,b} \; w'w \quad \text{s.t.} \quad y_i\left(w'\left(x_i + D(w)\,d_i^*(w,b)\right) + b\right) \geq 1, \quad i = 1, \ldots, \ell \qquad (2)$$

When compared with the non-strategic SVM model, the difference is the term $D(w)\,d_i^*(w,b)$, which depends on the agent's problem. Basically, this term represents the principal's anticipation of a modification of attributes by agent $i$. By incorporating this term into the principal's problem, this formulation makes it possible to prevent some misclassifications by taking corrective action before the fact (i.e., before the principal determines a classification rule and incurs misclassification costs as agents make modifications).
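To make the agent's strategic move problem concrete, the sketch below (again not from the dissertation, with hypothetical parameter values) solves it as a linear program, using the fact that $w'D(w)d = \sum_j |w_j|\, d_j$.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the agent's strategic move problem as an LP, assuming the agent
# knows (w, b); all parameter values are hypothetical.
w = np.array([0.9, -0.4])
b = -0.2
x_i = np.array([0.1, 0.8])     # agent's true attributes
c_i = np.array([2.0, 1.5])     # cost per unit of (signed) change in each attribute
r_i = 4.0                      # reservation cost

# min c_i'd  s.t.  w'(x_i + D(w) d) + b >= 1,  d >= 0,
# where w'D(w)d = sum_j |w_j| d_j.
needed = 1.0 - (w @ x_i + b)   # shortfall from the positive margin
res = linprog(c=c_i, A_ub=[-np.abs(w)], b_ub=[-needed],
              bounds=[(0, None)] * len(w))

if res.success and res.fun <= r_i:
    d_star = res.x
    x_modified = x_i + np.sign(w) * d_star   # D(w) d applied to the true attributes
    print("agent moves to", x_modified, "at cost", res.fun)
else:
    print("agent stays put (infeasible or cost exceeds reservation)")
```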

The essential idea is to anticipate an agent's optimal strategic move and use that information to infer a classification rule that will offset the agent's possible strategic behavior.

In Chapter 4, we derive a complete characterization of the solution of the principal's strategic problem in the base setting where all agents have the same reservation and change costs (i.e., $r_i = r$ and $c_i = c$) and $S = \{(x_1, y_1), \ldots, (x_\ell, y_\ell)\}$ is linearly separable. Theorem 1 in Chapter 4 states the following: $(w^*, b^*)$ solves (1) if and only if

$$\left(\frac{2}{2 + t^*}\,w^*,\; \frac{2b^* + t^*}{2 + t^*}\right)$$

solves (2), where $t^* = \max_k \, r\,|w_k^*| / c_k$; equivalently, $t^* = r\,|w_k^*| / c_k$ for $k = \operatorname{argmin}_{j:\,w_j^* \neq 0} \; c_j / |w_j^*|$, the attribute that is cheapest for an agent to change per unit of progress toward the positive side.

In essence, Theorem 1 states that a principal anticipating strategic behavior of agents all having the same utilities and cost structures will use a classifier that is parallel to the non-strategic SVM solution $(w^*, b^*)$. The solution to the strategic SVM is a scaled (by $\frac{2}{2+t^*}$) and shifted form of $(w^*, b^*)$. The margin of the strategic SVM solution hyperplane is greater than that of the non-strategic SVM solution, and thus the probability of better generalization is greater. The scaling factor depends on the cost structure for altering attribute values, the reservation cost for being labeled as a positive case, and $(w^*, b^*)$.
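Computationally, the Theorem 1 transformation is a one-line scaling and shifting of the non-strategic solution; the sketch below assumes the separable base case with common reservation cost $r$ and change costs $c$, and uses illustrative numbers.

```python
import numpy as np

# Sketch of the Theorem 1 transformation for the separable, identical-agent
# case; (w*, b*), r and c are hypothetical values.
w_star = np.array([0.9, -0.4])
b_star = -0.2
r = 4.0                          # common reservation cost
c = np.array([2.0, 1.5])         # common attribute-change costs

t_star = np.max(r * np.abs(w_star) / c)             # t* = max_k r|w*_k| / c_k
w_strat = 2.0 * w_star / (2.0 + t_star)             # scaled weight vector
b_strat = (2.0 * b_star + t_star) / (2.0 + t_star)  # shifted threshold

print("strategic classifier:", w_strat, b_strat)
```

The resulting hyperplane is parallel to $(w^*, b^*)$ but has a wider margin, which is the geometric content of Theorem 1.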

Figure 2-2 (a) shows a completely separable training set with a hyperplane given by the non-strategic SVM classifier (solid line) and the corresponding positive and negative margins (dotted lines), along with data points in two-dimensional space. In Theorem 1, we show that the negative agents will try to achieve a positive labeling by changing their true attributes if the cost of doing so does not exceed their reservation cost. This is indicated in Figure 2-2 (a) by the horizontal arrows pointing from some of the negative agents' points towards the positive side of the hyperplane. Clearly, these are the agents whose reservation costs are sufficiently high and who are willing to engage in strategic behavior to move to the positive side of the hyperplane. However, the principal, anticipating such strategic behavior, shifts and scales the hyperplane such that no true negative agent will benefit from engaging in such behavior. Figure 2-2 (b) shows the sample space and the resulting strategic classifier. As Figure 2-2 (b) shows, the negative agents then have no incentive to change their true attributes, since they can no longer reach the positive margin, and hence they do not exert any effort. This shift may leave some marginally positive agents in danger of being classified as negative. Since they too anticipate that the principal will alter the classification function so as to cancel the effects of the expected strategic behavior of the negatively labeled agents, they might undertake changes. In other words, they are forced to move, as indicated by the arrows in Figure 2-2 (b). Thus, the ones who are penalized for engaging in strategic behavior (i.e., must exert effort to attain a positive classification) are not the negative agents but rather the marginal positive agents. Moreover, the resulting classifier has a greater margin and better generalization capability compared to the non-strategic SVM learning results.

In Figure 2-2 (c), the new strategic classifier with wider margins on each side and the resulting altered instances due to strategic behavior are pictured. Notice that the margin of the resulting hyperplane is wider and is in fact a scaled and shifted version of the hyperplane in Figure 2-2 (a), and thus differs from the non-strategic SVM result.


Figure 2-2. Wider margins. (a) Separable training set with a non-strategic SVM classifier. (b) Sample space as the result of strategic behavior. (c) Strategic SVM classifier.

For non-separable data sets, there are no results comparable to Theorem 1. However, for the case where $S$ is not linearly separable and all agents have their own reservation and change costs (i.e., $r_i$ and $c_i$), we derive mixed integer programming models for the solution of the principal's strategic problem. We apply our results to a credit card application setting, and our results show that the strategic formulation performs better than its non-strategic counterpart.


Chapter 6 is an extension of our work which considers cases where it is not realistic to let each attribute be modified unboundedly, without posing any constraints on how much it can actually be modified. In other words, agents are constrained in how much and in which way they can modify their attributes. Toward that end we look at a spam categorization problem where spammers (negative agents) are only allowed to make modifications in the form of addition or deletion of words, with an upper limit on the number of modifications allowed. Essentially, we formulate the problem by allowing only binary modifications, which is an interesting constraint on agent behavior. Clearly, agents can be constrained in many other ways, such as upper and lower bounds on the modifications, or the modifications may need to belong to a certain set of moves (as in checkers or chess). Interestingly, for the spam categorization problem, we point out that not all agents are strategic; in fact only the negative agents (spammers in our case) act strategically, since it is not reasonable for legitimate internet users to engage in strategic behavior and change the content of their emails. This is quite distinguishing, as it is a Strategic Learning model for an environment where non-strategic and strategic agents coexist.

Strategic Learning: Future Research

The form of the Strategic Learning problem discussed in this chapter assumes that the only costs are misclassification costs and that there is no cost associated with making the true positive agents alter their behavior. Including these costs would create a formulation that is equivalent to the social welfare concept common in the economics literature, as the principal may need to trade off the misclassification cost against the disutility caused by forcing the true positive agents to move. In that way, the principal would be able to maximize his/her own utility while minimizing the disutility of positive agents. Also, Theorem 1 is only applicable to separable data sets, and an important contribution would be to develop a similar theoretical result for non-separable data sets.

One of the most interesting angles for future research is to remove the assumption that both the principal and the agents know each other's parameters. In practice this assumption rarely holds, and it is usually the case that the principal and agents will try to somehow roughly predict each other's parameters. We provide several formulations in Chapter 4 for cases where the agents' utility parameters are not known with certainty. More work is needed in this area.

It is possible that the classifier developed when the agents do not collude may be suboptimal when the agents cooperate and act seemingly in an irrational way. For example, determining what happens in a scenario where agents collude and offer incentives to other agents to make sub-optimal changes in their attributes to confuse the principal would make this problem more realistic.

It is possible to reverse the Strategic Learning problem and use these ideas to create a classifier (or a policy) that promotes certain actions (rather than avoids them), or to use these ideas as a what-if tool to test the implications of certain policies. For example, a board of directors could develop executive compensation policies to promote long-term value generation by anticipating how CEOs can game the system to their advantage (which usually causes short-term gains at the cost of long-term value).

All discussion so far has focused on using SVMs as the classifier, even though learning theory and rational expectations theory are independent of implementation details. It would be very useful to determine the validity of these results independent of implementation and to introduce the Strategic Learning problem to other classifiers like decision trees, nearest neighbor, neural networks, etc. Essentially, Strategic Learning is a general problem that will arise in any learning situation involving intelligent agents, so it should be applied to other learning algorithms.

Another key area of future research is the application of domain knowledge to Strategic Learning. Kernel methods accomplish this by using a nonlinear, higher-dimensional mapping of attributes to features to make the classes linearly separable. It may be possible to compute an appropriate kernel that can anticipate and cancel the effects of strategic behavior. Such a kernel could be developed using agents' utility functions and cost structures, which are a form of domain-specific knowledge.

The current research on Strategic Learning addresses only static situations. However, it is possible that some exogenous factors, like the environment or the parameters being used, change over time. For example, it might be possible that over time some new attributes are added to the data set or, conversely, some become obsolete. This type of dynamic situation might need to be modeled in a way that accommodates the possible changes, so as to determine classifiers that adapt efficiently.

There is yet another angle from which to approach the problem, the game theoretical point of view, which has been partially addressed by Dalvi et al. (2004). Further investigation of this angle is an interesting and relevant task for future research in the area.

An important area of research in Strategic Learning is to find better algorithmic methods for solving the Strategic Learning problem. While mixed integer formulations exist, solution methods currently do not scale up like their non-strategic counterparts.


CHAPTER 3
LEARNING IN THE PRESENCE OF SELF-INTERESTED AGENTS¹

In many situations a principal gathers a data sample containing positive and negative examples of a concept to induce a classification rule using a machine learning algorithm. Although learning algorithms differ from each other in various aspects, one essential issue is common to all: the assumption that there is no strategic behavior inherent in the sample data generation process. In that respect, we ask the question, "What if the observed attributes are deliberately modified by the acts of some self-interested agents who will gain a preferred classification by engaging in such behavior?" For such cases there is a need to anticipate this kind of strategic behavior and incorporate it into the learning process. Classical learning approaches do not consider the existence of such behavior. In this chapter we study the need for this paradigm and outline related research issues.

Introduction

Machine learning research has made great progress in many areas and applications over the past decade. Many machine learning algorithms have evolved to the point that they can be used in typical commercial data mining tasks including credit approval (Chapter 4), spam detection (Fawcett 2003), fraud detection (Fawcett and Provost 1997), text categorization (Dumais et al. 1998), etc.

¹ An earlier version of this chapter was published in the Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), Track 7, p. 158b.

PAGE 37

One of the most common types of data mining task is supervised learning. Here the learner acquires a training sample that consists of previously labeled cases and applies a machine learning algorithm that uses the sample to choose a best-fit hypothesis from a designated hypothesis space. This chosen concept will be used to classify (i.e., label) unseen cases later.

We can think of an example as a vector of attribute values (an instance) plus a class label. The set of all instances is called the instance space. Suppose there are n attributes. Then a vector of attributes is $x \in \mathbb{R}^n$. For example, in a credit approval application, attributes might be age, income, number of dependents, etc. The label can be binary, nominal or real valued. For example, in the credit approval problem, the labels might be "good" or "bad," or simply +1 or -1. Then an example would be the pair $(x, y)$ where $x \in \mathbb{R}^n$ and $y \in \{-1, +1\}$.

An hypothesis space consists of all the possible ways the learner wishes to consider representing the relationship between the attributes and the label. For example, an often used hypothesis space is the set of linear discriminant functions. A particular hypothesis would be $\langle w, b \rangle$ where $w \in \mathbb{R}^n$. Then $\langle w, b \rangle : \mathbb{R}^n \to \{-1, +1\}$, meaning that if $w'x + b \geq 0$ the label is +1; otherwise it is -1. A learning algorithm selects a particular hypothesis (e.g., a particular $\langle w, b \rangle$) that best satisfies some induction principle. The "supervised" in supervised learning means that the training sample contains correct labels for each instance.

There are a plethora of learning methods. Some of the most popular methods include decision tree methods (Quinlan 1986), neural network methods (Rosenblatt 1958), Bayesian methods (Duda and Hart 1973), Support Vector Machines (SVMs) (Cristianini and Shawe-Taylor 2000), etc. Although many of the algorithms differ from
each other in many aspects, there is one essential issue that is common to all: the implicit assumption that there is no strategic behavior inherent in the sample data generation process. For example, in supervised learning, where a decision maker uses a training set to infer an underlying concept, the training examples are taken as is. In that respect, we ask the question "What if the observed attributes will be deliberately modified by the acts of some self-interested agents who will gain a preferred classification by engaging in such behavior?"

If these agents know the true classification rule, they can easily discern how changing their attributes could lead to a positive classification and, assuming the cost to change these attributes is acceptable, then proceed to make the changes. If the classification rule is not known by the agents, either the agents can attempt to discover what the important attributes might be or, more likely, use common sense to alter obvious attributes. For example, poor credit risk individuals interested in obtaining credit might proactively try to manipulate their attributes (either through deceit or not) so as to obtain a positive rating (e.g., there are many websites that purport to help people change their credit ratings). A more extreme example is the terrorist who tries to appear normal so as to gain access to potential target sites. Less extreme are spammers who continuously try to break through screening rules by changing their e-mail messages and titles.

So there is a need for anticipating this kind of strategic behavior and incorporating such considerations into classical learning approaches. Currently none consider the existence of such behavior.
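To make this concrete, the following minimal sketch (in Python) shows a linear hypothesis ⟨w, b⟩ of the kind described above and how a self-interested agent who knows it, and who faces linear change costs and a reservation cost, can compute the cheapest attribute change that yields a positive classification. Every number, weight, and cost below is hypothetical and invented purely for illustration; none of it is taken from this dissertation.

```python
import numpy as np

# Hypothetical credit-scoring hypothesis <w, b>: label is +1 when w'x + b >= 0.
w = np.array([0.5, 0.5])     # weights on (income in $10k, years of credit history)
b = -4.0                     # both weights are positive, so increases raise the score

def classify(x):
    return 1 if w @ x + b >= 0 else -1

agent = np.array([3.0, 2.0])      # a true negative case: w'agent + b = -1.5 < 0
cost = np.array([5.0, 1.0])       # agent's cost of raising each attribute by one unit
reservation = 4.0                 # the most the agent is willing to spend

# Cheapest single-attribute change reaching the boundary w'(x + d) + b = 0.
shortfall = -(w @ agent + b)                     # how far the score is below zero
moves = shortfall / w                            # units of change needed per attribute
best_j = int(np.argmin(cost * moves))            # attribute with the cheapest fix
if cost[best_j] * moves[best_j] <= reservation:  # change attributes only if worthwhile
    agent[best_j] += moves[best_j]

print(classify(agent))   # prints 1: the negative agent has gamed the published rule
```

The same calculation is exactly what the principal must anticipate; Chapter 4 formalizes it as the agent's strategic move problem.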

In this chapter, we outline the need for this kind of a paradigm and present many related emerging research issues. Many types of learning situations potentially fall in this setting. In fact, whenever the instances represent adaptive agents, the risk of strategic behavior is possible. We frame this type of learning in a principal-agent setting where there are many rational agents who act as autonomous decision-making units working to maximize their individual utilities, and a principal who needs to classify these agents as belonging to one class or the other. For example, a credit card company (the principal) decides which people (agents) get credit cards. An admission board at a university (the principal) decides which applicants (agents) get admitted. An auditing group (principal) tries to spot fraud or detect imminent bankruptcy in companies (agents). An anti-spam package (the principal is the package creator) tries to correctly label and then screen spam (agent created). In each of these examples, we assume that the agents can anticipate what the resulting classifier is and modify their attributes accordingly. Knowing this, the principal will create a classifier to cancel the effects of such behavior. Such anticipation is based on the assumption that each party acts rationally and each knows the other's parameters.

We start with a discussion of related literature in the next section. Only two recent papers directly address Strategic Learning. We also discuss a somewhat similar scenario that arises in leader-follower learning. Later, we illustrate the ideas of Strategic Learning and outline areas of possible research. We end with a summary in the last section.

Related Literature

Since the agents can learn, anticipate and react to the principal's classification method by altering their behavior, the principal in turn needs to adapt his/her strategy in order to cancel the effects of the agents' strategic efforts. This arms race between
the two parties (principal as opposed to agents) is the essential motivation of Strategic Learning. However, as was pointed out in Dalvi et al. (2004), the key goal for the principal is to identify the ultimate decision rule at the outset, especially when this strategic gaming could be pursued indefinitely. That is, the principal, rather than reacting to the agents' actions in an after-the-fact fashion, needs to anticipate their strategic behavior and identify an optimum strategy taking the agents' possible strategic behavior into account, an approach which is explored deeply in Chapter 4.

We note that even in learning situations where an optimal induction can be achieved, as we study in Chapter 4, the usual arms race approach may not discover this rule. That is, if a principal gathers data, induces a rule and starts using this rule, strategic agents will attempt to alter their attributes for positive classification. The instance space is now different, forcing the principal to gather a new data set and induce a new rule. In Chapter 4 we give an example where this adaptive learning does not converge to an optimal rule. Indeed, in our setting, such convergence may be rare.

Dalvi et al. (2004) argue that in many domains such as spam detection, intrusion detection, fraud detection, surveillance and counter-terrorism, the data are being manipulated actively by an adversary (agent in our terminology) seeking to make the classifier (principal in our terminology) produce a false decision. They further argue that in these domains the performance of a classifier can degrade rapidly after it is deployed, as the adversary learns to defeat it. We agree.

They view classification as a two-period game between the classifier and the adversary, and produce a classifier that is optimal given the adversary's optimal strategy. They also show that a Nash equilibrium exists when the adversary incurs unit cost for
altering an observation. However, as they suggest, computation of a Nash equilibrium is hard since the computation time is exponential in the number of attributes. The experiments they perform in the spam detection domain show that their approach can greatly outperform a standard classifier that does not take into consideration any strategic behavior, by automatically adapting the classifier to the adversary's evolving manipulations.

Referring to their example, in the domain of e-mail spam detection, standard classifiers like naive Bayes were initially quite successful, but spammers soon learned to fool them by inserting "non-spam" words into e-mails or breaking up "spam" ones with spurious punctuation, etc. As the spam filters were modified to detect these tricks, spammers started using new tricks (Fawcett 2003). Eventually spammers and filter designers become engaged in a never-ending game of modification as filters continually include new ways to detect spam and spammers continually invent new ways to avoid detection.

This kind of gaming is not unique to the spam detection domain and is found in many other domains such as computer intrusion detection, where anti-virus programs are continuously updated as new attacks are experienced; fraud detection, where perpetrators change their tactics every time they get caught (Fawcett and Provost 1997); web search, where search engines constantly revise their ranking functions in order to cope with pages which are manipulated in order to obtain higher rankings; etc. As a result, the performance of the principal can drop remarkably when an adversarial environment is present.
In Chapter 4 we develop methods to determine linear discriminant classifiers in the presence of strategic behavior by agents. We focus on a powerful induction method known as support vector machines and, for separable data sets, we characterize an optimal strategy for the principal that fully anticipates agent behavior. In our setting, agents have linear utility and a fixed reservation cost for changing attributes. The principal anticipates the optimal agent behavior for a given classifier (i.e., he uses rational expectations) and chooses a classifier that cannot be manipulated within the acceptable cost ranges for the agents. In this setting there is no possibility for a cat-and-mouse type scenario. No actions by the agents can alter the principal's classification rule.

Here, our first important result is that under specific conditions, an optimal linear discriminant solution with strategic behavior is a shifted and scaled version of the solution found by SVMs without strategic behavior. This result is striking since we prove that it is optimal. So far, this is the only Strategic Learning result that incorporates rational expectations theory into the classical learning approach and is proved to be optimum. Hence, the main contribution of our work is that rational expectations theory can be used to determine optimal classifiers to correctly classify instances when such instances are strategic decision-making agents. This provides a new way of looking at the learning problem and thus opens up many research areas to investigation.

For non-separable data sets we give mixed integer programming formulations and apply them to a credit-risk evaluation setting. Our results show that discriminant analysis undertaken without taking into account the potential strategic behavior of agents could be misleading and can lead to unwanted results.
Although Strategic Learning approaches have not been used before, learning problems involving intelligent agents in a gaming situation have been investigated in other settings. Consider, for example, leader-follower learning systems, which have a number of applications such as monitoring and control of energy markets (Keyhani 2003), e-business supply chain contracting and coordination (Fan et al. 2003), and modeling public policy formulation in pollution control, taxation, etc. (Ehtamo et al. 2002). There, a leader decides on and announces an incentive to induce the followers to act in a way that maximizes the leader's utility, while the followers maximize their own utilities under the announced incentive scheme. This is analogous in some sense to our setting, since the principal (the leader in their terminology) tries to identify (i.e., learn) and announce the ultimate decision rule that would maximize her own objective while the agents (followers in their terminology) seek to maximize their own utilities. In both cases, it is possible to think of the situation as the principal (or leader) aiming to maximize some kind of social welfare function given the self-interested actions of the agents.

Specifically, these kinds of decision situations are termed incentive Stackelberg games (Von Stackelberg 1952), where the leader first determines an incentive function and announces it, and the followers, after observing the announced incentive, make their own decisions. For example, Bhattacharyya et al. (2005) apply this kind of sequential approach to propose a reinforcement-based learning algorithm for repeated-game leader-follower multi-agent systems. The key point here is the sequential nature of the decisions made by the leader and the followers. The learning takes place progressively as principal and agents interact with each other based on the principles of reinforcement learning (Kaelbling et al. 1996), which is centered around the idea of trial-
and-error learning. Specifically, the leader announces his decision at specific points in time based on the aggregate information gained from the earlier rounds, and the followers make their decisions according to the incentive announced for that period. In this scenario, the leader aims to learn an optimal incentive based on the cumulative information from the earlier periods while the followers try to learn optimal actions based on the announced incentive. Learning is achieved over successive rounds of decisions with information being carried from one round to the next. This is where existing research and the framework proposed here differ from each other. The ensuing analysis demonstrates that this sequential approach will often yield suboptimal results, while the ultimate solution can only be found by anticipating agent behavior and incorporating that anticipation into the learning process rather than following an after-the-fact reactive approach.

Illustration

To further illustrate the Strategic Learning framework discussed in this chapter, let $X \subseteq \mathbb{R}^n$ be the instance space. X is partitioned into two sets, a set consisting of positive examples consistent with some underlying but unknown concept and the remaining negative examples. For example, in Messier and Hansen (1988) the underlying concept is described by firms that won't default or go bankrupt. In their study, the attributes consist of values such as the firm's ratio of retained earnings to total tangible assets, etc. A training set is formed by randomly drawing a sample of instances of size $\ell$ from X, and then determining the true label (-1 or 1) of each such instance. From this sample, one uses a machine learning algorithm to infer a hypothesis from these examples. The choice of a sample size that is large enough to control generalization error is of key importance.
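For reference, the kind of guarantee alluded to here can be written down explicitly. One commonly quoted form of the Vapnik-style bound is sketched below; the confidence symbol $\eta$ and the exact constants are not taken from this dissertation and vary across presentations (see Vapnik 1998 for precise statements). With probability at least $1 - \eta$ over an i.i.d. training sample of size $\ell$, every hypothesis $f$ from a class of VC-dimension $h$ satisfies

$$R(f) \;\leq\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) + \ln\frac{4}{\eta}}{\ell}} ,$$

where $R(f)$ is the true risk and $R_{\mathrm{emp}}(f)$ the empirical risk on the sample. The added term shrinks only when $\ell$ is large relative to $h$, which is the sense in which the sample size must grow with the capacity of the hypothesis space.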

Under this setting, the principal's desired goal is to determine a classification function f (i.e., an hypothesis) to select individuals (for example, select college applicants likely to succeed) or to spot negative cases (such as which firms are likely to file for bankruptcy or commit fraud, etc.). However, the agents, acting strategically, may take actions to improve their situation in hopes of achieving positive classification (e.g., admission to university). This may lead to an effectively altered instance space. After f is discovered by the principal, agents may take the classification rule into account even if f has not been announced. For example, agents can infer their own rules about how the principal is making decisions under f and spot attributes that are likely to help produce positive classifications. Most classical learning methods will operate using a sample from X, but this space may be altered by subsequent agent actions (call it $\hat{X}$). This change needs to be anticipated. One powerful way of doing this is by incorporating rational expectations theory into classical learning theory. Figure 2-1 of Chapter 2 outlines this newly proposed framework.

Figure 2-1 shows that when strategic behavior is applied to the sample space X, it causes the space to change from X to $\hat{X}$; in parallel, rational expectations theory applied to classical learning theory transforms it into what we propose as Strategic Learning. Notice that while classical learning theory operates on X, Strategic Learning operates on all anticipated $\hat{X}$'s.

Learning theory assumes that the principal (learner) has a loss function that usually measures the error associated with the classification task. In general, the principal has to worry about two types of errors: the empirical error and the generalization error. The empirical error is easily estimated by using the sample. However, the generalization
error, a measure of how well the function will predict unseen examples, depends on what functional form is assumed for the target function (the hypothesis), the size of the training set, and the algorithm used to discover it. Once the principal has chosen a representation for the function (a language for representing an hypothesis, such as a decision tree, neural network, or linear discriminant function), learning theory dictates that the generalization error is bounded, assuming that the true function can be represented using this language. For example, linear discriminant analysis assumes that the true function is linear. The success of each algorithm operating on this language depends on how well it uses the information presented by the sample and how well it trades off the estimate of the generalization error against the empirical error. For example, decision tree algorithms achieve this balance by first minimizing the empirical error and then intelligently pruning leaves until some balance between the two types of errors is reached. A very successful linear discriminant technique, Support Vector Machines (Cristianini and Shawe-Taylor 2000), achieves this balance by trading off the margin of separation between the two classes with respect to the separating hyperplane against a measure of empirical error. It is a well-known result that the maximum margin hyperplane, which is the hyperplane farthest from both classes, minimizes the generalization error (Vapnik 1998). It is, however, not clear whether these results carry over when the input space is altered by strategic positioning of the agents.

To appreciate the impact of strategic behavior in classification, consider the setting in Chapter 4. We show that negative agents try to achieve a positive labeling by changing their true attributes if the cost of doing so does not exceed their reservation cost.
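The degradation that such behavior causes can be simulated directly. The sketch below (Python) lets true negative agents make the cheapest profitable move against a fixed, naively induced rule, producing the altered space $\hat{X}$ on which the rule performs much worse than on the original sample X. The sample, the classifier, the costs and the reservation cost are all invented for this illustration and are not the dissertation's data; the agent behavior follows the linear-cost, reservation-cost model described in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-attribute sample: two well-separated classes.
pos = rng.normal([2.0, 6.0], 0.6, size=(50, 2))
neg = rng.normal([6.0, 2.0], 0.6, size=(50, 2))

w, b = np.array([-1.0, 1.0]), 0.0    # a rule induced on X: label +1 iff w'x + b >= 0
c, r = np.array([1.0, 1.5]), 6.0     # linear change costs and reservation cost

def respond(x):
    """A negative agent's cheapest single-attribute move to reach w'x + b >= 0."""
    shortfall = -(w @ x + b)
    if shortfall <= 0:
        return x                              # already classified positive
    j = int(np.argmin(c * shortfall / np.abs(w)))
    step = (shortfall + 1e-9) / abs(w[j])     # just enough to cross the boundary
    if c[j] * step > r:
        return x                              # too expensive; the agent stays put
    moved = x.copy()
    moved[j] += np.sign(w[j]) * step
    return moved

neg_hat = np.array([respond(x) for x in neg])         # the altered space X-hat

def error(neg_points):
    false_pos = np.sum(neg_points @ w + b >= 0)       # negatives labeled positive
    false_neg = np.sum(pos @ w + b < 0)               # positives labeled negative
    return (false_pos + false_neg) / (len(neg_points) + len(pos))

print("error on X:    ", error(neg))
print("error on X-hat:", error(neg_hat))   # much worse once agents have responded
```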

However, the principal, anticipating such strategic behavior, shifts and scales the classification function (a maximum margin hyperplane) such that no true negative agent will benefit from engaging in such behavior. Thus, in practice, the negative agents have no incentive to change their true attributes and, hence, do not exert any effort. However, this shift may leave some marginally positive agents in danger of being classified as negative. Since they too anticipate that the principal will alter the classification function so as to cancel the effects of the expected strategic behavior of the negatively labeled agents, they will undertake changes. Thus, the ones who are penalized for engaging in strategic behavior are not the negative agents but rather the marginal positive agents. Moreover, the resulting classifier has a greater margin and better generalization capability (Vapnik 1998) compared to the SVM learning result obtained when we assume X is static. Hence, normal SVM methods can never discover an optimal strategic classifier, even under repeated applications. Such observations have implications not only for induction of classifiers but also for tasks such as policy setting. Figure 2-2 of Chapter 2 illustrates the resulting classifier under strategic behavior. Notice that the margin of the resulting hyperplane is wider and is in fact a scaled and shifted version of the hyperplane in Figure 2-2 (a), and thus differs from the normal SVM result without strategic behavior.

Figure 2-2 (a) shows the sample space without strategic behavior and the corresponding SVM classifier (the continuous line). Given these conditions, some negative agents can engage in strategic behavior, as indicated by arrows. Figure 2-2 (b) shows the sample space and the resulting optimum classifier given the strategic behavior of agents. Notice that the marginally positive agents are the ones that are forced to move, shown by the arrows, while the negative agents prefer not to alter their attributes since
they no longer have any incentive to do so: it costs them too much. Finally, in Figure 2-2 (c), the new classifier with wider margins on each side and the resulting altered instances due to strategic behavior are pictured.

The fact that most learning takes place in dynamic and changing environments, where there is interaction between agents and the principal or among agents themselves, leads us to question the fundamentals of supervised learning algorithms, which mainly operate on fixed but unknown distributions. This static approach to supervised learning, which does not take into account any possible strategic activity in data generation, does not seem realistic given the vast array of examples where active manipulation of attributes by the actions of agents occurs. Although learning algorithms exist that consider actions of agents (Kaelbling et al. 1996), their main concern is how agents learn from such interactions, so, in that sense, the research framework proposed here differs from those. Here the main concern is not to learn from mistakes but to prevent mistakes from happening and to improve learning by anticipating behavior that would otherwise cause faulty decisions.

Research Areas

We believe there is an important need to explore various aspects of this new paradigm. In the following we identify some issues that have not been explored yet.

An obvious area of future research is to generalize and extend the methods shown in Chapter 4 and Dalvi et al. (2004) to other classifiers like decision trees, nearest neighbor, neural networks, etc. Essentially, the problem can be extended appropriately and applied to many other learning algorithms: it is beneficial to take strategic behavior into account, and not doing so might be misleading, as the current research in the area suggests.
One interesting research angle would be to approach the problem from a game-theoretic point of view and to answer questions like the following. What is the best (optimum) strategy for the principal given the agents' strategic behavior? Are there any conditions other than those identified by Dalvi et al. (2004) under which these kinds of problems have Nash equilibria? If so, what form do they take, and are there cases where they can be computed efficiently? Under what conditions do repeated games converge to these equilibria?

Since strategic behavior alters the input space, it is not clear to what extent the results on learning bounds (Cristianini and Shawe-Taylor 2000, Vapnik 1998) from statistical learning theory apply even when the classifier anticipates this behavior. The results in Chapter 4 suggest that they may not carry over as is.

One issue of great importance that has not been explored yet is the application of domain knowledge. Each learning technique incorporates domain knowledge differently. So-called kernel methods (such as those used with SVMs) use a nonlinear, higher-dimensional mapping of attributes to features to make the classes linearly separable. It can be shown that such a task can be carried out by kernel functions (hence the name), and the kernel itself can be seen as a similarity measure that is meaningful in that domain. It may be possible to anticipate and cancel the effects of strategic behavior by applying an appropriate kernel mapping. It may be possible to develop such a kernel using agents' utility functions and cost structures, since that knowledge is one form of domain-specific knowledge.

Existing research in Strategic Learning (i.e., Chapter 4 and Dalvi et al. (2004)) assumes that all agents have the same cost of changing an attribute. Also, it assumes that
agents have linear utilities. Relaxing these assumptions would be interesting for future research.

Additional areas of future research include the following.

It is possible to apply the proposed ideas to cooperative multi-agent systems. For example, if agents can collude and offer side payments to other agents to make sub-optimal moves to confuse and thwart the principal, can this be anticipated in the induction process? Are there conditions under which collusion by the agents will beat a classifier that does not explicitly consider it?

Human beings are often quite good at the level of adaptation (either as a principal or agent) in which we are interested. Good sports players will carefully watch how their opponents play and change strategy based on their opponents' actions. Inspired by such human behavior, we would like to apply Strategic Learning to competitive multi-agent settings where multiple principals/agents interact and try to learn while competing.

Research so far focuses on static situations. If the instance space changes over time (due to some exogenous factors), is it possible to dynamically model user behavior and determine classifiers that will adapt efficiently? For example, allowing new features to be added as time progresses could be a good way to model such a dynamic and interactive environment. The goal would be to make this adaptation as easy and productive as possible.

Current research assumes that principal and agent know each other's parameters. In other words, the principal is well informed about the costs and the problem that the agent faces. When the principal and the agent do not know each other's parameters, how would that affect the optimal strategies and what would be the additional learning needs? This is the issue on which economists focus their analysis of the "principal agent"
problem, where they consider cases in which the principal is less omniscient (Laffont and Martimort 2002).

If the computation of an optimal strategy is too expensive in some cases, would approximate solutions and weaker notions of optimality become sufficient in real-world scenarios? It would be valuable to derive approximately optimal solutions to the Strategic Learning problem.

This framework can also be used to encourage real change rather than merely preventing negative behavior. This might have applications in public policy problems.

Summary

In this chapter we outlined the need for machine learning methods that anticipate strategic actions by the agents over which a principal is inducing a classification rule. For example, credit approval, fraud detection, college admission, bankruptcy detection, spam detection, etc. are all cases involving strategic agents who might try to achieve positive classifications by altering their attributes.

We reviewed two recent studies with initial results on this problem. Dalvi et al. (2004) use a two-stage model to produce a superior principal classifier. In our study (Chapter 4), we go a step further. Ideally, using rational expectations theory, one might be able to fully anticipate agent actions and incorporate this in a machine learning induction process to determine dominant classifiers, as done in Chapter 4. After outlining and illustrating the new paradigm, we discussed many potential research areas and questions.
CHAPTER 4
DISCRIMINATION WITH STRATEGIC BEHAVIOR

Introduction and Preliminaries

Strategic Learning

We study the problem where a decision maker needs to discover a classification rule to classify intelligent agents. Agents may engage in strategic behavior and try to alter their characteristics for a favorable classification. We show how the decision maker can induce a classification rule that fully anticipates such behavior. We call this learning in the presence of self-interested agents or, simply, Strategic Learning.

Suppose $X \subseteq \mathbb{R}^n$ contains vectors whose n observable components represent values of attributes for rational agents that will be classified by a decision maker (for example, as good credit risks or bad ones). X is partitioned into two sets, a set consisting of cases consistent with some underlying but unknown concept (called the positive examples of the concept) and the remaining cases (a set of negative examples of the concept). For example, in Messier and Hansen (1988) the underlying concept is described by firms that won't default or go bankrupt, and attributes consist of values such as the firm's ratio of retained earnings to total tangible assets, the firm's ratio of total earnings to total tangible assets, etc. A desired goal is to sample X, determine the true label of each example and infer the concept from these examples. This is a typical task in data mining, machine learning, pattern recognition, etc.

In such situations, the concept of interest is assumed fixed but unknown. The observable relevant attributes of interest are assumed given for our problem. During the
inference process, a decision maker observes instances drawn randomly with replacement from X, each having a label -1 or 1 denoting whether it is a negative or positive example of the concept. These labels are not normally observable, but during the inference process we assume that such labels are available, perhaps through extensive study or from past outcomes. The collection of these examples forms the training set $S = \left(\left(x_1, y_1\right), \ldots, \left(x_\ell, y_\ell\right)\right)$ of $\ell$ observations, where $y_i \in \{-1, 1\}$ identifies the label. We assume there are at least two elements of S having opposite labels. (Strictly speaking, S is not a set since there may be duplicate entries.)

The decision maker uses the training set to determine an instance of a representation of the concept that we embody in a function $f: X \to \mathbb{R}$, where $f(x) \geq 0$ if $x \in X$ belongs to the positive class and $f(x) < 0$ if it belongs to the negative class. In the language of learning theory (Vapnik 1998) we are performing supervised learning when we infer f from a sample S, each example of which has a known label. The decision maker must choose a general form for f (e.g., a decision tree, a linear function, a neural network, etc.). Depending on the representation chosen for the target concept, one may use inference methods that produce the desired output. For example, for representations such as neural networks (Tam and Kiang 1992), decision trees (Quinlan 1986), discriminant functions (Hand 1981), support vector machines (Cristianini and Shawe-Taylor 2000), etc., many methods have been developed to determine an actual f given a sample S. The representation choice sets the induction bias, and the methodology choice determines the quality of the final f found.

It is often the case that a decision maker, heretofore the principal, determines functions such as f to select individuals (as for college entrance, credit approval, etc.) or
to spot early signs of problems in companies (such as bankruptcy, fraud, etc.). We will speak collectively of these yet-to-be-classified individuals or companies as agents, where agent i is represented by his/her true vector of attributes $x_i$ and true label $y_i$. Although the exact nature of f might be unknown to these agents, it is typically the case that the direction of change in an attribute that increases the likelihood of a positive classification is known. For example, it is generally believed that better grades positively influence admission to most universities. Hence, taking actions to improve one's grades will help in achieving a positive labeling by a principal in charge of admission decisions.

Hence, it is reasonable to anticipate that such agents who are subject to classification by a principal using f might attempt to superficially alter their true attribute values so as to achieve a positive classification under f when they may actually belong to the negative class or be only marginally positive. This is not to say that agents need to lie or engage in deceit, although these behaviors could certainly be used to alter the true attribute values. Instead, they might proactively try to change their attribute values prior to their classification by a decision maker. For example, in a college entrance decision, one attribute often used to discriminate is the level of participation in extracurricular activities. An individual could discern this and make an effort to join clubs, etc. merely to increase his/her chance of a positive entrance decision. There are even websites that purport to help one increase his/her scores (i.e., the value that f gives for them). For example, http://www.consumercreditbuilder.com advertised that they have ways one can learn how to raise his/her credit scores, even as high as 30% or better. http://www.testprep.com claims "Increases of 100 points on the SAT and 3 to 4 on the ACT have been common, with higher scores often achieved."
http://www.brightonedge.org/ even offers a college admissions camp aimed at increasing the chances of being admitted to college.

We explore this potential strategic gaming and develop inference methods to determine f realizing that strategic behavior may be important in using f. That is, we anticipate strategic behavior in the induction process leading to f.

Suppose the principal can assess the costs to an agent needed to change an attribute value. Some attributes may have a very high cost to change and some a relatively low cost. For example, a potential college applicant might note that the cost of participating in extracurricular activities might be relatively low compared to studying harder to change a grade point average. The latter in the short run may be impossible to alter. The costs of getting caught lying or engaging in deceit might be very high, for example, in fraud cases.

Let $c_i \geq 0$ be the vector of costs to agent i for changing a true vector $x_i \in \mathbb{R}^n$ to $x_i + D d_i$ for $d_i \geq 0$ (the diagonal matrix D, with diagonal components of +1 and -1, merely orients the moves to reflect directions where increases lead to better scores under f). The cost of such a change to the agent is then $c_i' d_i$. We assume that the reservation cost of being labeled a positive example is $r_i$ for agent i. We further assume a rational agent will engage in strategic behavior if

$$r_i \;\geq\; \min_{d_i \geq 0} \; c_i' d_i \quad \text{s.t.} \quad f\!\left(x_i + D d_i\right) \geq 0 .$$

Thus, we envision a situation where the original instance space, X, is possibly perturbed after f is discovered by the principal, even if f is kept secret. Most induction methods will operate using a sample from X. However, we contend that strategic
behavior will result in a change from X to an altered space $\hat{X}_f$ from which future instances will be sampled. This needs to be anticipated.

In this chapter we develop methods to determine linear discriminant classifiers in the presence of strategic behavior by agents. We focus on a powerful induction method known as support vector machines. For separable data sets, we characterize (Theorem 1) an optimal principal induction method that anticipates agent behavior. For non-separable data sets (i.e., data sets where no linear separator exists that can separate the negative from the positive examples) we give a general approach. Then, we apply these approaches to a credit-risk evaluation setting. Later, we extend the general approach to a stochastic version of the problem where agent parameters such as $c_i$ and $r_i$ are not known with certainty by the principal. We conclude with a discussion and possible future research in the last section.

Related Literature

Around the time of our first draft of this study, Dalvi et al. (2004) described a scenario where an adversary alters the data of the negative class subject to known costs and utility. They formulate the problem as a game with two players, one named Adversary and the other Classifier, in which the Adversary tries to alter true negative points to mislead the Classifier into classifying them as positive. In our terminology, the Classifier is our principal. Their Adversary is able to control the agent attributes. The authors show that if the Adversary incurs a unit cost for altering an observation then there exists a Nash equilibrium solution for this game. Since finding a Nash equilibrium is prohibitive in the general case, they focus on a one-step game in which they assume that the Classifier publishes a classifier (C0) before the game and the Adversary modifies the
sample points to fool C0; but, knowing that the Adversary will engage in such behavior, the Classifier actually uses a new classifier C1. The authors focus on the Naive Bayes Classifier (NBC) and show that updating the NBC based on the expectation that the Adversary will try to alter the observations yields better results. They provide an algorithm and an application to the spam filtering problem where spammers (agents) quickly adapt their e-mail tactics (they alter their attribute vector) to circumvent spam filters.

While our general task is somewhat similar to that studied in Dalvi et al. (2004), we outline some major differences between our approaches. In general, we focus on a case where agent attributes belonging to either class can be altered. Dalvi et al. (2004) only consider negative cases. For example, if a marginal positive credit applicant would be labeled as a negative instance by the principal, the agent may use some of the techniques at http://www.consumercreditbuilder.com to increase the chance of a positive labeling. This is a major difference. As we show in Theorem 1, only marginal positive cases must change attributes. Secondly, we use support vector machines as our learning algorithm, since they have many nice theoretical properties relating to induction risk and optimization, which we discuss below. The linear classifier induced by support vector machines is also very easy to interpret as a scoring function, unlike the Naive Bayes Classifier. Finally, our formulation is a version of the basic principal/agent problem formulation, which inherently represents a two-player game with infinite steps. Dalvi et al. only consider a one-step game. Since we assume that our observations are actual agents engaging in strategic behavior, our formulation inherently models a multi-agent game in which one principal, many negative agents and many positive agents may modify their behavior.
With the exception of Dalvi et al. (2004) and our earlier drafts, Strategic Learning approaches have not been used before. However, learning problems involving intelligent agents in a gaming situation have been investigated in other settings. For example, in leader-follower systems a leader (i.e., our principal) decides on and announces an incentive to induce followers (i.e., our agents) to act in a way that maximizes the leader's utility, while the followers maximize their own utilities under the announced incentive scheme. This is analogous in some sense to our setting, since the leader tries to identify and announce the ultimate decision rule that would maximize his/her own objective while the followers seek to maximize their own utilities. In both cases, this can be viewed as the leader trying to maximize some kind of social welfare function given the self-interested actions of the followers.

These kinds of decisions are termed incentive Stackelberg games (Von Stackelberg 1952), where the leader first determines an incentive function and announces it, and the followers, after observing the announced incentive, make their own decisions. For example, Bhattacharyya et al. (2005) apply this kind of sequential approach to propose a reinforcement-based learning algorithm for repeated-game leader-follower multi-agent systems. A key point here is the sequential nature of the decisions. Learning takes place progressively as principal and agents interact with each other based on the principles of reinforcement learning (Sutton and Barto 1998), which uses the idea of trial-and-error learning. In this scenario, the leader tries to learn an optimal incentive based on the cumulative information from the earlier periods while the followers try to learn optimal actions based on the announced incentive. Learning is achieved over successive rounds
with information being carried from one round to the next. This differs from our method in the sense that this sequential approach will often yield suboptimal results, while the ultimate solution can only be found by anticipating agent behavior and incorporating this anticipation in the learning process itself rather than following an after-the-fact reactive approach.

One other line of research that is closely related to our problem is utility-based data mining (Provost 2005). Due to a recent growing demand for solving economic problems that arise during the data mining process, there has been interest among researchers in exploring the notion of economic utility and its maximization for data mining. So far the focus has been on objectives like predictive accuracy or minimization of misclassification costs, assuming that training data sets were freely available. However, over time, it may become costly to acquire and maintain data, causing economic problems in data mining. Utility-based data mining trades off these acquisition costs with predictive accuracy to maximize the overall utility of the principal. While utility-based data mining is concerned with the principal's utility, Strategic Learning additionally considers the possibility that the objects of classification are self-interested, utility-maximizing, intelligent decision-making units.

Linear Discriminant Functions

Among the many possible classification methods, linear discriminant functions are the most widely used since they are simple to apply, easy to interpret and provide good results for a wide range of problems (Hand 1981). In this study we restrict the class of functions, f, over which the principal searches to linear discriminant functions (LDFs). As we will show, this is not restrictive since kernel mappings can be applied for nonlinear domains. Many methods exist for finding linear discriminant functions. However, a powerful methodology, support vector machines, has been developed over the last 10
years that builds on statistical learning theory ideas. We discuss the importance of this below. A brief review of LDFs is provided first.

Linear Discriminant Methods

Linear discriminant analysis for binary classification is usually performed by first determining a non-zero vector $w \in \mathbb{R}^n$ and a scalar b such that the hyperplane $w'x + b = 0$ partitions the n-dimensional Euclidean space into two half-spaces. Then, an observed vector $x_i$ is assigned to the positive class if it satisfies $w'x_i + b \geq 0$; otherwise it is assigned to the negative class. That is, $\langle w, b \rangle : \mathbb{R}^n \to \{-1, +1\}$, where +1 denotes the positive class (points giving $w'x_i + b \geq 0$) and -1 denotes the negative class.

Fisher (1936) was the first to introduce linear discriminant analysis, seeking a linear combination of variables that maximizes the distance between the means of the two classes while minimizing the variance within each class. He developed methods for finding w and b from a training set already classified by a supervisor. The results of Fisher were followed by many other approaches to determining (w, b), which mainly differ in the criterion used to choose among a number of candidate functions. For instance, some of the statistical approaches focused on making different assumptions about the underlying distribution. For example, logit analysis (Cooley and Lohnes 1971), which is a type of regression, uses a dummy dependent variable which can only take the values 1 or 0 and uses maximum likelihood methods to estimate w and b. Bayesian methods (Duda and Hart 1973), on the other hand, seek the optimum decision rule that minimizes the probability of error.

In determining LDFs, numerous mathematical programming methods have been studied. These distribution-free methods were first introduced in (Mangasarian 1965) and
then more actively explored in the 1980s (Freed and Glover 1981a, 1981b, 1982, 1986a, 1986b, Glover 1990, Koehler and Erenguc 1990b). In general these methods attempt to find w and b that directly optimize the number of misclassifications (or some proxy for it). For instance, Freed and Glover (1981) maximized the minimum deviation of any data point from the separating hyperplane. Freed and Glover (1986) focus only on the observations that end up on the wrong side of the hyperplane; they determined a (w, b) that minimized the maximum exterior deviation. Most of these methods exhibited undesirable properties (Markowski and Markowski 1985, Koehler 1989a, 1989b). Different approaches like nonlinear programming (Stam and Joachimsthaler 1989) and mixed integer programming (MIP) (Koehler and Erenguc 1990a) have also been studied. Moreover, models that combine objectives, such as minimizing the cost of misclassification along with the number of misclassifications, have also been developed (Bajgier and Hill 1982). Heuristic optimization has also been used; for example, Koehler (1991) studied Genetic Algorithms to determine linear discriminant functions.

Mathematical programs that directly minimize the number of misclassifications often resort to some type of combinatorial search method and are computationally expensive and usually impractical. An important exception was given by Mangasarian (1994), who gave three nonlinear formulations for solving the misclassification problem using bi-linear constraints to count the number of misclassifications. Good experimental results have been observed (Bennett and Bredensteiner 1997).

However, the problem with these and other learning approaches is that they have focused and depended on just the training data set, such that the hypothesis correctly
classified the data on the training set but made essentially poor predictions on the unseen data. That is, they typically would over-fit during the induction process at the later expense of generalization. This is primarily because these approaches were based on procedures that directly minimize the classification error (or a proxy for classification error) over a training data set.

Statistical learning theory (Vapnik 1998, 1999) attempts to overcome this problem by trading off this potential over-fitting in training against generalization ability. This makes the theory a powerful tool for theoretical analysis and also a strong basis for practical algorithms for estimating multidimensional discriminant functions. We build our Strategic Learning model along these lines. In the following section, we provide a brief overview of the statistical learning theory results which form the basis for support vector machines. We follow this with a brief discussion of support vector machines.

Statistical Learning Theory

Statistical learning theory (Vapnik 1998, 1999) provides a solid mathematical framework for studying some common pitfalls in machine learning such as over-fitting. Assuming that x is an instance generated by sampling randomly and independently from an unknown but fixed probability distribution, the learning problem consists of minimizing a risk functional represented by the expected loss over the entire set of instances. Since the sampling distribution is unknown, the expected loss cannot be evaluated directly and some induction principle must be used. However, a training set of instances is available.

Many approaches use the empirical risk minimization principle, which infers a function using the training set by minimizing the empirical risk (usually measured as the number of misclassifications in the training data set). The empirical risk
minimization principle often leads to over-fitting of the data. That is, it often discovers a function that nicely discriminates on the training set but can do no better than chance (or even worse) on as yet unseen points outside the training set. This has been observed in many studies. For example, Eisenbeis (1987) critiques studies based on such over-fitting.

Statistical learning theory approaches this problem by using the structural risk minimization principle (Vapnik 1999, Schölkopf and Smola 2001). It has been shown that, given S, for any target function, with probability at least $1 - \eta$ the risk functional can be bounded by the sum of the empirical risk and a term largely capturing what is called the structural risk (see Vapnik (1999) for details). The structural risk is a function of the number of training points, the target confidence level, and the capacity, h, of the target class of functions. The capacity, h, measures the expressiveness of the target class of functions. In particular, for binary classification, h is the maximal number of points, k, that can be separated into two classes in all possible $2^k$ ways using functions in the target class. This measure is called the VC-dimension, and the size of the training set is required to be proportional to this quantity to ensure good generalization.

For linear discriminant functions, without additional assumptions, the VC-dimension is $h = n + 1$ (Cristianini and Shawe-Taylor 2000, Vapnik and Chervonenkis 1981). A common assumption added for certain learning situations is that $x \in X$ implies $\|x\| \leq R$. This is called the boundedness assumption. A class of linear discriminant functions of the form $y(x) = \operatorname{sign}\left(w'x + b\right)$ with $\|w\| = 1$ and geometric margin at least $\gamma$ is termed margin LDFs. Under the boundedness assumption, the VC-dimension, h, for margin LDFs is bounded above by $\min\left(n, R^2/\gamma^2\right) + 1$ and may be much smaller than n + 1. Support
Vector Machines determine an LDF by directly minimizing the theoretical bound on the risk functional. The bound is improved by decreasing the VC-dimension, so that for margin LDFs one can focus on minimizing $R^2/\gamma^2$. The following section contains a brief review of this methodology.

Support Vector Machines

Support Vector Machines (SVMs) offer a powerful method for discovering linear discriminant functions by directly minimizing a theoretical bound on the risk functional. Having as its primary motivation the minimization of a bound on the generalization error distinguishes the SVM approach from other popular methods such as neural networks, which use heuristic methods to find parameters that generalize well. In addition, SVM learning is theoretically guaranteed to find an optimum concept (since the induction problem reduces to a quadratic, convex minimization problem), which marks a distinction between this system and most other learning methods. Neural networks, decision trees, etc. do not carry this guarantee, often leading to local minima and an accompanying plethora of heuristic approaches to find acceptable results. For example, motivated by the Ockham's razor principle, most decision tree induction and pruning algorithms try to create the smallest tree that produces an acceptable training error in the hope that smaller trees generalize better (Quinlan 1996). Unfortunately, there is no guarantee that the tree produced by such heuristics minimizes generalization error. SVM algorithms also scale up to very large data sets and have been applied to problems involving text data, pictures, etc.
There are several SVM models. The first model is the so-called maximal margin classifier. When the training set is linearly separable, this model determines a linear discriminant function by solving

$$\min_{w,b} \; w'w \quad \text{s.t.} \quad y_i\left(w'x_i + b\right) \geq 1, \; i = 1, \ldots, \ell .$$

As discussed below, this formulation produces a maximal margin hyperplane with a geometric margin equal to $1/\|w\|_2$. (Note, we show the 2-norm being used here. Other norms can also be used, as we discuss later.)

In general, for many real-world problems the sample space may not be linearly separable. When the data are not separable, the SVM problem can be formulated with the introduction of margin slack variables as follows:

$$\min_{w,b,\xi} \; w'w + C \sum_{i=1}^{\ell} \xi_i \quad \text{s.t.} \quad y_i\left(w'x_i + b\right) \geq 1 - \xi_i, \;\; \xi_i \geq 0, \; i = 1, \ldots, \ell ,$$

where $\xi_i$ is the margin slack variable measuring the shortfall of a point in its margin from the hyperplane and C is a positive parameter. C is chosen to trade off margin maximization against training error minimization. This formulation is termed the soft-margin SVM (Cristianini and Shawe-Taylor 2000).
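Both quadratic programs can be solved directly with a convex optimization package. The sketch below uses Python with the cvxpy library, a choice made here purely for illustration (the dissertation does not prescribe a solver), on the six-point training set that appears in the Theorem 1 example later in this chapter; the hard-margin solution should recover, up to numerical tolerance, the (w*, b*) quoted there.

```python
import cvxpy as cp
import numpy as np

# Six-point training set from the Theorem 1 example later in this chapter.
X = np.array([[2.0, 5.0], [5.0, 9.0], [1.0, 6.0],   # positive cases
              [4.0, 4.0], [5.0, 5.0], [6.0, 0.0]])  # negative cases
y = np.array([1, 1, 1, -1, -1, -1])
n = X.shape[1]

# Hard-margin SVM: min w'w  s.t.  y_i (w'x_i + b) >= 1.
w, b = cp.Variable(n), cp.Variable()
hard = cp.Problem(cp.Minimize(cp.sum_squares(w)),
                  [cp.multiply(y, X @ w + b) >= 1])
hard.solve()
print("hard margin:", np.round(w.value, 4), round(float(b.value), 4))
# expected: approximately [-0.7273  0.5455] and -0.2727

# Soft-margin SVM: min w'w + C * sum(xi)  s.t.  y_i (w'x_i + b) >= 1 - xi_i, xi >= 0.
xi = cp.Variable(len(y), nonneg=True)
C = 10.0
soft = cp.Problem(cp.Minimize(cp.sum_squares(w) + C * cp.sum(xi)),
                  [cp.multiply(y, X @ w + b) >= 1 - xi])
soft.solve()
print("soft margin:", np.round(w.value, 4), round(float(b.value), 4))
```

On separable data such as this, a sufficiently large C drives the slack variables to zero, so the soft-margin solution coincides with the hard-margin one; C only matters when no separating hyperplane exists.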

In the first model, where the data are separable, the objective function minimizes the square of the norm of the weight vector, w. It can be shown that this is equivalent to maximizing the geometric margin when the functional margin of the hyperplane is fixed to 1 (Cristianini and Shawe-Taylor 2000), as follows. For some $x^+$ and $x^-$, the (linearly separable) SVM is guaranteed to find an optimal weight vector which will satisfy $w'x^+ + b = 1$ and $w'x^- + b = -1$, so the margin is then half of the distance between $x^+$ and $x^-$ measured along w, which is

$$\frac{1}{2}\left(\frac{w'x^+}{\|w\|} - \frac{w'x^-}{\|w\|}\right) = \frac{1}{\|w\|} .$$

Thus, minimizing $w'w$ is the same as maximizing the margin which, in turn, minimizes the $R^2/\gamma^2$ term bounding the VC-dimension. The non-separable case trades off the margin against the margin shortfall. By minimizing a quadratic objective function with linear inequality constraints, SVMs escape the problem of local optima faced by other learning methods, since the problem becomes a convex minimization problem having a global optimal solution.

It is also possible to employ kernel mappings in conjunction with SVMs to learn non-linear functions. A kernel is an implicit mapping of input data onto a potentially higher-dimensional feature space. The higher-dimensional feature space improves the computational power of the learning machine by implicitly allowing combinations and functions of the original input variables. These combinations and functions of the original input variables are usually called features, while the original input variables are called attributes. For example, consider some financial attributes like total debt and total assets. If a properly chosen kernel is used, it will allow a debt ratio (total debt to total assets) to be examined as well as total debt and total assets and, potentially, has more informational power than the total debt and total assets variables used alone. Thus, a kernel allows many different relationships between variables to be examined simultaneously. The SVM finds a linear discriminant function in the feature space, which is usually then
nonlinear in the attribute space. It sometimes happens that a kernel mapping can map data not linearly separable in the attribute space to a linearly separable feature space. There are many classes of generic kernels (Genton 2001), but kernels have also been developed for specific application areas such as text recognition. For the purposes of this dissertation, we assume a kernel has already been applied to the initial data, so that we may treat these features as primitive attributes. We thus focus our research on finding LDFs under strategic behavior, leaving the related complications introduced by kernels to future research.

In the next section we characterize the induction process for finding an optimal SVM solution to the strategic LDF problem.

Learning while Anticipating Strategic Behavior: The Base Case

In this section we start by introducing the agent's strategic move problem, which shows how a rational agent will alter his/her true attributes if he/she knew the principal's classification LDF. We then turn to the simplest version of the Strategic Learning problem and derive a complete characterization of the principal's strategic LDF. The base problem is generalized in subsequent sections.

The Agent Problem

If the principal's classification function ($w'x_i + b \geq 1$) were known to the rational agents, they would solve what we call the strategic move problem to determine how to achieve (or maintain) positive classification under the principal's LDF at minimal cost to themselves. The problem is

$$\min \; c_i' d_i \quad \text{s.t.} \quad w'\left(x_i + D(w)\, d_i\right) + b \geq 1, \quad d_i \geq 0 ,$$

where D(w) is a diagonal matrix defined by
58 ,10 10j jj jw Dw w If feasible, this problem determines a minimal cost change of attributes, iDwd, needed to be classified as a positive case. This would be undertaken if this cost doesnt exceed the agents reservation cost, ir Since this optimization problem has only one constraint (other than non-ne gativity constraints), the following can be determined. For non-zero w, let k satisfy ,0max0,1 argminjii j jw jcbwx j w then for **max0,1 ,i i jbwx zwb w we have *** *,1, 0iiii j j izwbifczwbr dwb otherwise *,izwb can be interpreted as the projection of the amount of modifica tion that the agent needs to make on the j th attribute to achieve positive cl assification with respect to the wb that the principal chooses. For w equal to zero or infeasible strategic move problems, set *,0idwb An infeasible move occurs when **,ii jczwbr. Notice that if the ratio */i jjcw is the same for different values of j the agent problem has alternate optimal solutions. That is, the ag ent will be indifferent between multiple j


Notice that it might be possible for some attributes to be correlated with each other. For instance, some of the variables of the agent's optimization problem can be linearly dependent. This can be formulated by expressing dependent variables in terms of independent variables. For example, the agent's problem would be

    min  c_i'd_i
    s.t. w'(x_i + D(w)A_i d_i) + b >= 1
         d_i >= 0

where A_i d_i captures the linear relationships. Solving gives

    A_i d_i*(w,b) = Z_i*(w,b)   if c^i_{k*} Z_i*(w,b) <= r_i
                  = 0           otherwise

Consequently, this affects d_i*(w,b) in that it might be a combination of movements in different directions, since changing one attribute may cause others to change. This merely complicates the presentation but does not substantively affect the results. For this reason, we assume linear independence between the attributes for simplicity.

The Base Case

We start by studying the simplest version of the principal's Strategic Learning problem. We assume:
- all agents have the same reservation and change costs (i.e., r_i = r and c_i = c).
- S = ((x_1, y_1), ..., (x_l, y_l)) is linearly separable.


These assumptions are removed in the next section. Using SVM while ignoring strategic behavior would be accomplished by solving

    P1:  min_{w,b}  w'w
         s.t. y_i(w'x_i + b) >= 1,  i = 1, ..., l

Under Strategic Learning the principal anticipates any possible agent actions and solves the following.

    P2:  min_{w,b}  w'w
         s.t. y_i( w'(x_i + D(w)d_i*(w,b)) + b ) >= 1,  i = 1, ..., l

This problem is no longer a nice convex optimization problem with linear constraints. The constraints are not even piecewise linear convex and/or concave (as we show in the next section). Nonetheless, the following result characterizes an optimal solution to the principal's problem under strategic behavior by agents. For ease of presentation the proof is in Appendix A.

Theorem 1. (w*, b*) solves P1 if and only if

    ( 2w* / (2 + t*),  (2b* - t*) / (2 + t*) )

solves P2, where t* is given by t* = r max_j |w*_j| / c_j.

Theorem 1 states that a principal anticipating strategic behavior of agents all having the same utilities and cost structures will use a classifier that is parallel to the SVM-LDF (w*, b*) determined without taking strategic behavior into consideration. This function is a scaled (by 2/(2 + t*)) and shifted form of the original, so the objective of P2 is strictly smaller than P1's, meaning that the margin is greater and that the probability of better generalization is greater.


The scaling and shift of the hyperplane depend on the cost structure for altering attribute values, the reservation cost for being labeled as a positive case, and (w*, b*). For example, suppose we have the following training set:

    Positive cases (y_i = +1):  x_1 = (2, 5)',  x_2 = (5, 9)',  x_3 = (1, 6)'
    Negative cases (y_i = -1):  x_4 = (4, 4)',  x_5 = (5, 5)',  x_6 = (6, 0)'

and that c = (1, 2)' and r = 3. Solving P1 gives w* = (-0.7272, 0.5454)' and b* = -0.272727. Theorem 1 shows that the optimal LDF under strategic behavior is w = (-0.34783, 0.26087)' and b = -0.65217. Figure 4-1 shows this LDF, the margins and the moved points.
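The transformation of Theorem 1 is easy to check numerically. The following short script is our own illustration (not the dissertation's code); it assumes that scikit-learn's SVC with a very large C approximates the hard-margin SVM of P1 on this separable sample.

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2, 5], [5, 9], [1, 6], [4, 4], [5, 5], [6, 0]], dtype=float)
    y = np.array([1, 1, 1, -1, -1, -1])
    c = np.array([1.0, 2.0])     # per-unit modification costs
    r = 3.0                      # common reservation cost

    svm = SVC(kernel="linear", C=1e6).fit(X, y)     # P1 (non-strategic SVM)
    w_star, b_star = svm.coef_[0], svm.intercept_[0]

    t_star = r * np.max(np.abs(w_star) / c)         # t* = r max_j |w*_j| / c_j
    w_strat = 2.0 * w_star / (2.0 + t_star)         # Theorem 1 scaling
    b_strat = (2.0 * b_star - t_star) / (2.0 + t_star)   # Theorem 1 shift

    print(w_star, b_star)    # approx (-0.727, 0.545), -0.273
    print(w_strat, b_strat)  # approx (-0.348, 0.261), -0.652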


Figure 4-1. Theorem 1. (a) A normal SVM LDF would result in two negative points changing an attribute enough to be classified as positive. (b) Theorem 1 shifts the LDF, causing two positive points to move to stay classified as positive. (c) The two negative points gain nothing by moving, so they would not do so (thus figuratively returning to their original positions). (d) The final LDF with wider margins reflects these anticipated steps.

In our setting, the true negative agents try to achieve a positive labeling by changing their true attributes if the cost of doing so doesn't exceed their reservation cost. However, an astute principal, anticipating such strategic behavior, shifts the hyperplane such that no true negative agent will benefit from engaging in such behavior. Thus, in practice, the negative agents have no incentive to change their true attributes and, hence, will not exert any effort. However, the agents who are marginally positive are now in danger of being classified as negative. Since they too anticipate that the principal will alter the discriminant function so as to cancel the effects of the expected strategic behavior of the negatively labeled agents, they will undertake changes. Thus, roughly speaking, the ones who are penalized for engaging in strategic behavior are not the negative agents but rather the marginal positive agents. So the final discriminant function leads to a bigger gap between the two classes of points and produces a separation. Figure 4-1 illustrates these points.

In fact, Theorem 1 shows that the principal is pushing marginal positive agents to alter their attributes in part to gain better generalization results. The new margin need not


be this large merely to keep negative agents from altering their attributes to gain a positive classification. So the principal is left with a trade-off between forcing marginal positive agents to make large changes and possibly increasing the generalization error of the induced LDF. More specifically, the new margin is (1/‖w*‖)(1 + t*/2). Any margin greater than t*/(2‖w*‖) will label negative strategic agents as negative. So we can choose parallel hyperplanes giving geometric margins between

    [ (1/‖w*‖)(t*/2),  (1/‖w*‖)(1 + t*/2) ].

Theorem 1, giving an optimal solution to P2, yields the largest margin. A principal might elect to choose a smaller margin to spare marginal positive cases the extra effort needed to still be labeled positive. A separate line of thought might argue for the maximal margin used in Theorem 1: since positive agents are unaware of the locations of negative agents, they may act to move maximally to ensure their positive labeling.

What if Theorem 1 (or P2) isn't used, but rather iterated forms of P1? Interestingly, if a normal, non-strategic SVM were applied to the new instance space (i.e., the attributes observed after agents respond), it may not produce the same result with the altered versions of the original sample as P2. Suppose that, instead of using the shifted classifier that incorporates the effects of strategic behavior, the principal chooses to use one that is found by the normal SVM algorithm and updates the classifier every round after obtaining observed attributes (now reflecting strategic behavior). For example, solving P1 for the sample formed by


the above points yields w* = (-0.7272, 0.5454)' and b* = -0.272727. Realizing that the principal will use the hyperplane (w*, b*) in the first round, agents will adjust their attribute vectors accordingly. Calculating d_i*(w*, b*) for these points we get

    d_1* = 0,  d_2* = 0,  d_3* = 0,  d_4* = (2.7502, 0)',  d_5* = (3, 0)',  d_6* = 0.

As a result of strategic behavior, the principal will observe the following perturbed sample after the first round:

    x̄_1 = (2, 5)',  x̄_2 = (5, 9)',  x̄_3 = (1, 6)',  x̄_4 = (1.2498, 4)',  x̄_5 = (2, 5)',  x̄_6 = (6, 0)'.

Note that x_1 and x̄_5 are the same point, so this set is not separable. In the second round the principal will adjust the hyperplane using the normal SVM solution with the perturbed sample observed after the first round. However, the sample observed after the first round is no longer linearly separable, so the principal will use the soft-margin SVM (say with C_1 = 1 and C_-1 = 5, reflecting the fact that there is a greater penalty for mislabeling a negative case). Solving for (w*, b*) for the second round yields w* = (-0.4, 0.8)' and b* = -4.2. Notice that c_j/|w_j| is the same for j = 1, 2, so the agent problem has alternate optima. That is, an agent can choose to alter either of the two attributes since they give the same objective value for min c_i'd_i. For simplicity, we assume that all agents will choose to move in the smallest-indexed attribute in case of alternate optima.


The agents who engaged in strategic behavior in the first round incurred some cost, causing a reduction in their reservation values. If we calculate the residual reservation cost left for each agent after the first round we get

    r̄_1 = 3,  r̄_2 = 3,  r̄_3 = 3,  r̄_4 = 0.2498,  r̄_5 = 0,  and  r̄_6 = 3.

Calculating d_i*(w*, b*) for these points for the second round, we get d_i*(w*, b*) = 0 for all i. Thus, the sample after the second round stays the same, and solving for (w*, b*) for the second round yields the same hyperplane. None of the points in the sample satisfy the constraint c_{j*} z_i*(w,b) <= r̄_i, so the strategic behavior ends in the second round.

Figure 4-2 displays the hyperplane that solves P1 and the final hyperplane of the iterative approach. The optimal hyperplane of Theorem 1 is superior to the resulting hyperplane of the iterative approach in the sense that it prevents the movements of all negative instances, while the hyperplane of the iterative approach still cannot prevent misclassifications from occurring as a result of strategic behavior.

Figure 4-2. Multi-round non-strategic SVM. (a) shows the movement of negative points after the SVM LDF is announced. (b) shows a new SVM based on these moved points.
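The multi-round process of Figure 4-2 can be simulated directly. The sketch below is our own simplification (it uses a soft-margin SVM in every round rather than a hard-margin first round, reuses the hypothetical agent_best_response() sketched earlier, and relies on sklearn's class_weight argument to play the role of C_1 and C_-1).

    import numpy as np
    from sklearn.svm import SVC

    def iterate_nonstrategic_svm(X, y, c, r, rounds=2, C_pos=1.0, C_neg=5.0):
        X_obs = X.astype(float).copy()
        r_left = np.full(len(X), float(r))      # residual reservation costs
        w, b = None, None
        for _ in range(rounds):
            svm = SVC(kernel="linear", C=1.0,
                      class_weight={1: C_pos, -1: C_neg}).fit(X_obs, y)
            w, b = svm.coef_[0], svm.intercept_[0]
            for i in range(len(X_obs)):         # every agent best-responds
                x_new, spent = agent_best_response(X_obs[i], w, b, c, r_left[i])
                X_obs[i], r_left[i] = x_new, r_left[i] - spent
        return w, b, X_obs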


Remark: Theorem 1 is similar to Theorem 2.1 in Dalvi et al. (2004). Theorem 1 shows that if we knew the true class membership of each point before the game started, it is possible to create a classifier that has the same performance when strategic behavior (i.e., gaming) is present. Moreover, Theorem 1 characterizes the optimal strategy of each type of agent as well as the principal, something that has been omitted in Dalvi et al. (2004). It also shows that under rational expectations the positive instances also have to change their behavior. In essence, they are the only ones that end up modifying their behavior.

Learning while Anticipating Strategic Behavior: The General Case

In this section we assume that agents may have their own reservation and change costs and that the data set may not be separable. Refreshing the notation in the Strategic Learning setting, each agent i has a true vector of attributes x_i, a true label y_i, a reservation cost r_i and a vector of costs c_i for modifying attributes. The reservation cost can be viewed as the maximum effort that an agent is willing to exert in order to be classified as a positive agent.

On the principal's side, C_{y_i} is the penalty associated with the margin shortfall of an agent of true type y_i. (Other schemes can be used to price the margin shortfall of sub-categories of these cases, but we forego the extra notational burden to show this.) Towards that end, in the absence of Strategic Learning, we would solve the soft-margin form of SVM. The straightforward modification of the soft-margin SVM to handle Strategic Learning is as follows.

First, let q_i(w,b) be the amount of bias that an agent i can introduce to the principal's classification function (w'x_i + b >= 1) by engaging in strategic behavior. As discussed in Section 3, a principal may want to trade off some confidence in the


generalization error of the induced LDF for lowered effort needed by positive agents to stay positive. Later we argue that a reasonable way for the principal to implement this is to penalize agent effort by adding λ Σ_{y_i=1} q_i, where 0 <= λ <= C_1, to the soft-margin objective. With this, the general model for Strategic Learning is:

    P3:  min_{w,b}  w'w + Σ_i C_{y_i} ξ_i + λ Σ_{y_i=1} q_i(w,b)
         s.t. y_i( w'x_i + q_i(w,b) + b ) >= 1 - ξ_i,  ξ_i >= 0,  i = 1, ..., l

where

    q_i(w,b) = 0                 if 1 - b - w'x_i <= 0
             = 0                 if 1 - b - w'x_i > z_i
             = 1 - b - w'x_i     otherwise

and

    z_i = max_{j: c^i_j > 0}  r_i |w_j| / c^i_j

for c_i >= 0 with at least one j satisfying c^i_j > 0.

In the next section, we study this problem and show it is not mathematically well-posed but may be modified to produce an epsilon-optimal formulation. Following that section, we develop a mixed integer quadratic program and a mixed integer linear program (for the 1-norm counterpart) for solving P3.
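The gaming bias q_i(w,b) and the affordability limit z_i are simple to evaluate. The function below is a small sketch of our own (names are ours; it assumes strictly positive change costs) that mirrors the piecewise definition above.

    import numpy as np

    def gaming_bias(x, w, b, c, r):
        """Return q_i(w,b): the score increase agent i would add by gaming."""
        gap = 1.0 - b - float(np.dot(w, x))          # shortfall from the +1 margin
        usable = (c > 0) & (w != 0)
        if not usable.any():
            return 0.0
        z = np.max(r * np.abs(w[usable]) / c[usable])   # largest affordable gain z_i
        if gap <= 0 or gap > z:                      # already positive, or too costly
            return 0.0
        return gap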


Properties of P3

Let

    f(w,b) = m(‖w‖) + Σ_i C_{y_i} ξ_i(w,b) + λ Σ_{y_i=1} q_i(w,b)

where ξ_i(w,b) = max( 0, 1 - y_i( w'x_i + q_i(w,b) + b ) ), λ >= 0 and C_{y_i} > 0, with ‖w‖ being a norm of w and m an increasing function with m(0) = 0. We are interested in minimizing f(w,b), which will be formulated as a mixed integer program when m(‖w‖) = Σ_j |w_j| and a quadratic mixed integer program when m(‖w‖) = w'w.

The following table illustrates the cases that result depending on an agent's functional distance to the positive margin (1 - b - w'x_i).

Table 4-1. Possible cases depending on z_i
    Case | y_i | 1 - b - w'x_i     | q_i(w,b)        | ξ_i(w,b)
    1    | -1  | < 0               | 0               | 1 + b + w'x_i
    2    | -1  | in (0, z_i]       | 1 - b - w'x_i   | 2
    3a   | -1  | > z_i and < 2     | 0               | 1 + b + w'x_i
    3b   | -1  | > z_i and >= 2    | 0               | 0
    4    | +1  | < 0               | 0               | 0
    5    | +1  | in (0, z_i]       | 1 - b - w'x_i   | 0
    6    | +1  | > z_i             | 0               | 1 - b - w'x_i


Now, we define

    f(b|w) = m(‖w‖) + Σ_i C_{y_i} ξ_i(b|w) + λ Σ_{y_i=1} q_i(b|w)

which is the total cost function f(w,b) when w is fixed. Let b(w) be an optimal solution to min_b f(b|w). There are four cases involved in determining an optimal b(w). The following figures illustrate these cases for the following two points:

    Positive case (y_i = +1): x_p = (2, 5)'
    Negative case (y_i = -1): x_n = (1, 6)'

for w = (0.8, 0.6)', c = (1, 2)', C_1 = 1 and C_-1 = 5. Figure 4-3 shows how the costs C_1 max( 0, 1 - b - q_i(w,b) - w'x_i ) + λ q_i(w,b) vary with b for a positive agent. Figure 4-4 shows the same when λ = 0. Figure 4-5 shows how the costs C_-1 max( 0, 1 + b + q_i(w,b) + w'x_i ) vary for a negative agent. Figure 4-6 shows this when the agent does not have a high enough reservation to cover all positions within the margin. Notice it has a similar (though reversed) graph as a positive agent with λ = 0.


Figure 4-3. Positive case with r_i = 1 and λ = 0.5 (cost as a function of b).

Figure 4-4. Positive case with r_i = 1 and λ = 0 (cost as a function of b).


Figure 4-5. Negative case with r_i = 6 (cost as a function of b).

Figure 4-6. Negative case with r_i = 1 (cost as a function of b).

Table 4-2 gives the regions shown in these cases.


Table 4-2. Different regions of costs (intervals of b on which f(b|w) is linear)
    Positive cases, λ ≠ 0:   (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)
    Positive cases, λ = 0:   (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, ∞)
    Negative cases, z_i < 2: (-∞, -1 - w'x_i), [-1 - w'x_i, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)
    Negative cases, z_i >= 2: (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)

As a typical graph of f(b|w) where all negative and positive points are combined, we see graphs like Figure 4-7, which was produced using the following six points.

    Positive cases (y_i = +1):  x_1 = (2, 5)',  x_2 = (5, 9)',  x_3 = (4, 4)'
    Negative cases (y_i = -1):  x_4 = (1, 6)',  x_5 = (5, 5)',  x_6 = (6, 0)'

Figure 4-7. A typical graph of f(b|w) for w = (0.8, 0.6)', c = (1, 2)', C_1 = 1, C_-1 = 5, r_i = 1 and λ = 0.5.


Clearly, f(b|w) does not possess any form of convexity (such as quasiconvexity). Furthermore, there are points of discontinuity where the function is upper semi-continuous. When trying to find a minimizing point for f(b|w), these points of discontinuity pose a problem when lim sup_{y↑b} f(y|w) < f(b|w), which is caused by negative agents. Figure 4-8 shows three cases that may arise with linear upper semi-continuous functions. Here b = 2 is a point of discontinuity, and all three cases have an open interval adjoining it to the left. In the first case, b is actually an optimal point (because there are multiple optima and any point in [1, 2) is optimal). The remaining two cases have no optimal point in [1, 2), so min_b f(b|w) is not a well-posed problem.

Figure 4-8. Possible cases for points of discontinuity of f(b|w).

In the latter two cases we consider the point b - ε as optimal (assuming it has a lower cost than the cost at b) and call it an epsilon-optimal point of the neighborhood. We denote this problem as ε-min_b f(b|w) and a solution as b(w). In the next section we reformulate P3 to produce epsilon-optimal solutions.


Strategic Learning Model

In this section we develop mixed integer models of the (epsilon) Strategic Learning model. To evaluate z_i = max_j r_i |w_j| / c^i_j we introduce binary variables, H, to get

    r_i |w_j| / c^i_j  <=  z_i  <=  r_i |w_j| / c^i_j + M H_{i,j},   j = 1, ..., n
    Σ_j H_{i,j} <= n - 1

Here, for each i, at least one H_{i,j} must be zero, so the upper bound z_i <= r_i |w_j| / c^i_j binds for that j. With this, the lower bounds force z_i to be the maximal value. These constraints need only appear for the K different cost vectors. That is, although there are l agents, there may be only K unique cost vectors (Theorem 1 assumes all agents have the same cost vector, so K = 1).

Let q_i be a decision variable representing w'D(w)d_i*(w,b). The cost of moving, to a positive agent, is c^i_{j*} q_i / |w_{j*}|, where j* = argmax_j |w_j| / c^i_j. However, adding


Σ_{y_i=1} c^i_{j*} q_i / |w_{j*}| to the SVM objective for the positive agents would yield a non-positive-definite objective. So, instead, as discussed earlier, we use a proxy for the agent effort of λ Σ_{y_i=1} q_i, where 0 <= λ <= C_1.

To evaluate absolute values of components of w, the usual trick of finding absolute values by

    min  Σ_j s_j
    s.t. -s_j <= w_j <= s_j,  s >= 0,  other constraints

won't work, because our objective function has terms whose values will be impacted by minimizing the sum of the s variables. Hence we introduce another vector of binary variables, I, and the following constraints to handle absolute values (here w̄_j represents |w_j|):

    0 <= w̄_j - w_j <= M I_j,        j = 1, ..., n
    0 <= w̄_j + w_j <= M (1 - I_j),  j = 1, ..., n

We need to determine the q_i variables. No agent will exert effort to adjust their attributes if the effort does not yield a positive classification. Conversely, if exerting effort (not exceeding the reservation limit) will result in a positive classification, then an agent who would otherwise be classified as negative will exert the effort.
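To make these big-M constructions concrete, the following fragment is our own sketch in the PuLP MILP library (variable names, the bound M, and the single-agent setting are ours); it builds only the absolute-value constraints and the z_i "max" constraints for one agent, not the full model P4 given below.

    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

    n, M = 2, 1000.0                     # number of attributes, big-M bound
    r_i, c_i = 3.0, [1.0, 2.0]           # one agent's reservation and change costs

    prob = LpProblem("strategic_fragment", LpMinimize)
    w     = [LpVariable(f"w_{j}", -M, M) for j in range(n)]
    w_abs = [LpVariable(f"wabs_{j}", 0, M) for j in range(n)]
    I     = [LpVariable(f"I_{j}", cat=LpBinary) for j in range(n)]
    H     = [LpVariable(f"H_{j}", cat=LpBinary) for j in range(n)]
    z_i   = LpVariable("z_i", 0, M)

    for j in range(n):
        # w_abs_j = |w_j| via the binary switch I_j
        prob += w_abs[j] - w[j] >= 0
        prob += w_abs[j] - w[j] <= M * I[j]
        prob += w_abs[j] + w[j] >= 0
        prob += w_abs[j] + w[j] <= M * (1 - I[j])
        # z_i = max_j r_i |w_j| / c_i_j via the binaries H_j
        prob += z_i >= (r_i / c_i[j]) * w_abs[j]
        prob += z_i <= (r_i / c_i[j]) * w_abs[j] + M * H[j]
    prob += lpSum(H) <= n - 1            # at least one H_j must be zero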


Consider the case of a negative agent. We replace

    y_i( w'(x_i + D(w)d_i*(w,b)) + b ) >= 1 - ξ_i

with the following:

    y_i(w'x_i + b) >= 1 - ξ_i
    ξ_i >= 2 V_i
    w'x_i + b <= 1 - z_i + M V_i - ε

where V_i ∈ {0, 1}, M > 0 is sufficiently large and ε > 0 is sufficiently small. Table 4-3 gives the full implications of these constraints together with the objective function that minimizes ξ_i when it is otherwise unconstrained from above. Notice that q_i is not explicitly needed for the negative cases. The last case has V_i = 0 even though the constraints also allow V_i = 1, because the minimization process will force ξ_i (and hence V_i) to zero.

Table 4-3. Negative cases.
    1 - w'x_i - b   | V_i | ξ_i
    < 0             | 1   | 1 + w'x_i + b
    in (0, z_i]     | 1   | 2
    > z_i           | 0   | 0 for z_i >= 2;  > 0 for z_i < 2

Consider the case of a positive agent. Here we replace y_i( w'(x_i + D(w)d_i*(w,b)) + b ) >= 1 - ξ_i with the constraints

    y_i(w'x_i + b) + q_i >= 1 - ξ_i
    ξ_i <= M (1 - V_i)
    q_i <= M V_i
    q_i <= z_i


where V_i ∈ {0, 1} and M > 0 is sufficiently large. Table 4-4 gives the full implications of these constraints together with the objective function that minimizes ξ_i over q_i when possible.

Table 4-4. Positive cases.
    1 - w'x_i - b   | V_i     | q_i             | ξ_i
    < 0             | 0 or 1  | 0               | 0
    in (0, z_i]     | 1       | 1 - w'x_i - b   | 0
    > z_i           | 0       | 0               | 1 - w'x_i - b

Collecting the above gives:

    P4:  min_{w,b}  w'w + Σ_i C_{y_i} ξ_i + λ Σ_{y_i=1} q_i
         s.t.
         (effort, y_i = +1):  y_i(w'x_i + b) + q_i >= 1 - ξ_i;  ξ_i <= M(1 - V_i);  q_i <= M V_i;  q_i <= z_i
         (effort, y_i = -1):  y_i(w'x_i + b) >= 1 - ξ_i;  ξ_i >= 2 V_i;  w'x_i + b <= 1 - z_i + M V_i - ε
         (adjustment, max):   r_i |w_j|/c^i_j <= z_i <= r_i |w_j|/c^i_j + M H_{i,j},  j = 1,...,n, i = 1,...,l;
                              Σ_j H_{i,j} <= n - 1,  i = 1,...,l
         (absolute value):    0 <= w̄_j - w_j <= M I_j;  0 <= w̄_j + w_j <= M(1 - I_j),  j = 1,...,n
         with ξ_i >= 0 and q_i >= 0


         (integrality):       I_j, H_{i,j}, V_i ∈ {0, 1}

We note that the SVM model typically uses a 2-norm measure (our w'w in the objective), but a 1-norm alternative is equally acceptable and many researchers focus on it (e.g., see Fung and Mangasarian (2002)). In such a case, the objective would be

    min  Σ_j w̄_j + Σ_i C_{y_i} ξ_i + λ Σ_{y_i=1} q_i

making the 1-norm version of P4 a mixed integer linear program. This is fortuitous since the 1-norm problem is much easier to solve. In the next section, we use P4 to solve a strategic version of a credit-risk evaluation problem. We then look at stochastic versions of P4 where the principal may not know certain agent parameters.

Sample Application

In this section we apply our results to a credit-risk evaluation dataset which is publicly available at the UCI repository (http://www.ics.uci.edu/~mlearn/MLRepository.html) and referred to as the German credit data. The original dataset consists of 1,000 instances with 20 attributes (7 numerical, 13 categorical). For the purposes of our analysis, some of these categorical attributes, such as the status of the existing checking account (greater than zero, between zero and 200 DM, greater than 200 DM), were converted to numerical values (e.g., 0, 100, 300); others were replaced by binary dummy variables. For the attributes that were converted, we assumed the value of the attribute to be the midpoint of the interval it lies within, and for values outside the specified intervals we incremented by an amount reflecting the pattern of increase (a small sketch of this kind of conversion appears after Table 4-5). Thus, the resulting dataset has 52 attributes, summarized in Table 4-5. For numerical reasons we standardized the converted data set.


Table 4-5. Converted German credit data.
    Attribute index | Attribute name                   | Type                    | Cost (c_j)
    0               | Checking Account Balance         | Converted to Continuous | 0.1
    1               | Duration                         | Continuous              | 100
    2-6             | Credit History                   | Converted to Binary     |
    7-17            | Purpose                          | Converted to Binary     |
    18              | Credit Amount                    | Continuous              | 10
    19              | Savings Account Balance          | Converted to Continuous | 0.01
    20              | Employment Since                 | Converted to Continuous | 100
    21              | Instalment rate                  | Continuous              | 100
    22-26           | Personal Status                  | Converted to Binary     |
    27-29           | Other Parties                    | Converted to Binary     |
    30              | Residence Since                  | Continuous              | 100
    31-34           | Property                         | Converted to Binary     |
    35              | Age                              | Continuous              | 100
    36-38           | Other Instalment Plans           | Converted to Binary     |
    39-41           | Housing                          | Converted to Binary     |
    42              | Number of Existing Credit Cards  | Continuous              | 100
    43-46           | Job                              | Converted to Binary     |
    47              | Number of Dependents             | Continuous              | 100
    48-49           | Own Telephone                    | Converted to Binary     |
    50-51           | Foreign worker                   | Converted to Binary     |
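The interval-to-midpoint conversion described above can be illustrated as follows. This is our own sketch with made-up category codes (the actual UCI file uses symbolic codes, and the numeric values below are simply the subjective midpoints discussed in the text).

    import pandas as pd

    # Hypothetical mapping from checking-account status codes to numeric values:
    # the midpoint of each DM interval, with out-of-range categories extended
    # by a similar increment.
    checking_map = {
        "lt_0": 0.0,        # below zero
        "0_to_200": 100.0,  # midpoint of the 0-200 DM interval
        "ge_200": 300.0,    # above 200 DM, extended by the interval width
    }

    df = pd.DataFrame({"checking_status": ["lt_0", "ge_200", "0_to_200"]})
    df["checking_numeric"] = df["checking_status"].map(checking_map)

    # Purely categorical attributes (e.g., housing) become binary dummies instead.
    df_housing = pd.get_dummies(pd.Series(["own", "rent", "own"], name="housing"))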


The categorical attributes contained in the original dataset (that we converted to binary variables), such as personal status (divorced male, married female and such) and sex, are almost impossible for an agent to alter, or not worth altering for the purposes of a credit application, so we assigned their c values infinite cost. This was operationalized in P4 by leaving out the inequalities

    r_i |w_j| / c^i_j  <=  z_i  <=  r_i |w_j| / c^i_j + M H_{i,j}

corresponding to such j's and by reducing the right side of Σ_j H_{i,j} <= n - 1 by one for each such j.

Each remaining attribute is assigned a different cost, ranging between 0.01 and 100, reflecting our subjective assessment of the relative level of difficulty of changing that attribute's value. These costs are summarized in Table 4-5. We solve this problem assuming K = 1 (i.e., all agents have the same utility structure) with r = 15. Further, we set ε = 10^-7, C_1 = 4 and C_-1 = 5.

Applying P4 to a 100-point subset of the full 1,000-point, non-separable data set, using the same reservation and misclassification cost structure, we obtained the results summarized in Tables 4-6 and 4-7. Table 4-6 focuses on strategic solutions, for varying λ, for the 1-norm version of P4, and Table 4-7 focuses on the 2-norm version of P4. The results of the non-strategic solutions are also included in each table. We used CPLEX (ILOG 2005) to solve all the problems.

In Tables 4-6 and 4-7, the numbers of positives and negatives moved are the numbers of cases that could move to a positive labeling with respect to the solution, and the total reservation cost used is the total cost to the agents for these moves (Σ_i c^i_{j*} q_i / |w_{j*}|). The rows of the tables listed under the heading "Strategic Impact", such as the number of points moved and misclassified, the total misclassification cost (Σ_i C_{y_i} ξ_i), the objective value with strategic moves, etc., are the results of strategic behavior.


81 Notice that 1iyi iC is reported in two different ro ws. For the strategic solutions, this result is the same in both rows for the obvious reason of P4 being modeled to anticipate and take into a ccount the possible strategic behavior. However for nonstrategic solutions, there is no such anticipation so the first row corresponds to the misclassification costs without strategic behavior and the se cond one is calculated taking into account the misclassificat ion costs of points after they move with respect to the nonstrategic solution. Similarly, the variable iq does not exist in any of th e non-strategic formulations. Thus, the term 1 ii yq corresponding to the non-strategi c solutions in all tables are calculated theoretically taking in account the movements of the positive points with respect to the solution (In Tables 4-6 and 4-7, 1 ). Likewise, the moved positive and negative points and final mi sclassifications reflect after-solution moves. j was found to be attribute 19 (Savings A ccount Balance) for most of the cases. However in some of the cases we observe a tie between attribut e 19 and attribute 0 (Checking Account Balance) leaving th e agents indifferent between making modifications on these two a ttributes. For those cases,* j was chosen to be the attribute with the lower index number.


82Table 4-6. 1-norm strategic SVM solu tions (P4) for various values of vs non-strategic solution for 100 instances. 1-Norm Non-StrategicStrategic Attribute name 1 0 0.01 0.05 0.1 0.5 1 1.5 2 0 Checking Account Balance 0 0.000565 0.000565 0.0007330.000711 -0.01333 -0.01046 0.013333 -0.01333 1 Duration -1.42275 -0.00827 -0.00833 -0.00867 -0.00868 0.060293 0 -0.86934 -1.15545 18 Credit Amount 0 0 0 -0.00063 -0.00063 -0.7216 -0.84269 -0.32216 -0.43376 19 Savings Account Balance 0.276264 0.001333 0.001333 0.0013330.001333 0.001333 0.001333 0.001333 0.001333 20 Employment Since 0.043407 0 0 0.0003780.000494 0.255101 0.345983 0.151507 0.240438 21 Instalment rate -1.58332 -0.00527 -0.0053 -0.00543 -0.00547 -0.68458 -0.4795 -0.28348 -1.20469 30 Residence Since 2.678459 0.010079 0.010143 0.0107650.010761 1.312225 2.110051 2.06295 2.185361 35 Age 2.073676 -0.00203 -0.00201 -0.00023 0 0.11225 0 0.216032 0.7117 42 # of Existing Credit Cards -0.59592 -0.00314 -0.00316 -0.00386 -0.004 -0.92055 -1.01822 -0.6245 -0.46409 47 Number of Dependents 1.637194 0.003129 0.003154 0.0032550.003241 1.559915 1.757889 3.144514 1.835584 *b 1.36938 -0.99817 -0.99816 -0.99714 -0.85675 -0.24652 0.23245 0.61498 0.83092 w 23.55181 0.07271 0.072972 0.07898 1.10333 11.98734 15.60196 19.13669 20.81358 1i y i iC 101.6607 8.0395 8.0392 8.0366 8.0367 10.00 20.00 31.00 57.5435 Objective Value 125.213 8.11226 9.48827 14.9906 21.4947 53.6396 75.3268 93.7098 105.838 Strategic Impact # of Positives Move d 6 69 69 69 64 51 32 24 15 # of Negatives Moved 26 0 0 0 0 1 1 2 4 # of Pos. Misclassifications 0 1 1 1 1 0 0 0 1 # of Neg. Misclassifications 30 0 0 0 0 1 2 3 4 Total Reservation Cost Used 3.73468 1034.28 1032.00 1031.25 926.60 474.78 297.93 217.86 103.05 1iyi iC (with Strategic moves) 318.8339 8.0395 8.0392 8.0366 8.0367 10.00 20.00 31.00 57.5435 1 ii yq 103.17571 0 1.37600 6.8750 12.35469 31.65221 39.72485 43.57221 27.48069 Objective Value (with Strategic moves) 350.2729 8.1122 9.4882 14.9906 21.4947 53.6396 75.3268 93.7098 105.838 Seconds to solve (3.4 GHz Xeon proc.) 0.016 4.376 4.391 2.422 2.844 2.047 1.344 1.625 1.282


83Table 4-7. 2norm strategic SVM so lutions (P4) for various values of vs non-strategic solution for 100 instances 2-Norm Non-StrategicStrategic Attribute name 1 0 0.01 0.05 0.1 0.5 1 1.5 2 0 Checking Account Balance 0.077049 0.000685 0.000128 0 0.010434 -0.01333 0 0.013333 -0.01333 1 Duration -0.94589 -0.00709 -0.00679 -0.0057 -0.0648 0.099996 0.058722 -0.60349 -0.72603 18 Credit Amount -0.33725 -0.00225 -0.00253 -0.00404 -0.04935 -0.65761 -0.77481 -0.4234 -0.8176 19 Savings Account Balance 0.280419 0.001333 0.001333 0.0013330.001333 0.001333 0.001333 0.001333 0.001333 20 Employment Since 0.418413 0.000513 0.001267 0.0017980.036009 0.381561 0.439863 0.722547 0.338969 21 Instalment rate -0.89546 -0.00726 -0.00835 -0.00989 -0.11209 -0.57348 -0.43845 -0.81373 -0.80658 30 Residence Since 1.316182 0.014112 0.01465 0.0176940.15321 1.165015 1.881122 1.165456 0.97931 35 Age 0.885631 -0.00013 0.001142 0.0010630.016148 0.243205 -0.03338 0.198731 -0.11161 42 # of Existing Credit Cards -0.57952 -0.00221 -0.00358 -0.00191 -0.02947 -0.7029 -0.63172 -0.22857 -0.38767 47 Number of Dependents 1.217864 0.006797 0.006577 0.0102790.108277 1.380006 1.209247 0.956158 1.201656 *b 0.97802 -0.99691 -0.95366 -0.86202 -0.80088 -0.27503 0.15576 0.42523 0.68693 w 3.59426 0.00051 0.14567 0.47972 0.59942 2.65509 3.19144 2.95735 3.10397 1iyi iC 110.5845 8.0343 8.0343 8.0379 8.5655 10.00 20.00 39.7220 50.5038 Objective Value 123.503 8.03488 9.38724 14.4660 20.5455 49.2335 72.6676 89.5899 101.208 Strategic Impact # of Positives Move d 10 69 69 65 63 52 35 24 19 # of Negatives Moved 28 0 0 0 0 1 0 1 3 # of Pos. Misclassifications 0 1 1 1 1 0 0 1 0 # of Neg. Misclassifications 30 0 0 0 0 1 2 2 5 Total Reservation Cost Used 3.01754 1035 998.71 929.70 871.55 482.76 318.61 205.60 154.01 1iyi iC (with Strategic moves) 302.9685 8.0343 8.0343 8.0379 8.5655 10.00 20.00 39.7220 50.5038 1 ii yq 8.37569 0 1.33162 6.19800 11.62066 32.18402 42.48227 41.12195 41.06958 Objective Value (with Strategic moves) 324.2629 8.0348 9.3872 14.4660 20.5455 49.2335 72.6676 89.5899 101.208 Seconds to solve (3.4 GHz Xeon proc.) 0.063 81.661 102.694 45.643 32.408 6.328 8.672 8.953 9.642


84 The 2-norm results are very similar to their 1-norm counterparts. This is interesting since the 1-norm problem is much simpler to solve. The solutions to P4 provide several significant improvements over their nonstrategic counterparts. First, strategic solutions perform better in terms of the total number of misclassifications. In both Tables 4-6 and 4-7, we obser ve a drastic decrease in the number of negative misclassifications for stra tegic solutions when compared with their non-strategic counterparts. Strategic solutions better separate the positive agents from the negative ones, and hence their costs of misclassification, 1iyi iC is lower than the nonstrategic results for all values of in both tables. They accomplish this by forcing a large number of positive agents to modify their attributes at a significant total costs in effort to these agents. This can be observed by the increase in number of positive points moved for the strategic solutions compared to non-strategic solutions. As discussed after Theorem 1, a principal may want to exchange some margin (i.e., 1/w) for lowered moves and thus effort by positive agents. The downside could be a looser bound on the principals risk functiona l of the induced discriminant function and hence lower confidence in the result. Comparing the non-strategic and strategic results for 1 we see a significant drop in objective value in both tables empha sizing the high payoff gained by Strategic Learning.


85 Furthermore, when strategic results ar e compared for increasing values of we see an increase in the objective values. This is a result of penalizing the objective function more for each movement of positive agents as is increased. This forces fewer agents to move and hence causes an increas e in the positive misclassification cost. However, depending on the trade-off between an increase in and a decrease in 1ii yq term, we observe fluctuations in the 1 ii yq value. Comparing the signs of the various coefficients, we s ee switches in a few (Credit Amount and Age) which reflect the change in effect on the classi fication after agents adjust. Table 4-8 below compares the results of non-strategic and stra tegic solutions for the full 1,000 point German credit dataset wh ich was not standardized. The 1-norm and 2-norm strategic results are very similar to each other. A decrease in the number of agents move d for both 1-norm and 2-norm solutions compared to the non-strategic case is observed. Hence, the reservation cost used for strategic solutions is substantially lower than their non-strategic count erparts. This shows that strategic solutions were able to prevent most of the agent movement. This leads to an improvement in the term1 ii yq and also in the strate gic objective function. It should be noted that with 3.99 which is very close to 14 C, strategic and non-strategic solutions are quite similar.


86 Table 4-8. Strategic SVM solutions (P4) for 3.99 vs non-strategic solutions for 1000 instances. 1-Norm 2-Norm Attribute name NonStrategic Strategic NonStrategic Strategic 0 Checking Account Balance -0.0006 -0.00023 -0.00062 -0.00027 1 Duration -0.02818 -0.03106 -0.02798 -0.0312 18 Credit Amount -8.3E-05 -7.7E-05 -8.5E-05 -7.8E-05 19 Savings Account Balance 0.000539 0.000225 0.000538 0.000223 20 Employment Since 0.071218 0.067657 0.070059 0.068999 21 Instalment rate -0.23377 -0.19916 -0.23657 -0.20034 30 Residence Since -0.02751 -0.04517 -0.02634 -0.04759 35 Age 0.007454 0.009296 0.007888 0.009507 42 # of Existing Credit Cards -0.16523 -0.12605 -0.16987 -0.13148 47 Number of Dependents -0.00372 -0.09361 -0.00321 -0.08893 *b 2.61065 3.09950 1.75885 1.87177 w 12.96727 13.09071 2.83717 2.78167 1i y i iC 2579.97 2524.75 2579.63 2526.27 Objective Value 2592.94 2606.92 2587.69 2601.57 Strategic Impact # of Positives Move d 196 111 212 112 # of Negatives Moved 86 10 85 7 # of Positive Misclassificat ions 115 204 116 205 # of Neg. Misclassifications 256 259 260 259 Total Reservation Cost Used 2035.05 787.95 2030.00 775.13 1iyi iC (with Strategic moves) 2465.39 2524.75 2460.56 2526.27 1 ii yq 295.08 69.07 295.08 67.55 Objective Value (with Strategic moves) 2763.69 2606.92 2763.69 2601.57 Seconds 0.438 784.67 1.891 73194.671 Stochastic Versions The assumption that the principal know s the reservation values and costs (ri and ci) of each agent is rather limiting. One way to relax this assumption is to assume that the principal, through experience, knows that ther e are different types of agents and the associated distribution function over these types. Consequen tly, in solving the strategic problem the principal has to take into account the fact that he/she cannot count on known ri and ci.values.


To model this we start by assuming that (r, c) is a random vector with finite support s ∈ {1, ..., S} and a discrete density function. We indicate each agent by his/her type: if an agent is of type s, he/she has costs c(s) and reservation value r(s). An alternative interpretation would be to say that the random vector depends on the agent type s. Agents, as usual, solve the following:

    min  c(s)'d_is
    s.t. w'(x_i + D(w)d_is) + b >= 1
         d_is >= 0

Following the same logic as the deterministic case, the following can be determined for each s. For non-zero w, let j* satisfy

    j* = argmin_{j: w_j ≠ 0}  c_j(s) / |w_j|

then for

    z_is*(w,b) = max(0, 1 - b - w'x_i) / |w_{j*}|

we have

    d_is*(w,b) = z_is*(w,b) on attribute j* (and 0 elsewhere)   if c_{j*}(s) z_is*(w,b) <= r(s)
               = 0                                              otherwise

For w equal to zero, set d_is*(0,b) = 0.

There are many different ways of formulating the principal's stochastic Strategic Learning problem. One such approach is to model the problem by taking all possible realizations of (r, c) into account and populating the constraints of the deterministic case for


each s = 1, ..., S. This can be interpreted as a worst-case formulation of the problem. Here, the principal's problem is written as:

    min_{w,b}  max_s  [ w'w + Σ_i C_{y_i} ξ_is ]
    s.t. y_i( w'(x_i + D(w)d_is*(w,b)) + b ) >= 1 - ξ_is,  i = 1, ..., l  and  s = 1, ..., S

Another approach might be to use expected values of the random variables r(s) and c(s) to arrive at an average agent type, using the models discussed in earlier sections where all agents have the same cost and reservation values. A third approach would be to use a chance-constrained formulation and replace the original constraints by corresponding chance constraints. Let 0 <= α_i <= 1; then the principal's problem becomes

    min_{w,b}  w'w
    s.t. Prob( y_i( w'(x_i + D(w)d_is*(w,b)) + b ) >= 1 ) >= α_i,  i = 1, ..., l

Here (1 - α_i) represents the allowable risk that d_is*(w,b) takes on values that wouldn't satisfy the constraints.

The approach we favor is a fourth one, which depends on minimizing the expected total misclassification cost by hedging against the different possible agent types, as shown in the following model. Let P_s be the probability that an agent is of type s. Then we solve

    P5:  min_{w,b}  w'w + Σ_{s=1}^{S} Σ_i C_{y_i} P_s ξ_is
         s.t. y_i( w'(x_i + D(w)d_is*(w,b)) + b ) >= 1 - ξ_is,  i = 1, ..., l  and  s = 1, ..., S

The counterpart of P5 (as P4 was to P3) is:


    P6:  min_{w,b}  w'w + Σ_{s=1}^{S} Σ_i C_{y_i} P_s ξ_is + λ Σ_{s=1}^{S} Σ_{y_i=1} P_s q_is
         s.t.
         (effort, y_i = +1):  y_i(w'x_i + b) + q_is >= 1 - ξ_is;  ξ_is <= M(1 - V_is);
                              q_is <= M V_is;  q_is <= z_is,   s = 1,...,S
         (effort, y_i = -1):  y_i(w'x_i + b) >= 1 - ξ_is;  ξ_is >= 2 V_is;
                              w'x_i + b <= 1 - z_is + M V_is - ε,   s = 1,...,S
         (adjustment, max):   r(s) |w_j|/c_j(s) <= z_is <= r(s) |w_j|/c_j(s) + M H_{i,j},  j = 1,...,n;
                              Σ_j H_{i,j} <= n - 1
         (absolute value):    0 <= w̄_j - w_j <= M I_j;  0 <= w̄_j + w_j <= M(1 - I_j),  j = 1,...,n
         (integrality):       I_j, H_{i,j}, V_is ∈ {0, 1}
         with ξ_is >= 0 and q_is >= 0

Conclusion and Future Research

In this chapter we studied the effect of strategic behavior in determining linear discriminant functions. We considered two cases. In the base case, we analyzed the problem under the assumption that all agents have the same reservation costs. We showed that the optimal solution with strategic behavior is a shifted, scaled version of the solution found without strategic behavior.


90 For the case of equal reserv ation costs we find that the principal will choose a discriminant function where negative agents will have no incentive to change their true attributes but agents who are marginally posi tive will be forced to alter their attributes. Thus, roughly speaking, the ones who are penal ized for engaging in strategic behavior are not the negative agents but rather the ma rginal positive agents. We also note that under strategic behavior the final discriminant function used by the pr incipal will produce a bigger gap between the two classes of points. For the general case where all agents have different reservation and cost structures,, we developed mixed integer pr ogramming models and applied our results to a credit card evaluation setting. An issue of great importance that has not been explored yet is the application of kernel mappings under strategic behavior. It may be possible to anti cipate and cancel the effects of strategic behavior by applying an appropriate kernel mapping. Of course, for an agent to anticipate a usef ul direction of change in some unknown feature space seems unlikely. Ramifications such as these make this approach daunting. There are many other avenues to investigate. Usually, it might not be realistic to let each attribute be modified unboundedly without posing any constraints on how much they can actually be modified which we model in Chapter 5. An interesting direction of research is to relax the assumptions on what the principal and the agents know. Although we ha ve modeled a stochastic version of the Strategic Learning problem wher e we hedged for the uncertainty in the types of agents, we have not yet incorporated informationa l uncertainty in terms of the available knowledge to both parties. As an example, instead of modeling agent behavior for a


91 given classifier, we might assume that agents know only the signs of the coefficients of a given classifier. Also, it can be assumed that the agents can react to a classifier that is only known with some error. Hence, relaxing the assumptions on what agents and principal know about each other is an open area of research. We assumed linear agent utilities. Re laxing these assumptions would make an interesting future research project. One inte resting area for future consideration is the case of unequal reservation costs where negativ e agents are willing to expend any amount to be positively classified (e.g., suicid e bombers trying to appear normal). Additional areas of future research include the following. Agent collusion is not considered here. Many possibilities come to mind. For example, if agents can collude and offer side payments to other agents to ma ke sub-optimal changes in their attributes to confuse and thwart the principal, can this be anticipated in the induction process? We studied a static situation. If the instance space changes over time (due to some exogenous factors), can we dynamically model user beha vior and determine classifiers that will adapt efficiently? Another twist of this model can be to encourage real change (not just thwart superficial change). This might prove useful in public policy problems.


CHAPTER 5
USING GENETIC ALGORITHMS TO SOLVE THE STRATEGIC LEARNING PROBLEM

In this chapter, we provide a Genetic Algorithm for solving the Strategic Learning problem. We start by reducing the Strategic Learning problem to an unconstrained search over w ∈ R^n. Once we have accomplished this, we develop a Genetic Algorithm (GA) to perform this search.

An Unconstrained Formulation for Strategic Learning

Strategic Learning is defined as the task of a principal whose goal is to discriminate between certain types of agents, which are self-interested, utility-maximizing decision making units. The following are examples of such principal and agent pairs:
- a credit card company (the principal) decides which people (agents) get credit cards.
- an anti-spam package (the principal is the package creator) tries to correctly label and then screen spam (which is agent created).
- airport security guards (the principal) try to distinguish terrorists from normal passengers (agents).

In this chapter, we focus on linear discriminant functions for binary classification, which is usually performed by first determining a non-zero vector w ∈ R^n and a scalar b such that the hyperplane w'x + b = 0 partitions the n-dimensional Euclidean space into two half-spaces. Then, an observed vector x_i is assigned to the positive class if it satisfies w'x_i + b >= 0. Otherwise it is assigned to the negative class. That is,


(w,b): R^n → {-1, +1}, where +1 denotes the positive class and -1 denotes the negative class.

Refreshing the notation in the Strategic Learning setting, each agent i has a true vector of attributes x_i, a true label y_i ∈ {-1, +1}, a reservation cost r_i and a vector of costs c_i for modifying attributes. The reservation cost can be viewed as the maximum effort that an agent is willing to exert in order to be classified as a positive agent. On the principal's side, C_{y_i} is the penalty associated with the margin shortfall (ξ_i) of an agent of true type y_i.

In Chapter 4, we proposed mixed integer programming solutions for the Strategic Learning problem by focusing on a classification method known as support vector machines (Cristianini and Shawe-Taylor 2000), and we defined the general model of the Strategic Learning problem as the following:

    P3:  min_{w,b}  w'w + Σ_i C_{y_i} ξ_i + λ Σ_{y_i=1} q_i(w,b)
         s.t. y_i( w'x_i + q_i(w,b) + b ) >= 1 - ξ_i,  i = 1, ..., l

where

    q_i(w,b) = 0                 if 1 - b - w'x_i <= 0
             = 0                 if 1 - b - w'x_i > z_i
             = 1 - b - w'x_i     otherwise

and

    z_i = max_{j: c^i_j > 0}  r_i |w_j| / c^i_j


for C_{y_i} > 0, λ >= 0, and c_i >= 0 with at least one j satisfying c^i_j > 0.

Now, letting ξ_i(w,b) = max( 0, 1 - y_i( w'x_i + q_i(w,b) + b ) ) and determining b using a specialized search algorithm (see below) for a given w, we get an unconstrained version of the Strategic Learning problem, which is

    f(w,b) = m(‖w‖) + Σ_i C_{y_i} ξ_i(w,b) + λ Σ_{y_i=1} q_i(w,b)

with ‖w‖ being a norm of w and m an increasing function with m(0) = 0. We are interested in epsilon-minimizing f(w,b). Let b(w) be an epsilon-optimal solution to min_b f(b|w), where f(b|w) denotes the function f(w,b) for a fixed w. In other words, for a given w, we want to determine a value b(w) that epsilon-minimizes f(b|w). As noted in Chapter 4, f(b|w) does not have any nice features such as quasiconvexity. However, it is still easy to find an epsilon-optimal solution. Table 5-1 gives the possible regions of the cost C_{y_i} ξ_i(w,b) + λ q_i(w,b) for positive agents and C_{y_i} ξ_i(w,b) for negative agents, depending on λ and z_i.

Table 5-1. Different regions of costs (intervals of b on which f(b|w) is linear)
    Positive cases, λ ≠ 0:   (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)
    Positive cases, λ = 0:   (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, ∞)
    Negative cases, z_i < 2: (-∞, -1 - w'x_i), [-1 - w'x_i, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)
    Negative cases, z_i >= 2: (-∞, 1 - z_i - w'x_i), [1 - z_i - w'x_i, 1 - w'x_i), [1 - w'x_i, ∞)


f(b|w) is linear in the different regions of cost included in Table 5-1, which makes it easy to develop an algorithm to find an epsilon-optimal solution to min_b f(b|w). The key is to search over the finite starting points of the different regions in Table 5-1. For negative points, at the point of discontinuity b = 1 - z_i - w'x_i we evaluate the function at an epsilon-lower point, because the function jumps up at the discontinuity (see Chapter 4 for details).

There are a finite number of such starting and epsilon-neighbor points. Algorithm 1, below, examines each of these points and evaluates the function f(b|w) at each of them. The proof of epsilon-optimality provided by Algorithm 1 follows from knowing that between two consecutive (non-epsilon) starting points the cost function is linear, meaning only the end points need be evaluated. Furthermore, we ignore points between an epsilon neighbor b - ε and b (hence we achieve only epsilon-optimal solutions; see Chapter 4 for details). So we march through potential values of b, noting the one with the lowest total cost. This provides an epsilon-optimal value of b, namely b(w). Below is Algorithm 1. Let ε > 0 be sufficiently small and fixed.

Algorithm 1. Input w. Output an epsilon-optimal b.
    Set B := ∅
    for i = 1, ..., l {
        if (y_i = +1) {
            B := B ∪ {1 - z_i - w'x_i}
            if (λ ≠ 0)  B := B ∪ {1 - w'x_i}
        } else {
            B := B ∪ {1 - z_i - w'x_i - ε,  1 - w'x_i}
            if (z_i < 2)  B := B ∪ {-1 - w'x_i}
        }
    }
    Set f° := ∞, b° := 0
    for each b ∈ B {
        f := f(b|w)
        if (f < f°) then { f° := f;  b° := b }
    }
    Output b°
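A direct Python rendering of Algorithm 1 is sketched below. It is our own code (names are ours); it assumes the per-agent costs of Table 4-1/Table 5-1 and takes the b-independent penalty m(w) as a precomputed constant.

    import numpy as np

    def total_cost_given_b(b, w, X, y, z, C_pos, C_neg, lam, m_w):
        """f(b|w): m(w) plus the per-agent costs of Table 4-1."""
        cost = m_w
        for x_i, y_i, z_i in zip(X, y, z):
            gap = 1.0 - b - float(np.dot(w, x_i))     # distance to the +1 margin
            if y_i == 1:
                if gap <= 0:
                    pass                              # case 4: no cost
                elif gap <= z_i:
                    cost += lam * gap                 # case 5: agent moves
                else:
                    cost += C_pos * gap               # case 6: margin shortfall
            else:
                if gap <= 0:
                    cost += C_neg * (2.0 - gap)       # case 1: 1 + b + w'x_i
                elif gap <= z_i:
                    cost += C_neg * 2.0               # case 2: agent games
                else:
                    cost += C_neg * max(0.0, 2.0 - gap)   # cases 3a/3b
        return cost

    def algorithm_1(w, X, y, z, C_pos, C_neg, lam, m_w, eps=1e-6):
        """Epsilon-optimal b for a fixed w (Algorithm 1)."""
        B = []
        for x_i, y_i, z_i in zip(X, y, z):
            s = float(np.dot(w, x_i))
            if y_i == 1:
                B.append(1.0 - z_i - s)
                if lam != 0:
                    B.append(1.0 - s)
            else:
                B.extend([1.0 - z_i - s - eps, 1.0 - s])
                if z_i < 2:
                    B.append(-1.0 - s)
        costs = [total_cost_given_b(b, w, X, y, z, C_pos, C_neg, lam, m_w) for b in B]
        return B[int(np.argmin(costs))]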


At this point we have reduced the Strategic Learning problem to an unconstrained search over w ∈ R^n. Before proceeding to the GA formulation, one result is of value. Lemma 1 provides an upper bound on min_w f(w) and allows us to focus on non-zero values of w.

Lemma 1.

    f(0) = 2 Σ_{y_i = -1} C_{-1}   if Σ_i C_{y_i} y_i >= 0
         = 2 Σ_{y_i = +1} C_{1}    if Σ_i C_{y_i} y_i < 0


Proof: When w = 0, z_i = 0 and thus q_i = 0 for all i. We are left with choosing b. From the identities of Table 5-1, the cost reduces to Σ_i C_{y_i} max(0, 1 - y_i b), which is minimized at

    b = 1    if Σ_i C_{y_i} y_i >= 0
      = -1   if Σ_i C_{y_i} y_i < 0

Thus

    f(0) = m(0) + Σ_i C_{y_i} ξ_i(0,b) + λ Σ_{y_i=1} q_i(0,b)
         = 2 Σ_{y_i = -1} C_{-1}   if Σ_i C_{y_i} y_i >= 0
         = 2 Σ_{y_i = +1} C_{1}    if Σ_i C_{y_i} y_i < 0

At this point we can solve the Strategic Learning problem by searching over w ∈ R^n \ {0}. However, we have found it more advantageous to search over w ∈ R^n \ {0} where we also determine a scaling factor θ for each w. Define the function

    f_θ(w,b) = m(‖θw‖) + Σ_i C_{y_i} ξ_i(θw, b) + λ Σ_{y_i=1} q_i(θw, b)

Let b(w|θ) be an epsilon-optimal solution to min_b f_θ(w,b) for a fixed θw. Consider

    f(w|θ) = m(‖θw‖) + Σ_i C_{y_i} ξ_i(θw, b(w|θ)) + λ Σ_{y_i=1} q_i(θw, b(w|θ))

where


    ξ_i(θw, b(w|θ)) = max( 0, 1 - y_i( b(w|θ) + q_i(θw, b(w|θ)) + θ w'x_i ) ).

Informally, f(w|θ) is the total cost function as θ varies for a fixed w and the corresponding epsilon-optimal solution b(w|θ). This effectively reduces the search to one over the unit sphere. For all the problem sets we have examined, we have noticed that f(w|θ) is quasiconvex in θ. Whether this is true in general, or under some reasonable set of assumptions, remains unknown at this time. However, assuming it is true, we can perform our search as follows. Generate a normalized w ∈ R^n \ {0}. Perform a univariate search over θ where, for each such θ, we use Algorithm 1 to solve for b(w|θ). This process yields a solution (θw, b(w|θ)). If our assumption about the quasiconvexity of f(w|θ) is not true, then we just search over w ∈ R^n \ {0}.
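Under the quasiconvexity assumption, the univariate search over θ can be done with a golden-section search. The sketch below is ours; it assumes common reservation and change costs and reuses the hypothetical algorithm_1() and total_cost_given_b() helpers sketched earlier, and the search bounds on θ are arbitrary choices.

    import numpy as np

    def f_given_theta(theta, w_unit, X, y, r, c, C_pos, C_neg, lam, norm=2):
        w = theta * w_unit
        m_w = float(np.dot(w, w)) if norm == 2 else float(np.sum(np.abs(w)))
        z = np.full(len(X), r * np.max(np.abs(w) / c))   # common costs assumed
        b = algorithm_1(w, X, y, z, C_pos, C_neg, lam, m_w)
        return total_cost_given_b(b, w, X, y, z, C_pos, C_neg, lam, m_w)

    def golden_section(g, lo, hi, tol=1e-4):
        """Minimize a quasiconvex g on [lo, hi] by golden-section search."""
        phi = (np.sqrt(5.0) - 1.0) / 2.0
        a, b = lo, hi
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        while b - a > tol:
            if g(c1) < g(c2):
                b, c2 = c2, c1
                c1 = b - phi * (b - a)
            else:
                a, c1 = c1, c2
                c2 = a + phi * (b - a)
        return (a + b) / 2.0

    # Example use: theta_star = golden_section(
    #     lambda t: f_given_theta(t, w_unit, X, y, r, c, C_pos, C_neg, lam), 1e-3, 50.0)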


A Genetic Algorithm Formulation for Strategic Learning

We propose a Genetic Algorithm (GA) for solving the Strategic Learning problem. Below is a sketch of the algorithm. We state the algorithm assuming we will use the scaling discussed above; however, this can easily be removed from the formulation. We represent population strings as a string of bits representing an n-dimensional vector w. The real-valued coefficients are limited to those that can be represented by 32 bits. We use one bit as a sign and the rest for the magnitude. The GA produces new population members through a mixing process. Mixing consists of mutation and crossover operators. With probability χ, two selected strings are mated. The two strings produce two children formed using an affine linear crossover operator (Davis 1989). That is,

    w_child1 = α w_parent1 + (1 - α) w_parent2
    w_child2 = (1 - α) w_parent1 + α w_parent2

for α ∈ [-1, 1] randomly drawn. One of the children is randomly selected (a tournament selection could be used here instead). If the strings do not mate (with probability 1 - χ), one is randomly selected to survive. Once all the new member strings are formed, mutation operators are applied. The GA mutation operator is a uniform mutation operator where each bit is flipped with probability μ. We also implement a zero-coefficient mutation where w_j is changed to zero with probability μ_zero.

Once a string is chosen, it is scaled by the θ found to minimize f(w|θ) using a one-dimensional search. Parent strings are selected using rank selection (Goldberg 1989). The fitness function used in ranking is based on a lexicographic ordering involving three values. The three values, in order of importance, are f(w), the margin, and the number of non-zero coefficients. A string is more fit, lexicographically speaking, if, first, its f(w) is lower. If the two strings have equal f(w) values, we then choose the string having the larger margin. If these are also equal, we choose the string having the smaller number of non-zero values.
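The mixing operators can be illustrated as follows. This is our own sketch; it operates directly on real-valued coefficient vectors rather than on the 32-bit string encoding used in the dissertation, so the bit-flip mutation is replaced by a small random perturbation as a stand-in.

    import numpy as np

    rng = np.random.default_rng(0)

    def affine_crossover(w1, w2):
        """Affine linear crossover (Davis 1989) with alpha drawn from [-1, 1]."""
        alpha = rng.uniform(-1.0, 1.0)
        child1 = alpha * w1 + (1.0 - alpha) * w2
        child2 = (1.0 - alpha) * w1 + alpha * w2
        return child1 if rng.random() < 0.5 else child2   # keep one child at random

    def mutate(w, mu=1e-3, mu_zero=0.01, scale=0.1):
        """Coefficient-level stand-in for the bit-flip and zero-coefficient mutations."""
        w = w.copy()
        flip = rng.random(w.shape) < mu
        w[flip] += rng.normal(0.0, scale, size=flip.sum())  # perturb instead of flipping bits
        w[rng.random(w.shape) < mu_zero] = 0.0               # zero-coefficient mutation
        return w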


Table 5-2. GA sketch
Algorithm (Genetic Algorithm).
Given: mating probability χ, mutation rates μ and μ_zero, and population size p >= 1.
Initialization: Generate an initial population, population 0: randomly draw p strings with replacement. The one with the best objective (lexicographically speaking) is designated the queen bee.
Step 1: Form a new population as follows.
    (A) Repeat the following steps until the new population has p members.
        (1) Randomly choose two members from the old population using the rank selection process.
        (2) Form children through a mixing process consisting of crossover and mutation operations.
    (B) For each string, normalize the w represented by the string and reset the string to represent the normalized values. Compute the b value according to Algorithm 1 and an optimal θ value using a one-dimensional search algorithm. Replace w with θw. Compute the string's objective value, margin and number of non-zero coefficients. If these are lexicographically better than the queen bee's, make the string the new queen bee.
    (C) Replace the lowest-ranked member with the queen if the queen isn't already a population member.
Step 2: If stopping conditions are not met, return to Step 1.

Experimental Results

To test the efficacy of this GA approach, we performed runs on a 100-point subset of the German credit data set. For these runs we used the parameters χ = 0.2, μ = 10^-3, μ_zero = 0.01 and p = 10. The results of the 2-norm mixed integer programming model


101 developed in Chapter 4 and the GA approach de veloped in this chapter are compared for 1 in Table 5-3. Table 5-3. 2norm strategic SVM solu tions (P4) versus GA for 100 instances Attribute name P4 GA 0 Checking Account Balance 0 -0.01154 1 Duration 0.058722 -0.23769 18 Credit Amount -0.77481 -0.00486 19 Savings Account Balance 0.001333 0.001152 20 Employment Since 0.439863 0.19678 21 Instalment rate -0.43845 -0.18336 30 Residence Since 1.881121 0.130356 35 Age -0.03338 0.126218 42 # of Existing Credit Cards -0.63172 0.070311 47 Number of Dependents 1.209247 0.140167 *b 0.15576 -0.03833 # of Positives Moved 35 60 # of Negatives Moved 2 4 # of Positive Misclassifications 3 1 # of Neg. Misclassifications 4 12 1iyi iC 20.00 61.253 w 1.78646 1.01176 1 ii yq 42.48227 51.8381 Objective Value 72.6676 116.7200 When the results of the two approa ches are compared, we see that P4 outperforms GA in many ways. First, P4 forces fewe r positive agents to move and the total misclassification cost 1iyi iC in significantly lower compared to GA results. Also, objective value of P4 is lower than GA Thus, we conclude that GA approach is promising but needs further work. Discussion and Future Research In this chapter, we have reduced the Strategic Learning problem to an unconstrained search over nw and provided a Genetic Algorithm for solving the


102 problem. Perhaps, the most interesting capabilit y of this model is its power to scale up to large sample sizes in comparison to the mi xed integer programming model developed in Chapter 4. Also more work needs to be comp leted to identify the set of parameters for which the function | f w is quasiconvex in Although the results presented in this chapter are promising, further analysis of the approach is required.


103 CHAPTER 6 STRATEGIC LEARNING WITH CONSTRAINED AGENTS In Strategic Learning, the pr incipal anticipates possible alteration of attributes by agents wishing to achieve a positive classifi cation. In many cases, an agent has control over its attribute vector such th at it might choose to delibera tely modify one or more of its attributes in order to achieve favorable clas sification with respect to a classifier chosen by the principal. In that respect, the principal ha s no control over an agents modifications. However, in many cases, agents are constrained on how much an attribute can be modified. For example, agents can be constrained in ways such as upper and lower bounds on the modifications or the modifi cations may need to belong to a certain set of moves (like in checkers or chess). In this chapter, we explore the need for anticipating attribute adjust ment by constrained agents. Introduction and Preliminaries The goal of learning from data has a long history of investigation and the development of algorithms for that purpose st ill remain an important research area. Today, there exist many powerful learning al gorithms such as decision trees (Quinlan 1986), neural networks (Tam a nd Kiang 1992), Bayesian methods (Duda and Hart 1973), and support vector machines (SVMs) (Crist ianini and Shawe-Taylor 2000) among others, that are utilized in many different domains of data mining. All of these algorithms are based on the supervised learning model wh ere the learning algorithm is supplied a training set of correctly labele d examples. An example is an attribute vector augmented with a membership identifier (the label). The task is to find an opt imum classifier that


104 separates the set of examples according to th eir memberships. For example, the training set can be a collection of spam and non-spam emails where each ema il is represented by a binary vector of attributes each having a value of 1 if a particular word is present in the email or 0 otherwise. The aim is to discove r a filter (i.e., a cla ssifier) which correctly identifies an email as either spam or non-spam. This type of a classi fication task is quite common and appears in many other areas such as credit risk evaluation (Chapter 4), fraud detection (Fawcett and Provost 1997), text cate gorization (Dumais et al.1998 Joachims 1998) etc. In many cases this classification task is carried out by a decision maker whose goal is to determine a function to correctly classi fy instances. Throughout this discussion, we will refer to this decision maker as the p rincipal and these instances as agents which are defined as self-interes ted decision making units. For example, for the spam categorization problem the principa l can be thought of as the sp am filter and the agents as emails. Although emails are not self-inter ested decision making units, per se, their creators (the spammers) are. Recent research in the area (C hapter 4, Dalvi et al.1994) consider learning where the sample space, from which the traini ng examples are drawn, is deliberately manipulated by self-interested agents. In earli er Chapters, we define d this new paradigm of learning under such behavior as Str ategic Learning since our formulation incorporates strategic behavior in the classical supervised learning problem. In our setting, a principal seeking an ultimate classi fication rule anticipates the possible strategic behavior of self-interested agen ts who are subject to classification. Dalvi et al. approach the same problem by defining this adversar ial classification as a two period game


105 between two players, the classifier and the adversary. They focus on Bayesian classification method and conduct experiments on a spam problem. We formulate the problem using rational expe ctations theory and thus we model an infinite game version between the principal and the agents. In Chapters 4 and 5 we developed methods to determine linear disc riminant classifiers in the presence of strategic behavior by agents. In Chapter 4, we used a powerful induction method known as support vector machines (Cristiannini a nd Shawe-Taylor 2000) which is also the method that will be used in this chapter. In Chapter 4, we assumed agents have a linear disutility function and fixed reservation co sts. Under these conditions, for separable datasets, we discovered that if the principal anticipates optimal agent behavior for a given classifier as rational expectati ons theory depicts then he/she can choose a cl assifier that cant be defeated by any actions of agents give n their cost structures. We showed that the results of nave discriminant analysis undert aken without taking an ticipating the potential strategic behavior may strictly differ from the results of the analysis done with the awareness that agents may behave strategica lly. We concluded that the naive approach when used in the presence of strategic beha vior could be ineff ective and misleading. We illustrated our results on a credit-ri sk evaluation problem. First, we characterized an optimal strate gy for the principal on a separa ble data set. Second, for non-separable data sets, we provided mixed integer program ming solutions and applied them to a credit-risk evaluation setting. Strategic Learning assumes that agents have control over their attribute vectors such that they might choose to deliberately m odify one or more of their attributes in order


106 to achieve favorable classification with resp ect to whichever cla ssifier the principal chooses. In that respect, the principal has no control over an agents actions. However, an agents actions are constraine d by three facts. First, an agent is assumed to have a constant predefined reserv ation cost such that the agent can choose to modify an attribute (or collection of attri butes) only if the cost of doing so does not exceed the reservation cost. Second, the agents are economically constrained by the cost of changing an attribute which may vary from attribute to attribute and agent to agent. Subsequently, an agents actions depend highly on this cost structure. These first two facts were considered in the earlier chapters of this dissertation. In this chapter, we consider a third fact which is central to the main idea of this chapter. There might be bounds on how much an attribute can be cha nged by an agent (other than just those induced by the reservation costs constraint). That is, possible attribute manipulations performed by agents need to be constrai ned for other reasons. For example, most attributes in practical applic ations have upper and lower bounds In a credit card domain, attributes such as age, number of credit ca rds, income, education level, etc. all have natural bounds. Often there are more involved constraints. For example, in financial domains, simple income and balance sheet re lationships dictate line ar interdependencies between many common attributes. Thus, usuall y, it might not be realistic to let each attribute be modified unbounde dly which was not modeled in our earlier chapters. In addition to natural bounds on attribut es the representation chosen for the instance space might impose addi tional constraints. For exam ple, in a spam domain, if one chooses a binary representation for each ema il (1 if a word is present, 0 if not) then the problem is automatically constrained sin ce the agent (the spamme r in this case) can


only alter an email by adding or deleting words, thus changing an attribute from 0 to 1 or vice versa. So the attributes are constrained to change only by one unit.

Motivated by these facts, we investigate the situation where there is a need to enforce constraints on agent behavior in the context of Strategic Learning. Thus we extend the concept of Strategic Learning to constrained agents and apply our results to the spam categorization problem. We focus on linear discriminant functions as our hypothesis space.

Model

Suppose $x \in X \subseteq \mathbb{R}^n$ represents an agent's attributes. Each agent is a member of one of two classes, positive (+1) or negative (-1), identified by $y$. The collection of examples forms the training set $S = \left((x_1, y_1), \ldots, (x_\ell, y_\ell)\right)$, where the subscript $i$ identifies the $i$th observation $x_i$ (and $y_i$ its label) sampled from $X$ (sampling is i.i.d.). Using the training set, an induction method determines a vector $w$ and scalar $b$ used to provide classifications over $X$ as follows. An observed vector $x_i \in X$ is classified positive if it satisfies $w'x_i + b \ge 0$; otherwise it is classified negative. That is, $(w,b): \mathbb{R}^n \to \{-1, +1\}$ and the goal is to find a $(w,b)$ that optimizes some measure.

We start by assuming a linearly separable sample space and use SVM to find a maximal margin classifier. This changes the decision rule so that $x_i \in X$ is classified positive if it satisfies $w'x_i + b \ge 1$. Given a linearly separable sample space, a hyperplane $(w,b)$ that solves the optimization problem

$$S_0: \quad \min_{w,b} \; w'w \quad \text{s.t.} \quad y_i\left(w'x_i + b\right) \ge 1, \quad i = 1, \ldots, \ell$$


gives us a maximal margin hyperplane (Cristianini and Shawe-Taylor 2000).

Let $c_i'd$, $c_i \ge 0$, be the cost to agent $i$ of modifying the agent's true attribute vector $x_i \in \mathbb{R}^n$ to $x_i + D_i(w)d_i$ for $d_i \ge 0$, where $D_i(w)$ is a diagonal matrix defined by

$$D_{j,j}(w) = \begin{cases} 1 & \text{if } w_j \ge 0 \\ -1 & \text{if } w_j < 0. \end{cases}$$

Basically, $D_{j,j}(w)$ defines the profitable direction of change for the $j$th attribute. Let $V$ be the set of feasible attribute values for agents. Given $D_i(w)$, we constrain an agent's changed attributes by $x_i + D_i(w)d_i \in V$. For example, if, as in the spam case, all attribute vectors are binary valued, we have $V = \{0,1\}^n$. In other cases, it is possible to have different constraints. For example, $V = \{x : l \le x \le u\}$ imposes simple lower and upper bounds on the attributes.

Let the reservation cost of being labeled a positive example be $r_i$ for agent $i$. We say that each self-interested agent will solve the following optimization problem given $(w,b)$, where the agent minimizes the cost of modification subject to being classified as positive. Introducing the additional constraint $x_i + D_i(w)d_i \in V$ into the agent's problem of Chapter 4, we get the constrained agent's problem:

$$\min_{d_i \ge 0} \; c_i'd_i \quad \text{s.t.} \quad w'\left(x_i + D_i(w)d_i\right) + b \ge 1, \quad c_i'd_i \le r_i, \quad x_i + D_i(w)d_i \in V. \qquad (1)$$

Note that if $w'x_i + b \ge 1$ then $d_i = 0$ is a feasible solution.
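To make problem (1) concrete, the following is a minimal sketch that enumerates an agent's best response by brute force for a handful of binary attributes. It is purely illustrative: the function and variable names are ours, the feasible set is taken to be $V = \{0,1\}^n$ with $d_i \in \{0,1\}^n$ (as in the spam setting discussed later), and an exact integer programming solver would be used for anything beyond toy sizes.

import itertools
import numpy as np

def profitable_direction(w):
    # Diagonal of D(w): +1 where increasing the attribute raises the score,
    # -1 where decreasing it does (using the sign convention of the text).
    return np.where(w >= 0, 1.0, -1.0)

def agent_best_response(w, b, x, c, r, V_check):
    """Brute-force solution of the constrained agent's problem (1) with d in {0,1}^n:
    minimize c'd subject to w'(x + D(w) d) + b >= 1, c'd <= r, and x + D(w) d in V."""
    n = len(x)
    D = profitable_direction(w)
    if w @ x + b >= 1:                      # already classified positive: d = 0 is optimal
        return np.zeros(n)
    best_d, best_cost = np.zeros(n), None   # "no move" is returned if nothing is feasible
    for bits in itertools.product([0, 1], repeat=n):
        d = np.array(bits, dtype=float)
        x_new = x + D * d
        cost = float(c @ d)
        if cost <= r and V_check(x_new) and w @ x_new + b >= 1:
            if best_cost is None or cost < best_cost:
                best_d, best_cost = d, cost
    return best_d

# Toy instance with V = {0,1}^n: the agent needs three unit-cost flips to reach the margin.
in_binary_V = lambda v: bool(np.all((v == 0) | (v == 1)))
w = np.array([0.7, -1.0, 0.3])
x = np.array([0.0, 1.0, 0.0])
print(agent_best_response(w, b=0.2, x=x, c=np.ones(3), r=3, V_check=in_binary_V))  # -> [1. 1. 1.]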


If the constrained agent's problem defined above has a feasible solution $d_i^*(w,b)$, then the agent has enough reservation cost to modify his attributes for a given $(w,b)$. We set $d_i^*(w,b) = 0$ for the case where the problem is infeasible. Taking into account this potential strategic behavior, the principal solves the following problem:

$$S_1: \quad \min_{w,b} \; w'w \quad \text{s.t.} \quad y_i\left(w'\left(x_i + D_i(w)\,d_i^*(w,b)\right) + b\right) \ge 1, \quad i = 1, \ldots, \ell.$$

In Chapter 4, for the case of unconstrained agents (agents without the additional $x_i + D_i(w)d_i \in V$ constraint), we characterized an optimal solution for the above problem for separable datasets under the assumption that $r_i$ and $c_i$ are constant (say $r$ and $c$) across agents. We showed that if $(w^*, b^*)$ solves $S_0$ then, under the assumptions given,

$$\left(\frac{2w^*}{2+t^*}, \; \frac{2b^* - t^*}{2+t^*}\right)$$

solves $S_1$, where $t^* = r\max_k\left(w_k^*/c_k\right)$. Thus a principal anticipating strategic behavior of agents will use a classifier that is a scaled and shifted version of the $(w^*, b^*)$ determined without taking strategic behavior into consideration. The shift and scaling depend on the cost structure for modifying attribute values ($c$), the reservation cost for being labeled a positive case ($r$), and $(w^*, b^*)$.

However, this solution is not applicable to the constrained version of interest for several reasons. First, in Chapter 4 we found that an agent optimally chooses to modify only one attribute for a given $(w,b)$. But due to the constrained nature of (1), we find that in the case of constrained agents each agent might need to modify multiple attributes. The decision of which to modify is highly dependent on the particular


attribute vector $x_i$ and the nature of the constraints imposed on the agent. Generally speaking, in constrained cases there is no guarantee that the optimal move will be in only one direction. On the contrary, it is typically a combination of movements in different directions for the cases we have seen. Also, in the unconstrained version, if the modification of a particular attribute is found to be optimal for one agent then that same attribute is optimal for all agents in the sample space. But in the constrained version, there is no guarantee that the optimal movement is the same across all agents. Thus, Theorem 1 as given in Chapter 4 is not applicable, since the shifting and scaling may not force linear separability after the moves are realized; in particular, for some marginal positive agents the modification necessary to stay on the positive side of the scaled and shifted hyperplane may not be feasible. This may allow some negative agents, with enough reservation cost, to move and blend in with the positive agents. Thus, we face the issue of nonseparability caused by strategic behavior. Since linear separability is no longer guaranteed, we turn our focus to the general Strategic Learning formulation developed in Chapter 4. Assuming that the principal incurs fixed costs of misclassification $C_{+1}$ and $C_{-1}$ for positive and negative misclassifications respectively, $S_1$ reduces to the following:

$$S_1: \quad \min_{w,b} \; w'w + \sum_i C_{y_i}\xi_i + \sum_{i:\, y_i = +1} q_i(w,b) \quad \text{s.t.} \quad y_i\left(w'x_i + q_i(w,b) + b\right) \ge 1 - \xi_i, \quad i = 1, \ldots, \ell,$$

where


$$q_i(w,b) = \begin{cases} 0 & \text{if } 1 - b - w'x_i \le 0 \\ 0 & \text{if } 1 - b - w'x_i > z_i \\ 1 - b - w'x_i & \text{otherwise} \end{cases}$$

and

$$z_i = r_i \max_{j:\, c^i_j > 0} \left(w_j / c^i_j\right),$$

for $\xi_i \ge 0$ and $C_{y_i} \ge 0$. Here $\xi_i$ is the margin slack variable measuring the shortfall of a point from its margin with respect to the hyperplane. The objective function of $S_1$ has three components: $w'w$ is one over the square of the margin (the margin is to be maximized in statistical learning theory (Vapnik 1998)), while the second component, $\sum_i C_{y_i}\xi_i$, is the sum of individual misclassification costs. The third component, $\sum_{i:\, y_i = +1} q_i(w,b)$, accounts for the extra effort that needs to be exerted by positive agents as a result of strategic behavior (see Chapter 4 for details).

In Chapter 5 we defined the unconstrained version of the Strategic Learning problem, which is

$$f(w,b) = m(\|w\|) + \sum_i C_{y_i}\xi_i(w,b) + \sum_{i:\, y_i = +1} q_i(w,b),$$

where $\xi_i(w,b) = \max\left(0,\; 1 - y_i\left(w'\left(x_i + D_i(w)\,d_i^*(w,b)\right) + b\right)\right)$, $\|w\|$ is a norm of $w$, and $m$ is an increasing function with $m(0) = 0$. Then, we developed Algorithm 1, which finds an epsilon-optimal solution to $\min_b f(b \mid w)$, where $f(b \mid w)$ denotes the function $f(w,b)$ for a fixed $w$.
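As a quick numerical illustration of the quantities $z_i$ and $q_i(w,b)$ just defined, the short sketch below evaluates them for a single positive agent. The numbers and names are our own toy assumptions, and $z_i$ is computed from the reconstructed definition $z_i = r_i \max_{j:\, c^i_j > 0}(w_j/c^i_j)$.

import numpy as np

def z_value(w, c, r):
    # z_i: the largest score increase the agent can buy with its reservation cost r,
    # spending it all on the single most cost-effective attribute.
    return r * max(wj / cj for wj, cj in zip(w, c) if cj > 0)

def q_value(w, b, x, c, r):
    # q_i(w,b): the effort a positive agent exerts to reach the margin; it equals the
    # shortfall 1 - b - w'x when that shortfall is positive and affordable (<= z_i), else 0.
    shortfall = 1.0 - b - float(np.dot(w, x))
    if shortfall <= 0 or shortfall > z_value(w, c, r):
        return 0.0
    return shortfall

w = np.array([0.9, 0.4])
x = np.array([0.5, 0.5])          # a positive agent sitting just inside the margin
print(q_value(w, b=0.0, x=x, c=np.ones(2), r=1.0))   # shortfall 0.35 <= z = 0.9, so q = 0.35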


In that chapter, Algorithm 1 is used by a Genetic Algorithm to develop a solution to the Strategic Learning problem for unconstrained agents. The fundamentals of Algorithm 1 are explained thoroughly in Chapters 4 and 5, but we provide a summary of the general idea of the algorithm here, since in this chapter we propose a similar algorithm for the constrained agents case.

The key point of Algorithm 1 is to search over the finite set of inflection and discontinuity points of the objective function that are caused by the strategic behavior of agents. The function is linear between two such consecutive points. In other words, an agent's strategic behavior can be characterized for a fixed $w$ (with $b$ variable) and consists of different regions in which the agent takes different strategic actions (move or do not move). Algorithm 1 is a search over the inflection and discontinuity points of these regions for each agent, and the proof of optimality follows from knowing that between two such consecutive points the cost function is linear, meaning only the end points need be evaluated. Algorithm 1 examines each of these points, which are potential values of $b$, and evaluates the total cost at these points to find the one with the lowest cost. This finds a $b$ providing an epsilon-optimal value for $S_1$ for fixed $w$.

In this chapter, we contribute by bringing additional constraints into the agent's problem, which affects the way the movement of each agent is determined with respect to a given hyperplane. In that respect, the general idea of Algorithm 1 is applicable to our case, but the calculation of the inflection and discontinuity points depends on the additional constraints imposed on the agent.

In the next section, we first look at the agent behavior under additional constraints specific to the spam categorization problem and develop Algorithm 2, which finds an optimal


solution to the constrained agent's problem. Later, we develop Algorithm 3, which alters Algorithm 1 to take into account the specific form of agent constraints that arise in a spam categorization application. The Genetic Algorithm developed in Chapter 5 is not affected by the issues that arise due to the introduction of additional agent constraints and thus generalizes to our case of constrained agents.

Application to Spam Filtering

In this section we apply our results to the spam categorization problem and develop algorithms specific to that problem. We choose to represent emails in binary format, where $x_i[j] = 1$ when the $j$th word is present in the $i$th email and $x_i[j] = 0$ otherwise. The emails are labeled such that spam emails, corresponding to negative cases, are assigned the label -1, whereas the non-spam emails, corresponding to positive cases, are assigned the label +1. In our scenario, spammers might modify their emails by adding or deleting words (a third possibility is replacing words, which corresponds to the deletion of one word and the addition of another at the same time). Given this setting, the spammer's problem becomes:

$$\min_{d_i} \; c_i'd_i \quad \text{s.t.} \quad w'\left(x_i + D_i(w)d_i\right) + b \ge 1, \quad c_i'd_i \le r_i, \quad x_i + D_i(w)d_i \in \{0,1\}^n.$$

By setting

$$D^i_{j,j}(w) = \begin{cases} 1 & \text{if } w_j > 0 \text{ and } x_i[j] = 0 \\ -1 & \text{if } w_j < 0 \text{ and } x_i[j] = 1 \\ 0 & \text{otherwise,} \end{cases}$$


we get a simpler version:

$$\min_{d_i} \; c_i'd_i \quad \text{s.t.} \quad w'\left(x_i + D_i(w)d_i\right) + b \ge 1, \quad c_i'd_i \le r_i, \quad d_i \in \{0,1\}^n.$$

In this case there are only two possible modifications that a spammer can make on a particular attribute $x_i[j]$. If the true attribute value is 1 then the change can be either -1 or 0, and if the attribute value is 0 then the change can be either +1 or 0. These facts are both captured by the design of $D_i(w)$, along with the constraint $d_i \in \{0,1\}^n$ guaranteeing that $D_i(w)d_i$ gives a legitimate modification. As an example, consider the following spam email.

Spam ($y_i = -1$): $x_i = (0, 0, 0, 1)'$ and $w = (-0.8, 1.2, 0, -0.5)'$.

Let $\tilde{x}_i = x_i + D_i(w)d_i$. Clearly, $\operatorname{diag}\left(D_i(w)\right) = (0, 1, 0, -1)'$, and one possible modification that the spammer can make is $d_i = (0, 1, 0, 1)'$, changing $x_i$ to


$$\tilde{x}_i = (0, 1, 0, 0)'.$$

We assume that $r_i$ is an integer value and that $c_i$, the unit cost of changing an attribute, is identical for each attribute and equal to 1. It is hard to argue to the contrary, since changing any one attribute is no harder than changing any other. With these assumptions, the agent's problem reduces to a form of the binary knapsack problem, and we develop Algorithm 2, a greedy algorithm for finding an optimal solution to the agent's problem for a given $(w,b)$.

The agent's problem resembles the binary knapsack problem, with the similar goal of packing the knapsack with objects in order to maximize the total value of the packed objects without exceeding the knapsack's capacity. However, in our case the goal is to minimize $c_i'd_i$ while satisfying the constraint $w'\left(x_i + D_i(w)d_i\right) + b \ge 1$, which can be thought of as the knapsack capacity, with the exception that we allow the last object added to exceed the knapsack capacity. This corresponds to a surplus in the constraint $w'\left(x_i + D_i(w)d_i\right) + b \ge 1$. Essentially, in the constrained agent's problem each weight $w_j$ can be thought of as an object, and our goal is to satisfy the constraint $w'\left(x_i + D_i(w)d_i\right) + b \ge 1$ without violating the reservation cost constraint ($c_i'd_i \le r_i$), which makes it more involved than a regular knapsack problem. More specifically, if the reservation cost constraint would be violated before $w'\left(x_i + D_i(w)d_i\right) + b \ge 1$ is met, then $d_i^* = 0$.

Although greedy algorithms fail to always provide an optimal solution to binary knapsack problems, in our case, since we assume $c_i = 1$ and allow the last object


added to exceed the capacity, a greedy algorithm proves optimal. A key point is that since $c_i = 1$ (i.e., the cost of modification is equal for all attributes), the agent is indifferent among feasible modifications, since they all cost the same. Hence, the proof of optimality follows from the fact that the agent will choose a modification that gives a maximum movement, since the goal is to satisfy $w'\left(x_i + D_i(w)d_i\right) + b \ge 1$ while minimizing $c_i'd_i$.

Now, define $L$ to be the list of indexes such that $|w_{L_j}| \ge |w_{L_{j+1}}|$ for consecutive entries $L_j, L_{j+1} \in L$. In other words, $L$ is the list of attribute indexes sorted according to the magnitude of the corresponding $w_j$ (in descending order). For example, for $w = (-0.8, 1.2, 0, -0.5)'$, $L$ would be $L = (1, 0, 3, 2)$.

Essentially, Algorithm 2 takes $(w,b)$ and $x_i$ (an arbitrary hyperplane and an attribute vector) as inputs and then outputs $d_i^*(w,b)$, which is the minimum cost modification for an agent in order to be classified as positive, if that is possible given the reservation cost. By sequentially visiting each attribute index drawn from the list $L$, the algorithm checks whether a modification is possible on the current attribute given $D^i_{j,j}(w)$. If so, the necessary modification is made. The algorithm stops when the reservation cost constraint would be violated, or a positive classification is achieved, or all the indexes contained in $L$ have been visited. The last step is to check whether the movement is big enough for


the agent to be classified as a positive case; otherwise the agent will prefer not to move (setting $d_i^*(w,b) = 0$).

Algorithm 2
Input: $(w,b)$ and $x_i$
Set $d_i = 0$, $sum = w'x_i + b$, $r = 0$ and $j = 1$
while (($sum < 1$) and ($j \le |L|$) and ($r < r_i$)) {
    if $D^i_{L_j,L_j}(w) \ne 0$ {
        $sum = sum + D^i_{L_j,L_j}(w)\, w_{L_j}$
        $d_i[L_j] = 1$
        $r = r + 1$
    }
    $j = j + 1$
}
if ($sum < 1$) set $d_i = 0$
Output: $d_i^*(w,b) = d_i$

Algorithm 2 helps to explain why Theorem 1 does not apply in our case. If the above algorithm finds a non-zero solution $d_i^*(w,b)$, corresponding to the modification of a set of attributes (or words), then the spammer has enough reservation cost to modify an email. However, for Theorem 1 to hold, the same set of attributes has to be optimal for every email. This is not guaranteed in our case for several reasons. First, for example, if the deletion of the $i$th word is found to be part of an optimal modification for a particular email, then this implies that the $i$th word exists in that email. However, there is no guarantee that it exists in the rest of the emails too. In fact, the $i$th word may not even exist in any of


the remaining emails, which makes the deletion of that word infeasible. Thus, the optimal modification for each email may be different, which contradicts the basic idea of Theorem 1.

Given the agent's problem, the principal's problem can be written as

$$\min_{w,b} \; w'w + \sum_i C_{y_i}\xi_i \quad \text{s.t.} \quad y_i\left(w'\left(x_i + D_i(w)d_i^*\right) + b\right) \ge 1 - \xi_i, \; i = 1, \ldots, \ell; \quad x_i + D_i(w)d_i^* \in \{0,1\}^n; \quad c_i'd_i^* \le r_i; \quad d_i^* \in \{0,1\}^n; \quad \xi_i \ge 0.$$

Recall that the objective here is to minimize $f(w,b) = m(\|w\|) + \sum_i C_{y_i}\xi_i(w,b) + \sum_{i:\, y_i = +1} q_i(w,b)$. However, in the spam categorization problem it is important to realize that positive agents are non-strategic, since it is reasonable to think that the author of a non-spam email is unlikely to change the content of his/her email out of fear that it might be flagged by the spam filter as spam. Legitimate users of the internet (users other than spammers, hackers, etc.) normally do not engage in such strategic behavior, so the term $\sum_{i:\, y_i = +1} q_i(w,b)$ is zero for the spam categorization problem, since no effort will be exerted by positive agents. Using the ideas of Algorithm 1, we develop Algorithm 3 to produce an optimal shift, that is, an optimal $b$ for $\min_b f(b \mid w)$ where $f(b \mid w) = m(\|w\|) + \sum_i C_{y_i}\xi_i(b \mid w)$.
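Before turning to the search over $b$, the following is a minimal Python rendering of the greedy agent move of Algorithm 2 above. The exact stopping details are our reading of the pseudocode, and the numeric example reuses the worked spam email from the text, so treat this as an illustrative sketch rather than the dissertation's own code.

import numpy as np

def spam_direction(w, x):
    # D^i(w) in the binary spam setting: a word can be added (+1) only if it is absent
    # and has a positive weight, and deleted (-1) only if it is present with a negative weight.
    D = np.zeros(len(w))
    D[(w > 0) & (x == 0)] = 1.0
    D[(w < 0) & (x == 1)] = -1.0
    return D

def greedy_agent_move(w, b, x, r):
    """Greedy best response of a spam agent with unit modification costs: visit attributes
    in order of decreasing |w_j| and flip each profitable one until the email is classified
    positive, the reservation cost r is exhausted, or no profitable flips remain."""
    D = spam_direction(w, x)
    order = np.argsort(-np.abs(w))            # the list L: indexes sorted by |w_j|, descending
    d = np.zeros(len(w))
    score = float(np.dot(w, x)) + b
    spent = 0
    for j in order:
        if score >= 1 or spent >= r:
            break
        if D[j] != 0:
            d[j] = 1.0
            score += D[j] * w[j]               # each flip raises the score by |w_j|
            spent += 1
    return d if score >= 1 else np.zeros(len(w))   # cannot reach the margin: do not move

# The worked spam example from the text:
w = np.array([-0.8, 1.2, 0.0, -0.5])
x = np.array([0.0, 0.0, 0.0, 1.0])
print(greedy_agent_move(w, b=0.0, x=x, r=2))       # -> [0. 1. 0. 1.]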


Figure 6-1 illustrates how the cost $\xi_i(w,b) = \max\left(0,\; 1 - y_i\left(w'\left(x_i + D_i(w)\,d_i^*(w,b)\right) + b\right)\right)$ varies with $b$ for the following case.

Spam ($y_i = -1$): $x_i = (0, 0, 0, 1)'$, for $w = (-0.8, 1.2, 0, -0.5)'$, $c_i = (1, 1, 1, 1)'$ and $C_{-1} = 1$.

[Figure 6-1. Spam email with $r_i = 2$: the cost $\xi_i$ plotted as a function of $b$.]
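The discontinuity points discussed below for Figure 6-1 can be recovered directly from the greedy move order: the agent needs one more flip each time $b$ drops past $1 - w'x_i - z_k$, where $z_k$ is the cumulative score gain of the first $k$ profitable flips. The short sketch below is our own illustration under those assumptions and reproduces the values $b = 1.5$, $0.3$ and $-0.2$ quoted in the text.

import numpy as np

def spam_breakpoints(w, x, r):
    """Values of b at which the spam agent's optimal greedy move changes, assuming unit
    modification costs: b_k = 1 - w'x - z_k for k = 0, 1, ..., up to r flips."""
    D = np.zeros(len(w))
    D[(w > 0) & (x == 0)] = 1.0
    D[(w < 0) & (x == 1)] = -1.0
    gains = sorted(np.abs(w[D != 0]), reverse=True)[:int(r)]   # best r unit-cost flips
    base = 1.0 - float(np.dot(w, x))
    points, z = [], 0.0
    for g in [0.0] + list(gains):
        z += g
        points.append(base - z)     # below this b the agent needs one additional flip
    return points

w = np.array([-0.8, 1.2, 0.0, -0.5])
x = np.array([0.0, 0.0, 0.0, 1.0])
print(spam_breakpoints(w, x, r=2))   # approximately [1.5, 0.3, -0.2]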


Essentially, Figure 6-1 consists of linear regions corresponding to different modifications $d_i^*(w,b)$ of the spam email, assuming the spammer uses the greedy strategy (Algorithm 2). Each region is characterized by the corresponding $d_i^*(w,b)$, and the discontinuities occur where the spammer needs to make one more modification to $d_i^*(w,b)$ in order to achieve or maintain the positive classification gained by strategic behavior. As an example, in Figure 6-1 there are three discontinuity points. The first discontinuity point, at $b = 1.5$, occurs when the agent makes his first available modification to achieve positive classification (i.e., he changes $d_i^* = (0, 0, 0, 0)'$ to $d_i^* = (0, 1, 0, 0)'$). The second discontinuity point, at $b = 0.3$, occurs when the spammer makes his second available modification by changing the current $d_i^* = (0, 1, 0, 0)'$ to


$d_i^* = (0, 1, 0, 1)'$ and thus maintains the positive classification. The last discontinuity point, at $b = -0.2$, is where the agent cannot make any more modifications to maintain positive classification and so chooses not to make any modifications, setting $d_i^* = (0, 0, 0, 0)'$.

As pointed out earlier, in the spam categorization problem positive agents are non-strategic, so we safely assume that $d_i^*(w,b) = 0$ for non-spam cases. Thus, this leads us to focus only on negative (spam) cases. However, we still analyze non-spam cases to determine the points of inflection of the cost function $\xi_i(w,b)$. Figure 6-2 illustrates how $\xi_i(w,b)$ changes with $b$ for the following non-spam email with no strategic behavior.

Non-spam ($y_i = +1$): $x_i = (1, 1, 1, 1)'$, for $w = (-0.8, 1.2, 0, -0.5)'$, $c_i = (1, 1, 1, 1)'$ and $C_{+1} = 1$.


[Figure 6-2. Non-spam email without strategic behavior: the cost $\xi_i$ plotted as a function of $b$.]

In the following, we develop Algorithm 3, which is a search over the discontinuity and inflection points of each agent. As was mentioned before, Algorithm 3 is a modified form of Algorithm 1, with the exception that Algorithm 3 is developed for constrained agents in the spam categorization problem. Details of Algorithm 1 can be found in Chapters 4 and 5. The key idea is to search over the finite discontinuity and inflection points of the different regions of agent behavior. For non-strategic positive agents the only inflection point is at $b = 1 - w'x_i$. For negative agents, at the last point of discontinuity (when all available modifications have been made), we evaluate the function $f(b \mid w)$ at an epsilon lower point, because the function is right-continuous at all the discontinuity points and our goal is to minimize the function. There are a finite number of such points, and Algorithm 3


examines each of these points and evaluates the total cost function $f(b \mid w)$ at them. Let $\epsilon > 0$ be sufficiently small and fixed. In the following, $B$ denotes the set of candidate values of $b$ collected by the algorithm.

Algorithm 3
Input: $w$
Set $B = \emptyset$
for ($i = 1, \ldots, \ell$) {
    if ($y_i = +1$)
        $B = B \cup \{1 - w'x_i\}$
    else {
        Set $z_i = 0$, $r = 0$, $j = 1$
        $B = B \cup \{1 - w'x_i\}$
        while ($j \le |L|$ and $r < r_i$) {
            if $D^i_{L_j,L_j}(w) \ne 0$ {
                $z_i = z_i + D^i_{L_j,L_j}(w)\, w_{L_j}$
                $B = B \cup \{1 - w'x_i - z_i\}$
                $r = r + 1$
            } // if
            $j = j + 1$
        } // while
        $B = B \cup \{1 - w'x_i - z_i - \epsilon\}$
        if ($z_i < 2$)
            $B = B \cup \{-1 - w'x_i\}$


    } // else
} // for
Set $f^* = \infty$
for each $b \in B$ {
    $f = f(b \mid w)$
    if $f < f^*$ then {
        $f^* = f$, $b^* = b$
    }
} // for
Output: $b^*$

Conclusion

In this chapter we investigated the situation where there is a need to enforce constraints on agent behavior in the context of Strategic Learning, and we extended the concept of Strategic Learning to constrained agents. We selected the spam categorization problem as a particular example and developed algorithms specific to that problem. This chapter presents one crucial extension of Strategic Learning and stands as an example of how one can modify the general Strategic Learning model to accommodate particularities that arise due to the nature of a specific application. In that respect, it is an attempt to deviate from the ideal setting to help analyze more realistic situations, such as the case of constrained agents. The problem of constrained agents is obviously not limited to spam categorization and can be observed in many other problems which involve agents with limited strategic behavior, such as fraud detection.


CHAPTER 7
CONCLUSION

In this study we investigated the Strategic Learning problem, which is defined as the task of a principal whose goal is to discriminate between certain types of agents that are self-interested, utility-maximizing decision making units. We concentrated on linear discriminant functions for binary classification and focused on support vector machines.

In both Chapters 2 and 3, we provide an overview of Strategic Learning, with the distinction that each is intended to reach a different type of reader. In Chapter 4, we give a comprehensive and intricate study of Strategic Learning and provide the details of the model. In Chapter 5, we develop a Genetic Algorithm for Strategic Learning to solve larger versions of the problem. In Chapter 6, we extend the Strategic Learning model to handle more complex agent behaviors.

This study is organized as a collection of articles, each of which corresponds to one chapter of the entire study. Each chapter is complete within itself and includes a conclusion and future work section related to the aspects of the study covered in that specific chapter. Due to this self-contained style of preparation, we ask the reader to refer to those sections for a detailed description of possible future work areas.


APPENDIX
PROOF OF THEOREM 1

Theorem 1 states that when the training set is separable, a solution to the non-strategic SVM problem (P1) can be perturbed to a solution to the strategic SVM problem (P2). We start our proof by first showing a simple result.

Lemma 1. If $(\tilde{w}, \tilde{b})$ solves P2 then
(a) $d_i^*(\tilde{w}, \tilde{b}) = 0$ for all $y_i = -1$;
(b) $d_i^*(\tilde{w}, \tilde{b})$ corresponds to the maximal move $r\max_k\left(w_k/c_k\right)$ for at least one $i$ with $y_i = +1$.

Proof: (a) Since $(\tilde{w}, \tilde{b})$ solves P2, no truly negative agent can achieve a positive classification, so there is no incentive for such agents to move. (b) At least one positive support vector will have been moved its maximal amount, or else the SVM margin could be increased, which would be a contradiction. The maximum move is $r\max_k\left(w_k/c_k\right)$.

Theorem 1. $(w^*, b^*)$ solves P1 if and only if $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$ solves P2, where $t^*$ is given by $t^* = r\max_k\left(w_k^*/c_k\right)$.

Proof:


Since $S$ has at least two elements having opposite labels, P1 must have $w^* \ne 0$. Since $c$ and $r$ are positive, $t^* > 0$. Let $k^* = \arg\max_k\left(w_k^*/c_k\right)$.

We start by first showing feasibility of $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$ for P2. We break this part of the proof into two parts, one handling positively labeled agents and the other negatively labeled ones.

Case $y_i = +1$: We start by noting that $w^{*\prime}x_i + b^* \ge 1$ for all $y_i = +1$, with equality for positive support vectors. Let $i$ index a positive support vector. Now

$$\max\left(0,\; 1 - \frac{2w^{*\prime}x_i}{2+t^*} - \frac{2b^*-t^*}{2+t^*}\right) = \frac{2t^*}{2+t^*},$$

which is exactly the maximal move $r\max_k\left(\frac{2w^*_k}{(2+t^*)\,c_k}\right) = \frac{2t^*}{2+t^*}$ available to the agent under the scaled classifier, and

$$\frac{2w^{*\prime}x_i}{2+t^*} + \frac{2b^*-t^*}{2+t^*} + \frac{2t^*}{2+t^*} = \frac{2\left(w^{*\prime}x_i + b^*\right) + t^*}{2+t^*} \ge 1.$$

Furthermore, we need the cost of this move to satisfy $c_{k^*}d_{i,k^*} \le r$. Now

$$c_{k^*}d_{i,k^*} = \frac{2t^*}{2+t^*}\cdot\frac{(2+t^*)\,c_{k^*}}{2w^*_{k^*}} = \frac{t^*c_{k^*}}{w^*_{k^*}} = r.$$


Thus, a move of $\frac{2t^*}{2+t^*}$, costing exactly $r$, is a feasible move for agent $i$ under $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$. Now consider

$$y_i\left(\frac{2w^{*\prime}}{2+t^*}\left(x_i + D\!\left(\tfrac{2w^*}{2+t^*}\right)d_i^*\right) + \frac{2b^*-t^*}{2+t^*}\right) = \frac{2\left(w^{*\prime}x_i + b^*\right) + t^*}{2+t^*} = \frac{2+t^*}{2+t^*} = 1.$$

So after the move we see that positive support vector agents become positive support vectors of the new LDF $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$. Any other positive agent will either not have to adjust attributes (because it was far enough from the original LDF hyperplane) or will have to adjust by a value no greater than $\frac{2t^*}{2+t^*}$.

Case $y_i = -1$: We start by noting that $w^{*\prime}x_i + b^* \le -1$ for all $y_i = -1$, with equality for negative support vectors. Let $i$ index a negative support vector. There are two cases. In the first, the margin of the P1 solution may be larger than the maximal move a negative agent is willing to make, so the agent would gain nothing by moving. This means that $d_i^*(w^*, b^*) = 0$. The second case has $d_i^*(w^*, b^*) \ne 0$. As in the positive case, we focus just on the negative support vectors,


noting that any other negative agent will either not have to adjust attributes (because it was too far from the original LDF hyperplane for a move to make a difference) or would have to adjust by at least as much as the support vectors.

Case $d_i^*(w^*, b^*) = 0$: Assume $d_i^*(w^*, b^*) = 0$. Then we know that $1 - b^* - w^{*\prime}x_i = 2$ and hence that $r < \frac{2c_{k^*}}{w^*_{k^*}}$ (since this agent cannot move). Now consider

$$1 - \frac{2b^*-t^*}{2+t^*} - \frac{2w^{*\prime}x_i}{2+t^*} = \frac{2\left(1 - b^* - w^{*\prime}x_i\right) + 2t^*}{2+t^*} = \frac{4 + 2t^*}{2+t^*} = 2 > \frac{2t^*}{2+t^*}.$$

Then $d_i^*\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right) = 0$, since no move is possible here either. Thus we have

$$y_i\left(\frac{2w^{*\prime}}{2+t^*}\left(x_i + D\!\left(\tfrac{2w^*}{2+t^*}\right)d_i^*\right) + \frac{2b^*-t^*}{2+t^*}\right) = -\left(\frac{2w^{*\prime}x_i + 2b^* - t^*}{2+t^*}\right) = -\,\frac{-2 - t^*}{2+t^*} = 1.$$

This shows that a negative support vector under P1 remains one under the new LDF $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$.

Case $d_i^*(w^*, b^*) \ne 0$: Assume $d_i^*(w^*, b^*) \ne 0$. Then we know


$$1 - \frac{2b^*-t^*}{2+t^*} - \frac{2w^{*\prime}x_i}{2+t^*} = \frac{2\left(1 - b^* - w^{*\prime}x_i\right) + 2t^*}{2+t^*} = 2,$$

while the maximal move available under the new LDF is

$$r\max_k\left(\frac{2w^*_k}{(2+t^*)\,c_k}\right) = \frac{2t^*}{2+t^*} < 2,$$

which shows that under $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$ a negative support vector would not move. Then we have

$$y_i\left(\frac{2w^{*\prime}}{2+t^*}\left(x_i + D\!\left(\tfrac{2w^*}{2+t^*}\right)d_i^*\right) + \frac{2b^*-t^*}{2+t^*}\right) = -\left(\frac{2w^{*\prime}x_i + 2b^* - t^*}{2+t^*}\right) = 1.$$

This shows that a negative support vector under P1 remains one under the new LDF $\left(\frac{2w^*}{2+t^*}, \frac{2b^*-t^*}{2+t^*}\right)$.


We now start with a solution $(\tilde{w}, \tilde{b})$ to P2. As above, we break this part of the proof into two parts, one handling positively labeled agents and the other negatively labeled ones. We start with $(\tilde{w}, \tilde{b})$ and consider $\left(\frac{(2+t^*)\tilde{w}}{2}, \frac{(2+t^*)\tilde{b}+t^*}{2}\right)$ as a feasible solution of P1.

Case $y_i = +1$: We start by noting that $\tilde{w}'\left(x_i + D(\tilde{w})\,d_i^*(\tilde{w},\tilde{b})\right) + \tilde{b} \ge 1$ for all $y_i = +1$, with equality for positive support vectors. By Lemma 1b there is an $i$ that is moved its maximal amount $r\max_k\left(\tilde{w}_k/c_k\right)$, so that $\tilde{w}'x_i + \tilde{b} = 1 - r\max_k\left(\tilde{w}_k/c_k\right)$ and

$$\frac{(2+t^*)\,\tilde{w}'x_i}{2} + \frac{(2+t^*)\tilde{b}+t^*}{2} = \frac{(2+t^*)\left(1 - r\max_k\left(\tilde{w}_k/c_k\right)\right) + t^*}{2} = 1.$$

Thus we see that this maximally shifted support vector can be a support vector of an unshifted problem, provided $t^* = \frac{2+t^*}{2}\,r\max_k\left(\tilde{w}_k/c_k\right)$. Now consider any other positive agent, call him agent $j$. Then


$$\frac{(2+t^*)\,\tilde{w}'x_j}{2} + \frac{(2+t^*)\tilde{b}+t^*}{2} \ge \frac{(2+t^*)\left(1 - \tilde{w}'D(\tilde{w})\,d_j^*(\tilde{w},\tilde{b})\right) + t^*}{2} \ge \frac{(2+t^*)\left(1 - r\max_k\left(\tilde{w}_k/c_k\right)\right) + t^*}{2} = 1,$$

showing feasibility to P1.

Case $y_i = -1$: We start by noting that $\tilde{w}'\left(x_i + D(\tilde{w})\,d_i^*(\tilde{w},\tilde{b})\right) + \tilde{b} \le -1$ for all $y_i = -1$, with equality for negative support vectors. By Lemma 1a we know $d_i^*(\tilde{w},\tilde{b}) = 0$ for all $y_i = -1$. Thus $\tilde{w}'x_i + \tilde{b} \le -1$, with equality for negative support vectors. Consider the following for a negative support vector:

$$\frac{(2+t^*)\,\tilde{w}'x_i}{2} + \frac{(2+t^*)\tilde{b}+t^*}{2} = \frac{-(2+t^*) + t^*}{2} = -1.$$

Similarly, for other negative agents we get


133 ** *2 2 '1 22 itbt t wx showing feasibility to P1. Above we showed that an optimal solution, **, wb to P1 provides a feasible solution, ** **22 22 bt w tt, to P2 and in the second part we showed that an optimal solution, wb to P2 provides a feasible solution, ** *2 2 22 tbt t w, to P1. The following inequalities must hold. 2 **2 '' 2 t wwww 2 ** *2 '' 2 wwww t so 22 **** **22 ''' 22 wwwwww tt and then 2 ** *2 '' 2 wwww t completing the proof.


LIST OF REFERENCES

Abdel-Khalik, A. R., K. M. El-Sheshai. 1980. Information choice and utilization in an experiment on default prediction. Journal of Accounting Research 18(2) 325-342.

Agrawal, R., T. Imielinski, A. Swami. 1993. Mining association rules between sets of items in large databases. Proceedings of the ACM SIGMOD Conference on Management of Data, Washington, D.C., 207-216.

Arnt, A., S. Zilberstein. 2005. Learning policies for sequential time and cost sensitive classification. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 39-46.

Bajgier, S. M., A. V. Hill. 1982. An experimental comparison of statistical and linear programming approaches to the discriminant problem. Decision Sciences 13 604-618.

Bennett, K. P., E. J. Bredensteiner. 1997. A parametric optimization method for machine learning. INFORMS Journal on Computing 9 311-318.

Bhattacharyya, S., K. K. Tharakunnel. 2005. Reinforcement learning in leader-follower multiagent systems: Framework and an algorithm. Information and Decision Sciences, University of Illinois, Chicago.

Bishop, C. 1995. Neural Networks for Pattern Recognition. Oxford University Press, New York, NY.

Ciraco, M., M. Rogalewski, G. Weiss. 2005. Improving classifier utility by altering the misclassification cost ratio. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 46-53.

Cristianini, N., J. Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-Based Methods. Cambridge University Press, Cambridge, UK.

Crone, S., S. Lessmann, R. Stahlbock. 2005. Utility based data mining for time series analysis: Cost sensitive learning for neural network predictors. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 59-69.

Dalvi, N., P. Domingos, Mausam, S. Sanghai, D. Verma. 2004. Adversarial classification. Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Seattle, 99-108.


Davis, L. 1989. Adapting operator probabilities in genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms, Schaefer, J. D., ed., Morgan Kaufman, Los Altos, California, 61-69.

Duda, R. O., P. E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York, NY.

Dumais, S., J. Platt, D. Heckerman, M. Sahami. 1998. Inductive learning algorithms and representations for text categorization. Proceedings of the 7th International Conference on Information and Knowledge Management, ACM Press, Bethesda, Maryland, 148-155.

Ehtamo, H., M. Kitti, P. R. Hämäläinen. 2002. Recent studies on incentive design problems in game theory and management science. Optimal Control and Differential Games: Essays in Honor of Steffen Jørgensen, 121-134.

Eisenbeis, R. 1987. Discussion, supplement to Srinivasan, V. and Kim, Y. H. 1987. Credit granting: A comparative analysis of classification procedures. Journal of Finance 42(3) 665-680.

Elkan, C. 2001. The foundations of cost-sensitive learning. Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI), Seattle, 973-978.

Fan, M., J. Stallaert, A. B. Whinston. 2003. Decentralized mechanism design for supply chain organizations using an auction market. Information Systems Research 14(1) 1-22.

Fawcett, T. 2003. "In vivo" spam filtering: A challenge problem for KDD. SIGKDD Explorations 5(2) 140-148.

Fawcett, T., F. Provost. 1997. Adaptive fraud detection. Data Mining and Knowledge Discovery 1(3) 291-316.

Fisher, R. A. 1936. The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 179-188.

Freed, N., F. Glover. 1981a. A linear programming approach to the discriminant problem. Decision Sciences 12 68-74.

Freed, N., F. Glover. 1981b. Simple but powerful goal programming models for discriminant problems. European Journal of Operational Research 7 44-60.

Freed, N., F. Glover. 1982. Linear programming and statistical discrimination - the LP side. Decision Sciences 13 172-175.

Freed, N., F. Glover. 1986a. Evaluating alternative linear programming models to solve the two-group discriminant problem. Decision Sciences 17 151-162.


Freed, N., F. Glover. 1986b. Resolving certain difficulties and improving the classification power of LP discriminant analysis formulations. Decision Sciences 17 589-595.

Fung, G., O. L. Mangasarian. 2002. A feature selection Newton method for support vector machine classification. Data Mining Institute Technical Report 02-03, University of Wisconsin, Madison.

Genton, M. G. 2001. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research 2 299-312.

Glover, F. 1990. Improved linear and integer programming models for discriminant analysis. Decision Sciences 21 771-785.

Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.

Hand, D. J. 1981. Discrimination and Classification. John Wiley & Sons, New York, NY.

Holte, R., C. Drummond. 2005. Cost-sensitive classifier evaluation. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 3.

ILOG. 2005. ILOG CPLEX reference manual and user manual, V9.5. ILOG, Gentilly, France.

Joachims, T. 1998. Text categorization with support vector machines. Proceedings of the European Conference on Machine Learning (ECML), 137-142.

Kaelbling, L. P., M. L. Littman, A. W. Moore. 1996. Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4 1039-1069.

Kapoor, A., R. Greiner. 2005. Reinforcement learning for active model selection. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 17-24.

Keyhani, A. 2003. Leader-follower framework for control of energy services. IEEE Transactions on Power Systems 18(2) 837-841.

Koehler, G. J. 1989a. Characterization of unacceptable solutions in LP discriminant analysis. Decision Sciences 20 239-257.

Koehler, G. J. 1989b. Unacceptable solutions and the hybrid discriminant model. Decision Sciences 20 844-848.

Koehler, G. J. 1991. Linear discriminant functions determined by genetic search. Journal on Computing 3 345-357.


Koehler, G. J., S. S. Erenguc. 1990a. Minimizing misclassifications in linear discriminant analysis. Decision Sciences 21(1) 63-85.

Koehler, G. J., S. S. Erenguc. 1990b. Survey of mathematical programming models and experimental results for linear discriminant analysis. Managerial and Decision Economics 11 215-225.

Laffont, J.-J., D. Martimort. 2002. The Theory of Incentives: The Principal-Agent Model. Princeton University Press, Princeton, NJ.

Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. Proceedings of the Eleventh International Conference on Machine Learning, New Brunswick, NJ, 157-163.

Mangasarian, O. 1965. Linear and nonlinear separation of patterns by linear programming. Operations Research 13 444-452.

Mangasarian, O. 1994. Misclassification minimization. Journal of Global Optimization 5 309-323.

Markowski, E. P., C. A. Markowski. 1985. Some difficulties and improvements in applying linear programming formulations to the discriminant problem. Decision Sciences 16 237-247.

McCarthy, K., B. Zabar, G. Weiss. 2005. Does cost-sensitive learning beat sampling for classifying rare classes? Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 69-78.

Melville, P., M. Saar-Tsechansky, F. Provost, R. Mooney. 2005. Economical active feature-value acquisition through expected utility estimation. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 10-17.

Messier, W. F., Jr., J. V. Hansen. 1988. Inducing rules for expert system development: An example using default bankruptcy data. Management Science 34(12) 1403-1415.

Morrison, C., P. Cohen. 2005. Noisy information value in utility-based decision making. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 34-39.

Muth, J. A. 1961. Rational expectations and the theory of price movements. Econometrica 29(6) 315-335.

Provost, F. J. 2005. Toward economic machine learning and utility-based data mining. Proceedings of the ACM SIGKDD Workshop on Utility-Based Data Mining, Chicago, IL, 1.

Rosenblatt, F. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65(6) 386-408.


Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1 81-106.

Quinlan, J. R. 1996. Decision trees and instance-based classifiers. In CRC Handbook of Computer Science and Engineering, A. B. Tucker, ed., CRC Press, Boca Raton, FL.

Samuel, A. L. 1959. Some studies in machine learning using the game of checkers. IBM Journal on Research and Development 49 210-229.

Schölkopf, B., A. J. Smola. 2001. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, Cambridge, MA.

Stam, A., E. A. Joachimsthaler. 1989. Solving the classification problem in discriminant analysis via linear and nonlinear programming methods. Decision Sciences 20 285-293.

Sutton, R. S., A. G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.

Tam, K. Y., M. Y. Kiang. 1992. Managerial applications of neural networks: The case of bank failure predictions. Management Science 38(7).

Vapnik, V. 1998. Statistical Learning Theory. John Wiley & Sons, New York, NY.

Vapnik, V. 1999. An overview of statistical learning theory. IEEE Transactions on Neural Networks 10 988-999.

Vapnik, V., A. Chervonenkis. 1981. The necessary and sufficient conditions for uniform convergence of means to their expectations. Theory of Probability and its Applications 26 532-553.

Von Stackelberg, H. 1952. The Theory of Market Economy. Oxford University Press, London, UK.

Weiss, G., F. Provost. 2003. Learning when training data are costly: The effect of class distribution on tree induction. Journal of Artificial Intelligence Research 19 315-354.

Zadrozny, B. 2005. One-benefit learning: Cost-sensitive learning with restricted cost information. Proceedings of the KDD-05 Workshop on Utility-Based Data Mining, Chicago, IL, 53-59.


BIOGRAPHICAL SKETCH

Fidan Boylu received a B.S. degree in electrical and electronic engineering from Middle East Technical University in Turkey. After getting her MBA degree in 2002, she started working as a research and teaching assistant at the University of Florida. Her research involves classification of observations when the associated data will be deliberately manipulated by self-interested agents. This novel idea expands results from statistical learning theory and the well-known principal-agent problem. Her research shows that results from statistical learning theory have to be modified to account for this strategic behavior. Applications of this idea include credit rating, spam filtering, and college admissions, to name a few. She has presented her results at three international conferences: INFORMS 2005, DSI 2005 and HICSS 2006. Her work was nominated for a best paper award at HICSS 2006. Fidan is set to graduate and receive her PhD degree from the University of Florida in the summer of 2006 and start an academic career in the Operations and Information Management Department at the University of Connecticut.


c5f798f745af6359104533d4541e117a
39afc0051d6a6132c8134372ecf4f2b5b0170abd
23755 F20110209_AAAFVV boylu_f_Page_035.QC.jpg
8f7984d45abe78c266a3e51d48cbc9dc
2181ede3d4459c42bfbc1c457a57e5e24ae71e05
2016 F20110209_AAAGCE boylu_f_Page_014.txt
0e27c3a9b0f41f33ad610328e2527688
6a10b3d6ff0758c5dad4eaf025bc7f60ede4ba30
F20110209_AAAGBP boylu_f_Page_136.tif
578aaad5f7f2bc13ca3cebb82aff15c7
4930e18ad78a42020ecba35b35bd4a543822b947
80748 F20110209_AAAFWJ boylu_f_Page_042.jpg
8d64419d16eaef8e798b12137294df0b
0d3ef5243aa1e1fdbe8ae04b993f8519c56db90e
68205 F20110209_AAAFVW boylu_f_Page_088.jpg
616b8da028a062267b48ad0bdeaec76b
be3d43063c898908237fa00490fc75bedc65e78f
2067 F20110209_AAAGCF boylu_f_Page_017.txt
9cd9eaed901d36ec681e6e2d8d7678a3
e5bf4b92388e85ccb5599fb68e036ca9982b0c2b
F20110209_AAAGBQ boylu_f_Page_137.tif
e37fdf2de4065c81f3dfa62949a7d998
78275953526cfc78e1fa56293f2e0927353242de
183838 F20110209_AAAFWK boylu_f_Page_001.jp2
bbe21bd0908fad5752c0eb69ca5986ba
b7e9589546c2b88a29778842cf804a25454c9310
43155 F20110209_AAAFVX boylu_f_Page_108.pro
5f9b4df67e43367b08a4c9562af62e1e
1c4e1f0cd82b08fabd193fc803c819f25b51ebe6
2066 F20110209_AAAGCG boylu_f_Page_021.txt
9f5756d78a586d72627943ff0db3998b
c5e0f86ccee1aaf86b9e2afea5f23349e9d6728f
4366 F20110209_AAAFXA boylu_f_Page_109thm.jpg
9bcd77b5b5a51dd6706f3fccd29b1268
e2821f721ccf0a0aba306d627eae8e44b27f9f8a
F20110209_AAAGBR boylu_f_Page_139.tif
1ce495977efdeb80c43a68a78dbca978
f0027865bbaecab14ee23b009d8bd9eaca60bac6
62994 F20110209_AAAFWL boylu_f_Page_090.jpg
c1603edd205d9d32419d8f50104fb87d
268ac770ea13e0de1d5d7a67a4d36924355edcde
45775 F20110209_AAAFVY boylu_f_Page_085.jpg
ab15e3a9a95091e40edfee929a37a8d2
eebb64987455fd86cfb22552964e76b0b3514b44
2033 F20110209_AAAGCH boylu_f_Page_023.txt
dd2167e87cdf47df8aa81382aa25c5d6
2dc2372eef885de24c65ebe9896ddeced78735a8
1857 F20110209_AAAFXB boylu_f_Page_028.txt
2ad6bc7b3dabf0c7e089b802dea51e6a
83b7258cdf2783b26efdd0ef91731ec86b899810
F20110209_AAAGBS boylu_f_Page_140.tif
246fa5cfd54a872436cd5f5e80459352
f7141aa7d41c127208d1cb5dc468b580ee6e0ca4
2403 F20110209_AAAFWM boylu_f_Page_147.txt
8e97a7871a8047dae5370dc22df7a735
cf0572a54f285b55c7aef7caf4e0f0f02ef54173
1051981 F20110209_AAAFVZ boylu_f_Page_046.jp2
71c6cba91532d347805e072b9e8c5f06
e91645f0a004d33ce6e96c640e8f0d2f21200fc6
1900 F20110209_AAAGCI boylu_f_Page_024.txt
fba9cd29ab1bbfd9eeddce6ea18b2a76
021a24c33de6eb6dcf333ccb307211709cab7bda
F20110209_AAAGBT boylu_f_Page_142.tif
81fe5b606ea6e9812f58730ce230d959
93c2f92e8bf437917455d9c58f75eb6eaf54fa96
3988 F20110209_AAAFWN boylu_f_Page_085thm.jpg
4cccc245107e986df05d2fd1087e2de0
a021ef10487105309b6ccff0d5a33fb69762c31f
2079 F20110209_AAAGCJ boylu_f_Page_025.txt
47bd1ec4bab6f9d101d252389553cb4f
24048417606784d0a073abe5aeef5a7cc079cf7d
F20110209_AAAFXC boylu_f_Page_017.tif
e99ae562e154c7c92be51ab84d1fd294
1e011253029ec9301317c74418219d3eb4f6a9b4
F20110209_AAAGBU boylu_f_Page_143.tif
75f2480f7c54cb2e105ea7d0bb43a1a0
bb325a476e39854800146092cd0526055d917f54
F20110209_AAAFWO boylu_f_Page_078.tif
847ed692a0ac755d1142024b4d1cecd1
4852ae0e7a3a9448a15dd72057b5f61b622775b6
1358 F20110209_AAAGCK boylu_f_Page_027.txt
08f602e750705a5f9fc9318b3dcccf54
a47cd1051ea5600a323230a54f05135c99bc423d
1684 F20110209_AAAFXD boylu_f_Page_095.txt
a4d9f09a321ff790dd34062345a86483
9d9ea87be45594d54c9d6b7046728df61eff1863
F20110209_AAAGBV boylu_f_Page_144.tif
c04ec3408ac716b16b6e1e5218d7729d
95dc4fa7120c08bfa7357f0942a8aac7c18879db
1034533 F20110209_AAAFWP boylu_f_Page_020.jp2
2fc9ee264c98de2a934bc2ef3487e2d5
a280738a024b4d335003d8367f51a25b35372c97
932 F20110209_AAAGCL boylu_f_Page_032.txt
cdf6cc4775f55253a87795dce4845c54
e075ca885a506775c280f498760a0ed7099f5859
2988 F20110209_AAAFXE boylu_f_Page_096.txt
3ce90c9e6d893de1793ca4cf2a097216
38c6b80a063e3c45317d76d442dcec8a07cbb958
F20110209_AAAGBW boylu_f_Page_145.tif
6e655e01ea2f92a848a7e0be1d9dce7b
ff31cd2da493a6f800372b4b4930bf2d3d2d15f9
F20110209_AAAFWQ boylu_f_Page_112.tif
8e74f8b883269425e0cbb95a2ffd070a
da9eb1bad2df48b36ce5d0e68e4f554c8978fa46
57751 F20110209_AAAFXF boylu_f_Page_098.jpg
fb6e83a87bb88720316217fe45ea39d2
7b972007ff885abc405fa4a4d0365176103d75eb
F20110209_AAAGBX boylu_f_Page_148.tif
668817fc80668f9348f6601641231cf9
4125240a6c6bd6441c02e93b7added552ecfac67
1051985 F20110209_AAAFWR boylu_f_Page_025.jp2
d7176653265620b9dfd7114760868c07
969d98c99c5b1a6f0ebefa4eff0854f577c2d8f3
2115 F20110209_AAAGDA boylu_f_Page_059.txt
ce8b8c9fd6b8d3d2386bf6d5f861dc21
ea8ea7df267bc3ccbcf828d1485980d5bcf38163
2069 F20110209_AAAGCM boylu_f_Page_033.txt
502052dd048968d05188b1675178b017
1e57ce6e1753c3cffe66b02e7d6bdd89630bde96
64380 F20110209_AAAFXG boylu_f_Page_104.jpg
a8bbefd684a5212b6a0da71ecd4e31b2
0d8970bd4e1126f6e74aac2a22eb54ada7d7581d
369 F20110209_AAAGBY boylu_f_Page_001.txt
efae8fa99ccee70683e82634c386fed3
965eb4b11a95ec6af59515e76c7960f7846e3879
5380 F20110209_AAAFWS boylu_f_Page_052thm.jpg
87c8f12a4759392d98fb337bdea7436b
b1707c41e0b44a7230e78b36f3d6e2a1e977a432
1994 F20110209_AAAGDB boylu_f_Page_063.txt
372bd5171692e4a5f45bfb8e22840f25
1d80d5be583e5e6f0eebcbfc9c142a4b04e8983e
1962 F20110209_AAAGCN boylu_f_Page_034.txt
2a2b87dc4040ee1e4f12b0fb0a7eb266
13df2facd0f1079f09ad880f0daebe535e82a3d8
64675 F20110209_AAAFXH boylu_f_Page_126.jpg
55e298e574e562218a11984e8d51c669
14b3666a1b7bdb1a1bd567d4b9cb8155c495f6b1
1714 F20110209_AAAGBZ boylu_f_Page_004.txt
bff4090168d6ac7ec5e97687204dbc99
c0dce26582351f220ca021adf93cf810c22995b3
20353 F20110209_AAAFWT boylu_f_Page_117.QC.jpg
45e0115fcc3dc73b50838857729a92c6
3576f24ee60f7232e3d2b5f39d2eb4471fd25901
1607 F20110209_AAAGDC boylu_f_Page_065.txt
6cefbd643ff7f194dfdc97b062e31a30
76e4c4abe4c4dff9d434837c5ebea4e974541d42
1879 F20110209_AAAGCO boylu_f_Page_035.txt
b8210891c000f17794666aa9474a7b0f
4a53ac9ab6b8705ab5e23c03c64fa9cb276c16ea
4079 F20110209_AAAFXI boylu_f_Page_093thm.jpg
a1ca91f2f59b39337ed34baec27cf84c
f8822cdad326e5ea10808dedf4798105e4c7cbc9
F20110209_AAAFWU boylu_f_Page_092.tif
98039080aa9c9e6f6fdb9d11e4026545
a883b791c193c113b2a8039ab634cca51745fb0c
1735 F20110209_AAAGDD boylu_f_Page_067.txt
1761af5c0fc27261fa16debfde4221d0
63117b5bf63198bcd0b9dbda4cb4ecf3bae94ea0
F20110209_AAAGCP boylu_f_Page_036.txt
5856518762cc31627f70d1c682d2f512
6ba85d5446be2396e2c403b66898f0751b479074
42498 F20110209_AAAFXJ boylu_f_Page_013.pro
299e5cd40bbd815db3fda43c428f96cb
8bcff0c0d0ef6bd4bdfa9f603a5a24fc124bcb85
26413 F20110209_AAAFWV boylu_f_Page_022.QC.jpg
4abbfa11b0c4b0851bfcfa03f99c5108
f8a85697abb24004e32086dd34489c11ce113c66
F20110209_AAAGDE boylu_f_Page_068.txt
ef9ceca0acfc530bcbf2cd39bc978fc0
cc847d1ad73f29a88dd6cd8b6085caf2daf4eb7e
1963 F20110209_AAAGCQ boylu_f_Page_037.txt
cc069715e0e8653eca9b85175f941a9d
cbac08c7eeced1bcda35658376b012d2e8e8a285
1940 F20110209_AAAFXK boylu_f_Page_056.txt
1613c4492501deccf9c930bea6ad8ed7
03a77e0ad9611a8ec863c5f4aac965900a3fae36
884 F20110209_AAAFWW boylu_f_Page_133.txt
98cf28ee7e7bfd81b7d36d7d6c547b16
5b8df1eeb877f47645c8843e99bde12df9e18ca0
1333 F20110209_AAAGDF boylu_f_Page_069.txt
bd3a56c04c2db4450c3c0cf9e408f395
23bc3053beb27c664f3799b3232e1adf6212338b
881939 F20110209_AAAFYA boylu_f_Page_075.jp2
dc86e9e9f72977e64c6b213b2f822fbc
ad6c72c6936985e337be73a0a31c1c693b143bfd
1831 F20110209_AAAGCR boylu_f_Page_038.txt
1b8b604095c07a6aad240041519a656d
a7c040a06f1df7b9314f7a3f74bc07918564f60f
F20110209_AAAFXL boylu_f_Page_062.tif
509357da2a4113f79df8402eb9b27993
b90a07a999c462adaeb8b7eb3de509422e33d679
2036 F20110209_AAAFWX boylu_f_Page_022.txt
82b9ac2e1ed12d6ca80df95de7fb6281
ef24324aa9e55e3d49cd3e32c51af811305598fc
2262 F20110209_AAAGDG boylu_f_Page_072.txt
9586fff996658d53db64ef9379f2de0f
80f4724b94364ad74e863dbec8504146144ad7a2
675176 F20110209_AAAFYB boylu_f_Page_010.jp2
381f8d254230a4a9d863fef49d69b671
8c8513cd64f4962430685e39694f863c2ea975a0
1984 F20110209_AAAGCS boylu_f_Page_044.txt
2065e248c8aaac53bcd818e9176de0a2
535ee1c8d4bdfc303e5cdc7a17b3d2845e8582b7
1713 F20110209_AAAFXM boylu_f_Page_009.txt
3413c1682134b370418d577b57e71f6b
4c50658335d1daf362749fb70f9384624ed13ed0
F20110209_AAAFWY boylu_f_Page_037.tif
6e814550c7c5d119931b162e6c00a0ce
243423b5e0991282aea470fe1900980ea367db59
1751 F20110209_AAAGDH boylu_f_Page_073.txt
1a4ed150b314ad2b99b3efd09e64496a
70d6f0635f3a9c9fe230be078d22f358ac89da9d
1051962 F20110209_AAAFYC boylu_f_Page_146.jp2
a9575bf0c43bbbc0a46665b75044bc93
7fc7cd96c37849d3bbc01f3024d9bc75c58c46dc
2005 F20110209_AAAGCT boylu_f_Page_045.txt
c96b14b96d26525a550aac3077dafaa4
412e6cd62ae731adc3b745870be17d068cb33756
F20110209_AAAFXN boylu_f_Page_022.tif
7c8e1a3ef05e7bacac776de74d545e3f
e67903a3aae4684d526ba804a2a9b8b2847fc1a8
73897 F20110209_AAAFWZ boylu_f_Page_110.jpg
33b56c104a78e52e6f14fdecd4623be3
071a0e958c7c1e8c4f62ec0a199612a8b366f73d
1710 F20110209_AAAGDI boylu_f_Page_074.txt
9a52192574cc0d3f49f094bb10d58b4e
934d1d801fe3513f714a36968d825561718bfb19
1993 F20110209_AAAGCU boylu_f_Page_048.txt
4f498eba92dff4bc1f454b63e9606d4f
026965c50478648e68a9c810423c5d4bfd3a19ea
1051956 F20110209_AAAFXO boylu_f_Page_037.jp2
3705a2cf75a235c8df474cd6a3626fb7
a254db4791e44af7e6078988afa953b9df8d2c86
1957 F20110209_AAAGDJ boylu_f_Page_076.txt
a5bd12c88d3889486448be2aa0ed5a93
967f840f87ba5a4eb6270ce0761fbdbdaddfdb7d
1893 F20110209_AAAFYD boylu_f_Page_061.txt
0b780fe6bc17ccb09f29e4314e26581c
bfa70e57b26eccce3963c8bcec669edd3f5317a8
1932 F20110209_AAAGCV boylu_f_Page_049.txt
141fd11ade7e5f3976fd6d4428a75a49
f5a962650f6cbfae5588e88855e8dc77ea06deaf
1051958 F20110209_AAAFXP boylu_f_Page_072.jp2
3f6c7900f9f6b3e193ed1152ff7e90f2
da48b077883ebb54413c269922e789cae4b47e2c
1096 F20110209_AAAGDK boylu_f_Page_077.txt
d26f212f78a9a7e2451537a88566dcbd
0f09fc27b178dd1a4f13ae50fe2d8dbc4182208f
1956 F20110209_AAAFYE boylu_f_Page_148.txt
9b4bc6c1db413295c7fb1d727144f040
7fcdd4dd358b0d11c3ec4f09f1b6f9539d0332d5
1952 F20110209_AAAGCW boylu_f_Page_050.txt
bb1e690dc69c70879200fdf152a7cd7c
5e4d290d2976b8c9879c2afedd0a62e877857a62
689440 F20110209_AAAFXQ boylu_f_Page_128.jp2
0294867d5648de1dcbad17acd9df26b7
c44af37d21a25764983bef2d608ee744747f2014
1965 F20110209_AAAGDL boylu_f_Page_083.txt
2477f37a37549910db37c49cefc165f4
d8d162d4cf018e16fdc3aa608cab17bf6f41653a
1967 F20110209_AAAFYF boylu_f_Page_042.txt
efd6ac981f3513ffe17219a2aab130ba
0a0bf8faf4294106680f685058b1d37e2c802419
1616 F20110209_AAAGCX boylu_f_Page_051.txt
9507f178113234d5d0d6818a1ed47c0c
c399a91aa2a2622a54fc9f794de2745f0da3213e
4054 F20110209_AAAFXR boylu_f_Page_032thm.jpg
7af03bd3530e25cb5ea6194ab329878a
4ce808cbebd994608d9165b638a209d65ddee9cf
490 F20110209_AAAGEA boylu_f_Page_112.txt
a3e2c747c50a81009d654976f8760da2
484555c5ffdfc2532678aea94dc47f6cf1e5cbc1
1241 F20110209_AAAGDM boylu_f_Page_085.txt
32a30329667765933e301b527a69240f
6019040f6c8cb85aa68e2d6000e7164b7c06f8d3
48739 F20110209_AAAFYG boylu_f_Page_060.pro
202d42d2e611545ad0ca81512a142ae4
2e089133acf8c4f739cba154bba31297e9a8574f
1980 F20110209_AAAGCY boylu_f_Page_053.txt
d34387c48dbf5fd7c54cc25fb8aaca0c
ca0682cf8a5b53bb5198cc7f6b9440cbab51b15f
260969 F20110209_AAAFXS boylu_f_Page_081.jp2
dbfa3ac746614e486334931c6b2c56d1
77171e8f0bea2a2faf3e5cf65f126985e0734b22
1854 F20110209_AAAGEB boylu_f_Page_113.txt
f484ee302cf52b95f37f6ec0cc7ed832
1c1f82bb8cdd4e0618eaff6536ad4ce41a3b9895
1969 F20110209_AAAGCZ boylu_f_Page_058.txt
35760a76ff63d1c658b1037c48152e0d
305448523cb97b969124d9aa0efcbb02077bfec7
66832 F20110209_AAAFXT boylu_f_Page_029.jpg
e877b6d8a0a2312823705882b4ca76ce
4845c89179f2f54c92b39b13d5bb73a2def46199
F20110209_AAAGEC boylu_f_Page_114.txt
970af9dd77e76e30ee3fb620e326891a
366c0893d030238301fab58cc7a73775f060a6dd
1051 F20110209_AAAGDN boylu_f_Page_087.txt
642db54be55083e7ce654f52ae85907c
1929190b2f3672e1ffd3214af18a56d23eee45be
866435 F20110209_AAAFYH boylu_f_Page_094.jp2
c7368073418046048df5420977120edb
1d117d542ecead88d0cd68be0ac1bb52d030b68e
19353 F20110209_AAAFXU boylu_f_Page_074.QC.jpg
7b5170c77cd14640edddd40ac3e1cda2
b130c98a1a603227b240f92cf61b5a22d73e3803
1916 F20110209_AAAGED boylu_f_Page_115.txt
bbf5ffddc92858b1a14a7bd4c8efa4e1
7fe69f3da41e2696026f0153c2ccfb264f9a162b
1683 F20110209_AAAGDO boylu_f_Page_088.txt
3e9873a4ad9e090ff5efa414e4a5cf0b
099a8f39fc84e2d94d2d489831829283f7ac07a6
49506 F20110209_AAAFYI boylu_f_Page_034.pro
1867fa88210a3df6cd4b75716e49f156
8a956a86e1a5d60a91f9c9c4b4fc93471c7f44b6
1929 F20110209_AAAFXV boylu_f_Page_060.txt
da5f0f1e7b778c71d800df04503f68b1
33ac5ed20009c71fbe8144f423a052d85f57c14d
1761 F20110209_AAAGEE boylu_f_Page_119.txt
ff4633cbad90f6a19d6c6d6c65c0931f
b39c253c75809bffabb0f72444f1359d3fc85df5
2474 F20110209_AAAGDP boylu_f_Page_089.txt
873b2e0a1e76c89f5738bf6542d47a00
93db31516019151763660fcfb5c5b091d079f166
25112 F20110209_AAAFYJ boylu_f_Page_019.QC.jpg
bb0dda78794c1e72ef302b9dfa63c6ef
58dcb379c3378c74fdfbee6d39570eb9e582176b
F20110209_AAAFXW boylu_f_Page_053.jp2
7e7f707ba1fc6c87fd7a3097b20ca82f
1de93f3684e4c4aef58763f9a13d16add5a85d67
1667 F20110209_AAAGEF boylu_f_Page_120.txt
0f8cc1506cdfb030d064e06ff40dc784
2a9ea2cf18d05a8b01e5170617af1c24c7a67617
1445 F20110209_AAAGDQ boylu_f_Page_091.txt
8bbd890b8072395b9b971ef99951908e
9d4b58534cfb4f03a3de924b98a66db14fc0eb3d
53487 F20110209_AAAFYK boylu_f_Page_138.jpg
2d19337e4987b8280d891aa3036ee73f
228a691b21033329828fc7caeae4e62958c5bcc6
F20110209_AAAFXX boylu_f_Page_149.tif
3e5e5660a9c33e2d1c29c255b671d2f9
6b9add6cf9b4f7cb57a54f4c3183e323ac0dd58c
1339 F20110209_AAAGEG boylu_f_Page_121.txt
846bd2a5909a9d09155cd6fff051a73c
00080bafcb2f6223af7c89d00de8f1f4e5b4f0c1
239058 F20110209_AAAFZA UFE0015543_00001.xml FULL
30d76976d8c92f10ebda5d16bf8456f8
b22423f66a85e1f59f85b973491fc223e74e0a9d
3478 F20110209_AAAGDR boylu_f_Page_092.txt
298273039e1e11274f71191309034ced
d6b936abd7ea319ed851298373b295f2b727f417
84439 F20110209_AAAFYL boylu_f_Page_089.jpg
8a47cdbbebb9fdc3d992bb191aac959a
2d35ff214d397a408cbefb406c898b7bae637c6d
F20110209_AAAFXY boylu_f_Page_079.tif
aa5f5bf9dee81d30bd320de215fc1006
64a29e2199a17518687e20bf8dd540556c5a059f
1988 F20110209_AAAGEH boylu_f_Page_122.txt
c1d518a19f5e08b2bc68140b5cb75aa7
2c95cf87affbbcb387b2085f5841a2e7241b94ec
1494 F20110209_AAAGDS boylu_f_Page_098.txt
2bbb3ccaa26d8b8190231350fd87f7f2
325f062edeb6d55afba94be7290204cd30bcb0e8
28509 F20110209_AAAFYM boylu_f_Page_085.pro
86f8add7c6f60dadc57a2e864130442e
6f900cd76523d1731421b597a2d03dada60d0f8a
1051935 F20110209_AAAFXZ boylu_f_Page_054.jp2
68cacf81190bb7875777517bf223533d
85ce19b1bd0b3aed2664a77fb21b5d1c042313fb
1032 F20110209_AAAGEI boylu_f_Page_124.txt
17feb6d095ae7747c85c0817e9537ec6
d357dcbf3e2ea2e416c7a52eb45e96e3d76ce653
1937 F20110209_AAAGDT boylu_f_Page_100.txt
8d9287391a13768df6d2ead446a62980
f3ce73ee7b33716acefaade65514844ca40f65b2
86811 F20110209_AAAFYN boylu_f_Page_016.jpg
b2540e0926d7a024ba51dddeb77a9459
fcbd5f7b20cff54347fbedb0b670421ce5f76383
F20110209_AAAGEJ boylu_f_Page_125.txt
44ce62b811c84696e7c362ca84c96b7d
9f3fc51421ba59263dc065e45668de23b6bebba8
F20110209_AAAFZD boylu_f_Page_002.tif
94f5604b4ef7354e84574368bfda1a6f
3ef026e6544f634dc4585953ca7c19b779561229
1417 F20110209_AAAGDU boylu_f_Page_101.txt
7c1f1d4a2606bdd9003188bbc19abdda
bbe0d27b4ba4fdf2de5098cdff228da1d5d570ac
68237 F20110209_AAAFYO boylu_f_Page_075.jpg
4d23c4c013a6f7031afdc008d44ea9aa
7136d3f1f12d22b43f0a0790df46a8b1dbb965b5
1570 F20110209_AAAGEK boylu_f_Page_126.txt
f9523f8a933ee182abe90b5150d1764a
bb24a968ac2b6abedfd2c95995aa006fe005e63e
1431 F20110209_AAAGDV boylu_f_Page_105.txt
60b53ecc3de47fe76a3d0d3c2d4a9c75
24bebd7d0a71c21addd426bd46d98785322a4e6f
871424 F20110209_AAAFYP boylu_f_Page_065.jp2
d66e0e401c4257c2b4308b47936e16c7
5365a02374e27319616a9747cf5a1b03005f7b15
F20110209_AAAGEL boylu_f_Page_128.txt
747c6f1b1e42b11938a1a4bac7ad5e1e
918febd7f1cbf8e3da01595a219054c43f834795
F20110209_AAAFZE boylu_f_Page_003.tif
7f776c5fee0f20c47b7f1ddd6b67735c
424b532e96686d3f7b621f14ad018bc830b8b0c2
852 F20110209_AAAGDW boylu_f_Page_106.txt
89f750b62a626cbc339bc5415165c923
21901f71aa5fac6460daee5bfdedcbd45300b214
30171 F20110209_AAAFYQ boylu_f_Page_068.pro
d18266479b0cbbfba1118ed6953032c7
56705abc74c8dd4b14ca369f1c2ca7fea91b0f82
3251 F20110209_AAAGFA boylu_f_Page_012.pro
85c81154636e0987b282f38327c155eb
fb68263fce78ea1ecb0a0b75be57cfba0aa0fa9c
988 F20110209_AAAGEM boylu_f_Page_130.txt
f29dd10b3308579795975cf485034eab
833b7d6b88403d634b3d5bf142ea98ce978edac7
F20110209_AAAFZF boylu_f_Page_004.tif
dded34da9ee80a6d3b2bb34d75d9c575
03c64d55660f7777b040528463856c8c4ab2f468
1723 F20110209_AAAGDX boylu_f_Page_108.txt
2f23cb9df2c24a9f324f84f229c3d884
33bd0629e8615950540007a2076036646ffb612e
44127 F20110209_AAAFYR boylu_f_Page_099.jpg
c1e3cacb8c67315ce7d1cc9f9ef1f321
346daec0b6e94db7aeaa12e33bc3e5b8403e4fb1
51872 F20110209_AAAGFB boylu_f_Page_015.pro
bc9735a11eacfcc8b70891be2bdb4426
8ab223ba54005d15464fd4d01ba980a1b91cd9dc
1282 F20110209_AAAGEN boylu_f_Page_132.txt
4249c329bd8832ed38154f792d14c24b
a0bdf8ceebb52006e9acb61d10c694ff2b0f8906
F20110209_AAAFZG boylu_f_Page_006.tif
0d5e98f14555a3a8b52f9eefbe174bf7
e6b630b22d4d39293f3e15eec7c248ae86514bdf
1896 F20110209_AAAGDY boylu_f_Page_110.txt
c5f6b0112488201fc7290c002a8ed3fe
06126bb83469a0d17771c2199886a351f5b47616
F20110209_AAAFYS boylu_f_Page_111.tif
dfd3e4cf28304a5e603ab9c52a122a52
47f8bbff8f57d56965ca3bd4128b87516d8454f0
48108 F20110209_AAAGFC boylu_f_Page_019.pro
9950320d7df37caf28b9550ffd3acb20
879885c731514f9a59ff174a10aad9d1647fd41a
F20110209_AAAFZH boylu_f_Page_007.tif
7e43acd0c67f94c724d791411a741ca3
8c42dae28456b0414638e85f392256e200ba8425
1818 F20110209_AAAGDZ boylu_f_Page_111.txt
de58d54cc296f748af735e52bda9c1c4
89040368dd6031fc4a9c11ff1c234aee8a5b6617
567912 F20110209_AAAFYT boylu_f_Page_032.jp2
9d6cb318659561414e98f9d7fdb6ac6f
6c22400c06697d04b569047e5f4ba331546a866f
47524 F20110209_AAAGFD boylu_f_Page_020.pro
3cdeaec0fe521d5ca9b48113ff7de5e9
33c3338bbdc5ecf53e22358d63b97f5caafc98c0
1243 F20110209_AAAGEO boylu_f_Page_134.txt
da07eecd43687f5d6292ac073373ef38
100b67218f5375e1c8046036d50fb1cf4e143c30
F20110209_AAAFZI boylu_f_Page_008.tif
5b5f33b3e80521b74e3c03f8bdc07af7
2b9cec89ed402520490c897105b51513674fbae5
F20110209_AAAFYU boylu_f_Page_033.tif
bd4b2e1b7fe57eb4f6b592e5c1da474f
bd1affeacc76e98ddea5ab3eb8d2d8196a46c7ce
52490 F20110209_AAAGFE boylu_f_Page_021.pro
c07ab82bb88c524be72aba55a8b0348f
d8f059251c39919c307ec5d2008b28743a95f507
1363 F20110209_AAAGEP boylu_f_Page_137.txt
5934759e1c388b53ada996f15d9c8438
73c33d9bd3da5d303273b9b50e430f4a86e52fe2
F20110209_AAAFZJ boylu_f_Page_009.tif
93e336ca9e41db16c376dd71eed2bdae
35efadc53860fad1e361ac663ffc97d678c44c4c
82679 F20110209_AAAFYV boylu_f_Page_045.jpg
c6a64bcb5cb24794fe2c1e54ec2ccd69
95de456de6cf62e6214f65c1d591b319bcb1e4c6
2144 F20110209_AAAGEQ boylu_f_Page_144.txt
e0be0554658e9e69b2c062ce324e3ae9
db75ed65a6e14c62037e6c01f65c32dc8e776f1d
F20110209_AAAFZK boylu_f_Page_010.tif
4189277892328a190fde17cf8009fa38
97fa9ec7f74690da81b8f1df06409c79eacad7b9
1175 F20110209_AAAFYW boylu_f_Page_141.txt
1effe6f5b80b878c18814b92d2703e6d
3e664111cb82cd3d7476f9b24b38d923dc99bece
51845 F20110209_AAAGFF boylu_f_Page_022.pro
4fa9f33fe4dfb4978a8c3ab527c71559
49f08614f7383e408592442579255f8036c103f3
2395 F20110209_AAAGER boylu_f_Page_145.txt
f4743b05ab853870357048880a0dd344
44ca47f508a36e62437219d3feda885aa3d04320
F20110209_AAAFZL boylu_f_Page_014.tif
67d5029a208e82ae0f745379756998ce
200cac84983a8bd0a9964a7e7a45b10a7e1bbed6
64350 F20110209_AAAFYX boylu_f_Page_102.jpg
da4f7a4d771e17e1e4ab465f4a168ce8
18d967c38c9ad012764569ca0466e9df7aaae9e2
51683 F20110209_AAAGFG boylu_f_Page_023.pro
bb6ded0ff20a90d6c759b6ae8ec524e2
a4a375e891f060d5ec33d8bd74c28f902484b267
2179 F20110209_AAAGES boylu_f_Page_146.txt
454d1263b168bed5ad3dce700394b8a8
4c1395eb558baca072004440caa3898cf3563ae1
F20110209_AAAFZM boylu_f_Page_018.tif
c92eee833daa00085c5f657de5b7d539
4ec76f3c4eb905441db10278206ba18a1b07358c
3985 F20110209_AAAFYY boylu_f_Page_078thm.jpg
d703ec2be0dab02b476b13500aa95966
8ca540912fe25e5872c34864b5b4bb32ed628851
33096 F20110209_AAAGFH boylu_f_Page_027.pro
7fd6cea089030d8dfe5a9ddd0b157bc1
5747996901e1e9ff6ac405a049d60b0dd088580f
6305 F20110209_AAAGET boylu_f_Page_001.pro
96f0b56636c41409d1ce91f415948586
715e522891fe7f0c5f2997bfd1d2298a4caacbe2
F20110209_AAAFZN boylu_f_Page_023.tif
479a146a63dbbda2591ec455222786d8
c3b2f6b23133f6863f227b9f607feebe54753fa3
1913 F20110209_AAAFYZ boylu_f_Page_020.txt
a557722d86d8f73855a518b3cd7010a2
fda0cde070e89dd6f2bf79abe0f690dc1cef3abc
45647 F20110209_AAAGFI boylu_f_Page_028.pro
abe94df4f35cb6d0a9855c4c03b52a68
d0059b8bfd29463317fc2b87aa260e75bf90d864
1070 F20110209_AAAGEU boylu_f_Page_002.pro
bdb02fe139ec637a0a5b1e98de41285f
ff33bcf75ba3477c56c75c9a08cb48b234e1e162
F20110209_AAAFZO boylu_f_Page_025.tif
b670f350f4514264d26304c598e9859c
acfebb045db25791e26a492c094fa993b041250e
40099 F20110209_AAAGFJ boylu_f_Page_029.pro
6a1d96876d55740a67c001845243d806
a798562c5d4fd32111bf53f2f102049c6fe9c703
936 F20110209_AAAGEV boylu_f_Page_003.pro
8e93dfab63b9a1fe8e4a743aa7594cef
a4ef0a0546ea6e705f5f465280f825de02d3ac8d
F20110209_AAAFZP boylu_f_Page_026.tif
9ef941056f4d9bb4d3ebf08fb90d26a4
e133f618c8fa9dad9455761b8ccc4ec27cfdc646
42347 F20110209_AAAGFK boylu_f_Page_030.pro
a9413d767e30329d4359525166b8bd59
208b17e5f7632510007062a077243ed81fb6235f
42250 F20110209_AAAGEW boylu_f_Page_004.pro
327c6c63814e16805ba781934f3f76d3
beb4947e60cac173b8c8512717abd7525cbf5732
F20110209_AAAFZQ boylu_f_Page_027.tif
b4b72385ef22b72826bfe4aa77a667ce
b7b3316ffebdd4e168647bd9a1bdbf1cb151f995
52565 F20110209_AAAGFL boylu_f_Page_033.pro
0517728224759eeff521344c0e36c3f4
93e4fad1e0886d9ddcb64a8bf656fad880506ce3
78915 F20110209_AAAGEX boylu_f_Page_006.pro
a8506c90a6a89ada3d1bca46d700bc8f
7d6a9a0f953c28e56c33d77aa153228391b4da77
F20110209_AAAFZR boylu_f_Page_029.tif
838031c966eb23b88b2125c002425c11
4c8ebb1e6c2614ddedf8956b9f0d968aa9330625
50893 F20110209_AAAGGA boylu_f_Page_057.pro
54b3967b960a848b6cc082595991e00c
de82c24413dacea6d885dd0902275853d2d63ac7
41232 F20110209_AAAGFM boylu_f_Page_036.pro
574bad2c7a53e08517b60e667d537058
bc2895d76680ce4767bd384d375a3c5f3e9625b3
32241 F20110209_AAAGEY boylu_f_Page_007.pro
2e762fbdcbb0dd6fe8cc2a8463083d85
33e15829595e9f7fcce540e17290a16cc4fbd9d7
F20110209_AAAFZS boylu_f_Page_030.tif
aa235d866c806a4ba809d7858b7c03a4
a7461e6c4256112bc3189ae4d322e5c5909fe0cd
53216 F20110209_AAAGGB boylu_f_Page_059.pro
a07c28d65cd6d5379dd59c17033263a6
e4a3ea5fadd9e76eb6ae9c2455d6a3a80cbe5df0
49764 F20110209_AAAGFN boylu_f_Page_037.pro
67b61bfeb67e745e38426f30d21b4f93
eccc32249f99100e4ecee68b74ab47691e920caa
29966 F20110209_AAAGEZ boylu_f_Page_010.pro
2e5234023c0402dd926e0710beb38e22
1e0b5df77c20ca1a41976d41007e5fcf378ca22d
F20110209_AAAFZT boylu_f_Page_031.tif
159455d44dd8569072945b316f863f90
f3d37b6a53d6c28177ae4897af199fe715ea32ba
47901 F20110209_AAAGGC boylu_f_Page_061.pro
56c3132ff10549a858feb05afc4bb20d
bf5e6b546603473567bd107aad4b0f056f8035dd
45943 F20110209_AAAGFO boylu_f_Page_038.pro
bc1440f64fa8ad133961a569602f5a32
a2c8bd699570af8f85884f7aa5e53c37e991b55f
F20110209_AAAFZU boylu_f_Page_034.tif
ec7ee0b615277f2132c351ff22860db1
98623ea0abe3c78494eda64f19f272a01f86c601
50810 F20110209_AAAGGD boylu_f_Page_062.pro
4bae60cbc7e5c616dc5b0bbde31f8f5b
34268956f2bfff2a67a5770f561a3f985ab9b8ef
F20110209_AAAFZV boylu_f_Page_038.tif
b04ff8a7ee7fdadc9f93de42cee467a1
45ef7b9359cbbe36a69798b9515c629543fcbf0a
39156 F20110209_AAAGGE boylu_f_Page_065.pro
584060fe1290e633d936b4b3561c75c5
562f803e9f4e60d02dfdb9ca24be8823995e9e2e
52701 F20110209_AAAGFP boylu_f_Page_039.pro
168a6f338dad21c8ff83d77b0d8e0229
e6bc742b8e3d39adead1a9248b408b9f3e5c4b28
F20110209_AAAFZW boylu_f_Page_040.tif
83207babecd1320a505bda56c6674f56
dc53d3676d4ba94aec5defbb5d206da406a23f14
42420 F20110209_AAAGGF boylu_f_Page_067.pro
54d69e828d28d41e640d3812f3556f41
d6c89402b5a9503809684d4d09c47a343b141284
51180 F20110209_AAAGFQ boylu_f_Page_040.pro
ad521ba650205912bb9886591fd204af
5e1c5303b1555d55296d4bf85e7f323039a53fe3
F20110209_AAAFZX boylu_f_Page_041.tif
a136180b4187c7c153861b1058592008
6c2514d682919ac5ef63bb40db1bd1ab6b4f22aa
41919 F20110209_AAAGGG boylu_f_Page_073.pro
682af547adcad0bcab2c8c0e5c58aa27
dfdb24e20fa0f9506da1979a92e6c9a5190582d9
43354 F20110209_AAAGFR boylu_f_Page_041.pro
95b74834fbb0b88db8bdf971d906d36d
73450eba330958d0330d815eaaf261a14a6f68ef
F20110209_AAAFZY boylu_f_Page_042.tif
2ed657a6924f834682b77c70d7cb156e
d8e29b8bbf13338066d4a73a3b0fed768f7fdb11
38827 F20110209_AAAGGH boylu_f_Page_074.pro
d3f4e87e6c53a95bf95b74fb78b42752
a095a2fdc840655031fa9e81f1f9575ca8c7eaa1
49431 F20110209_AAAGFS boylu_f_Page_044.pro
0f8256bf6cc9efb8a95615da77e06b71
a0f4ba1a80d277a45bcba62e9dbf42bfc57442df
F20110209_AAAFZZ boylu_f_Page_052.tif
cf9c299e9c57a68be416967447ec34ec
f7a8d61d7ff833ad4d812df5b33179b55134f06b
36472 F20110209_AAAGGI boylu_f_Page_075.pro
c143439e48e907c348a0ec678649315e
57b96b0ed0bac1c569fc11f7ff412d1e6d8a7332
50805 F20110209_AAAGFT boylu_f_Page_045.pro
a4039036ba2fdfef13d4e424387c8d3c
4c4379ef89ccea77ff50f381ef0f76662b4a3258
22860 F20110209_AAAGGJ boylu_f_Page_079.pro
2a1c61fd26d1caa12309061abef5eb47
5f241208d9f4c60730b4cb8a443d05976b1afde2
53416 F20110209_AAAGFU boylu_f_Page_047.pro
2ab743fb532ea174d5402ebceaef1300
950e970f6e6e7f5a854e4f845050c824a132f74b
6907 F20110209_AAAGGK boylu_f_Page_080.pro
4ce713e165173dec094aab902dde773f
ea5f6bf8a17b50eb3b814bc364fe82bf3605973a
49265 F20110209_AAAGFV boylu_f_Page_050.pro
61bf71dd20d5652bd31e3c4da77b284e
ae34486bf283f68d0eb51f72a628b06dd6ff168a
40321 F20110209_AAAGGL boylu_f_Page_083.pro
665125149baedb0bbce678ff9f449ab7
b2c7ffb45e9e0e5557f93baa1cbe51fd99b4ab17
43383 F20110209_AAAGFW boylu_f_Page_052.pro
05d6533138ee2f1408f4f55650f70e41
0187dc04a746124aa1485903aaf07f534ce7a7ca
44990 F20110209_AAAGHA boylu_f_Page_113.pro
28a714add2afdd09bd1d5af2cc37554b
dfcabbc522c60381dbac129f9490da0b57555207
20885 F20110209_AAAGGM boylu_f_Page_084.pro
5ae92b1070b89cc7a263d4c47a40bc66
a4218dbd6779503ffe3b440dc14c8ea4d69eff0f
50341 F20110209_AAAGFX boylu_f_Page_053.pro
924f107f774cf4289737176cbe0ce7b6
dccadbf248dab3b05f3408d337f17b7b8e150648
48292 F20110209_AAAGHB boylu_f_Page_115.pro
0bc57f28777e7f408d662e67587971c3
1385c0f0adce0376d0a0e1948b45563f7d67730b
22433 F20110209_AAAGGN boylu_f_Page_086.pro
9c751b1bb054a453e9931f8d49e4d6ca
6fce58a271722b432fdbc26cf5e7e0a743d5c0fb
53028 F20110209_AAAGFY boylu_f_Page_054.pro
165b41340b2203179763c6e78e7e6111
7bd90a7c970bd7462b372f299aa51cb4fd083127
34295 F20110209_AAAGHC boylu_f_Page_118.pro
2c1dc0d24db174fa9d27cbc754e46b75
1e69ec1ceab68f7897f70e197b977247170d1113
40254 F20110209_AAAGGO boylu_f_Page_088.pro
985466302d454f3e4b177d045cbf43c4
0ac44da40746f3bfc3301c4d0c84553462991da4
43942 F20110209_AAAGFZ boylu_f_Page_055.pro
3f8d1d6992dc8962b140cd27d439ba2b
9c8a7c2df980ec47f630313a8f2f4e3e688b2a9b
29841 F20110209_AAAGHD boylu_f_Page_121.pro
c40d4a0ab16d382250158632a57b2fdc
9cf5bca41b26224ed4c724789c28a34f70c49ec1
51175 F20110209_AAAGGP boylu_f_Page_089.pro
e110078ac1aff9d18659b3135cc83140
3161ae3fe9288e66849edc1712df6a17c6a216c9
33723 F20110209_AAAGHE boylu_f_Page_123.pro
2fc44db8190e79a43d18307bce96bdaf
dee4a180c27071f070142dff33f9dec47411b518
42209 F20110209_AAAGHF boylu_f_Page_125.pro
4c1ff0c8f912419cdb0acfc3dcb649f9
f0fc5477f07868a83a5fa5e44689fb58dad302fb
36418 F20110209_AAAGGQ boylu_f_Page_090.pro
bb4b899ea3ecc098407e3d33fa204036
e8bdc729933ba586f22adadf9ba9c99569d14eb4
38537 F20110209_AAAGHG boylu_f_Page_126.pro
438560b2baf15418a0ea884fd38653dd
9ca1c369f065ec6496ff19a03ff22a0254f563bd
67745 F20110209_AAAGGR boylu_f_Page_092.pro
5987a830e07799469e2d4a29e61ea236
d8cbd7d704700c56b26222f46b67650555937016
28550 F20110209_AAAGHH boylu_f_Page_127.pro
b2b18b881fa852ba954d8c0223997603
d84336e49a367bf9c2f3fee28aed04bd78c95da9
70249 F20110209_AAAGGS boylu_f_Page_093.pro
2929295be47d8081e2813c7aee405e3c
ecf2db3f63417f6694aac5675f26d1ee96ae4001
27700 F20110209_AAAGHI boylu_f_Page_132.pro
f4d6a4e67a9b4a4fcf76350de85da3de
dbe99c93d186da3fdafb9fcdb62da07ef9f3ca04
39301 F20110209_AAAGGT boylu_f_Page_094.pro
77679eebed17809b99d4218eecc5b12b
3984023379482d3c8f711ad43d1788be38cf525f
28236 F20110209_AAAGHJ boylu_f_Page_134.pro
b4851a12820ae3b60b1ae091bdaffe2b
95c4cc76eeb8ca4d9681bf8047f443bae03fb3b9
48688 F20110209_AAAGGU boylu_f_Page_100.pro
e0eb0082094a8d7af0213ad1d85c427d
e73166fbf70ef2ff1db54572abbe3c1bb4ec415b
33365 F20110209_AAAGHK boylu_f_Page_135.pro
43a1f857014252f4d68b7845a7cacb26
da2e4a3af4b1d5f7e941bae9e876d629cafd48c6
35430 F20110209_AAAGGV boylu_f_Page_101.pro
942b2835a381e4efa3e1eab23e45624c
1e530f599d879bdf4bbde8de10f4dba844d788df
23216 F20110209_AAAGHL boylu_f_Page_136.pro
e055116dfbd84abf6f3d65b14d5b711f
6d84e26d4ee23b74bb26f712f08b34030d266645
37693 F20110209_AAAGGW boylu_f_Page_102.pro
e4c0a2b6bf5af60f639212f78243f39a
425210b05e35f45c42ff0ef1a260e901132e74bd
23794 F20110209_AAAGHM boylu_f_Page_137.pro
ec3fda61f3497356bbf40c2c50bd4438
d7d8c698723aad42d5332d4a97801d2f541539d0
14061 F20110209_AAAGGX boylu_f_Page_106.pro
f59262d0a799d5e66f413020eb5bfbb5
f4e8f7e644fc2c8d9f9a0395fd563923b32acde9
9268 F20110209_AAAGIA boylu_f_Page_008.QC.jpg
6bd4287679965a99c9bcb9eb880c8879
80f053a88a39505e25f2cddc4c4b87f9f055f9ec
29763 F20110209_AAAGHN boylu_f_Page_138.pro
da180ee9cab7ab3eabf57e9367d6bd35
3d329c5d2dbae8a337891f0c92098f5541efe204
31682 F20110209_AAAGGY boylu_f_Page_109.pro
889590e4562e2ffb0c01e9caf9d777d7
10aa0d5d815b37b717b5c34cce1439d24e9a4fe8
63618 F20110209_AAAGIB boylu_f_Page_009.jpg
3f3d85de4be686d57ec700c810d9f30c
7f2fc6be6275d1c959869802a85da01883c5e13f
16974 F20110209_AAAGHO boylu_f_Page_140.pro
8892eeca0a5955e1dfe7435aab059cd1
2d399c5bf6c4479cacd3b4c77c13916349292a18
11324 F20110209_AAAGGZ boylu_f_Page_112.pro
3cc986b6af4f87501d4330927dc802b2
e9479971a3e7be3e7720c4d1d8ce8b1e71fb1ebc
24855 F20110209_AAAFFA boylu_f_Page_058.QC.jpg
cc8de20922333224ac096384788cb582
fe0156d44ab0dcdcfb2a3bdf55101fa0c36b761b
19185 F20110209_AAAGIC boylu_f_Page_009.QC.jpg
b0d6f2a3a61828e4866fca51440499fc
4b2bb47dbbd559be25fbf3afa3e611aead58d56b
21670 F20110209_AAAGHP boylu_f_Page_141.pro
3d53fe83f16cd4ce30756c3495c4a53c
3b2215217dd29eeed2a3d32e8c6adba618774d3b
F20110209_AAAFFB boylu_f_Page_012.tif
aec4ef92b9f61486a4d4205ba1ef5272
62b455b6329666ec6fffaec69a722c9c58c08c3e
15892 F20110209_AAAGID boylu_f_Page_010.QC.jpg
123dc49141c1a6c9be6ac70ea5be4451
b65ecec220701ef6b1532212c9474cfe7e75447f
15649 F20110209_AAAGHQ boylu_f_Page_143.pro
7bc97d28739e9f49942166cf9d89d579
4f13177848d821d74d1d13f5ded3a322266e91b2
2080 F20110209_AAAFFC boylu_f_Page_031.txt
d6935118f5143e7f1d5fbff67dde10b2
8c411029eba4ae65c93e188ecb715ae71da40961
74405 F20110209_AAAGIE boylu_f_Page_011.jpg
478177c98444b5504c55de1e18248a83
a04a1e812ee647756301bd6d09ca56155d669eb3
F20110209_AAAFFD boylu_f_Page_054.tif
480887d64160936631effded10507d50
80d084d1b416f59fd5f16bf4e5f9448ffb18a3af
22815 F20110209_AAAGIF boylu_f_Page_011.QC.jpg
5f27eceaa039b20281a8bcbbfcc8c8ee
a2cb3f0888f85c7403f8634e74f3187fdc566e6a
52918 F20110209_AAAGHR boylu_f_Page_144.pro
80486a3ca416ae9ce9a22fad27af0d29
fa82b27a274313e9cd6a743c71015ca02bf0fc6c
23231 F20110209_AAAFFE boylu_f_Page_110.QC.jpg
c888ecaf42c12464830d85be5799880b
22d1e12a8d3e262533702fa9e3e47472bc457af5
7680 F20110209_AAAGIG boylu_f_Page_012.jpg
445d6a494f29e997f6d0add84cfe761a
efc5e03e5a8869350533fd894b0107bf015ce33b
58023 F20110209_AAAGHS boylu_f_Page_145.pro
aa56a6e6ba0230fb7d34e91042c66479
53991512076d2c9c5223010819838271af1eac18
F20110209_AAAFFF boylu_f_Page_050.tif
0ea4ce540f5d507209d289289e6b845f
8d5896ef2235d3d5cff3440565ec2e697e65cbd5
2742 F20110209_AAAGIH boylu_f_Page_012.QC.jpg
4554dad71f05f25840ff3e7a1c4329c6
7cf34aeb07705ceb1b486f02a1a1fd9602ee9b19
59329 F20110209_AAAGHT boylu_f_Page_147.pro
4b312ca6664b0ce06f272929b248c97d
b9c7cb6af5725ef88250700f77d9ffd52a777a7b
23556 F20110209_AAAFFG boylu_f_Page_038.QC.jpg
be9ccd4f4174650deddf9e63f5acf217
d8eb6498ffb7afe749793a7c215956f8900d0b51
24977 F20110209_AAAGII boylu_f_Page_015.QC.jpg
9ac285f44d7517153a081c1df77ad15a
26a67dce1755ff97d742d07d027860da6a8b3095
46759 F20110209_AAAGHU boylu_f_Page_148.pro
abca7f42a39c9b9fcdd4501443f73aee
38fc1fdb4d1a019dee0acb5bedc91ad739e22d81
22436 F20110209_AAAFFH boylu_f_Page_004.QC.jpg
21547cac653f87292a14a4e34c90b870
ac7668b84a376cd3fb5d7bdb20e94f3a2afa4023
27203 F20110209_AAAGIJ boylu_f_Page_016.QC.jpg
581f1441846f2aefe21c64a83d5b9c18
ba68d97758d341ef74c5d46279afeaed74fc622b
4013 F20110209_AAAGHV boylu_f_Page_002.jpg
826790866f7dc9fed5c0734e8fd50e4e
acac3f55e6fab85582409eaf933f4f005ccc77c9
5604 F20110209_AAAFFI boylu_f_Page_013thm.jpg
22bbf21a301398e72998d18f4e23fb46
319bb92f155b6c06825d302349cd4befaa161ca0
26812 F20110209_AAAGIK boylu_f_Page_017.QC.jpg
914f70f39611c1abb90da202fd6e6136
75e17673f22d7aca9b1a3a1fccc6f455c1b3f6eb
1048 F20110209_AAAGHW boylu_f_Page_003.QC.jpg
6ecf0b05a2a9efc97fb415ca46ef3ec0
5c2405feb8e2896006e7367bf20aa94b71df4a7e
15634 F20110209_AAAFFJ boylu_f_Page_086.QC.jpg
697056eb6cc379acba035b7bfefea77e
a404ea02cc73c3a7dfed11d5fa85356a7de9612e
85120 F20110209_AAAGIL boylu_f_Page_021.jpg
11ae5f046b5009558836e8f1b7970c41
85ecced555eeddf56d24b7f189627a11785cd07b
61564 F20110209_AAAGHX boylu_f_Page_005.jpg
3ae351b78c9830a8a95d90a1015c6299
bd26556eac0a4c4e5e4b0072d703a0d22624cfc4
84578 F20110209_AAAGJA boylu_f_Page_033.jpg
a1b5cce50b65379267e387512b43ed73
a39409e214cc0bbbddeb4bc083c1fa298590b413
83784 F20110209_AAAFFK boylu_f_Page_057.jpg
6e22bceb0f3a6eb8675b27fe21002c14
3b79a852550c8389b53d8a4ebe034269f97a8c19
27047 F20110209_AAAGIM boylu_f_Page_021.QC.jpg
978ebca9425451904d538a3d2f900977
29d5f75c1f77dd78bc093f7d77c9150c4ae03bcc
14610 F20110209_AAAGHY boylu_f_Page_005.QC.jpg
4e1365a2cd622515b6bda0a2ff6e8098
9e06d312a91b9d527d88ff31acb4109633d986f0
27003 F20110209_AAAGJB boylu_f_Page_033.QC.jpg
b2ffbc75a5a4cb7c95b0e99531a626c5
d36dd94ff155aca81d29d5ca6787915f92ba5962
42995 F20110209_AAAFFL boylu_f_Page_119.pro
03a9d127d523efe4cecd3ddc32bb0341
7b9cc502eae9bcaf1d94fcbc366a61ef7c7e8fe5
84236 F20110209_AAAGIN boylu_f_Page_022.jpg
5dfd3ba615232c902ddad3050919e6f0
45de9ce152d7adeb8f60e2a7a28cc6a35291b987
69268 F20110209_AAAGHZ boylu_f_Page_006.jpg
fbbd3b7626c79e0fe8efb9d12be78011
6ced060ea97b0db1dd571956d976d12e0d914d49
3808 F20110209_AAAFGA boylu_f_Page_137thm.jpg
9a494d54c54660a5dd75d7ffeb2d04db
0170fc7223e99d9e742dede23dfe1e23a5ae380c
21201 F20110209_AAAGJC boylu_f_Page_036.QC.jpg
5ac3191f6b24eea3aeb86e838ddd0f7a
ed579de8db773cbfcd5eab1e667ca8e3ba7381ce
84483 F20110209_AAAFFM boylu_f_Page_144.jpg
7764eade64b1200611cb5a7948dd99cc
1493eff1d4027643ff3f1cd7c9d2555f3479028a
83580 F20110209_AAAGIO boylu_f_Page_023.jpg
d2fad8e9e3f245d66557e1ae8d6e1e04
230cc93b77c708ac93db2281b49363e0f7a72554
750144 F20110209_AAAFGB boylu_f_Page_135.jp2
9ef1feb03e5c71e3760e05008501a189
e781e76a307d2b7e3afa4c49bfd48b0d802cb3e8
27263 F20110209_AAAGJD boylu_f_Page_037.QC.jpg
a35eeda0d24ede6d236a73719295a37b
fd420bbc6bc164a03d9212f9c45c5cec0e0175f0
7725 F20110209_AAAFFN boylu_f_Page_081.pro
14e5188b2fdbe9428006fb7d0d772eaf
3a5ee5c450bf7c73e51381e3877268400c63f797
25532 F20110209_AAAGIP boylu_f_Page_023.QC.jpg
25800a906a1abecbe7d2669412d1d3a4
9a50e1aeee2decd47efe4472575489a6f989c857
15108 F20110209_AAAFEY boylu_f_Page_006.QC.jpg
ae3a39271783f738dec7b18f7b83ae2a
58af7fd2468191b90500d4d647e6351b5ea58b17
86025 F20110209_AAAGJE boylu_f_Page_039.jpg
6c2e757a0c7399f13034d5bc9a0168f0
5e2e61a4af6d105fee5b71c2e9b2ce66c9559b59
1623 F20110209_AAAFGC boylu_f_Page_102.txt
9eed04b2bbe954f5c5b54c3ac7e4d88c
0b855105c922f58c0a15a692f74b22ecf04d1759
21556 F20110209_AAAFFO boylu_f_Page_073.QC.jpg
cd17f230f2fb66e1b0e77583d53ea359
614dfef47658c3cf868a1af365bc6821c9f0c673
80356 F20110209_AAAGIQ boylu_f_Page_024.jpg
c3fc59924d1ceb811c5a7bf8bb5969f3
634fc26260df6c29fd05012b3f56974edb6620f5
1051949 F20110209_AAAFEZ boylu_f_Page_049.jp2
1a147dfc981f7cb3fd99f11dc6812c5a
09c180eec693798d39de6106cc86ef4ca6a48097
25674 F20110209_AAAGJF boylu_f_Page_040.QC.jpg
7ddcc5c701a91558c2bc5465a3a096a5
cea507d98cb62496a48cc8beb1dc54a8405801dc
56737 F20110209_AAAFGD boylu_f_Page_123.jpg
e6ae68aeac0203f95d1f75685d42eb7e
48d16520cb16e3b0661001243633a77188344b70
F20110209_AAAFFP boylu_f_Page_055.tif
7818b61daf94eeeb4ebb480cf73c9cfd
3bee4fab3cc8bbc5f304a08a093bfffc09d9d307
26487 F20110209_AAAGIR boylu_f_Page_025.QC.jpg
9df2e1fa1ff6aaac8778e888268ca441
8afea67ce592fe4daba5807330430a4dfeb18144
22418 F20110209_AAAGJG boylu_f_Page_041.QC.jpg
cd74abfda2abc9801e29be5e1eb00928
ab284ebcb3d8d888b887eb09fdfafa3c737eb8f6
53375 F20110209_AAAFGE boylu_f_Page_072.pro
d3cec05506c58bc6f8ed16e17038b67f
e525adefb185ef79e15f6913b42086f963e5bc29
25085 F20110209_AAAGJH boylu_f_Page_042.QC.jpg
e7e2069a1554ed2ec58555737033325c
dd47a41223e5df37110e7a08b4e72e10cb16ac21
19501 F20110209_AAAFGF boylu_f_Page_107.pro
6d5c000b0dbb2ad53d08c9fdb09de2c4
b8a97256d5e1f14e0e010b55695c1d7bf88c0ef6
F20110209_AAAFFQ boylu_f_Page_149.txt
0a6ed9a57a8b0a76623b761a59d2e84f
95a0bf399e97eef2f0d981da88b3bdd0428caf19
83411 F20110209_AAAGIS boylu_f_Page_026.jpg
c7d9b15bc4b993797759e95d3c1b5148
9c0d94c1744ffc6db1fba65b0a32661c47f073d0
86219 F20110209_AAAGJI boylu_f_Page_043.jpg
2bcc49f73101e49ee92bdbde346878c8
0c8ef2e53ce50f68f68b7799f3aab0ce458d24eb
F20110209_AAAFGG boylu_f_Page_082.tif
63d0715ba9ebf31bfe6215cb59d0dfa3
7caa586054feaef8ad4b2370f333cf26b811eb81
4600 F20110209_AAAFFR boylu_f_Page_121thm.jpg
6cc9514d02abf9a876a7b84e305ac0d1
b9f9dc3c6e9da274b0f2c887bd40671e000387be
25147 F20110209_AAAGIT boylu_f_Page_026.QC.jpg
2e067b932579c8266d97fcd8ad4edd8d
3463c115460a7af5449e8d803a0c2d61ece3106d
26579 F20110209_AAAGJJ boylu_f_Page_043.QC.jpg
5d4f568c03137f1ab8b248caae345bb6
788f3e17d0ef439ee5235137a9949ba2855a9d8a
25907 F20110209_AAAFGH boylu_f_Page_014.QC.jpg
81ebe920879a9ac72c04055d65df701e
0476cb0292244c0c73abdf3f8b81c43639bea2aa
1752 F20110209_AAAFFS boylu_f_Page_066.txt
6a1e42fa64305409ea96f8ad32246df0
168f973964c6a6405ca3c155137830a327c0aa72
19712 F20110209_AAAGIU boylu_f_Page_027.QC.jpg
d9138f5144e2c461d071b42195722ebf
bb17d04f4b74f20a8322df1394dab65d49fc0b39
81097 F20110209_AAAGJK boylu_f_Page_044.jpg
f76b717c8e8f3f960dfa0c73f6275542
9cea86344e7f65a3a611a998c3c38cf96cd8ea5f
F20110209_AAAFGI boylu_f_Page_066.tif
0cca296ca4d90020d98075a136617464
2a6dc5c708ba07f747365d54aee89bc6a09300d5
6111 F20110209_AAAFFT boylu_f_Page_035thm.jpg
6f1c3afc8465f792488f3a5e84d3470d
aeea8b1111fa67dab4a362b5e075326bc1d9a469
24418 F20110209_AAAGIV boylu_f_Page_028.QC.jpg
dca944c71eff30e0c6e3e3cdda062b1f
e8d84858ea88efd8ef422a6c47744cdb562e991c
25599 F20110209_AAAGJL boylu_f_Page_044.QC.jpg
bc36b5c8a9f33376f4adff1cc95d4c71
469d94711b3f787c29301d527db93a1d3842ed77
49542 F20110209_AAAFGJ boylu_f_Page_048.pro
6cf8d1b4fb6551e72c486af2bfd3dd40
39195e129316a0485b3b60320b6b6d9ff3330308
27351 F20110209_AAAFFU boylu_f_Page_103.pro
3b946fa54f52101ecd9e85c93dc93948
346f5baf3774ee8e02ebf1eb39383f907772675f
70161 F20110209_AAAGIW boylu_f_Page_030.jpg
7e1ffe4209262a2af2109e0302312444
5e1eb627c47d356d0b353fe82a893481c3de3556
85935 F20110209_AAAGKA boylu_f_Page_059.jpg
d35810d721a3fe308c19819b47cf390e
c874292b74af8dfd863d82a478f752c0aa29fc20
25422 F20110209_AAAGJM boylu_f_Page_045.QC.jpg
882d46e549050132ba627d9eb51fa2c9
295572c69205d0d336ad0f55e3b624777114aeaf
527 F20110209_AAAFGK boylu_f_Page_129.txt
47a40c62321f84bd4444f4c9f54ec491
46040ef8f4b8c988245b995ee0d87be88bec8c0d
5745 F20110209_AAAFFV boylu_f_Page_125thm.jpg
080fe198e484dd2ab86b49b5b489b827
0e34d8ff5ac5c3f8aa529dfd0c6e1382f8af671f
20222 F20110209_AAAGIX boylu_f_Page_030.QC.jpg
d76f8f1a608eaa6a6f44f74ae94b1b8d
b0e45db764c840851b56a43c1a6b4e4ec9509680
27650 F20110209_AAAGKB boylu_f_Page_059.QC.jpg
18ef7e9edd7a1ae5aeafa035723bbc9e
3bc99e662b047c49343347669f91bd9aaed0f769
81226 F20110209_AAAGJN boylu_f_Page_046.jpg
264bb9432df67fe97f7ed7ff99b62142
7049ff1943c9afa5b40d18a1610e64fd5124d25e
81333 F20110209_AAAFGL boylu_f_Page_015.jpg
edc4cc83b298d19f4259c321881253ff
fc6785c08ac856d17a41ac13338cb28e913ff6c5
55607 F20110209_AAAFFW boylu_f_Page_118.jpg
82f6b0cbca3176e3a132c2c6884657f0
b340e1dbae67380c440c2cfc357944fd9c8afe33
47853 F20110209_AAAGIY boylu_f_Page_032.jpg
0097a6cb5e922cf3251586dca0c11119
902dfbeec86134964f870e0cb6dcfdea3faf6689
81490 F20110209_AAAGKC boylu_f_Page_060.jpg
ded29d16443c57026b6185e3d8dd17c4
d5beef20fccf1e47bffc30711748d1559000135d
87130 F20110209_AAAGJO boylu_f_Page_047.jpg
d7aaca39f820ad06a6b477134f954aa4
755d56144cb7a35b2654ac884fbc075e421507cb
52926 F20110209_AAAFGM boylu_f_Page_016.pro
cb4e1e5d2938f13bf54177306fdd8f0d
76688dbc5b62ed22d2a10c76d7e6b77afb208ebc
24401 F20110209_AAAFFX boylu_f_Page_024.QC.jpg
486124a3a1d92eff3e821e9a402c1ee3
883ee1dcb5f98a7a138389ee60a489baf5b1ca41
14969 F20110209_AAAGIZ boylu_f_Page_032.QC.jpg
6c306da10b3126d9c7d8c1aeeb119da3
f108bfcf05b2229e1b692d9252831c58d389c89c
52938 F20110209_AAAFHA boylu_f_Page_121.jpg
14f1d846aab9b7b92fe42b945e59bcec
f3bce3bf361d8592c91872fd94bcbe5ac7f7376b
83282 F20110209_AAAGKD boylu_f_Page_062.jpg
cb695813143a7f65a4fcf55cb777e28d
31cdd2dedcce95c7aa1588afd81457697cc9d383
27066 F20110209_AAAGJP boylu_f_Page_047.QC.jpg
b929f2fa087c0a3c3e510ddba06aa497
a9415fea3641cef36ce8c5e1312a5b26d4d83178
F20110209_AAAFGN boylu_f_Page_005.tif
8b73a54ca34625ca42769b15d757e1be
5be5bbd97be97c4eab9bec0b590528ce34027c98
F20110209_AAAFFY boylu_f_Page_101.tif
6891aaae7867bd4ce3da9cb2f8b34dad
0bf2ec9691853754b0dac8cb4c02bd26f958c452
375228 F20110209_AAAFHB boylu_f_Page_007.jp2
8986f48480a867f50bb175012d6ae082
8890e58422f01ee538f9eddca22dab23f24cfe82
67187 F20110209_AAAGKE boylu_f_Page_065.jpg
8306a9f99f4ee047e606205eb6e78197
98abd0e8816bf923c3e3d719c39e039152accc17
80908 F20110209_AAAGJQ boylu_f_Page_048.jpg
a2df5a0cadb80bb6fbde875b657d1cdf
44f82d47b0fb133214718682bc0dcf5427df9eb3
6288 F20110209_AAAFGO boylu_f_Page_022thm.jpg
43fd3e9532ebdefe1af9408058000337
edf392ce79a6608e8dbe20d657809e433f05ee1f
1051982 F20110209_AAAFFZ boylu_f_Page_045.jp2
09f98db37fc897d264d4af41dbc881de
de376f90e7957f56749c0c40d94d4e9874f0a3e3
F20110209_AAAFHC boylu_f_Page_073.tif
d84871b5f6c113a996a87def6f1a7bec
aece2cd9e90346d1dc918cadbe147f0757202208
21816 F20110209_AAAGKF boylu_f_Page_066.QC.jpg
4e11bff94d20885a03d5da8992e2abaf
5dced175ed316e1e48bfec912dfd44b523c3a0e8
25230 F20110209_AAAGJR boylu_f_Page_048.QC.jpg
9e9295ea446c21d4b13918396f47ca9d
b7851bcedd922e2640c37a1490f61b1f1251e97b
23700 F20110209_AAAFGP boylu_f_Page_077.pro
9c447d519ae840834034b389e12d1f1c
98d6943442f6f617630c53ef8b8dd3119156dd5b
10281 F20110209_AAAFHD boylu_f_Page_124.QC.jpg
d61c9a2a56a67fb3d63dfd4e3915cfc5
c7923d20295c9799d8e390190501061cac1c3eed
71999 F20110209_AAAGKG boylu_f_Page_067.jpg
58e56ff1f3db231be43f72139aee038e
e72736d3126e9ceb7baeab2d05e4a64e6b6ee6bf
80546 F20110209_AAAGJS boylu_f_Page_049.jpg
69fc4294b37985aa8057e1fe56709fef
8a3b3550f8268430d18f048dc52cd43492607ccb
21131 F20110209_AAAFGQ boylu_f_Page_131.pro
987c8f7d98d258f9b468bb45d7a84598
8603217ce905b11e51d3829d465ddd4b2f59560d
5718 F20110209_AAAFHE boylu_f_Page_113thm.jpg
3fbbef1174b9ea7058d0046b76fc4be3
220a005d0a0e7632df122df5c98dfc949affef13
21965 F20110209_AAAGKH boylu_f_Page_067.QC.jpg
05d98dd3fcab9a7a2f2b807ad1ad5894
e096e2dfe292f84194976b8e07b562644ff1cc2b
49172 F20110209_AAAFHF boylu_f_Page_076.pro
a551a9ac08e8d81980307f14e0601805
599ff234c12a396700413eecea36c82c85d49cb4
52653 F20110209_AAAGKI boylu_f_Page_069.jpg
9c8871af2e5f08823301dc4721e710a5
7ef6d537f471406262e054ba41f9326498165a01
80509 F20110209_AAAGJT boylu_f_Page_050.jpg
9edb871b5004a3b1370419c0cb32a3d9
a4b41a5c7e913b79c13a49cc980c41d805206148
78556 F20110209_AAAFGR boylu_f_Page_035.jpg
f9d5c16ed231c211de5a9357d7befcd2
054d18e025cef77c691364c53aeb4355b2f89b40
107 F20110209_AAAFHG boylu_f_Page_002.txt
aef31c885dce9c5feb6363d8cfbd9fba
66fd8744f850932007a6d17459d6906cb8866d42
17454 F20110209_AAAGKJ boylu_f_Page_069.QC.jpg
c8a7c909d8194ebe974d8634828c7eaf
b7674748627a6d477b927f55b04a56561557ce20
20183 F20110209_AAAGJU boylu_f_Page_051.QC.jpg
39e1e49731d59ee3e7af33e3722bd6d2
820c84daa8836b793dcd1b40498b43c88f5a5697
51101 F20110209_AAAFGS boylu_f_Page_109.jpg
7e0523169383c20b5d443be864ebbafd
e600726b9eed24235be8f4deec43cf8b590e8988
24966 F20110209_AAAFHH boylu_f_Page_049.QC.jpg
3d9ea92b3276102db133710f4c2aa34e
5496b31b7088082a14e319c2dcf448c122003f1d
55612 F20110209_AAAGKK boylu_f_Page_070.jpg
c27152b07df8cc4df277c0c40fc1bc31
3db0b5be804b5fa6e7e7bbc4ad34ab0b3ca5bfa8
26298 F20110209_AAAGJV boylu_f_Page_054.QC.jpg
55215e45d0d134192a45071ed9dd4b8b
1e298d7bd8871d74a85985a46134374c2af5a7c6
F20110209_AAAFGT boylu_f_Page_146.tif
39b676aad6582ea812a5a545218c0eca
7c8ef55e369e0be15f327aa3c394e8c6d385b7a7
75260 F20110209_AAAFHI boylu_f_Page_028.jpg
88ce13533a6ed08ee640451dd3513d8c
e6ebc4afc74a82040b9fae1adf1c5cb856c612f9
19019 F20110209_AAAGKL boylu_f_Page_070.QC.jpg
438ae461d9f3b30f48c48d18e70c61c9
c21b8babb2031aebad431868a2731c402fb70b6a
71515 F20110209_AAAGJW boylu_f_Page_055.jpg
4b149d49cf6bf36278686f24326adc62
8b940a0f38470d45119cd40cb97ea9ce0678fafb
F20110209_AAAFGU boylu_f_Page_091.tif
98c8a8ab91f70929101e584b08bc2d99
004ad46e301a48fc49c2732f7338a46133e53fef
4567 F20110209_AAAFHJ boylu_f_Page_132thm.jpg
452cb35f81aadff18eb8c73496c6033b
18ec0dfee20057d08a9ce177710b6d4cdbd53de6
36146 F20110209_AAAGLA boylu_f_Page_087.jpg
bd1d3e7b8a8c0a25a38087c92dca7afc
f3e833f0eb54d2f44c73a5787bded45bc438f319
13799 F20110209_AAAGKM boylu_f_Page_071.QC.jpg
31ce63625953cfd091f8df792708b46c
44e53d0ae9bb695bf3edc94b832379ea76fbfbba
21592 F20110209_AAAGJX boylu_f_Page_055.QC.jpg
6721dd18446742f40128204effd4aeb7
3db6cb350fdd6d5058cb2aa65c73783044bc5336
91108 F20110209_AAAFGV boylu_f_Page_096.jpg
5f83fc6794b4840c300f0ac0ee97043a
eb00dff5fbd912a6ebeed3fe4d6077e7f75814b8
511243 F20110209_AAAFHK boylu_f_Page_086.jp2
c14593b1fdf1a122adf744973c67d4b9
3cef23e84bb2981496339a34f5f48a45ee88481d
20949 F20110209_AAAGLB boylu_f_Page_088.QC.jpg
223f6135aebde0123de4d8537842df39
8980707a923192fb0211034cb6e9d1d49b261a96
70337 F20110209_AAAGKN boylu_f_Page_073.jpg
a62c6a340bd65b8227108494830174bf
91742cc327b1fbe59e3698f900438bc6849acb82
77500 F20110209_AAAGJY boylu_f_Page_056.jpg
dc6a081a2bb8f7c95f31c321347eabd2
ac5ea5bda00bcf1ff2a0d94649474e4a2353b74e
19553 F20110209_AAAFGW boylu_f_Page_095.QC.jpg
e5071affe1877f34c1f507c0655092e7
3a6554b4dd32e65ea18e79262ed1870fa649c9bd
1546 F20110209_AAAFHL boylu_f_Page_118.txt
89bde8abc12128d4fb61a51a913e916c
05cdd2c1598f2bd41fb48380351dc44dbada5f6a
24812 F20110209_AAAGLC boylu_f_Page_089.QC.jpg
cf3a75530a68f7d8ee042ce20c8e4f30
7e669611ec4382fb19d81a05ffaa6860d81419ed
64695 F20110209_AAAGKO boylu_f_Page_074.jpg
f9d03e5757781b89183e601e1d10c9a5
f3d26a8bc331025701e50b34cda81a9236d31c04
25618 F20110209_AAAGJZ boylu_f_Page_057.QC.jpg
4e3dcf6e7995691c183c5c998be9b00f
223462b809ea571c951ec0ba347de22505708854
F20110209_AAAFGX boylu_f_Page_108.tif
1376ca0f76eb9a6ce270aa159cddaf05
a7eea1b987288f50b55786323d1964d679c706ae
F20110209_AAAFIA boylu_f_Page_064.tif
89a5efb201d65a1d14bd9ccfc366e8cb
9fe605f721f21ea85ac9402d28a10493bd82ff3b
3018 F20110209_AAAFHM boylu_f_Page_005.txt
1a2ea005e8b1284444855d3ae455ada2
4f08229fc3df9dc218a0698daf19dc4f65d94e8a
19485 F20110209_AAAGLD boylu_f_Page_090.QC.jpg
b1b29ef9e15f9e8eaaf3ebfe485cf163
2e87012f0fd180f4379c7e77df8cc1d5f2e5e7ce
20432 F20110209_AAAGKP boylu_f_Page_075.QC.jpg
ddd1ec634626a96d33ea7d587e6c759b
fa7eaa43efb8899c891ddf51afe6829f544a548a
504559 F20110209_AAAFGY boylu_f_Page_139.jp2
c63d05cb9393675a30577d608993d849
7470f14cc15033696651005529a53d60dd5e72fd
1257 F20110209_AAAFIB boylu_f_Page_097.txt
f7da9a5592c6c37d0721076a9bcba8dc
24e4282af913f80ccaf2bdab9ba4e41872e4f54c
F20110209_AAAFHN boylu_f_Page_138.tif
8b2aa73f661faaefb4ea70b65bb6c82e
cb126811eefdf9f49dd0058d7298ada78070eebc
57826 F20110209_AAAGLE boylu_f_Page_091.jpg
4525e88db4ea58988e6e09fb7e1cdb62
8b0f2a339d703d4d3e4abf87652e7e1fac94caa5
24439 F20110209_AAAGKQ boylu_f_Page_076.QC.jpg
513f1a6b186b801a14798350a4089ee8
4a90586b9bfdb6959c630acbc93678e3b12b1341
6200 F20110209_AAAFGZ boylu_f_Page_046thm.jpg
bbad05a03b17e3558100e6299f1cf22e
e6d8e1424c9d46e4c71b6535160e112c01092df9
3162 F20110209_AAAFIC boylu_f_Page_124thm.jpg
6c10b253e8d7c95e0632b17940b24b65
83aec97b6f75fb9a9217141fa041497fdd2bb17d
F20110209_AAAFHO boylu_f_Page_116.txt
4fb2a76af1278de9727e3ac57d1dcd64
b189cc28141b96a5f3ff9e7ca72f5425eae869e2
18825 F20110209_AAAGLF boylu_f_Page_091.QC.jpg
b0f8456039f82a74eac12901a88c4300
9fc5ec89b981d8e393ece51fb4623eecaed4a606
13355 F20110209_AAAGKR boylu_f_Page_077.QC.jpg
1be9b4098c61537f820823eff673ad3f
d364d05f22bfb20f81eb80874a02d0cb23638865
71407 F20110209_AAAFID boylu_f_Page_013.jpg
05cbcb9fdb41cf56a0561b92e88d9e45
c064e09538b69a2083afd6709e5ce0685b8cad26
67729 F20110209_AAAFHP boylu_f_Page_092.jpg
a4d9068b02a19b276e06ade523e70a6f
12af04ea7c7b3ce1412018b32a76ee49a0554102
17481 F20110209_AAAGLG boylu_f_Page_092.QC.jpg
ce59e9bd0c31c9e7caf3f0d849bc99da
764b04a636c89069303f33172284e85691bde5b6
14272 F20110209_AAAGKS boylu_f_Page_078.QC.jpg
fb5771632caa93523463adc6a581890f
d701b6208f8bf332b0ee5303dd0a730f1a7aa763
31585 F20110209_AAAFIE boylu_f_Page_007.jpg
00a0eec623ddc6ebf27fab9fced4716f
e02baa6032fa59daae72dfd4f1cf5cc851c7e1f7
3990 F20110209_AAAFHQ boylu_f_Page_010thm.jpg
ec17df8ef08a61f3b9ad62bb96eb159a
7f23e8f1cc34d67979c1738fdbcf7714f38e0f97
68478 F20110209_AAAGLH boylu_f_Page_093.jpg
ac8b2fa33196d3468a29ebb33d2c8a44
90ba6f552710a14607edc27c3539ef343290b20c
12370 F20110209_AAAGKT boylu_f_Page_079.QC.jpg
2d23ee0ba84936d5b9d133e9654792e3
f785310e92af546e66245782fab148f621bf2f43
6378 F20110209_AAAFIF boylu_f_Page_021thm.jpg
873cd941b9737506a62c1ceb1dc1db06
2883b46f167f933cec56eb16cb5e0f7a962a29e8
21420 F20110209_AAAFHR boylu_f_Page_104.QC.jpg
ad2d49a02d4f1f8656c16bc87cd34462
92b9425ff1f46b8954f82f10c49d150ff03728f9
17551 F20110209_AAAGLI boylu_f_Page_093.QC.jpg
436180285f2250cc7b9c56fe367cb31b
bffe8b111fd57a15ddd75af74dcce7d923fafbf5
772740 F20110209_AAAFIG boylu_f_Page_091.jp2
d9223403030db815fbbbe48d39e87cb3
7466ed64e44d853bb700ba1d7a8d0db8d4f0bae8
63135 F20110209_AAAGLJ boylu_f_Page_094.jpg
083c39e42e1e7844b276f44fe08920cf
891193cb9d597c49418f32cd7fe9489b0159e4c2
27087 F20110209_AAAGKU boylu_f_Page_081.jpg
a2d805bcd017f8dd1bc6602131f51816
4ac8e4b36159cb6450961ebefef99faa6ac62a01
F20110209_AAAFIH boylu_f_Page_090.txt


Permanent Link: http://ufdc.ufl.edu/UFE0015543/00001

Material Information

Title: Strategic Learning
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0015543:00001



Full Text












TABLE OF CONTENTS

ACKNOWLEDGMENTS .......... iv

LIST OF TABLES .......... vii

LIST OF FIGURES .......... viii

ABSTRACT .......... ix

CHAPTER

1   INTRODUCTION .......... 1

2   STRATEGIC LEARNING .......... 3

    Machine Learning and Data Mining Paradigm-Supervised Learning .......... 3
    Economic Machine Learning-Utility-Based Data Mining .......... 6
    Cost Sensitive Learning .......... 7
    Data Acquisition Costs .......... 8
    Strategic Learning .......... 9
    Adversarial Classification .......... 11
    Multi-Agent Reinforcement Learning .......... 12
    Learning in the Presence of Self-Interested Agents .......... 16
    Strategic Learning with Support Vector Machines .......... 17
    Strategic Learning-Future Research .......... 23

3   LEARNING IN THE PRESENCE OF SELF-INTERESTED AGENTS .......... 26

    Introduction .......... 26
    Related Literature .......... 29
    Illustration .......... 34
    Research Areas .......... 38
    Summary .......... 41

4   DISCRIMINATION WITH STRATEGIC BEHAVIOR .......... 42

    Introduction and Preliminaries .......... 42
    Strategic Learning .......... 42
    Related Literature .......... 46
    Linear Discriminant Functions .......... 49
    Linear Discriminant Methods .......... 50
    Statistical Learning Theory .......... 52
    Support Vector Machines .......... 54
    Learning while Anticipating Strategic Behavior: The Base Case .......... 57
    The Agent Problem .......... 57
    The Base Case .......... 59
    Learning while Anticipating Strategic Behavior: The General Case .......... 66
    Properties of P3 .......... 68
    Strategic Learning Model .......... 74
    Sample Application .......... 78
    Stochastic Versions .......... 86
    Conclusion and Future Research .......... 89

5   USING GENETIC ALGORITHMS TO SOLVE THE STRATEGIC LEARNING PROBLEM .......... 92

    An Unconstrained Formulation for Strategic Learning .......... 92
    A Genetic Algorithm Formulation for Strategic Learning .......... 98
    Experimental Results .......... 100
    Discussion and Future Research .......... 101

6   STRATEGIC LEARNING WITH CONSTRAINED AGENTS .......... 103

    Introduction and Preliminaries .......... 103
    Model .......... 107
    Application to Spam Filtering .......... 113
    Conclusion .......... 124

7   CONCLUSION .......... 125

APPENDIX

PROOF OF THEOREM 1 .......... 126

    Lemma 1 .......... 126
    Theorem 1 .......... 126

LIST OF REFERENCES .......... 134

BIOGRAPHICAL SKETCH .......... 139
















LIST OF TABLES

Table                                                                      page

4-1  Possible cases depending on z .......... 68
4-2  Different regions of costs .......... 72
4-3  Negative cases .......... 76
4-4  Positive cases .......... 77
4-5  Converted German credit data .......... 79
4-6  1-norm strategic SVM solutions (P4) .......... 82
4-7  2-norm strategic SVM solutions (P4) .......... 83
4-8  Strategic SVM solutions (P4) for 1000 instances .......... 86
5-1  Different regions of costs .......... 94
5-2  GA sketch .......... 100
5-3  2-norm strategic SVM solutions (P4) versus GA for 100 instances .......... 101















LIST OF FIGURES

Figure                                                                                   page

2-1  Strategic Learning framework...........................................................17

2-2  Wider margins..........................................................................22

4-1  Theorem 1..............................................................................62

4-2  Multi-round non-strategic SVM..........................................................65

4-3  Positive case with  = 1 and A = 0.5....................................................70

4-4  Positive case with  = 1 and A = 0......................................................70

4-5  Negative case with  = 6................................................................71

4-6  Negative case with  = 1................................................................71

4-7  A typical graph of f(b w)..............................................................72

4-8  Possible cases for points of discontinuity of f(b w)...................................73

6-1  Spam email with r = 2.................................................................119

6-2  Non-spam email without strategic behavior.............................................122















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

STRATEGIC LEARNING

By

Fidan Boylu

August, 2006

Chair: Gary J. Koehler
Cochair: Haldun Aytug
Major Department: Decision and Information Sciences

It is reasonable to anticipate that rational agents who are subject to classification by

a principal using a discriminant function might attempt to alter their true attribute values

so as to achieve a positive classification. In this study, we explore this potential strategic

gaming and develop inference methods for the principal to determine discriminant

functions in the presence of strategic behavior by agents and show that this strategic

behavior results in an alteration of the usual learning rule.

Although induction methods differ from each other in various aspects, there is one

essential issue that is common to all: the assumption that there is no strategic behavior

inherent in the sample data generation process. In that respect, the main purpose of this

study is to research the question, "What if the observed attributes will be deliberately

modified by the acts of some self-interested agents who will gain a preferred

classification by engaging in such behavior?" Hence, we investigate the need for

anticipating this kind of strategic behavior and incorporate it into the learning process.









Since classical learning approaches do not consider the existence of such behavior, we

aim to contribute by using rational expectations theory to determine optimal classifiers to

correctly classify instances when such instances are strategic decision making agents. We

carry out our analysis for a powerful induction method known as support vector

machines.

First, we define the framework of Strategic Learning. For separable data sets, we

characterize an optimal strategy for the principal that fully anticipates agent behavior in a

setting where agents have fixed reservation costs. For non-separable data sets, we

provide an MIP formulation and apply it to a credit-risk evaluation setting. Then, we

modify our framework by considering a setting where agent costs and reservations are

both unknown by the principal. Later, we develop a Genetic Algorithm for Strategic

Learning to solve larger versions of the problem. Finally, we investigate the situation

where there is a need to enforce constraints on agent behavior in the context of Strategic

Learning and thus we extend the concept of Strategic Learning to constrained agents.














CHAPTER 1
INTRODUCTION

In many situations a principal gathers a data sample containing positive and

negative examples of a concept to induce a classification rule using a machine learning

algorithm. Although learning algorithms differ from each other in various aspects, there

is one essential issue that is common to all: the assumption that there is no strategic

behavior inherent in the sample data generation process. In that respect, we ask the

question "What if the observed attributes will be deliberately modified by the acts of

some self-interested agents who will gain a preferred classification by engaging in such

behavior?" Hence, for such cases there is a need for anticipating this kind of strategic

behavior and incorporating it into the learning process. Classical learning approaches do

not consider the existence of such behavior. In this dissertation we study this paradigm.

This dissertation is organized as a collection of articles, each of which covers one

of several aspects of the entire study. Each chapter corresponds to an article which is

complete within itself. Due to this self-contained style of preparation, we allowed for

redundancies across chapters. This chapter is intended to give an outline of this

dissertation.

In both Chapters 2 and 3, we provide an overview of Strategic Learning with the

exception that each is intended to reach a different type of reader. In Chapter 4, we give a

comprehensive and intricate study of Strategic Learning and provide the details of the

model. In Chapter 5, we develop a Genetic Algorithm for Strategic Learning to solve

larger versions of the problem. In Chapter 6, we extend the Strategic Learning model to






2


handle more complex agent behaviors and in Chapter 7, we conclude with a discussion of

results and future work.














CHAPTER 2
STRATEGIC LEARNING

Machine Learning and Data Mining Paradigm-Supervised Learning

Today's highly computerized environment makes it possible for researchers and

practitioners to collect and store any kind or amount of information easily in electronic

form. As a result, an enormous amount of data in many different formats is available for

analysis. This increase in the availability and easy access of data enables many

companies to constantly look for ways to make use of their vast data collections to create

competitive advantage and keep pace with the rapidly changing needs of their customers.

This strong demand for utilizing the available data has created a recent interest in

applying machine learning algorithms to analyze large amounts of corporate and

scientific data, a practice which is called "data mining." Here we use the terms data

mining and machine learning interchangeably.

An important type of machine learning commonly used in data mining tasks is

called "supervised learning" which is performed by making use of the information

collected from a set of examples called the "training set." The training set usually takes

the form S = ((x_1, y_1), ..., (x_ℓ, y_ℓ)) where ℓ is the total number of available examples.

Each example (also called an instance) is denoted by x_i = (x_1, ..., x_n), which is the vector

of n attributes for the example. The label of each example is denoted by y_i and is

assumed to be known for each instance, which is why supervised learning is sometimes

referred to as "learning with a teacher." Given this setting, we are interested in choosing









an hypothesis that will be able to discriminate between classes of instances. A wide

range of algorithms have been developed for this task including decision trees (Quinlan

1986), neural networks (Bishop 1995), association rules (Agrawal et al. 1993),

discriminant functions (Fisher 1936), and support vector machines (Cristianini and

Shawe-Taylor 2000). Particularly, throughout this chapter, we focus on binary

classification where y_i ∈ {-1, 1}. Informally, we are interested in classification of two

classes of instances which we call the negative class and positive class respectively. We

choose a collection of candidate functions as our hypothesis space. For example, if we

are interested in a linear classifier, then the hypothesis space consists of functions of the

form w'x + b. Under these circumstances, the goal is to learn a linear function

f : X → R such that f(x) > 0 if x ∈ X belongs to the positive class and f(x) < 0

if it belongs to the negative class.
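For example, the following minimal Python sketch labels a single instance with a linear hypothesis of this form; the weight vector, offset, and attribute values are made up purely for illustration.

    import numpy as np

    def classify(w, b, x):
        """Label an instance with a linear hypothesis: +1 if w'x + b > 0, else -1."""
        return 1 if np.dot(w, x) + b > 0 else -1

    # Illustrative credit-approval style instance with n = 3 attributes
    # (e.g., age, income in $1000s, number of dependents); values are made up.
    w = np.array([0.02, 0.05, -0.3])   # hypothesized weight vector
    b = -2.0                           # hypothesized offset
    x = np.array([35.0, 60.0, 2.0])    # an applicant's attribute vector

    print(classify(w, b, x))           # prints 1 (positive class) for these values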

Recently, a new paradigm has evolved for the binary classification problem. We

call it "Strategic Learning." One typical aspect common to all data mining methods is

that they use training data without questioning the future usage of the learned function.

More specifically, none of these algorithms take into account the possibility that any

future observed attributes might be deliberately modified by their source when the source

is a human or collection of humans. They fail to anticipate that people (and collections of

people) might "game the system" and alter their attributes to attain a positive

classification. As an example, consider the credit card approval scenario where certain

data (such as age, marital status, checking account balance, number of existing credit

cards, etc.) are collected from each applicant in order to be able to make an approval

decision. There are hundreds of websites that purport to help applicants increase their









credit score by offering legal ways to manipulate their information prior to the credit

application. Also, the case of a terrorist trying to get through airline security is another

vivid example of how certain individuals might try to proactively act in order to stay

undetected under certain classification systems where a decision maker determines

functions such as f to be able to classify between classes of individuals. Throughout

this chapter, we will speak collectively of these yet to be classified individuals as

"agents" and the decision maker as the "principal."

Until now most researchers assume that the observed data are not "strategic" which

implicitly assumes that the attributes of the agents are not subject to modification in

response to the eventual decision rule. However, this type of strategic behavior is usually

observed in many real world settings as suggested by the above examples. Thus, it is

reasonable to think that individuals or companies might try to game systems and Strategic

Learning aims to develop a model for this specific type of classification setting where the

instances which are subject to classification are known to be self-interested, utility

maximizing, intelligent decision making units.

In the Strategic Learning setting, each agent i has a utility function, a "true" vector

of attributes x_i and a true group membership (i.e., label) y_i. For linear utility, an agent

has a vector of costs for modifying attributes c_i and, for a given task, a reservation cost

r_i. For the task of achieving a particular classification, the reservation cost can be viewed

as the maximum effort that an agent is willing to exert in order to be classified as a

positive example, which we assume is desirable. On the principal's side, C_i is assumed

to be the penalty associated with misclassification of a true type y_i. In that respect, we

develop a model within utility theory combined with cost sensitive learning.
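To keep these quantities straight, the following Python sketch bundles the data attached to each agent; the structure and the illustrative penalty values are our own scaffolding rather than part of the formal model.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Agent:
        x: np.ndarray   # true attribute vector x_i
        y: int          # true label y_i in {-1, +1}
        c: np.ndarray   # per-attribute modification costs c_i (linear utility)
        r: float        # reservation cost r_i: maximum effort the agent will spend

    # The principal additionally attaches a misclassification penalty C_i to each
    # agent; here, for illustration, a single penalty per true class is assumed.
    misclassification_penalty = {+1: 5.0, -1: 1.0}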









Before going into the details of Strategic Learning we look at a related area of

machine learning where the discovery process considers economic issues such as cost and

utility. In utility-based data mining the principal is a utility maximizing entity who

considers not just classification accuracy in learning but also various associated costs.

We briefly review this area before discussing Strategic Learning.

Economic Machine Learning-Utility-Based Data Mining

Utility-based data mining (Provost 2005) is closely related to the problem of

Strategic Learning. This area explores the notion of economic utility and its

maximization for data mining problems as there has been a growing interest in addressing

economical issues that arise throughout the data mining process. It has often been

assumed that training data sets were freely available and thus many researchers focused

on objectives like predictive accuracy. However, economical issues come into play in

data mining since over time data acquisition may become costly. Utility-based methods

for knowledge induction incorporate data acquisition costs and trade these with predictive

accuracy so as to maximize the overall principal utility. Hence these methods become

more meaningful and reflective of the real world usage.

Utility-based data mining is a broad topic that covers and incorporates aspects of

economic utility in data mining and includes areas such as cost-sensitive learning, pattern

extraction algorithms that incorporate economic utility, effects of misclassification costs

on data purchase and types of economic factors. Simply, all machine learning

applications that take into account the principal's utility considerations fall into this

category of data-mining research. The researchers in this area are focused primarily on

two main streams. One stream focuses on cost sensitive learning (i.e., cost assigned to

misclassifications) (Arnt and Zilberstein 2005, Ciraco et al. 2005, Crone et al. 2005,









Holte and Drummond 2005, McCarthy et al. 2005 and Zadrozny 2005). The other stream

focuses on the costs associated with the collection of data (i.e., data acquisition cost)

(Kapoor and Greiner 2005, Melville et al. 2005, Morrison and Cohen 2005). In the

following section we will look at these two streams of research.

Cost Sensitive Learning

The first type of utility-based data mining explores the problem of optimal learning

when different misclassification errors incur different penalties. This area has been

revisited many times (Elkan 2001). Cost sensitive classification has been a growing area

of research and aims to minimize the expected cost incurred in misclassifying the future

instances rather than focusing on improving the predictive accuracy which is usually

measured by the number of correctly classified instances. This shift of focus from

predictive accuracy to cost of misclassifications is maintained by assigning penalties for

misclassified instances based on the actual label of the instance. For example, in medical

diagnosis domains, identifying a sick patient as healthy is usually more costly than

labeling a healthy patient as sick. Likewise, in the spam filtering domain, false

misclassification of a non-spam email is significantly more costly than misclassifying a

spam email. Arnt and Zilberstein (2005) examine a previously unexplored dimension of

cost sensitive learning by pointing to the fact that it is impractical to measure all possible

attributes for each instance when the final result has time-dependent utility and they call

this problem time and cost sensitive classification.
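The core decision rule is simple: predict the label with the smallest expected cost under the estimated class probabilities. The following Python sketch illustrates this for the medical example above; the cost and probability values are assumed for illustration only.

    # cost[true_label][predicted_label]; labeling a sick patient as healthy is
    # assumed far more costly than the reverse. Values are illustrative.
    cost = {
        "sick":    {"sick": 0.0, "healthy": 10.0},
        "healthy": {"sick": 1.0, "healthy": 0.0},
    }

    def cost_sensitive_label(p_sick):
        """Choose the prediction that minimizes expected misclassification cost."""
        p = {"sick": p_sick, "healthy": 1.0 - p_sick}
        expected = {
            pred: sum(p[true] * cost[true][pred] for true in p)
            for pred in ("sick", "healthy")
        }
        return min(expected, key=expected.get)

    # Even a modest probability of sickness triggers the "sick" prediction
    # because of the asymmetric costs.
    print(cost_sensitive_label(0.2))   # -> sick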

Holte and Drummond (2005) review the classic technique of classifier performance

visualization, the ROC (receiver operating characteristic) curve, which is a two

dimensional plot of the false positive rate versus the true positive rate and argue that it is

inadequate for the needs of researchers and practitioners, as it does not allow important









questions to be answered, such as what the classifier's performance is in terms of

expected cost, or for what misclassification costs the classifier outperforms others.

They demonstrate the shortcomings of ROC curves and argue that the cost curves

overcome these problems. In that respect, the authors point to the fact that cost-sensitive

measurement of classifier performance should be utilized since misclassification costs

should be an important part of classifier performance evaluation.

Data Acquisition Costs

The second area of utility-based data mining, cost of data acquisition, is an

important area which has potential implications for real-world applications and thus is a

topic that receives positive attention from industry as well as academia. For example, for

large, real-world inductive learning problems, the number of training examples often

must be limited due to the costs associated with procuring, preparing, and storing the

training examples and/or the computational costs associated with learning from them

(Weiss and Provost 2003). In many classification tasks, training data have missing

values that can be acquired at a cost (Melville et al. 2005). For example, in the medical

diagnosis domain, some of the patient's attributes may require an expensive test to be

measured. To be able to build accurate predictive models, it is important to acquire these

missing values. However, acquiring all the missing values may not always be possible

due to economical or other type of constraints. A quick solution would be to acquire a

random subset of values but this approach may not be most effective. Melville et al.

(2005) propose a method called active feature-value acquisition which incrementally

selects feature values that are most cost-effective for improving the model's accuracy.

They present two policies, Sampled Expected Utility and Expected Utility, that acquire

feature values for inducing a classification model based on an estimation of the expected









improvement in model accuracy per unit cost. Other researchers investigate the same

problem under a scenario where the number of feature values that can be purchased is

limited by a budget (Kapoor and Greiner 2005).

Whereas utility-based data mining incorporates a principal's utility, Strategic

Learning additionally considers the possibility that the objects of classification are self-

interested, utility maximizing, intelligent decision making units. We believe that

Strategic Learning considerations make a substantial contribution to utility-based data

mining. However it should be emphasized that Strategic Learning should be considered

as a totally new stream of utility-based data mining. Strategic Learning looks at the

problems where different classes of instances with different misclassification costs and

utility structures can act strategically when they are subject to discrimination. In the next

section, we cover the details of Strategic Learning.

Strategic Learning

As was mentioned before, we look into a certain class of problems in which a

decision maker needs to discover a classification rule to classify intelligent agents. The

main aspect of this problem that distinguishes it from standard data mining problems is

that we acknowledge the fact that the agents may engage in strategic behavior and try to

alter their characteristics for a favorable classification. We call this set of learning

problems "Strategic Learning." In this type of data mining, the key point is to anticipate

agent strategic behavior in the induction process. This has not been addressed by any of

the standard learning approaches.

Depending on the type of application, the agent can be thought of as any type of

intelligent decision making unit which is capable of acting strategically to maximize its









individual utility function. The following are some examples of strategic agents and

corresponding principals under different real world settings:

* a credit card company (the principal) decides which people (agents) get credit
cards.

* an admission board at a university (the principal) decides which applicants (agents)
get admitted.

* an auditing group (principal) tries to spot fraudulent or soon to be bankrupt
companies (agent).

* an anti-spam package (the principal is the package creator) tries to correctly label
and then screen spam (which is agent created).

* airport security guards (the principal) try to distinguish terrorists from normal
passengers (agents).

Apparently, in each of these settings and in many others that were not mentioned

here, if agents know or have some notion of the decision rule that the principal uses, they

can try to modify their attributes to attain a positive classification by the principal. In

most cases, the attributes used by a principal for classification are obvious and many

people can discern which might be changed for their benefit. In the credit approval case,

it is likely that an increase in one's checking account balance or getting a job will be

beneficial. Thus, it is reasonable to anticipate that the agents will attempt to manipulate

their attributes (either through deceit or not) whenever doing so is in their best interest.

This gaming situation between the agents and the principal leads to a need for

anticipating this kind of strategic behavior and incorporating it into the standard learning

approaches. Furthermore, if one uses classical learning methods to classify individuals

when they are strategic decision making units, then it might be possible for some agents

to be able to eventually game the system.









To date, very few learning methods fall in the Strategic Learning paradigm. The

closest is called adversarial classification which we review below. Another related area

is called reinforcement learning. This area has some aspects of Strategic Learning that

we discuss below also.

Adversarial Classification

Dalvi et al. (2004) acknowledge the fact that classification should be viewed as a

game between the classifier (which we call the principal) and the adversary (which we

call the agent) for all the reasons that we have discussed so far. They emphasize the fact

that the problem is observed in many domains such as spam detection, intrusion

detection, fraud detection, surveillance and counter-terrorism. In their setting, the

adversary actively manipulates data to find ways to make the classifier produce a false

decision. They argue that the adversary can learn ways to defeat the classifier which

would result in a degrading of its performance as the classifier needs to modify its

decision rule every time agents react by manipulating their behaviors. Clearly, this leads

to an arms race between the classifier and the adversary resulting in a never ending game

of modifications on both sides since the adversary will react to the classifier's strategy in

every period and the classifier will need to adjust accordingly in the next period. This poses

an economical problem as well since in every period more human effort and cost are

incurred to be able to modify the classifier to adapt to the latest strategy of the adversary.

They approach the Strategic Learning problem from a micro perspective by

focusing on a single-shot version of the classification game where only one move by each

player is considered. They start by assuming that the classifier initially decides on a

classification rule when the data are not modified by the adversary. But, knowing that

the adversary will deploy an optimal plan against this classification rule, the classifier instead









uses an optimal decision rule which takes into account the adversary's optimal

modifications. They focus on a Bayesian classification method.

Although their approach is quite explanatory, it is an initial effort since their goal

was to be able to explain only one round of the game. However as they discuss, the

ultimate solution is the one that solves the repeated version of this game. However, by

viewing the problem as an infinite game played between two parties, they tend to

encourage modifications rather than prevent these modifications. That leads to a key

question that needs to be answered: is there an optimal strategy for the classifier which

will possibly prevent an adversary from evolving against the classifier round after round

when this strategic gaming is pursued infinitely? Or is it possible to prevent an agent's

actions by anticipating them before the fact and taking corrective action rather than

reacting to the outcome after the game is played? These points are intensively

investigated in Chapters 4 and 5 where we formulate the problem as the well-known

principal-agent problem where the principal anticipates the actions of agents and uses that

information to discover a fool-proof classifier which takes into account the possible

strategic behavior. In that sense, our approach is more of a preventive one than a

reactive one. Also, our model involves many strategic agents acting towards one

principal as opposed to a two-player game setting.

Multi-Agent Reinforcement Learning

Another related area of research is reinforcement learning which involves learning

through interactions (Samuel 1959 and Kaelbling et al. 1996). In this section, we will

briefly discuss this area and point out its similarities and differences with Strategic

Learning. Reinforcement learning is a field of machine learning in which agents learn by

using the reward signals provided from the environment. Essentially, an agent









understands and updates its performance according to the interactions with its

environment. In reinforcement learning theory, an agent is characterized by four main

elements: a policy, a reward function, a value function, and a model of the environment.

The agent learns by considering every unique configuration of the environment. An agent

acts according to a policy that is essentially a function that tells the agent how to behave

by taking in information sensed from the environment, and outputting an action to

perform. Depending on the action performed, the agent can go from one state to another

and the reward function assigns a value to each state the agent can be in. Reinforcement

learning agents are fundamentally reward-driven and the ultimate goal of any

reinforcement learning agent is to maximize its accumulated reward over time. A reward

function outputs the immediate reward of a state while the value function specifies the

long run reward of that state after taking into account the states that are likely to follow,

and the rewards available in those states. This is generally achieved through a particular

sequence of actions to be able to reach the states in the environment that offer the highest

reward (Sutton and Barto 1998).

In that respect, a Strategic Learning agent is quite similar to reinforcement learning

agent as the former also aims to maximize its utility while attempting to reach a preferred

classification state. Also, the environment can be thought of as analogous to the principal

who essentially decides on the reward function (the classifier).

However, the main difference of reinforcement learning from Strategic Learning is

that learning is realized over time through interactions between the environment and

agents, something that Strategic Learning essentially aims to avoid. The ultimate goal in

Strategic Learning is to be able to anticipate the possible actions of the agents and take









preventive action. Strategic Learning attempts to avoid those interactions for the

principal's sake by anticipating the agent's behavior since otherwise the principal is

forced to modify its classifier after every interaction with the agents which is time

consuming and costly in many data mining problems and, more importantly, causes

degradation of the classifier's performance. In that respect, by using the Strategic Learning

model, a principal takes a more sophisticated approach and plans for his/her future course

of action as opposed to doing simple reaction-based, trial-and-error exploration as in

reinforcement learning. In addition, in reinforcement learning the interaction between the

principal (the environment) and the agent is cooperative unlike in Strategic Learning

where it is assumed adversarial.

A more specialized type of reinforcement learning, one which offers more insight into

Strategic Learning, is reinforcement learning in leader-follower multi-agent systems

(Bhattacharyya and Tharakunnel 2005, Littman 1994). In leader-follower systems which

have a number of applications such as monitoring and control of energy markets

(Keyhani 2003), e-business supply chain contracting and coordination (Fan et al. 2003),

modeling public policy formulation in pollution control, taxation etc. (Ehtamo et al.

2002), a leader decides on and announces an incentive to induce the followers to act in a

way that maximizes the leader's utility, while the followers maximize their own utilities

under the announced incentive scheme. This is similar in many ways to our principal-

agent terminology as the leader acts as the principal and tries to identify and announce

the ultimate decision rule that would maximize his/her own objective while the agents act

as followers who seek to maximize their own utilities.









Bhattacharyya and Tharakunnel (2005) apply this kind of a sequential approach to

propose a reinforcement based learning algorithm for repeated game leader-follower

multi-agent systems. One of the interesting contributions of their work is the introduction

of non-symmetric agents with varying roles to the existing multi-agent reinforcement

learning research. This is analogous to our asymmetric setting where both the principal

and agents are self-interested utility maximizing units with the exception that principal

has a leading role in setting the classifier to which the agents react. This is similar to the

interaction between leader and followers in their work.

However, a key difference between the two is the sequential nature of the decisions

made by the leader and the followers. In the leader follower setting, the learning takes

place progressively from period to period as leader and followers interact with each other

according to the ideas of trial-and-error learning. Essentially, the leader announces

his/her decision at specific points in time based on the aggregate information gained from

the earlier rounds and the followers make their decisions according to the incentive

announced at that period. In this scenario, the leader aims to learn an optimal incentive

based on the cumulative information from the earlier periods while the followers try to

learn optimal actions based on the announced incentive. Learning is achieved over

successive rounds of decisions with information being carried from one round to the next.

Even though the leader-follower approach has similarities with Strategic Learning,

there are fundamental differences. First, Strategic Learning is an anticipatory approach.

In other words, in Strategic Learning, learning is achieved by anticipating strategic

behavior by agents and incorporating this anticipation in the learning process rather than

following an after the fact reactive approach. Second, Strategic Learning does not









involve periods based on the principles of principal-agent theory. Strategic Learning

results show that a sequential approach will often yield suboptimal results.

Learning in the Presence of Self-Interested Agents

We approach Strategic Learning more extensively in Chapters 3 and 4.

Particularly, in Chapter 3, we explore the problem under the name of "Learning in the

Presence of Self-interested Agents" and propose the framework for this type of learning

that we briefly discuss in this section.

Refreshing the previously developed notation, let X ⊂ R^n be the instance space

which can be partitioned into two sets, a set consisting of positive cases consistent with

some underlying but unknown concept and the remaining negative cases. For example,

in Abdel Khalik and El-Sheshai (1980) the underlying concept is described by "firms that

won't default or go bankrupt." Attributes such as the firm's ratio of retained earnings to

total tangible assets, etc. were used in this study. One forms a training set by randomly

drawing a sample of instances of size ℓ from X, and then determining the true label (-1

or 1) of each such instance. Using this sample, a machine learning algorithm infers a

hypothesis. A key consideration is the choice of sample size. Whether it is large enough

to control generalization error is of key importance.

Under our setting, the principal's goal is to determine the classification function f

to select individuals (for example, select good credit applicants) or to spot negative cases

(such as terrorists who try to get through a security line). However, each strategic

agent's goal is to achieve positive classification (e.g., admission to university) regardless

of their true label. For that, they act strategically and take actions in a way to alter their

true attributes which may lead to an effectively altered instance space. In some cases it is









possible for the agents to infer their own rules about how the principal is making

decisions under f and spot attributes that are likely to help produce positive

classifications. Most classical learning methods operate using a sample from X but this

space may be altered (call it X) due to the ability of agents to infer their own rules about

the classification rule. This possible change from X to X needs to be anticipated.

Strategic Learning does this by incorporating rational expectations theory (Muth 1961)

into the classical learning theory. Figure 2-1 outlines the Strategic Learning framework.

Figure 2-1 shows that when strategic behavior is applied to the sample space, X, it

causes it to change from X to X̂ (reflecting strategic behavior by the agents). Also,

classical learning theory operates on X while Strategic Learning operates on all

anticipated x's.


[Diagram: the sample space X (Classical Learning Theory) is transformed by strategic behavior (Rational Expectations Theory) into X̂, on which Strategic Learning operates.]


Figure 2-1. Strategic Learning framework.

Strategic Learning with Support Vector Machines

In Chapter 4 we consider Strategic Learning while inducing linear discriminant

functions using Support Vector Machines (SVMs). SVM is an algorithm based on









Statistical Learning Theory (Vapnik 1998). In this section, we discuss Chapter 4 and

some of our results.

SVMs are a computationally efficient way of learning linear discriminant functions

as they can be applied easily to enormous sample data sets. In essence, SVMs are

motivated to achieve better generalization by trading-off empirical error with

generalization error. This translates to the simple goal of maximizing the margin of the

decision boundary of the separating hyperplane with parameters (w, b). Thus, the

problem reduces to minimizing the norm of the weight vector (w) while penalizing for

any misclassification errors (Cristianini and Shawe-Taylor 2000). An optimal SVM

classifier is called a maximum margin hyperplane. There are several SVM models and

the first model is called the hard margin classifier which is applicable when the training

set is linearly separable. This model determines linear discriminant functions by solving

min_{w,b}   w'w                                                            (1)
s.t.  y_i (w'x_i + b) >= 1,   i = 1, ..., ℓ
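For concreteness, the following minimal Python sketch solves formulation (1) on a small, linearly separable toy sample using the cvxpy modeling library; the data values are illustrative only and the solver choice is left to cvxpy's defaults.

    import cvxpy as cp
    import numpy as np

    # Toy linearly separable sample: rows of X are instances x_i, y holds labels y_i.
    X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])
    y = np.array([1, 1, -1, -1])

    n = X.shape[1]
    w = cp.Variable(n)
    b = cp.Variable()

    # Hard-margin SVM: minimize w'w subject to y_i (w'x_i + b) >= 1 for all i.
    constraints = [cp.multiply(y, X @ w + b) >= 1]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
    problem.solve()

    print(w.value, b.value)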

The above formulation produces a maximal margin hyperplane when no strategic

behavior is present. To illustrate how strategic behavior alters the above model, we

briefly look at the approach used in Chapter 4 of this dissertation to incorporate strategic

behavior into the model. We start by introducing the agent's "strategic move problem"

that shows how rational agents will alter their true attributes if they knew the principal's

SVM classifier. If the principal's classification function (f(x,) = w'x +b -1) was

known by rational agents, then each agent would solve what they call the strategic move

problem to determine how they could achieve (or maintain) positive classification while









exerting minimal effort in terms of cost. This is captured in the objective function (cost

minimization). Thus the agent's problem can be modeled as

min   c_i'd
s.t.  w'[x_i + D(w)d] + b >= 1
      d >= 0

where D(w) is a diagonal matrix defined by

D(w)_jj = 1 if w_j > 0,  -1 if w_j < 0
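Since the strategic move problem is a linear program with a single constraint, its solution takes a simple form: the agent changes only the attribute offering the largest score gain per unit of cost, and only by enough to reach the positive margin. The following Python sketch computes that move under these assumptions and checks it against the reservation cost; it is our own illustration of the idea rather than code from the model.

    import numpy as np

    def strategic_move(w, b, x, c, r):
        """Compute the minimal-cost attribute change D(w)d for one agent (or zeros).

        Assumes the agent must reach w'(x + D(w)d) + b >= 1, pays c_j per unit of
        d_j, and abandons the move if the total cost would exceed the reservation r.
        """
        move = np.zeros(len(x))
        gap = 1.0 - (np.dot(w, x) + b)       # shortfall to the positive margin
        if gap <= 0:
            return move                      # already classified as positive
        usable = np.abs(w) > 0
        if not usable.any():
            return move                      # no attribute can change the score
        ratio = np.where(usable, np.abs(w) / c, 0.0)
        j = int(np.argmax(ratio))            # cheapest score gain per unit of cost
        effort = gap / abs(w[j])             # units of change needed on attribute j
        if c[j] * effort > r:
            return move                      # too expensive; the agent stays put
        move[j] = np.sign(w[j]) * effort     # signed change D(w)d reaching the margin
        return move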


This problem finds a minimal cost change of attributes, D(w)d_i, if feasible. This

is the amount of modification that an agent needs to make on his/her attributes to be

classified as a positive case. This would be undertaken if this cost doesn't exceed the

agent's reservation cost, r_i. Let d_i*(w, b) be an optimal solution for the agent's strategic

move problem (d_i*(w, b) is zero if the strategic move problem is infeasible or if the agent

lacks enough reservation). Then the principal's strategic problem becomes the following

min_{w,b}   w'w                                                            (2)
s.t.  y_i (w'[x_i + D(w)d_i*(w, b)] + b) >= 1,   i = 1, ..., ℓ

When compared with the non-strategic SVM model, the difference is the term

D(w)d_i*(w, b), which depends on the agent's problem. Basically, this term represents

the principal's anticipation of a modification of attributes by agent i. By incorporating

this term into the principal's problem, this formulation makes it possible to prevent some

misclassifications by taking corrective action before the fact (i.e., before the principal

determines a classification rule and incurs misclassification cost as agents make

modifications). The essential idea is to anticipate an agent's optimal strategic move and









use that information to infer a classification rule that will offset the agent's possible

strategic behavior.

In Chapter 4, we derive a complete characterization for the solution of the

principal's strategic problem under the base setting where all agents have the same

reservation and change costs (i.e., r_i = r and c_i = c) and S = ((x_1, y_1), ..., (x_ℓ, y_ℓ)) is

linearly separable. Theorem 1 in Chapter 4 states the following: (w*, b*) solves (1) if and

only if ((2/(2 + t*)) w*, (2b* - t*)/(2 + t*)) solves (2), where t* is given by t* = r max_k (w*_k / c_k) and

k = arg min_{j: w_j ≠ 0} [ c_j max(0, 1 - (b + w'x_i)) / |w_j| ]

In essence, Theorem 1 states that a principal anticipating strategic behavior of

agents all having the same utilities and cost structures will use a classifier that is parallel

to the non-strategic SVM solution (w*, b*). The solution to the strategic SVM is a

scaled (by 2/(2 + t*)) and shifted form of (w*, b*). The margin of the strategic SVM


solution hyperplane is greater than that of the non-strategic SVM solution and thus the

probability of better generalization is greater. This scaling factor depends on the cost

structure for altering attribute values, the reservation cost for being labeled as a positive

case, and (w*, b*).
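Under these base-case assumptions (identical r and c for all agents and a separable sample), the strategic classifier can therefore be computed directly from the non-strategic SVM solution. The following Python sketch follows the scaled-and-shifted form described above; it is an illustration of the idea rather than a verified implementation of Theorem 1.

    import numpy as np

    def strategic_from_nonstrategic(w_star, b_star, c, r):
        """Scale and shift a non-strategic SVM solution (w*, b*) for the base case.

        t* captures how much score an agent with reservation r and change costs c
        can buy along its cheapest attribute; the strategic classifier is then
        ((2 / (2 + t*)) w*, (2 b* - t*) / (2 + t*)).
        """
        usable = np.abs(w_star) > 0
        t_star = r * np.max(np.where(usable, np.abs(w_star) / c, 0.0))
        w_s = (2.0 / (2.0 + t_star)) * w_star
        b_s = (2.0 * b_star - t_star) / (2.0 + t_star)
        return w_s, b_s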

Figure 2-2 (a) shows a completely separable training set with a hyperplane using

the non-strategic SVM classifier (solid line) and corresponding positive and negative

margins (dotted lines) along with data points in two-dimensional space. In Theorem 1,

we show that the "negative" agents will try to achieve a positive labeling by changing









their true attributes if the cost of doing so doesn't exceed their reservation cost. This is

indicated in Figure 2-2 (a) with the horizontal arrows pointing out from some of the points

for negative agents towards the positive side of the hyperplane. Clearly, these are the

agents whose reservation costs are sufficiently high and are willing to engage in strategic

behavior to move to the positive side of the hyperplane. However, the principal,

anticipating such strategic behavior, shifts and scales the hyperplane such that no true

negative agent will benefit from engaging in such behavior. Figure 2-2 (b) shows the

sample space and the resulting strategic classifier. However, as Figure 2-2 (b) shows, the

negative agents would have no incentive to change their true attributes since they will not

be able to move to the positive margin any more and, hence, would not exert any effort.

Apparently, this shift may leave some marginally positive agents in danger of being

classified as negative agents. Since they too anticipate that the principal will alter the

classification function so as to cancel the effects of the expected strategic behavior of the

negative labeled agents, they might undertake changes. In other words, they are forced to

move as indicated by the arrows in Figure 2-2 (b). Thus, the ones who are "penalized"

for engaging in a strategic behavior (i.e., must exert effort to attain a positive

classification) are not the negative agents but rather the marginal positive agents.

Moreover, the resulting classifier has a greater margin and better generalization capability

compared to the non-strategic SVM learning results.

In Figure 2-2 (c), the new strategic classifier with wider margins on each side and

the resulting altered instances due to strategic behavior is pictured. Notice that the

margin of the resulting hyperplane is wider and is in fact a scaled and a shifted version of

the hyperplane in Figure 2-2 (a) and thus differs from non-strategic SVM results.
















Figure 2-2. Wider margins.(a) Separable training set with a non-strategic SVM classifier
(b) Sample space as the result of strategic behavior (c) Strategic SVM
classifier.

For non-separable datasets, there are no results comparable to Theorem 1.

However, for that case where S is not linearly separable and all agents have their own

reservation and change costs (i.e., r_i and c_i), we derive mixed integer programming

models for the solution of the principal's strategic problem. We apply our results on a

credit card application setting and our results show that the strategic formulation

performs better than its non-strategic counterpart.









Chapter 6 is an extension of our work which considers the cases when it is not

realistic to let each attribute be modified unboundedly without posing any constraints on

how much they can actually be modified. In other words, agents are constrained on how

much and in which way they can modify their attributes. Towards that end we look at a

spam categorization problem where spammers (negative agents) are only allowed to

make modifications in the form of addition or deletion of words with an upper limit on

the number of modifications allowed. Essentially, we formulate the problem by allowing

only binary modifications which is an interesting constraint on the agent behavior.

Clearly, agents can be constrained in many other ways such as upper and lower bounds

on the modifications or the modifications may need to belong to a certain set of moves

(like in checkers or chess).

Interestingly, for the spam categorization problem, we point out that not all agents

are strategic and in fact only the negative agents (spammers in our case) act strategically

since it is not reasonable for legitimate internet users to engage in strategic behavior and

change the content of their emails. This is quite distinguishing as it is a Strategic

Learning model for an environment where non-strategic and strategic agents coexist.

Strategic Learning-Future Research

The form of Strategic Learning problem discussed in this chapter assumes that the

only costs are misclassification costs and there is no cost associated with making the true

positive agents alter their behavior. Including these costs would create a formulation that

is equivalent to the social welfare concept common in economics literature as the

principal may need to trade-off the misclassification cost with the disutility caused by

forcing the true positive agents to move. In that way, the principal would be able to

maximize his/her own utility while minimizing the disutility of positive agents. Also,









Theorem 1 is only applicable to separable datasets and an important contribution would

be to develop a similar theoretical result for non-separable datasets.

One of the most interesting angles for future research is to be able to remove the

assumption that both the principal and the agents know each other's parameters. In

practice this assumption rarely holds and it's usually the case that principal and agents

will try to somehow roughly predict each other's parameters. We provide several

formulations in Chapter 4 for cases where the agent's utility parameters are not known

with certainty. More work is needed in this area.

It is possible that the classifier developed when the agents do not collude may be

suboptimal when the agents cooperate and act seemingly in an irrational way. For

example, determining what happens in a scenario where agents collude and offer

incentives to other agents to make sub-optimal changes in their attributes to confuse the

principal would make this problem more realistic.

It is possible to reverse the Strategic Learning problem and use these ideas to create

a classifier (or a policy) that promotes certain actions (rather than avoid) or use these

ideas as a what-if tool to test the implications of certain policies. For example, a board of

directors could develop executive compensation policies to promote long term value

generation by anticipating how the CEOs can game the system to their advantage (which

usually causes short term gains at the cost of long term value).

All discussion so far has focused on using SVMs as a classifier even though

learning theory and rational expectations theory are independent of implementation

details. It would be very useful to determine the validity of these results independent of

implementation and introduce the Strategic Learning problem to the other classifiers like









decision trees, nearest neighbor, neural networks, etc. Essentially, Strategic Learning is a

general problem that will arise in any learning situation involving intelligent agents so it

should be applied to other learning algorithms.

Another key area of future research is the application of domain knowledge to

Strategic Learning. Kernel methods accomplish this by using a nonlinear, higher

dimensional mapping of attributes to features to make the classes linearly separable. It

may be possible to compute an appropriate kernel that can anticipate and cancel the

effects of strategic behavior. Such a possible kernel can be developed using agents'

utility functions and cost structures which are a form of domain specific knowledge.

The current research on Strategic Learning so far only addresses static situations.

However, it is possible that some exogenous factors like the environment or the

parameters being used are changing over time. For example, it might be possible that

over time some new attributes may be added to the data set or conversely some may

become obsolete. This type of a dynamic situation might need to be modeled in a way to

accommodate the possible changes to be able to determine classifiers that will adapt

efficiently.

There is yet another angle to approach the problem which is the game theoretical

point of view which has been partially addressed by Dalvi et al. (2004). However, further

investigation of this angle is an interesting and relevant task for future research in the

area.

An important area of research in Strategic Learning is to find better algorithmic

methods for solving the Strategic Learning problem. While mixed integer formulations

exist, solution methods currently do not scale-up like the non-strategic counterparts.














CHAPTER 3
LEARNING IN THE PRESENCE OF SELF-INTERESTED AGENTS1

In many situations a principal gathers a data sample containing positive and

negative examples of a concept to induce a classification rule using a machine learning

algorithm. Although learning algorithms differ from each other in various aspects, there

is one essential issue that is common to all: the assumption that there is no strategic

behavior inherent in the sample data generation process. In that respect, we ask the

question "What if the observed attributes will be deliberately modified by the acts of

some self-interested agents who will gain a preferred classification by engaging in such

behavior?" Hence, for such cases there is a need for anticipating this kind of strategic

behavior and incorporating it into the learning process. Classical learning approaches do

not consider the existence of such behavior. In this chapter we study the need for this

paradigm and outline related research issues.

Introduction

Machine learning research has made great progress in many areas and applications

over the past decade. Many machine learning algorithms have evolved to the point that

they can be used in typical commercial data mining tasks including credit approval

(Chapter 4), spam detection (Fawcett 2003), fraud detection (Fawcett and Provost 1997),

text categorization (Dumais et al. 1998), etc.




1 An earlier version of this chapter was published in the Proceedings of the 39th Annual Hawaii
International Conference on System Sciences (HICSS'06), Track 7, p. 158b.









One of the most common types of data mining task is supervised learning. Here

the learner acquires a training sample that consists of ℓ previously labeled cases and

applies a machine learning algorithm that uses the sample to choose a "best fit"

hypothesis from a designated hypothesis space. This chosen concept will be used to

classify (i.e., label) unseen cases later. We can think of an example as a vector of

attribute values (an instance) plus a class label. The set of all instances is called the

instance space.

Suppose there are n attributes. Then a vector of attributes is x ∈ R^n. For example,

in a credit approval application, attributes might be age, income, number of dependents,

etc. The label can be binary, nominal or real valued. For example, in the credit approval

problem, the labels might be "good" or "bad" or simply +1 or -1. Then an example

would be the pair (x, y) where x ∈ R^n and y ∈ {-1, 1}. An hypothesis space consists of

all the possible ways the learner wishes to consider representing the relationship between

the attributes and the label. For example, an often used hypothesis space is the set of

linear discriminant functions. A particular hypothesis would be (w, b) where w 9 ".

Then (w,b) : 9 ({-1,1} meaning if w'x + b > 0 then the label is +1, otherwise it is -1.

A learning algorithm selects a particular hypothesis (e.g., a particular(w, b)) that best

satisfies some induction principle. The "supervised" in supervised learning means that

the training sample contains correct labels for each instance.

There are a plethora of learning methods. Some of the most popular methods

include decision tree methods (Quinlan 1986), neural network methods (Rosenblatt

1958), Bayesian methods (Duda and Hart 1973), Support Vector Machines (SVMs)

(Cristianini and Shawe-Taylor 2000), etc. Although many of the algorithms differ from









each other in many aspects, there is one essential issue that is common to all: the implicit

assumption that there is no strategic behavior inherent in the sample data generation

process. For example, in supervised learning where a decision maker uses a training set

to infer an underlying concept, the training examples are taken "as is." In that respect,

we ask the question "What if the observed attributes will be deliberately modified by the

acts of some self-interested agents who will gain a preferred classification by engaging in

such behavior?"

If these agents know the true classification rule, they can easily discern how

changing their attributes could lead to a positive classification and, assuming the cost to

change these attributes is acceptable, then proceed to make the changes. If the

classification rule is not known by the agents, either the agents can attempt to discover

what the important attributes might be or, more likely, use common sense to alter obvious

attributes.

For example, poor credit risk individuals interested in obtaining credit might

proactively try to manipulate their attributes (either through deceit or not) so as to obtain

a positive rating (e.g., there are many websites that purport to help people change their

credit ratings).

A more extreme example is the terrorist who tries to appear "normal" so as to gain

access to potential target sites. Less extreme are spammers who continuously try to break

through screening rules by changing their e-mail messages and titles.

So there is a need for anticipating this kind of strategic behavior and incorporating

such considerations into the classical learning approaches. Currently none consider the

existence of such behavior.









In this chapter, we outline the need for this kind of a paradigm and present many

related emerging research issues. Many types of learning situations potentially fall in this

setting. In fact, whenever the instances represent adaptive agents, the risk of strategic

behavior is possible. We frame this type of learning in a principal-agent setting where

there are many rational agents who act as autonomous decision making units working to

maximize their individual utilities and a principal who needs to classify these agents as

belonging to one class or the other. For example, a credit card company (the principal)

decides which people (agents) get credit cards. An admission board at a university (the

principal) decides which applicants (agents) get admitted. An auditing group (principal)

tries to spot fraud or detect imminent bankruptcy in companies (agent). An anti-spam

package (the principal is the package creator) tries to correctly label and then screen spam

(agent created). In each of these examples, we assume that the agents can anticipate what

the resulting classifier is and modify their attributes accordingly. Knowing this the

principal will create a classifier to cancel the effects of such behavior. Such anticipation

is based on the assumption that each party acts rationally and each knows the other's

parameters.

We start with a discussion of related literature in the next section. Only two recent

papers directly address Strategic Learning. We also discuss a somewhat similar scenario

that arises in leader-follower learning. Later, we illustrate the ideas of Strategic Learning

and outline areas of possible research. We end with a summary in the last section.

Related Literature

Since the agents can learn, anticipate and react to the principal's classification

method by altering their behavior, the principal in turn needs to adapt his/her strategy in

order to be able to cancel the effects of agents' strategic efforts. This arms race between









the two parties (principal as opposed to agents) is the essential motivation of Strategic

Learning. However, as was pointed out in Dalvi et al. (2004) the key goal for the

principal is to identify the ultimate decision rule initially especially when this strategic

gaming could be pursued infinitely. That is, the principal rather than reacting to the

agents' actions in an after the fact fashion, needs to anticipate their strategic behavior and

identify an optimum strategy taking the agents' possible strategic behavior into account,

an approach which is explored deeply in Chapter 4.

We note that even in learning situations where an optimal induction can be

achieved, as we study in Chapter 4, the usual "arms race" approach may not discover this

rule. That is, if a principal gathers data, induces a rule and starts using this rule, strategic

agents will attempt to alter their attributes for positive classification. The instance space

is now different forcing the principal to gather a new data set and induce a new rule. In

Chapter 4 we give an example where this adaptive learning does not converge to an

optimal rule. Indeed, in our setting, such convergence may be rare.

Dalvi et al. (2004) argue that in many domains such as spam detection, intrusion

detection, fraud detection, surveillance and counter-terrorism, the data are being

manipulated actively by an adversary ("agent" in our terminology) seeking to make the

classifier ("principal" in our terminology) produce a false decision. They further argue

that in these domains, the performance of a classifier can degrade rapidly after it is

deployed, as the adversary learns to defeat it. We agree.

They view classification as a two period game between the classifier and the

adversary, and produce a classifier that is optimal given the adversary's optimal strategy.

They also show that a Nash equilibrium exists when the adversary incurs unit cost for









altering an observation. However, as they suggest, computation of a Nash equilibrium is

hard since the computation time is exponential in the number of attributes. The

experiments they do in the spam detection domain show that their approach can greatly

outperform a standard classifier that does not take into consideration any strategic

behavior by automatically adapting the classifier to the adversary's evolving

manipulations.

Referring to their example, in the domain of e-mail spam detection, standard

classifiers like naive Bayes were initially quite successful but spammers soon learned to

fool them by inserting "non-spam" words into e-mails or breaking up "spammy" ones with

spurious punctuation, etc. As the spam filters were modified to detect these tricks,

spammers started using new tricks (Fawcett 2003). Eventually spammers and filter

designers are engaged in a never-ending game of modification as filters continually

include new ways to detect spam and spammers continually invent new ways to avoid

detection.

This kind of gaming is not unique to the spam detection domain and is found in

many other domains such as computer intrusion detection, where the anti-virus programs

are continuously updated as new attacks are experienced; fraud detection, where

perpetrators change their tactics every time they get caught (Fawcett and Provost 1997);

web search, where search engines constantly revise their ranking functions in order to

cope with pages that are manipulated to achieve higher rankings; etc. As a result,

the performance of the principal's classifier can drop markedly when an adversarial environment is present.









In Chapter 4 we develop methods to determine linear discriminant classifiers in the

presence of strategic behavior by agents. We focus on a powerful induction method

known as support vector machines, and for separable data sets, we characterize an

optimal strategy for the principal that fully anticipates agent behavior. In our setting,

agents have linear utility and a fixed reservation cost for changing attributes. The

principal anticipates the optimal agent behavior for a given classifier (i.e., he uses rational

expectations) and chooses a classifier that can't be manipulated within the acceptable cost

ranges for the agents. In this setting there is no possibility for a "cat and mouse" type

scenario. No actions by the agents can alter the principal's classification rule.

Here, our first important result is that under specific conditions, an optimal linear

discriminant solution with strategic behavior is a shifted and scaled version of the

solution found by SVMs without strategic behavior. This result is striking since we

prove that it is optimal. So far, this is the only Strategic Learning result that incorporates

rational expectations theory into the classical learning approach and is proved to be

optimum. Hence, the main contribution of our work is that rational expectations theory

can be used to determine optimal classifiers to correctly classify instances when such

instances are strategic decision making agents. This provides a new way of looking at the

learning problem and thus opens up many research areas to investigation.

For non-separable data sets we give mixed integer programming formulations and

apply those to a credit-risk evaluation setting. Our results show that discriminant analysis

undertaken without taking into account the potential strategic behavior of agents could be

misleading and can lead to unwanted results.









Although Strategic Learning approaches have not been used before, learning

problems involving intelligent agents in a gaming situation have been investigated in

other settings. For example, in leader-follower learning systems which have a number of

applications such as monitoring and control of energy markets (Keyhani 2003), e-

business supply chain contracting and coordination (Fan et al. 2003), modeling public

policy formulation in pollution control, taxation etc. (Ehtamo et al. 2002), a leader

decides on and announces an incentive to induce the followers to act in a way that

maximizes the leader's utility, while the followers maximize their own utilities under the

announced incentive scheme. This is analogous in some sense to our setting since the

principal (the "leader" in their terminology) tries to identify (i.e., learn) and announce the

ultimate decision rule that would maximize her own objective while the agents'

("followers" in their terminology) seek to maximize their own utilities. In both cases, it

is possible to think of the situation as the principal (or leader) aiming to maximize some

kind of a social welfare function given the self interested actions of the agents.

Specifically, these kinds of decision situations are termed incentive Stackelberg

games (Von Stackelberg 1952) where the leader first determines an incentive function

and announces it and the followers, after observing the announced incentive, make their

own decisions. For example, in their work, Bhattacharyya et al. (2005) apply this kind of

a sequential approach to propose a reinforcement based learning algorithm for repeated

game leader-follower multi-agent systems. The key point here is the sequential nature of

the decisions made by the leader and the followers. The learning takes place

progressively as principal and agents interact with each other based on the principles of

reinforcement learning (Kaelbling et al. 1996) which is centered around the idea of trial-









and-error learning. Specifically, the leader announces his decision at specific points in

time based on the aggregate information gained from the earlier rounds, and the followers

make their decisions according to the incentive announced at that period.

In this scenario, the leader aims to learn an optimal incentive based on the

cumulative information from the earlier periods while the followers try to learn optimal

actions based on the announced incentive. Learning is achieved over successive rounds

of decisions with information being carried from one round to the next. This is where

existing research and the framework proposed in this paper differ from each other. The

ensuing analysis demonstrates that this sequential approach will often yield suboptimal

results, while the ultimate solution can only be found by anticipating agent behavior and incorporating this anticipation in the learning process rather than following an after-the-fact reactive

approach.

Illustration

To further illustrate the Strategic Learning framework discussed in this chapter, let

X ⊂ R^n be the instance space. X is partitioned into two sets, a set consisting of positive

examples consistent with some underlying but unknown concept and the remaining

negative examples. For example, in Messier and Hansen (1988) the underlying concept

is described by "firms that won't default or go bankrupt." In their study, the attributes

consist of values such as the firm's ratio of retained earnings to total tangible assets, etc.

A training set is formed by randomly drawing a sample of instances of size ℓ from X,

and then determining the true label (-1 or 1) of each such instance. From this sample, one

uses a machine learning algorithm to infer a hypothesis from these examples. The choice

of a sample size that is large enough to control generalization error is of key importance.









Under this setting, the principal's desired goal is to determine a classification

function f (i.e., an hypothesis) to select individuals (for example, select college

applicants likely to succeed) or to spot negative cases (such as which firms are likely to

file for bankruptcy or commit fraud, etc.). However, the agents acting strategically may

take actions in a way to improve their situation in hopes of achieving positive

classification (e.g., admission to university). This may lead to an effectively altered

instance space. After f is discovered by the principal, agents may take the classification

rule into account even if f has not been announced. For example, agents can infer their

own rules about how the principal is making decisions under f and spot attributes that are

likely to help produce positive classifications. Most classical learning methods will

operate using a sample from X but this space may be altered by subsequent agent actions

(call it X_f). This change needs to be anticipated. One powerful way of doing this is by

incorporating rational expectations theory to the classical learning theory. Figure 2-1 of

Chapter 2 outlines this new proposed framework.

Figure 2-1 shows that when strategic behavior is applied to the sample space X, it causes X to change into X_f; in parallel, applying rational expectations theory to classical learning theory transforms it into what we suggest as Strategic Learning. Notice that while classical learning theory operates on X, Strategic Learning operates on the anticipated space X_f.

Learning theory assumes that the principal (learner) has a loss function that usually

measures the error associated with the classification task. In general, the principal has to

worry about two types of errors: the empirical error and the generalization error. The

empirical error is easily estimated by using the sample. However, the generalization









error, a measure of how well the function will predict unseen examples, depends on what

functional form is assumed for the target function (the hypothesis), the size of the training

set and the algorithm used to discover it. Once the principal has chosen a representation

for the function (a language for representing an hypothesis such as a decision tree, neural

network, or linear discriminant function, etc.), learning theory dictates that the

generalization error is bounded assuming that the true function can be represented using

this language. For example, linear discriminant analysis assumes that the true function is

linear.

The success of each algorithm operating on this language depends on how well it

uses the information presented by the sample and how well it trades off the estimate of

the generalization error with the empirical error. For example, decision tree algorithms

achieve this balance by first minimizing the empirical error and then intelligently pruning

leaves until it reaches some balance between the two types of errors. A very successful

linear discriminant technique called the Support Vector Machines (Cristianini and

Shawe-Taylor 2000), achieves this balance by trading off the margin of separation

between two classes with respect to the separating hyperplane with a measure of

empirical error. It is a well known result that the maximum margin hyperplane, which is

the hyperplane far from both classes, minimizes the generalization error (Vapnik 1998).

It is however not clear whether these results carry over when the input space is altered by

strategic positioning of the agents.

To appreciate the impact of strategic behavior in classification, consider the setting

in Chapter 4. We show that "negative" agents try to achieve a positive labeling by

changing their true attributes if the cost of doing so doesn't exceed their reservation cost.









However, the principal, anticipating such strategic behavior, shifts and scales the

classification function (a maximum margin hyperplane) such that no true negative agent

will benefit from engaging in such behavior. Thus, in practice, the negative agents have

no incentive to change their true attributes and, hence, do not exert any effort. However,

this shift may leave some marginally positive agents in danger of being classified as

negative. Since they too anticipate that the principal will alter the classification function

so as to cancel the effects of the expected strategic behavior of the negative labeled

agents, they will undertake changes. Thus, the ones who are "penalized" for engaging in

a strategic behavior are not the negative agents but rather the marginal positive agents.

Moreover, the resulting classifier has a greater margin and better generalization capability

(Vapnik 1998) compared to the SVM learning results when we assume X is static.

Hence, normal SVM methods can never discover an optimal strategic classifier, even

under repeated applications. Such observations have implications not only for induction

of classifiers but also for tasks such as policy setting. Figure 2-2 of Chapter 2 illustrates

the resulting classifier under strategic behavior. Notice that the margin of the resulting

hyperplane is wider and is in fact a scaled and a shifted version of the hyperplane in

Figure 2-2 (a) and thus differs from normal SVM results without strategic behavior.

Figure 2-2 (a) shows the sample space without strategic behavior and the

corresponding SVM classifier (the continuous line). Given these conditions some

negative agents can engage in strategic behavior as indicated by arrows. Figure 2-2 (b)

shows the sample space and the resulting optimum classifier given the strategic behavior

of agents. Notice that the marginally positive agents are the ones that are forced to move

shown by the arrows while the negative agents prefer not to alter their attributes since









they no longer have any incentive to do so; it costs them too much. Finally, in Figure

2-2 (c), the new classifier with wider margins on each side and the resulting altered

instances due to strategic behavior is pictured.

The fact that most of the learning takes place in dynamic and changing

environments where there is interaction between agents and the principal or among

agents themselves leads us to question the fundamentals of the supervised learning

algorithms which mainly operate on fixed but unknown distributions. This static

approach to supervised learning, that does not take into account any possible strategic

activity in data generation, does not seem to be realistic given the vast array of examples

where active manipulation of attributes by actions of agents is realized. Although

learning algorithms exist that consider actions of agents (Kaelbling et al. 1996), their main

concern is to research how agents learn from such interactions so, in that sense, the

research framework proposed here differs from those. Here the main concern is not to

learn from the mistakes but to prevent mistakes from happening and to improve learning

by anticipating behavior that would otherwise cause faulty decisions.

Research Areas

We believe there is an important need to explore various aspects of this new

paradigm. In the following we identify some issues that have not been explored yet.

An obvious area of future research is to generalize and extend the methods shown

in Chapter 4 and Dalvi et al. (2004) to the other classifiers like decision trees, nearest

neighbor, neural networks, etc. Essentially, the problem can be extended appropriately

and applied to many other learning algorithms, since taking strategic behavior into account is beneficial and ignoring it can be misleading, as current research in the area suggests.









One interesting research angle would be to approach the problem from a game

theoretical point of view and to be able to answer questions like "What is the best

(optimum) strategy for the principal given the agents' strategic behavior?" Are there any

conditions other than those identified by Dalvi et al. (2004) under which these kinds of

problems have Nash equilibria? If so, what form do they take, and are there cases where

they can be computed efficiently? Under what conditions do repeated games converge to

these equilibria?

Since strategic behavior alters the input space it is not clear to what extent the

results on learning bounds (Cristianini and Shawe-Taylor 2000, Vapnik 1998) from

statistical learning theory apply even when the classifier anticipates this behavior. The

results in Chapter 4 suggest that the results may not carry over as is.

One issue of great importance that has not been explored yet is the application of

domain knowledge. Each learning technique incorporates domain knowledge differently.

So-called kernel methods (such as those used with SVMs) use a nonlinear, higher

dimensional mapping of attributes to features to make the classes linearly separable. It

can be shown that such a task can be carried-out by kernel functions (hence the name)

and the kernel itself can be seen as a similarity measure that is meaningful in that domain.

It may be possible to anticipate and cancel the effects of strategic behavior by applying

an appropriate kernel mapping. It may be possible to develop such a kernel using agents'

utility functions and cost structures since that knowledge is one form of domain specific

knowledge.

Existing research in Strategic Learning (i.e., Chapter 4 and Dalvi et al. (2004))

assumes that all agents have the same cost of changing an attribute. Also, it assumes that









agents have linear utilities. Relaxing these assumptions would be interesting for future

research.

Additional areas of future research include the following. It is possible to apply the

proposed ideas to cooperative multi-agent systems. For example, if agents can collude

and offer side payments to other agents to make sub-optimal moves to confuse and thwart

the principal, can this be anticipated in the induction process? Are there conditions under

which collusion by the agents will beat a classifier that does not explicitly consider it?

Human beings are often quite good at the level of adaptation (either as a principal

or agent) in which we are interested. Good sports players will carefully watch how their

opponents play and change strategy based on their opponents' actions. Inspired by such

human behavior, we would like to apply Strategic Learning to competitive multi-agent

settings where multiple principals/agents interact and try to learn while competing.

Research so far focuses on static situations. If the instance space changes over time

(due to some exogenous factors), is it possible to dynamically model user behavior and

determine classifiers that will adapt efficiently? For example, allowing new features to

be added as time progresses could be a good way to model such a dynamic and

interactive environment. The goal would be to make this adaptation as easy and

productive as possible.

Current research assumes that principal and agent know each others' parameters.

In other words, the principal is well informed about the costs and the problem that the

agent faces. When the principal and the agent do not know each other's parameters, how

would that affect the optimal strategies and what would be the additional learning needs?

This is the issue on which economists focus their analysis of the "principal agent"









problem where they consider cases in which the principal is less omniscient (Laffont and

Martimort 2002).

If the computation of an optimal strategy is too expensive in some cases, would

approximate solutions and weaker notions of optimality become sufficient in real-world

scenarios? It would be valuable to derive approximately optimal solutions to the

Strategic Learning problem.

This framework can also be used to encourage real change rather than preventing

negative behavior. This might have applications in public policy problems.

Summary

In this chapter we outline the need for machine learning methods that anticipate

strategic actions by the agents over which a principal is inducing a classification rule.

For example, credit approval, fraud detection, college admission, bankruptcy detection,

spam detection, etc. are all cases involving strategic agents who might try to achieve

positive classifications by altering their attributes.

We reviewed two recent studies providing initial results on this problem. Dalvi et al.

(2004) use a two stage model to produce a superior principal classifier. In our study

(Chapter 4), we go a step further. Ideally, using rational expectations theory, one might be

able to fully anticipate agent actions and incorporate this in a machine learning induction

process to determine dominant classifiers as done in Chapter 4.

After outlining and illustrating the new paradigm, we discussed many potential

research areas and questions.














CHAPTER 4
DISCRIMINATION WITH STRATEGIC BEHAVIOR

Introduction and Preliminaries

Strategic Learning

We study the problem where a decision maker needs to discover a classification

rule to classify intelligent agents. Agents may engage in strategic behavior and try to

alter their characteristics for a favorable classification. We show how the decision maker

can induce a classification rule that fully anticipates such behavior. We call this

"learning in the presence of self-interested agents" or simply "Strategic Learning."

Suppose X ⊂ R^n contains vectors whose n observable components represent

values of attributes for rational agents that will be classified by a decision maker (for

example, as good credit risks or bad ones). X is partitioned into two sets, a set consisting

of cases consistent with some underlying but unknown concept (called the positive

examples of the concept) and the remaining cases (a set of negative examples of the

concept). For example, in Messier and Hansen (1988) the underlying concept is

described by "firms that won't default or go bankrupt" and attributes consists of values

such as the firm's ratio of retained earnings to total tangible assets, the firm's ratio of

total earnings to total tangible assets, etc. A desired goal is to sample X, determine the

true label of each example and infer the concept from these examples. This is a typical

task in data mining, machine learning, pattern recognition, etc.

In such situations, the concept of interest is assumed fixed but unknown. The

observable relevant attributes of interest are assumed given for our problem. During the









inference process, a decision maker observes instances drawn randomly with replacement

from X each having a label -1 or 1 denoting whether it is a negative or positive example

of the concept. These labels are not normally observable, but during the inference

process we assume that such labels are available, perhaps through extensive study or

from past outcomes. The collection of these examples forms the training "set"

S = ((x_1, y_1), ..., (x_ℓ, y_ℓ)) of observations where y_i ∈ {-1, 1} identifies the label. We

assume there are at least two elements of S having opposite labels. (Strictly speaking, S

is not a set since there may be duplicate entries).

The decision maker uses the training set to determine an instance of a

representation of the concept that we embody in a function f : X → R where f(x) > 0 if x ∈ X belongs to the positive class and f(x) < 0 if it belongs to the negative class. In

the language of learning theory (Vapnik 1998) we are performing "supervised learning"

when we infer f from a sample S, each example of which has a known label. The

decision maker must choose a general form for f (e.g., a decision tree, a linear function, a

neural network, etc.). Depending on the representation chosen for the target concept, one

may use inference methods that produce the desired output. For example, for

representations such as neural networks (Tam and Kiang 1992), decision trees (Quinlan

1986), discriminant functions (Hand 1981), support vector machines (Cristianini and

Shawe-Taylor 2000), etc. many methods have been developed to determine an actual f

given a sample S. The representation choice sets the induction bias and the methodology

choice determines the quality of the final f found.

It is often the case that a decision maker, heretofore the principal, determines

functions such as f to select individuals (as for college entrance, credit approval, etc.) or









to spot early signs of problems in companies (such as bankruptcy, fraud, etc.). We will

speak collectively of these yet to be classified individuals or companies as "agents"

where agent i is represented by his/her true vector of attributes x_i and true label y_i.

Although the exact nature of f might be unknown to these agents, it is typically the

case that the direction of change in an attribute that increases the likelihood of a positive

classification is known. For example, it is generally believed that better grades positively

influence admission to most universities. Hence, taking actions to improve one's grades

will help in achieving a positive labeling by a principal in charge of admission decisions.

Hence, it is reasonable to anticipate that such agents who are subject to

classification by a principal using f might attempt to alter superficially their true attribute

values so as to achieve a positive classification under f when they may actually belong to

the negative class or be only marginally positive. This is not to say that agents need to lie

or engage in deceit, although these behaviors could certainly be used to alter the true

attribute values. Instead, they might proactively try to change their attribute values prior

to their classification by a decision maker. For example, in a college entrance decision,

one attribute often used to discriminate is the level of participation in extracurricular

activities. An individual could discern this and make an effort to join clubs, etc. merely

to increase his/her chance of a positive entrance decision. There are even websites that

purport to help one increase his/her scores (i.e., the value that f gives for them). For

example, http://www.consumercreditbuilder.com advertised they have ways one can

learn how to raise his/her credit scores, even as high as 30% or better.

http://www.testprep.com claims "Increases of 100 points on the SAT and 3 to 4 on the

ACT have been common, with higher scores often achieved."









http://www.brightonedge.org/ even offers a college admissions camp aimed at increasing

the chances of being admitted to college.

We explore this potential strategic gaming and develop inference methods to

determine f realizing that strategic behavior may be important in using f. That is, we

anticipate strategic behavior in the induction process leading to f.

Suppose the principal can assess the costs to an agent needed to change an attribute

value. Some attributes may have a very high cost to change and some a relatively low

cost. For example, a potential college applicant might note that the cost of participating

in extracurricular activities might be relatively low compared to studying harder to

change a grade point average. The latter in the short run may be impossible to alter. The

costs of getting caught with lying or deceit might be very high, for example, in fraud

cases. Let c_i > 0 be the vector of costs to agent i for changing a true vector x_i ∈ R^n to x_i + D d_i for d_i ≥ 0 (the diagonal matrix D, with diagonal components of +1 and -1, merely orients the moves to reflect directions where increases lead to better scores under f). The cost of such a change to the agent is then c_i' d_i. We assume that the reservation cost of being labeled a positive example is r_i for agent i. We further assume a rational agent will engage in strategic behavior if

    r_i ≥ min_{d_i ≥ 0} c_i' d_i   s.t.  f(x_i + D d_i) ≥ 0.

Thus, we envision a situation where the original instance space, X, is possibly

perturbed after f is discovered by the principal, even if f is kept secret. Most induction

methods will operate using a sample from X. However, we contend that strategic









behavior will result in a change X → X_f from which future instances will be sampled. This needs to be anticipated.

In this chapter we develop methods to determine linear discriminant classifiers in

the presence of strategic behavior by agents. We focus on a powerful induction method

known as support vector machines. For separable data sets, we characterize (Theorem 1)

an optimal principal induction method that anticipates agent behavior. For non-separable

data sets (i.e., data sets where no linear separator exists that can separate the negative

from positive examples) we give a general approach. Then, we apply these approaches to

a credit-risk evaluation setting. Later, we extend the general approach to a stochastic

version of the problem where agent parameters such as c_i and r_i are not known with

certainty by the principal. We conclude with a discussion and possible future research in

the last section.

Related Literature

Around the time of our first draft of this study, Dalvi et al. (2004) described a

scenario where an adversary alters the data of the negative class subject to known costs

and utility. They formulate the problem as a game with two players, one named

Adversary and the other Classifier, in which the Adversary tries to alter true negative

points to mislead the Classifier into classifying them as positive. In our terminology, the

Classifier is our principal. Their Adversary is able to control the agent attributes. The

authors show that if the Adversary incurs a unit cost for altering an observation then there

exists a Nash equilibrium solution for this game. Since, finding a Nash equilibrium is

prohibitive in the general case they focus on a one-step game in which they assume that

the Classifier publishes a classifier (C_0) before the game and the Adversary modifies the









sample points to fool C_0 but, knowing that the Adversary will engage in such behavior, the Classifier actually uses a new classifier C_1. The authors focus on the Naive Bayes

Classifier (NBC) and show that updating NBC based on the expectation that the

Adversary will try to alter the observations yields better results. They provide an

algorithm and an application to the spam filtering problem where spammers (agents)

quickly adapt their e-mail tactics (they alter their attribute vector) to circumvent spam

filters.

While our general task is somewhat similar to that studied in Dalvi et al. (2004), we

outline some major differences between our approaches. In general, we focus on a case

where agent attributes belonging to either class can be altered. Dalvi et al. (2004) only

consider negative cases. For example, if a marginal positive credit applicant would be

labeled as a negative instance by the principal, the agent may use some of the techniques

at http://www.consumercreditbuilder.com to increase the chance of a positive labeling.

This is a major difference. As we show in Theorem 1 only marginal positive cases must

change attributes. Secondly, we use support vector machines as our learning algorithm

since it has many nice theoretical properties relating to induction risk and optimization

which we discuss below. The linear classifier induced by support vector machines is also

very easy to interpret as a scoring function, unlike the Naive Bayes Classifier. Finally,

our formulation is a version of the basic principal/agent problem formulation which

inherently represents a two player game with infinite steps. Dalvi et al. only consider a

one step game. Since we assume that our observations are actual agents engaging in

strategic behavior, our formulation inherently models a multi-agent game in which one

principal, many negative agents and many positive agents may modify their behavior.









With the exception of the Dalvi et al. (2004) and our earlier drafts, Strategic

Learning approaches have not been used before. However, learning problems involving

intelligent agents in a gaming situation have been investigated in other settings. For

example, in leader-follower systems a leader (i.e., our principal) decides on and

announces an incentive to induce followers (i.e., our agents) to act in a way that

maximizes the leader's utility, while the followers maximize their own utilities under the

announced incentive scheme. This is analogous in some sense to our setting since the

leader tries to identify and announce the ultimate decision rule that would maximize

his/her own objective while the followers seek to maximize their own utilities. In both

cases, this can be viewed as the leader trying to maximize some kind of a social welfare

function given the self interested actions of the followers.

These kinds of decision are termed incentive Stackelberg games (Von Stackelberg

1952) where the leader first determines an incentive function and announces it and the

followers, after observing the announced incentive, make their own decisions. For

example, Bhattacharyya et al. (2005) apply this kind of a sequential approach to propose

a reinforcement based learning algorithm for repeated game leader-follower multi-agent

systems. A key point here is the sequential nature of the decisions. Learning takes place

progressively as principal and agents interact with each other based on the principles of

reinforcement learning (Sutton and Barto 1998) which uses the idea of trial-and-error

learning.

In this scenario, the leader tries to learn an optimal incentive based on the

cumulative information from the earlier periods while the followers try to learn optimal

actions based on the announced incentive. Learning is achieved over successive rounds









with information being carried from one round to the next. This differs from our method

in the sense that this sequential approach will often yield suboptimal results while the

ultimate solution can only be found by anticipating and incorporating this anticipation in

the learning process itself rather than following an after the fact reactive approach.

One other line of research that is closely related to our problem is utility-based data

mining (Provost 2005). Due to recent growing demand for solving economical problems

that arise during the data mining process, there has been an interest among researchers to

explore the notion of economic utility and its maximization for data mining. So far the

focus has been on objectives like predictive accuracy or minimization of misclassification

costs assuming that training data sets were freely available. However, over time, it may

become costly to acquire and maintain data causing economical problems in data mining.

Utility-based data mining trades off these acquisition costs with predictive accuracy to

maximize the overall utility of the principal. While utility-based data mining is

concerned with the principal's utility, Strategic Learning additionally considers the

possibility that the objects of classification are self-interested, utility maximizing,

intelligent decision making units.

Linear Discriminant Functions

Among the many possible classification methods, linear discriminant functions are

the most widely used since they are simple to apply, easy to interpret and provide good

results for a wide range of problems (Hand 1981). In this study we restrict the class of

functions, f, over which the principal searches, to linear discriminant functions (LDFs).

As we will show, this is not restrictive since kernel mappings can be applied for non-

linear domains. Many methods exist for finding linear discriminant functions. However,

a powerful methodology, support vector machines, has been developed over the last 10









years that builds on statistical learning theory ideas. We discuss the importance of this

below. A brief review of LDFs is provided first.

Linear Discriminant Methods

Linear discriminant analysis for binary classification is usually performed by first

determining a non-zero vector w ∈ R^n and a scalar b such that the hyperplane w'x + b = 0 partitions the n-dimensional Euclidean space into two half-spaces. Then, an observed vector x_i is assigned to the positive class if it satisfies w'x_i + b ≥ 0. Otherwise it is assigned to the negative class. That is, (w, b): R^n → {-1, +1} where +1 denotes the positive class (points giving w'x_i + b ≥ 0) and -1 denotes the negative class.
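As a concrete illustration of this decision rule (our own sketch, not code from the dissertation), the following applies a hypothetical (w, b) to a hypothetical point with numpy; the specific numbers are made up.

import numpy as np

def classify(w, b, x):
    # LDF decision rule: assign +1 if w'x + b >= 0, otherwise -1.
    return 1 if float(np.dot(w, x)) + b >= 0 else -1

w = np.array([-0.7, 0.5])   # hypothetical weight vector
b = -0.3                    # hypothetical intercept
print(classify(w, b, np.array([2.0, 6.0])))   # prints 1 for this hypothetical point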

Fisher (1936) was the first to introduce linear discriminant analysis seeking a linear

combination of variables that maximizes the distance between the means of the two

classes while minimizing the variance within each class. He developed methods for

finding w and b from a training set already classified by a supervisor. The results of

Fisher were followed by many other approaches to determine (w,b) which mainly differ

on the criterion used to make the decision of choice between a number of candidate

functions. For instance, some of the statistical approaches focused on making different

assumptions about the underlying distribution. For example, logit analysis (Cooley and

Lohnes 1971) which is a type of regression, uses a dummy dependent variable which can

only have the values 1 or 0 and uses maximum likelihood methods to estimate w

and b. Bayesian methods (Duda and Hart 1973) on the other hand seek the optimum

decision rule that minimizes the probability of error.

In determining LDFs, numerous mathematical programming methods have been

studied. These distribution free methods were first introduced in (Mangasarian 1965) and









then more actively explored in the 1980's (Freed and Glover 1981a, 1981b, 1982, 1986a,

1986b, Glover 1990, Koehler and Erenguc 1990b). In general these methods attempt to

find w and b that optimize directly (or some proxy for) the number of misclassifications.

For instance, Freed and Glover (1981) maximized the minimum deviation of any data

point from the separating hyperplane. Also, Freed and Glover (1986) focus only on the

observations that ended on the wrong side of the hyperplane. They determined a (w,b)

that minimized the maximum exterior deviation. Most of these methods exhibited

undesirable properties (Markowski and Markowski 1985, Koehler 1989a, 1989b).

Different approaches like non-linear (Stam and Joachimsthaler 1989) and mixed integer

programming (MIP) (Koehler and Erenguc 1990a) have also been studied. Moreover,

models that combine objectives such as minimizing the cost of misclassification along

with the number of misclassifications have also been developed (Bajgier and Hill 1982).

Heuristic optimization has also been used. For example, Koehler (1991) studied Genetic

Algorithms to determine linear discriminant functions.

Often mathematical programs that directly minimize the number of

misclassifications resort to some type of combinatorial search method and are

computationally expensive and usually impractical. An important exception was given

by Mangasarian (1994) who gave three non-linear formulations for solving the

misclassification problem using bi-linear constraints to "count" the number of

misclassifications. Good experimental results have been observed (Bennett and

Bredensteiner 1997).

However, a problem with these and other learning approaches is that they focused on and depended only on the training data set, so the induced hypothesis correctly classified the training data but often made poor predictions on unseen

data. That is, they typically would over-fit during the induction process at the later

expense of generalization. This is primarily due to the fact that these approaches were

based on procedures that directly minimize the classification error (or a proxy for

classification error) over a training data set.

Statistical learning theory (Vapnik 1998, 1999) attempts to overcome this problem

by trading-off this potential over-fitting in training with generalization ability. This fact

makes the theory a powerful tool for theoretical analysis and also a strong basis for

practical algorithms for estimating multidimensional discriminant functions. We build

our Strategic Learning model along these lines. In the following section, we provide a

brief overview of these statistical learning theory results which form the basis for support

vector machines. We follow this with a brief discussion on support vector machines.

Statistical Learning Theory

Statistical learning theory (Vapnik 1998, 1999) provides a solid mathematical

framework for studying some common pitfalls in machine learning such as over-fitting.

Assuming that x is an instance generated by sampling randomly and independently from

an unknown but fixed probability distribution, the learning problem then consists of

minimizing a risk functional represented by the expected loss over the entire set of

instances. Since the sampling distribution is unknown the expected loss cannot be

evaluated directly and some induction principle must be used. However, a training set of

instances is available.

Many approaches use the empirical risk minimization principle which infers a

function using the training set by minimizing the empirical risk (which is usually

measured as the number of misclassifications in the training data set). The empirical risk









minimization principle often leads to over-fitting of data. That is, it often discovers a

function that nicely discriminates on the training set but cannot do better than chance (or

even worse) on as yet unseen points outside the training set. This has been observed in

many studies. For example, Eisenbeis (1987) critiques studies based on such over-fitting.

Statistical learning theory approaches this problem by using the structural risk

minimization principle (Vapnik 1999, Scholkopf and Smola 2001). It has been shown

that, given S, for any target function, with probability at least 1 − η the risk functional can be bounded by the sum of the empirical risk and a term largely capturing what is called the structural risk (see Vapnik (1999) for details). The structural risk is a function of the number of training points, ℓ, the target confidence level, η, and the capacity, h, of the target function. The capacity, h, measures the expressiveness of the target class of functions. In particular, for binary classification, h is the maximal number of points (k) that can be separated into two classes in all possible 2^k ways using functions in the target class of functions. This measure is called the VC-dimension, and the size of the training set is required to be proportional to this quantity to ensure good generalization.
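For reference, a commonly quoted form of this bound (Vapnik 1998) is given below in LaTeX notation; the exact constants vary across presentations, so this should be read as the standard textbook statement rather than the dissertation's own formula. Here R(f) is the expected risk and R_emp(f) the empirical risk on the ℓ training points.

R(f) \le R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}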

For linear discriminant functions, without additional assumptions, the VC-dimension is h = n + 1 (Cristianini and Shawe-Taylor 2000, Vapnik and Chervonenkis 1981). A common assumption added for certain learning situations is that x ∈ X ⊂ R^n implies ||x|| ≤ R. This is called the boundedness assumption. A class of linear discriminant functions of the form y(x)(w'x + b) ≥ Δ with ||w|| = 1 is termed Δ-margin LDFs. Under the boundedness assumption, the VC-dimension, h, for Δ-margin LDFs, is bounded above by 1 + min(n, ⌈R^2 / Δ^2⌉) and may be much smaller than n + 1. Support









Vector Machines determine an LDF by directly minimizing the theoretical bound on the

risk functional. The bound is improved by decreasing the VC-dimension, so that for

Δ-margin LDFs one can focus on minimizing R^2 / Δ^2. The following section contains

a brief review of this methodology.

Support Vector Machines

Support Vector Machines (SVMs) offer a powerful method for discovering linear

discriminant functions by directly minimizing a theoretical bound on the risk functional.

Having its primary motivation of minimizing a bound on the generalization error

distinguishes SVM approaches from other popular methods such as neural networks

which use heuristic methods to find parameters that best generalize. In addition, SVM

learning is theoretically guaranteed to find an optimum concept (since the induction

problem reduces to a quadratic, convex minimization problem) which marks a distinction

between this system and most other learning methods. Neural networks, decision trees,

etc. do not carry this guarantee often leading to local minima and an accompanying

plethora of heuristic approaches to find acceptable results. For example, motivated by

Ockham's razor, most decision tree induction and pruning algorithms try to

create the smallest tree that produces an acceptable training error in the hopes that smaller

trees generalize better (Quinlan 1996). Unfortunately, there is no guarantee that the tree

produced by such heuristics minimize generalization error. SVM algorithms also scale-

up to very large data sets and have been applied to problems involving text data, pictures,

etc.









There are several SVM models. The first model is the so-called maximal margin

classifier model. When the training set is linearly separable, this model determines linear

discriminant functions by solving

    min_{w,b}  w'w
    s.t.  y_i (w'x_i + b) ≥ 1,   i = 1, ..., ℓ

As discussed below, this formulation produces a maximal margin hyperplane with a geometric margin equal to Δ = 1/||w||_2. (Note, we show the 2-norm being used here. Other norms can also be used as we discuss later.)

In general, for many real world problems the sample space may not be linearly

separable. When the data are not separable the SVM problem can be formulated with the

introduction of margin slack variables as follows:


    min_{w,b,ξ}  w'w + C Σ_i ξ_i
    s.t.  y_i (w'x_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0,   i = 1, ..., ℓ

where ξ_i is the margin slack variable measuring the shortfall of a point in its margin from

the hyperplane and C is a positive parameter. C is chosen to trade-off between margin

maximization and training error minimization. This formulation is termed the soft-

margin SVM (Cristianini and Shawe-Taylor 2000).
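As a rough illustration of these two formulations (not code from this dissertation), scikit-learn's linear SVM can be used: a very large C approximates the separable (hard-margin) model, while a moderate C gives the soft-margin trade-off. The small two-dimensional data set below is hypothetical.

import numpy as np
from sklearn.svm import SVC

# Hypothetical, linearly separable two-dimensional training set.
X = np.array([[2.0, 6.0], [5.0, 6.5], [-1.0, 1.5],
              [4.0, 1.0], [2.0, 0.5], [6.0, 2.0]])
y = np.array([1, 1, 1, -1, -1, -1])

hard = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C: near hard-margin model
soft = SVC(kernel="linear", C=1.0).fit(X, y)   # finite C: margin vs. slack trade-off

print(hard.coef_[0], hard.intercept_[0])       # (w, b) of the (near) maximal margin LDF
print(soft.coef_[0], soft.intercept_[0])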

In the first model, where the data are separable, the objective function minimizes

the square of the norm of the weight vector, w. It can be shown that this is equivalent to

maximizing the geometric margin when the functional margin of the hyperplane is fixed

to 1 (Cristianini and Shawe-Taylor 2000.) as follows. For some x+ and x the (linearly

separable) SVM is guaranteed to find an optimal weight vector which will satisfy









    w'x+ + b = 1   and   w'x- + b = -1

so the margin is then half the distance between x+ and x-, which is

    Δ = (w'x+ − w'x-) / (2||w||) = 1 / ||w||.

Thus, minimizing w'w is the same as maximizing the margin which, in turn, minimizes the ⌈R^2 / Δ^2⌉ term bounding the VC-dimension. The non-separable case

trades-off the margin with the margin shortfall. By minimizing a quadratic objective

function with linear inequality constraints, SVMs manage to escape the problem of local

optima faced by other learning methods since the problem becomes a convex

minimization problem having a global optimal solution.

It is also possible to employ kernel mappings in conjunction with SVMs to learn

non-linear functions. A kernel is an implicit mapping of input data onto a potentially

higher dimensional feature space. The higher dimension feature space improves the

computational power of the learning machine by implicitly allowing combinations and

functions of the original input variables. These combinations and functions of original

input variables are usually called features while the original input variables are called

attributes. For example, consider some financial attributes like total debt and total assets.

If a properly chosen kernel is used, it will allow a debt ratio (total debt to total assets) to

be examined as well as the total debt and total assets and, potentially, has more

informational power than total debt and total assets variables used alone. Thus, a kernel

allows many different relationships between variables to be simultaneously examined.

SVM finds a linear discriminant function in the feature space, which is usually then









nonlinear in the attribute space. It sometimes happens that a kernel mapping can map

data not linearly separable in the attribute space to a linearly separable feature space.
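To make the feature-mapping idea concrete, the sketch below uses an explicit (rather than implicit, kernel-based) map on hypothetical financial attributes, appending the debt ratio as an extra feature before fitting a linear SVM; the data and the specific map are ours, not the dissertation's.

import numpy as np
from sklearn.svm import SVC

def feature_map(x):
    # x = (total_debt, total_assets); append the debt ratio as an extra feature.
    debt, assets = x
    return np.array([debt, assets, debt / assets])

# Hypothetical firms: (total_debt, total_assets); label 1 = healthy, -1 = distressed.
raw = np.array([[20.0, 100.0], [35.0, 80.0], [10.0, 50.0],
                [90.0, 100.0], [60.0, 70.0], [45.0, 50.0]])
labels = np.array([1, 1, 1, -1, -1, -1])

features = np.array([feature_map(x) for x in raw])
model = SVC(kernel="linear").fit(features, labels)   # linear in the feature space,
                                                     # nonlinear in the raw attributes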

There are many classes of generic kernels (Genton 2001) but also kernels are

developed for specific application areas such as text recognition. For the purposes of this

dissertation, we assume a kernel has already been applied to the initial data so that we

may treat these features as primitive attributes. We thus focus our research on finding

LDFs under strategic behavior leaving the related complications introduced by kernels to

future research. In the next section we characterize the induction process for finding an

optimal SVM solution to the strategic LDF problem.

Learning while Anticipating Strategic Behavior: The Base Case

In this section we start by introducing the agent's "strategic move problem" that

shows how a rational agent will alter his/her true attributes if he/she knew the principal's

classification LDF. We then turn to the simplest version of the Strategic Learning

problem and derive a complete characterization for the principal's strategic LDF. The

base problem is generalized in subsequent sections.

The Agent Problem

If the principal's classification rule (classify as positive when w'x + b ≥ 1) were known to the rational

agents, they would solve what we call the strategic move problem to determine how to

achieve (or maintain) positive classification under the principal's LDF at minimal cost to

themselves. The problem is

    min_{d_i ≥ 0}  c_i' d_i
    s.t.  w'[x_i + D(w) d_i] + b ≥ 1

where D(w) is a diagonal matrix defined by

    D(w)_jj = +1 if w_j > 0,  and  -1 if w_j < 0.


If feasible, this problem determines a minimal cost change of attributes, D(w)d_i, needed to be classified as a positive case. This would be undertaken if this cost doesn't exceed the agent's reservation cost, r_i. Since this optimization problem has only one constraint (other than non-negativity constraints), the following can be determined. For non-zero w, let j* satisfy

    j* = argmin_j  (c_i)_j max(0, 1 − (b + w'x_i)) / |w_j|,

then for

    z_i*(w, b) = max(0, 1 − (b + w'x_i)) / |w_j*|

we have

    d_i*(w, b) = z_i*(w, b) e_j*   if (c_i)_j* z_i*(w, b) ≤ r_i,  and 0 otherwise,

where e_j* denotes the j*-th unit vector. z_i*(w, b) can be interpreted as the amount of modification that the agent needs to make on the j*-th attribute to achieve positive classification with respect to the (w, b) that the principal chooses. For w equal to zero or infeasible strategic move problems, set d_i*(w, b) = 0. An infeasible move occurs when (c_i)_j* z_i*(w, b) > r_i. Notice


that if the ratio (c_i)_j / |w_j| is the same for different values of j, the agent problem has

alternate optimal solutions. That is, the agent will be indifferent between multiple j*









values corresponding to moving in different optimal directions (or convex combinations

thereof).
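A minimal computational sketch of this closed-form agent response, under the notation reconstructed above (the function name and the numpy implementation are ours, not the dissertation's):

import numpy as np

def agent_move(w, b, x, c, r):
    # Optimal attribute change D(w) d* for agent x facing classifier (w, b),
    # with per-attribute change costs c > 0 and reservation cost r.
    w, x, c = (np.asarray(a, dtype=float) for a in (w, x, c))
    shortfall = max(0.0, 1.0 - (b + w @ x))      # distance from the w'x + b >= 1 threshold
    if shortfall == 0.0 or not np.any(w):
        return np.zeros_like(x)                  # already positive, or no useful direction
    ratios = np.where(w != 0, c / np.maximum(np.abs(w), 1e-12), np.inf)
    j = int(np.argmin(ratios))                   # cheapest attribute per unit of score gain
    z = shortfall / abs(w[j])                    # required change z_i*(w, b) on attribute j*
    if c[j] * z > r:                             # cost exceeds reservation cost: no move
        return np.zeros_like(x)
    move = np.zeros_like(x)
    move[j] = np.sign(w[j]) * z                  # D(w) orients the change along sign(w_j*)
    return move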

Notice that it might be possible for some attributes to be correlated with each other.

For instance, some of the variables of the agent's optimization problem can be linearly

dependent. This can be formulated by expressing dependent variables in terms of

independent variables. For example, the agent's problem would be

    min_{d_i ≥ 0}  c_i' d_i
    s.t.  w'[x_i + D(w) A d_i] + b ≥ 1

where A d_i captures the linear relationships. Solving gives an analogous closed form for d_i*(w, b): the minimal cost move if its cost does not exceed r_i, and 0 otherwise.

Consequently, this affects d_i*(w, b) such that it might be a combination of

movements in different directions since changing one attribute may cause others to

change. This merely complicates the presentation but does not substantively affect the

results. For this reason, we assume linear independence between the attributes for

simplicity.

The Base Case

We start by studying the simplest version of the principal's Strategic Learning

problem. We assume:

* all agents have the same reservation and change costs (i.e., r_i = r and c_i = c).
* S = ((x_1, y_1), ..., (x_ℓ, y_ℓ)) is linearly separable.









These assumptions are removed in the next section. Using SVM while ignoring

strategic behavior would be accomplished by solving

    P1:  min_{w,b}  w'w
         s.t.  y_i {w'x_i + b} ≥ 1,   i = 1, ..., ℓ

Under Strategic Learning the principal anticipates any possible agent actions and

solves the following.

    P2:  min_{w,b}  w'w
         s.t.  y_i {w'[x_i + D(w) d_i*(w, b)] + b} ≥ 1,   i = 1, ..., ℓ

This problem is no longer a nice convex optimization problem with linear

constraints. The constraints are not even piecewise linear convex and/or concave (as we

show in the next section).

Nonetheless, the following result characterizes an optimal solution to the

principal's problems under strategic behavior by agents. For ease of presentation the

proof is in Appendix A.

Theorem 1

(w*, b*) solves P1 if and only if ( (2/(2 + t*)) w*, (2b* − t*)/(2 + t*) ) solves P2, where t* is given by t* = r max_j ( |w*_j| / c_j ).


Theorem 1 states that a principal anticipating strategic behavior of agents all having the same utilities and cost structures will use a classifier that is parallel to the SVM-LDF (w*, b*) determined without taking into consideration strategic behavior. This function is a scaled (by 2/(2 + t*)) and shifted form of the original, so the objective of P2 is strictly








smaller than P1's, meaning that the margin is greater and that the probability of better

generalization is greater. The scaling and shift of the hyperplane depend on the cost

structure for altering attribute values, the reservation cost for being labeled as a positive

case, and (w*, b*). For example, suppose we have a two-dimensional training set with three positive cases (y_i = 1), x_1, x_2, x_3, and three negative cases (y_i = -1), x_4, x_5, x_6, and that c = (1, 2)' and r = 3. Solving P1 gives w* = (-0.7272, 0.5454)' and b* = -0.272727. Theorem 1 shows that the optimal LDF under strategic behavior is w = (-0.347837, 0.26087)' and b = -0.65217. Figure 4-1 shows this, the margins and the moved points.
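The Theorem 1 transformation can be sketched in a few lines (our own code, not the dissertation's); with w* = (-0.7272, 0.5454)', b* = -0.272727, c = (1, 2)' and r = 3 it approximately reproduces the strategic (w, b) reported above.

import numpy as np

def strategic_classifier(w_star, b_star, c, r):
    # Theorem 1 (as reconstructed above): scale and shift the non-strategic SVM solution.
    w_star, c = np.asarray(w_star, dtype=float), np.asarray(c, dtype=float)
    t = r * np.max(np.abs(w_star) / c)        # t* = r * max_j |w*_j| / c_j
    w = (2.0 / (2.0 + t)) * w_star            # scaled weight vector
    b = (2.0 * b_star - t) / (2.0 + t)        # shifted intercept
    return w, b

w, b = strategic_classifier([-0.7272, 0.5454], -0.272727, [1.0, 2.0], 3.0)
print(w, b)   # approximately (-0.3478, 0.2609) and -0.6522, as in the example above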











Figure 4-1. Theorem 1. (a) A normal SVM LDF would result in two negative points
changing an attribute enough to be classified as positive. (b) Theorem 1
shifts the LDF causing two positive points to move to stay classified as
positive. (c) The two negative points gain nothing by moving so would not
do so (thus figuratively returning to their original positions). (d) The final
LDF with wider margins reflects these anticipated steps.

In our setting, the true "negative" agents try to achieve a positive labeling by

changing their true attributes if the cost of doing so doesn't exceed their reservation cost.

However, an astute principal, anticipating such strategic behavior, shifts the hyperplane

such that no true negative agent will benefit from engaging in such behavior. Thus, in

practice, the negative agents have no incentive to change their true attributes and, hence,

will not exert any effort. However, the agents who are marginally positive are now in

danger of being classified as negative. Since they too anticipate that the principal will

alter the discriminant function so as to cancel the effects of the expected strategic

behavior of the negative labeled agents, they will undertake changes. Thus, roughly

speaking, the ones who are "penalized" for engaging in a strategic behavior are not the

negative agents but rather the marginal positive agents. So, the final discriminant

function leads to a bigger gap between the two classes of points and produces a

separation. Figure 4-1 illustrates these points.

In fact, Theorem 1 shows that the principal is pushing marginal positive agents to

alter their attributes in part to gain better generalization results. The new margin need not









be this large merely to keep negative agents from altering their attributes to gain a

positive classification. So the principal is left with a trade-off between forcing marginal

positive agents to make large changes and possibly increasing the generalization error of

the induced LDF. More specifically, the new margin is

    ||w*||^{-1} (1 + t*/2).

Any margin greater than t*/2 will label negative strategic agents as negative. So we can choose parallel hyperplanes giving geometric margins between t*/2 and ||w*||^{-1}(1 + t*/2).


Theorem 1, giving an optimal solution to P2, yields the largest margin. A principal might

elect to choose a smaller margin to spare marginal positive cases the extra effort needed

to still be labeled positive.

A separate line of thought might argue for the maximal margin used in Theorem 1.

Since positive agents are unaware of the locations of negative agents, they may act to

move maximally to ensure their positive labeling.

What if Theorem 1 (or P2) isn't used but rather iterated forms of P1? Interestingly, if a normal, non-strategic SVM were applied to the new instance space, X_f (i.e., resulting from X → X_f), it may not produce the same result with the altered versions of the original

sample as P2. Suppose that instead of using the shifted classifier that incorporates the

effects of strategic behavior, the principal chooses to use one that is found by the normal

SVM algorithm and updates the classifier every round after obtaining observed attributes

(now reflecting strategic behavior). For example, solving P1 for the sample formed by









the above points yields $w^* = (0.7272,\ 0.5454)'$ and $b^* = -0.272727$. Realizing that the

principal will use the hyperplane (w*, b*) in the first round, agents will adjust their

attribute vectors accordingly. Calculating $d_i^*(w^*,b^*)$ for these points we get

$$d_1^*(w^*,b^*) = d_2^*(w^*,b^*) = 0,\quad d_3^*(w^*,b^*) = 0,$$
$$d_4^*(w^*,b^*) = (\,\cdot\,),\quad d_5^*(w^*,b^*) = (\,\cdot\,)\ \text{and}\ d_6^*(w^*,b^*) = 0.$$

As a result of strategic behavior, the principal will observe the following perturbed sample after the first round:

$$x_1 = (2,\ \cdot\,)',\quad x_2 = (5,\ \cdot\,)',\quad x_3 = (-1,\ \cdot\,)',\quad x_4 = (1.2498,\ \cdot\,)',\quad x_5 = (2,\ \cdot\,)'\ \text{and}\ x_6 = (6,\ \cdot\,)'.$$

Note that $x_1$ and $x_5$ are now the same point, so this set is not separable. In the second

round the principal will adjust the hyperplane using the normal SVM solution with the

perturbed sample observed after the first round. However, note that the sample observed

after the first round is no longer linearly separable, so the principal will use the soft margin SVM (say with $C_{+1} = 1$ and $C_{-1} = 5$, reflecting the fact that there is a greater penalty for mislabeling a negative case). Solving for $(w^*, b^*)$ for the second round yields

$$w^* = (-0.47,\ 0.8)'\quad \text{and}\quad b^* = -4.2.$$

Notice that $c_j / |w_j^*|$ is the same for $j = 1, 2$, so the agent problem has alternate optima. That is, an agent can choose to alter either of the two attributes since both give the same objective value $\min c'd = 5$. For simplicity, we assume that all agents will choose


to move in the smallest indexed attribute in case of alternate optima.
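
To make the agent's side of this interaction concrete, the following is a minimal sketch (ours, not code from this study) of an agent's best response to an announced hyperplane (w, b) under the assumptions just stated: a common cost vector c, reservation cost r, a target functional margin of 1, and the smallest-index tie-break. The function name and the NumPy dependency are our own choices.

import numpy as np

def best_response(x, w, b, c, r):
    """Signed attribute change d achieving w'(x + d) + b >= 1 at minimum cost,
    or the zero vector if no change within the reservation cost r exists."""
    x, w, c = (np.asarray(v, dtype=float) for v in (x, w, c))
    d = np.zeros_like(x)
    gap = 1.0 - b - float(w @ x)              # functional distance to the positive margin
    if gap <= 0:                              # already labeled positive with full margin
        return d
    # closing the gap using attribute j alone costs c_j * gap / |w_j|
    unit_cost = np.where(w != 0, c / np.maximum(np.abs(w), 1e-12), np.inf)
    j = int(np.argmin(unit_cost))             # ties resolve to the smallest index
    if unit_cost[j] * gap > r:                # gaming would exceed the reservation cost
        return d
    d[j] = gap / w[j]                         # move just enough along attribute j
    return d

Applied to each negative agent with the first-round hyperplane of the example, this reproduces the kind of moves depicted in Figure 4-1(a).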









The agents who engaged in strategic behavior in the first round incurred some cost, causing a reduction in their reservation values. If we calculate the residual reservation cost left for each agent after the first round, we get $r_1 = 3$, $r_2 = 3$, $r_3 = 3$, $r_4 = 0.2498$, $r_5 = 0$ and $r_6 = 3$.


Calculating $d_i^*(w^*,b^*)$ for these points for the second round, we get

$$d_1^*(w^*,b^*) = 0,\quad d_2^*(w^*,b^*) = 0,\quad d_3^*(w^*,b^*) = 0,$$
$$d_4^*(w^*,b^*) = 0,\quad d_5^*(w^*,b^*) = 0\ \text{and}\ d_6^*(w^*,b^*) = 0.$$

Thus, the sample after the second round stays the same and solving for $(w^*, b^*)$ for the second round yields the same hyperplane. None of the points in the sample satisfy the constraint $c_{j^*}\, z_i(w^*,b^*) \le r_i$, so the strategic behavior ends in the second round.

Figure 4-2 displays the hyperplane that solves P1 and the final hyperplane of the

iterative approach. The optimal hyperplane of Theorem 1 is superior to the resulting

hyperplane of the iterative approach in the sense that it prevents the movements of all

negative instances while the hyperplane of the iterative approach still can't prevent

misclassifications from occurring as a result of strategic behavior.

Figure 4-2. Multi-round non-strategic SVM. (a) shows the movement of negative points
after the SVM LDF is announced. (b) shows a new SVM based on these
moved points.
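
The multi-round, non-strategic procedure just illustrated can be sketched as follows (our sketch; scikit-learn's SVC with class weights is assumed as a stand-in for repeatedly solving P1, and best_response is the agent sketch given earlier). Each round the principal re-fits a soft-margin linear SVM on the attributes observed after agents respond, and residual reservation costs are depleted as agents spend them.

import numpy as np
from sklearn.svm import SVC

def iterate_nonstrategic(X, y, c, r0, C_pos=1.0, C_neg=5.0, rounds=10):
    """Repeat: fit a soft-margin linear SVM, let every agent best-respond, stop when no one moves."""
    X = np.asarray(X, dtype=float).copy()
    c = np.asarray(c, dtype=float)
    r = np.full(len(X), float(r0))                      # residual reservation cost per agent
    for _ in range(rounds):
        clf = SVC(kernel="linear", C=1.0,
                  class_weight={1: C_pos, -1: C_neg}).fit(X, y)
        w, b = clf.coef_.ravel(), float(clf.intercept_[0])
        moved = False
        for i in range(len(X)):
            d = best_response(X[i], w, b, c, r[i])      # agent sketch from above
            if np.any(d != 0):
                r[i] -= float(np.sum(c * np.abs(d)))    # spend reservation on the move
                X[i] = X[i] + d
                moved = True
        if not moved:                                    # no agent gamed this round
            break
    return w, b, X

Note that, as in the example, this iteration can stabilize at a hyperplane that still admits gaming, whereas the solution of Theorem 1 rules such gaming out in advance.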









Remark: Theorem 1 is similar to Theorem 2.1 in Dalvi et al. (2004). Theorem 1

shows that if the true class membership of each point were known before the game started, it would be possible to create a classifier that has the same performance when strategic behavior

(i.e., gaming) is present. Moreover, Theorem 1 characterizes the optimal strategy of each

type of agent as well as the principal; something that has been omitted in Dalvi et al.

(2004). It also shows that under rational expectations the positive instances also have to

change their behavior. In essence, they are the only ones that end up modifying their

behavior.

Learning while Anticipating Strategic Behavior: The General Case

In this section we assume that agents may have their own reservation and change costs and that the data set may not be separable. Refreshing the notation of the Strategic Learning setting, each agent $i$ has a true vector of attributes $x_i$, a true label $y_i$, reservation cost $r_i$ and a vector of costs for modifying attributes $c_i$. Reservation cost can be viewed as the maximum effort that an agent is willing to exert in order to be classified as a positive agent. On the principal's side, $C_{y_i}$ is the penalty associated with the margin shortfall of an agent of true type $y_i$. (Other schemes can be used to price the margin shortfall of sub-categories of these cases, but we forego the extra notational burden to show this.) Toward that end, in the absence of Strategic Learning, we would solve the soft-margin form of SVM. The straightforward modification of the soft-margin SVM to handle Strategic Learning is as follows.

First, let $q_i(w,b)$ be the amount of bias that agent $i$ can introduce into the principal's classification function $w'x_i + b$ by engaging in strategic behavior. As discussed in Section 3, a principal may want to trade off some confidence in the generalization error of the induced LDF for lowered effort needed by positive agents to stay positive. Later we argue that a reasonable way for the principal to implement this is to penalize agent effort by adding $\Lambda \sum_{i:\,y_i=+1} q_i(w,b)$, where $0 < \Lambda < C_{+1}$, to the soft-margin objective. With this, the general model for Strategic Learning is:


P3: $\displaystyle \min_{w,b,\xi}\ w'w + \sum_{i=1}^{\ell} C_{y_i}\,\xi_i + \Lambda \sum_{i:\,y_i=+1} q_i(w,b)$
    s.t. $y_i\{w'x_i + q_i(w,b) + b\} \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,\ell$

where

$$q_i(w,b) = \begin{cases} 0 & \text{if } 1 - b - w'x_i < 0 \\ 0 & \text{if } 1 - b - w'x_i > z_i \\ 1 - b - w'x_i & \text{otherwise} \end{cases}$$

and

$$z_i = r_i \max_j \left\{ \frac{|w_j|}{(c_i)_j} \right\}$$

for $c_i > 0$ with at least one $j$ satisfying $\infty > (c_i)_j > 0$.
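
A small sketch (ours) of how $z_i$ and $q_i(w,b)$ above could be evaluated for one agent; attribute costs set to NumPy's inf stand for attributes that cannot be changed, matching the requirement that at least one $(c_i)_j$ be finite.

import numpy as np

def z_value(w, c, r):
    """z_i = r_i * max_j |w_j| / (c_i)_j, the largest bias the agent can create."""
    w, c = np.asarray(w, dtype=float), np.asarray(c, dtype=float)
    return float(r * np.max(np.abs(w) / c))

def q_value(w, b, x, c, r):
    """q_i(w,b): the margin shortfall 1 - b - w'x when it is nonnegative and within reach."""
    shortfall = 1.0 - b - float(np.dot(w, x))
    if shortfall < 0 or shortfall > z_value(w, c, r):
        return 0.0
    return shortfall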


In the next section, we study this problem and show it is not mathematically well-

posed but may be modified to produce an epsilon-optimal formulation. Following that

section, we develop a mixed integer quadratic program and a mixed integer linear

program (for the 1-norm counterpart) for solving P3.









Properties of P3

Let

$$f(w,b) = m(\|w\|) + \sum_{i=1}^{\ell} C_{y_i}\,\xi_i(w,b) + \Lambda \sum_{i:\,y_i=+1} q_i(w,b)$$

where

$$\xi_i(w,b) = \max\left(0,\ 1 - y_i\left[w'x_i + q_i(w,b) + b\right]\right)$$

and $\Lambda \ge 0$, $C_{y_i} > 0$, with $\|w\|$ being a norm of $w$ and $m(\cdot)$ an increasing function with $m(0) = 0$. We are interested in minimizing $f(w,b)$, which will be formulated as a mixed integer program when $m(\|w\|) = \sum_j |w_j|$ and a quadratic mixed integer program when $m(\|w\|) = w'w$. The following table illustrates the cases that result depending on an agent's functional distance to the positive margin ($1 - b - w'x_i$).

Table 4-1. Possible cases depending on z_i

Case   y_i   1 - b - w'x_i      q_i(w,b)          ξ_i(w,b)
1      -1    < 0                0                 1 + b + w'x_i
2      -1    ∈ [0, z_i]         1 - b - w'x_i     2
3a     -1    > z_i and < 2      0                 1 + b + w'x_i
3b     -1    > z_i and ≥ 2      0                 0
4      +1    < 0                0                 0
5      +1    ∈ [0, z_i]         1 - b - w'x_i     0
6      +1    > z_i              0                 1 - b - w'x_i








Now, we define

$$f(b \mid w) = m(\|w\|) + \sum_{i=1}^{\ell} C_{y_i}\,\xi_i(b \mid w) + \Lambda \sum_{i:\,y_i=+1} q_i(b \mid w)$$

which is the total cost function $f(w,b)$ when $w$ is fixed. Let $b^*(w)$ be an optimal solution to $\min_b f(b \mid w)$. There are four cases involved in determining an optimal $b^*(w)$. The following figures illustrate these cases for the following two points:

Positive case ($y_i = +1$): $x_p = (\,\cdot\,,\ \cdot\,)'$
Negative case ($y_i = -1$): $x_n = (\,\cdot\,,\ \cdot\,)'$

for $w = (0.6,\ 0.6)'$, $c = (1,\ 2)'$, $C_{+1} = 1$ and $C_{-1} = 5$.


Figure 4-3 shows how the costs $C_{+1}\max\left(0,\ 1 - b - q_i(w,b) - w'x_i\right) + \Lambda\,q_i(w,b)$ vary with $b$ for a positive agent. Figure 4-4 shows the same when $\Lambda = 0$. Figure 4-5 shows how the costs $C_{-1}\max\left(0,\ 1 + b + q_i(w,b) + w'x_i\right)$ vary for a negative agent. Figure 4-6 shows this when the agent does not have a high enough reservation to cover all positions within the margin. Notice it has a similar (though reversed) graph as a positive agent with $\Lambda > 0$.
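
These per-agent costs can be traced as functions of b with the following sketch (ours, reusing q_value from above); evaluating them on a grid of b values reproduces the piecewise-linear shapes, and the jumps at region boundaries, shown in Figures 4-3 through 4-6. The specific x, w, c and r values are whatever the illustration assumes.

import numpy as np

def positive_cost(b, x, w, c, r, C_pos, lam):
    """C_{+1} * max(0, 1 - b - q - w'x) + Lambda * q for a positive agent."""
    q = q_value(w, b, x, c, r)
    return C_pos * max(0.0, 1.0 - b - q - float(np.dot(w, x))) + lam * q

def negative_cost(b, x, w, c, r, C_neg):
    """C_{-1} * max(0, 1 + b + q + w'x) for a negative agent that games when it can."""
    q = q_value(w, b, x, c, r)
    return C_neg * max(0.0, 1.0 + b + q + float(np.dot(w, x)))

# e.g., a Figure 4-3 style curve: [positive_cost(b, x_p, w, c, r=1.0, C_pos=1.0, lam=0.5)
#                                  for b in np.linspace(-3, 3, 601)]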
























Figure 4-3. Positive case with $r_i = 1$ and $\Lambda = 0.5$.



Figure 4-4. Positive case with $r_i = 1$ and $\Lambda = 0$.



Figure 4-5. Negative case with $r_i = 6$.




Figure 4-6. Negative case with $r_i = 1$.


Table 4-2 gives the regions shown in these cases.








Table 4-2. Different regions of costs (regions of b, listed in increasing order of b)

Positive cases, Λ = 0:    (-∞, 1-z_i-w'x_i),  [1-z_i-w'x_i, ∞)
Positive cases, Λ > 0:    (-∞, 1-z_i-w'x_i),  [1-z_i-w'x_i, 1-w'x_i),  [1-w'x_i, ∞)
Negative cases, z_i < 2:  (-∞, -1-w'x_i],  (-1-w'x_i, 1-z_i-w'x_i),  [1-z_i-w'x_i, 1-w'x_i),  [1-w'x_i, ∞)
Negative cases, z_i ≥ 2:  (-∞, 1-z_i-w'x_i),  [1-z_i-w'x_i, 1-w'x_i),  [1-w'x_i, ∞)

As a typical graph of $f(b \mid w)$ where all negative and positive points are combined, we see graphs like Figure 4-7, which was produced using the following six points.

Positive cases ($y_i = +1$): $x_1 = (\,\cdot\,,\ \cdot\,)'$, $x_2 = (\,\cdot\,,\ \cdot\,)'$ and $x_3 = (\,\cdot\,,\ \cdot\,)'$

Negative cases ($y_i = -1$): $x_4 = (\,\cdot\,,\ \cdot\,)'$, $x_5 = (\,\cdot\,,\ \cdot\,)'$ and $x_6 = (\,\cdot\,,\ \cdot\,)'$

Figure 4-7. A typical graph of $f(b \mid w)$ for $w = (0.6,\ 0.6)'$, $c = (1,\ 2)'$, $C_{+1} = 1$, $C_{-1} = 5$, $r_i = \,\cdot\,$ and $\Lambda = 0.5$.





Clearly $f(b \mid w)$ does not possess any form of convexity (such as quasiconvexity). Furthermore, there are points of discontinuity where the function is upper semi-continuous. When trying to find a minimizing point for $f(b \mid w)$, these points of discontinuity pose a problem when $\limsup_{y \to b} f(y \mid w) < f(b \mid w)$, which is caused by negative agents. Figure 4-8 shows three cases that may arise with linear upper semi-continuous functions. Here $b = 2$ is a point of discontinuity and all three cases have an open interval adjoining it to the left. In the first case, $b \in [1,2)$ is actually an optimal point (because there are multiple optima and any point in $[1,2)$ is optimal). The remaining two cases have no optimal point in $[1,2)$, so $\min_b f(b \mid w)$ is not a well-posed problem.



Figure 4-8. Possible cases for points of discontinuity of $f(b \mid w)$.


In the latter two cases we consider the point $b - \epsilon$ as optimal (assuming it has a lower cost than at $b$) and call it an epsilon-optimal point of the neighborhood. We denote this problem as

$$\epsilon\text{-}\min_b\ f(b \mid w)$$

and a solution as $b_\epsilon^*(w)$. In the next section we reformulate P3 to produce epsilon-optimal solutions.
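
Because $f(b \mid w)$ is piecewise linear in $b$, with possible jumps only at the region boundaries of Table 4-2, an epsilon-optimal $b$ for a fixed $w$ can be found by brute force over those breakpoints and points just to their left. The sketch below is ours (common $c$ and $r$ across agents assumed, as in the base case) and reuses z_value, positive_cost and negative_cost from the earlier sketches.

import numpy as np

def eps_min_b(X, y, w, c, r, C_pos, C_neg, lam, eps=1e-6):
    """Approximate the epsilon-min of f(b|w): evaluate f at each breakpoint and at breakpoint - eps."""
    w = np.asarray(w, dtype=float)
    z = z_value(w, c, r)
    candidates = []
    for x in np.asarray(X, dtype=float):
        wx = float(np.dot(w, x))
        candidates += [1 - z - wx, 1 - wx, -1 - wx]       # boundaries listed in Table 4-2
    candidates = sorted(set(candidates) | {v - eps for v in candidates})

    def f(b):  # m(||w||) is constant in b, so it is omitted here
        return sum(positive_cost(b, x, w, c, r, C_pos, lam) if label == 1
                   else negative_cost(b, x, w, c, r, C_neg)
                   for x, label in zip(np.asarray(X, dtype=float), y))

    best = min(candidates, key=f)
    return best, f(best)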









Strategic Learning Model

In this section we develop mixed integer models of the (epsilon) Strategic Learning model. To evaluate

$$z_i = r_i \max_j \left\{ \frac{|w_j|}{(c_i)_j} \right\}$$

we introduce binary variables, $H_{ij}$, to get

$$\frac{r_i\,|w_j|}{(c_i)_j} \le z_i \le \frac{r_i\,|w_j|}{(c_i)_j} + M H_{ij}, \quad j = 1,\dots,n$$
$$\sum_j H_{ij} = n - 1.$$

Here, for each $i$, one $H_{ij}$ must be zero, so $z_i \le r_i |w_j| / (c_i)_j$ for only one $j$. With this, the lower bound forces $z_i$ to be the maximal value. These constraints need only appear for the $K$ different cost vectors ($c_i = v_k$). That is, although there are $\ell$ agents, there may be only $K$ unique cost vectors (Theorem 1 assumes all have the same cost vector, so $K = 1$).

Let $q_i$ be a decision variable representing $w'D(w)\,d_i^*(w,b)$. The cost of moving, to a positive agent, is $(c_i)_{j^*}\, q_i / |w_{j^*}|$, where

$$j^* = \arg\max_j \left\{ \frac{|w_j|}{(c_i)_j} \right\}.$$

However, adding

$$\Lambda \sum_{i:\,y_i=+1} \frac{(c_i)_{j^*}\, q_i}{|w_{j^*}|}$$

to the SVM objective for a positive $\Lambda$ would yield a non-positive definite objective. So, instead, as discussed earlier, we use a proxy for the agent effort of

$$\Lambda \sum_{i:\,y_i=+1} q_i$$

where $0 < \Lambda < C_{+1}$.

To evaluate absolute values of components of $w$, the usual trick of finding absolute values by

$$\min_{s_j \ge 0} \sum_j s_j \quad \text{s.t.} \quad -s_j \le w_j \le s_j,\ \text{other constraints}$$

won't work because our objective function has terms whose values will be impacted by minimizing the sum of the $s$ variables. Hence we introduce another vector of binary variables, $IJ$, and the following constraints to handle absolute values:

$$w = w^+ - w^-$$
$$M\,IJ_j \ge w_j^+ \ge 0, \quad j = 1,\dots,n$$
$$M - M\,IJ_j \ge w_j^- \ge 0, \quad j = 1,\dots,n.$$


We need to determine the $q_i$ variables. No agent will exert effort to adjust their

attributes if the effort does not yield a positive classification. Conversely, if exerting

effort (not exceeding the reservation limit) will result in a positive classification, then an

agent who would otherwise be classified as negative will exert the effort. Consider the

case of a negative agent. We replace









$$y_i\left(w'\left[x_i + D(w)\,d_i^*(w,b)\right] + b\right) \ge 1 - \xi_i$$

with the following:

$$y_i\{w'x_i + b\} \ge 1 - \xi_i$$
$$\xi_i \ge 2 V_i$$
$$1 - w'x_i - b + M V_i \ge z_i + \epsilon$$

where $V_i \in \{0,1\}$ and where $M > 0$ is sufficiently large and $\epsilon > 0$ sufficiently small.

Table 4-3 includes the full implications of these constraints together with the objective function that minimizes $\xi_i$ when it is otherwise unconstrained from above. Notice that $q_i$ is not explicitly needed for the negative cases. The last case has $V_i = 0$ even though the constraints also allow $V_i = 1$, because the minimization process will force $\xi_i$ (and hence $V_i$) to zero.

Table 4-3. Negative cases.

1 - w'x_i - b    V_i    ξ_i
< 0              1      1 - y_i{w'x_i + b}  (> 2)
∈ [0, z_i]       1      2
> z_i            0      0 for z_i ≥ 2;  > 0 possible for z_i < 2


Consider the case of a positive agent. Here we replace

$$y_i\left(w'\left[x_i + D(w)\,d_i^*(w,b)\right] + b\right) \ge 1 - \xi_i$$

with the constraints

$$y_i\{w'x_i + b + q_i\} \ge 1 - \xi_i$$
$$M - M V_i \ge \xi_i$$
$$M V_i \ge q_i \ge 0$$
$$z_i \ge q_i$$










where $V_i \in \{0,1\}$ and $M > 0$ is sufficiently large. Table 4-4 includes the full implications of these constraints together with the objective function that minimizes over $q_i$ when possible.

Table 4-4. Positive cases.

1 - w'x_i - b    V_i      q_i              ξ_i
< 0              0 or 1   0                0
∈ [0, z_i]       1        1 - w'x_i - b    0
> z_i            0        0                1 - w'x_i - b


Collecting the above gives:

P4: $\displaystyle \min_{w,b}\ w'w + \sum_{i=1}^{\ell} C_{y_i}\,\xi_i + \Lambda \sum_{i:\,y_i=+1} q_i$

s.t.

effort ($y_i = +1$):
  $w'x_i + b + q_i \ge 1 - \xi_i$,  $i = 1,\dots,\ell$
  $M - M V_i \ge \xi_i$,  $i = 1,\dots,\ell$
  $M V_i \ge q_i \ge 0$,  $i = 1,\dots,\ell$
  $z_i \ge q_i$,  $i = 1,\dots,\ell$

effort ($y_i = -1$):
  $-w'x_i - b \ge 1 - \xi_i$,  $i = 1,\dots,\ell$
  $\xi_i \ge 2 V_i$,  $i = 1,\dots,\ell$
  $1 - w'x_i - b + M V_i \ge z_i + \epsilon$,  $i = 1,\dots,\ell$

max adjustment:
  $\dfrac{r_i\,(w_j^+ + w_j^-)}{(c_i)_j} \le z_i \le \dfrac{r_i\,(w_j^+ + w_j^-)}{(c_i)_j} + M H_{ij}$,  $j = 1,\dots,n$, $i = 1,\dots,\ell$
  $\sum_j H_{ij} = n - 1$,  $i = 1,\dots,\ell$

absolute value:
  $w = w^+ - w^-$
  $M\,IJ_j \ge w_j^+ \ge 0$,  $j = 1,\dots,n$
  $M - M\,IJ_j \ge w_j^- \ge 0$,  $j = 1,\dots,n$

integrality:
  $IJ_j,\ H_{ij},\ V_i \in \{0,1\}$

We note that the SVM model typically uses a 2-norm measure (our w'w in the

objective) but a 1-norm alternative is equally acceptable and many researchers focus on it

(e.g., see Fung and Mangasarian (2002)). In such a case, the objective would be



$$\sum_{j=1}^{n}\left(w_j^+ + w_j^-\right) + \sum_{i=1}^{\ell} C_{y_i}\,\xi_i + \Lambda \sum_{i:\,y_i=+1} q_i,$$

making the 1-norm version of P4 a Mixed Integer Linear program. This is fortuitous

since the 1-norm problem is much easier to solve. In the next section, we use P4 to solve

a strategic version of a credit-risk evaluation problem. We then look at stochastic

versions of P4 where the principal may not know certain agent parameters.
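
As a concrete illustration of P4, here is a minimal sketch of its 1-norm version using the open-source PuLP modeling library with the bundled CBC solver; this is not the CPLEX implementation used for the experiments below. The six data points, the cost vector, the reservation value and the big-M constant are made up for illustration, and K = 1 (a single cost vector) so one z variable and one set of H_j binaries suffice.

import pulp

# toy data (made up): rows are true attribute vectors, labels are +1 / -1
X = [[2.0, 2.0], [5.0, 3.0], [1.0, 4.0], [1.0, 1.0], [2.0, 0.5], [6.0, 0.0]]
y = [+1, +1, +1, -1, -1, -1]
n, ell = 2, len(X)
c = [2.0, 5.0]             # common cost vector (K = 1)
r = 3.0                    # common reservation cost
C = {1: 4.0, -1: 5.0}      # misclassification penalties C_{+1}, C_{-1}
Lam, eps, M = 1.0, 1e-4, 100.0

m = pulp.LpProblem("P4_1norm", pulp.LpMinimize)
wp = [pulp.LpVariable(f"wp{j}", lowBound=0) for j in range(n)]          # w_j^+
wm = [pulp.LpVariable(f"wm{j}", lowBound=0) for j in range(n)]          # w_j^-
b  = pulp.LpVariable("b")
xi = [pulp.LpVariable(f"xi{i}", lowBound=0) for i in range(ell)]        # margin shortfalls
q  = {i: pulp.LpVariable(f"q{i}", lowBound=0) for i in range(ell) if y[i] == 1}
z  = pulp.LpVariable("z", lowBound=0)                                   # one z since K = 1
V  = [pulp.LpVariable(f"V{i}", cat="Binary") for i in range(ell)]
H  = [pulp.LpVariable(f"H{j}", cat="Binary") for j in range(n)]
IJ = [pulp.LpVariable(f"IJ{j}", cat="Binary") for j in range(n)]

def wx(i):                                   # w'x_i with w = w^+ - w^-
    return pulp.lpSum((wp[j] - wm[j]) * X[i][j] for j in range(n))

# objective: 1-norm of w + misclassification cost + Lambda * positive-agent effort proxy
m += pulp.lpSum(wp) + pulp.lpSum(wm) \
     + pulp.lpSum(C[y[i]] * xi[i] for i in range(ell)) \
     + Lam * pulp.lpSum(q.values())

for i in range(ell):
    if y[i] == 1:                            # effort constraints, positive agents
        m += wx(i) + b + q[i] >= 1 - xi[i]
        m += xi[i] <= M * (1 - V[i])
        m += q[i] <= M * V[i]
        m += q[i] <= z
    else:                                    # effort constraints, negative agents
        m += -wx(i) - b >= 1 - xi[i]
        m += xi[i] >= 2 * V[i]
        m += 1 - wx(i) - b + M * V[i] >= z + eps

for j in range(n):                           # max adjustment: z = max_j r * |w_j| / c_j
    m += z >= (r / c[j]) * (wp[j] + wm[j])
    m += z <= (r / c[j]) * (wp[j] + wm[j]) + M * H[j]
m += pulp.lpSum(H) == n - 1

for j in range(n):                           # at most one of w_j^+, w_j^- can be positive
    m += wp[j] <= M * IJ[j]
    m += wm[j] <= M * (1 - IJ[j])

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("w =", [wp[j].value() - wm[j].value() for j in range(n)], " b =", b.value())

The 2-norm version only changes the first objective term to the quadratic w'w, which is why the 1-norm model, a pure mixed integer linear program, is the easier of the two to solve.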

Sample Application

In this section we apply our results to a credit-risk evaluation dataset which is

publicly available at the UCI repository

(http://www.ics.uci.edu/~mlearn/MLRepository.html) and referred to as German credit

data. The original dataset consists of 1,000 instances with 20 attributes (7 numerical, 13

categorical).

For the purposes of our analysis, some of these categorical attributes, such as status of existing checking account (greater than zero, between zero and 200 DM, greater than 200 DM), were converted to numerical values (e.g., 0, 100, 300); others were replaced by binary dummy variables.

For the attributes that were converted, we assumed the value of the attribute to be the midpoint of the interval it lies within; for values outside the specified intervals we incremented by an amount reflecting the pattern of increase. Thus, the









resulting dataset has 52 attributes, summarized in Table 4-5. For numerical reasons we

standardized the converted data set.
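
As a sketch of this conversion step (ours; the raw category codes, the chosen columns and the midpoints below are illustrative assumptions, not the exact mapping behind Table 4-5), interval-valued categories are mapped to midpoints and nominal categories to binary dummies, after which the numeric columns are standardized:

import pandas as pd

# hypothetical local copy of the UCI file; A11-A14 assumed to be the checking-account codes
df = pd.read_csv("german.data", sep=" ", header=None)
checking_midpoints = {"A11": 0.0, "A12": 100.0, "A13": 300.0, "A14": 0.0}   # assumed midpoints
df["checking_numeric"] = df[0].map(checking_midpoints)        # interval category -> midpoint
df = pd.get_dummies(df, columns=[3])                          # nominal category -> binary dummies

# standardize the converted numeric columns, as done for the 52-attribute data set
num_cols = ["checking_numeric"]
df[num_cols] = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()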

Table 4-5. Converted German credit data.

Attribute index (j)       Attribute name                    Type                       c_j
0                         Checking Account Balance          Converted to Continuous    0.1
1                         Duration                          Continuous                 100
2, 3, 4, 5, 6             Credit History                    Converted to Binary        ∞
7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17        Purpose                           Converted to Binary        ∞
18                        Credit Amount                     Continuous                 10
19                        Savings Account Balance           Converted to Continuous    0.01
20                        Employment Since                  Converted to Continuous    100
21                        Instalment rate                   Continuous                 100
22, 23, 24, 25, 26        Personal Status                   Converted to Binary        ∞
27, 28, 29                Other Parties                     Converted to Binary        ∞
30                        Residence Since                   Continuous                 100
31, 32, 33, 34            Property                          Converted to Binary        ∞
35                        Age                               Continuous                 100
36, 37, 38                Other Instalment Plans            Converted to Binary        ∞
39, 40, 41                Housing                           Converted to Binary        ∞
42                        Number of Existing Credit Cards   Continuous                 100
43, 44, 45, 46            Job                               Converted to Binary        ∞
47                        Number of Dependents              Continuous                 100
48, 49                    Own Telephone                     Converted to Binary        ∞
50, 51                    Foreign worker                    Converted to Binary        ∞

The categorical attributes contained in the original dataset (that we converted to binary variables), such as personal status (divorced male, married female and such) and sex, are almost impossible for an agent to alter, or not worth altering for the purposes of a credit application, so we assigned them an infinite cost $c_j$.

This was operationalized in P4 by leaving out the inequalities

$$\frac{r_i\,(w_j^+ + w_j^-)}{(c_i)_j} \le z_i \le \frac{r_i\,(w_j^+ + w_j^-)}{(c_i)_j} + M H_{ij}$$

corresponding to such $j$'s and by reducing the right side of

$$\sum_j H_{ij} = n - 1$$

by one for each such $j$.

Each attribute is assigned a different cost ranging between 0.01 and 100, reflecting our subjective assessment of the relative level of difficulty of changing that attribute's value. These costs are summarized in Table 4-5.

We solve this problem assuming $K = 1$ (i.e., all agents have the same utility structure) with $r = 15$. Further we set $\epsilon = 10^{-7}$, $C_{+1} = 4$ and $C_{-1} = 5$.

Applying P4 to a 100-point subset of the full 1,000-point, non-separable data set using the same reservation and misclassification cost structure, we got the results summarized in Tables 4-6 and 4-7.

Table 4-6 focuses on strategic solutions varying $\Lambda$ for the 1-norm version of P4 and Table 4-7 focuses on the 2-norm version of P4. The results of the non-strategic solutions are also included in each table.

We used CPLEX (ILOG 2005) to solve all the problems. In Tables 4-6 and 4-7, the Number of Positives and Negatives Moved are the numbers of cases that could move to a positive labeling with respect to the solution, and the Total Reservation Cost Used is the total cost to the agents for these moves ($\sum (c_i)_{j^*}\, q_i / |w_{j^*}|$).


The rows of the tables listed under the heading of "Strategic Impact", such as the number of points moved and misclassified, the total misclassification cost ($\sum_{i=1}^{\ell} C_{y_i}\,\xi_i$), the objective value with strategic moves, etc., are the results of strategic behavior.
objective value with strategic moves etc. are the results of strategic behavior.










Notice that $\sum_{i=1}^{\ell} C_{y_i}\,\xi_i$ is reported in two different rows. For the strategic solutions,

this result is the same in both rows for the obvious reason of P4 being modeled to

anticipate and take into account the possible strategic behavior. However for non-

strategic solutions, there is no such anticipation so the first row corresponds to the

misclassification costs without strategic behavior and the second one is calculated taking

into account the misclassification costs of points after they move with respect to the non-

strategic solution.

Similarly, the variable $q_i$ does not exist in any of the non-strategic formulations. Thus, the term $\Lambda\sum_{y_i=+1} q_i$ corresponding to the non-strategic solutions in all tables is calculated theoretically, taking into account the movements of the positive points with respect to the solution (in Tables 4-6 and 4-7, $\Lambda = 1$). Likewise, the moved positive and

negative points and final misclassifications reflect after-solution moves.

j* was found to be attribute 19 ("Savings Account Balance") for most of the cases.

However in some of the cases we observe a tie between attribute 19 and attribute 0

("Checking Account Balance") leaving the agents indifferent between making

modifications on these two attributes. For those cases, j* was chosen to be the attribute

with the lower index number.














Table 4-6. 1-norm strategic SVM solutions (P4) for various values of Λ vs. the non-strategic solution for 100 instances.
(Columns, left to right: Non-Strategic; Strategic with Λ = 0, 0.01, 0.05, 0.1, 0.5, 1, 1.5, 2.)

0  Checking Account Balance:  0  0.000565  0.000565  0.000733  0.000711  -0.01333  -0.01046  0.013333  -0.01333
1  Duration:  -1.42275  -0.00827  -0.00833  -0.00867  -0.00868  0.060293  0  -0.86934  -1.15545
18  Credit Amount:  0  0  0  -0.00063  -0.00063  -0.7216  -0.84269  -0.32216  -0.43376
19  Savings Account Balance:  0.276264  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333
20  Employment Since:  0.043407  0  0  0.000378  0.000494  0.255101  0.345983  0.151507  0.240438
21  Instalment rate:  -1.58332  -0.00527  -0.0053  -0.00543  -0.00547  -0.68458  -0.4795  -0.28348  -1.20469
30  Residence Since:  2.678459  0.010079  0.010143  0.010765  0.010761  1.312225  2.110051  2.06295  2.185361
35  Age:  2.073676  -0.00203  -0.00201  -0.00023  0  0.11225  0  0.216032  0.7117
42  # of Existing Credit Cards:  -0.59592  -0.00314  -0.00316  -0.00386  -0.004  -0.92055  -1.01822  -0.6245  -0.46409
47  Number of Dependents:  1.637194  0.003129  0.003154  0.003255  0.003241  1.559915  1.757889  3.144514  1.835584
b*:  1.36938  -0.99817  -0.99816  -0.99714  -0.85675  -0.24652  0.23245  0.61498  0.83092
||w||:  23.55181  0.07271  0.072972  0.07898  1.10333  11.98734  15.60196  19.13669  20.81358
Σ C_{y_i} ξ_i:  101.6607  8.0395  8.0392  8.0366  8.0367  10.00  20.00  31.00  57.5435
Objective Value:  125.213  8.11226  9.48827  14.9906  21.4947  53.6396  75.3268  93.7098  105.838
Strategic Impact
# of Positives Moved:  6  69  69  69  64  51  32  24  15
# of Negatives Moved:  26  0  0  0  0  1  1  2  4
# of Pos. Misclassifications:  0  1  1  1  1  0  0  0  1
# of Neg. Misclassifications:  30  0  0  0  0  1  2  3  4
Total Reservation Cost Used:  3.73468  1034.28  1032.00  1031.25  926.60  474.78  297.93  217.86  103.05
Σ C_{y_i} ξ_i (with strategic moves):  318.8339  8.0395  8.0392  8.0366  8.0367  10.00  20.00  31.00  57.5435
Λ Σ q_i (over y_i = +1):  103.17571  0  1.37600  6.8750  12.35469  31.65221  39.72485  43.57221  27.48069
Objective Value (with strategic moves):  350.2729  8.1122  9.4882  14.9906  21.4947  53.6396  75.3268  93.7098  105.838
Seconds to solve (3.4 GHz Xeon proc.):  0.016  4.376  4.391  2.422  2.844  2.047  1.344  1.625  1.282














Table 4-7. 2-norm strategic SVM solutions (P4) for various values of Λ vs. the non-strategic solution for 100 instances.
(Columns, left to right: Non-Strategic; Strategic with Λ = 0, 0.01, 0.05, 0.1, 0.5, 1, 1.5, 2.)

0  Checking Account Balance:  0.077049  0.000685  0.000128  0  0.010434  -0.01333  0  0.013333  -0.01333
1  Duration:  -0.94589  -0.00709  -0.00679  -0.0057  -0.0648  0.099996  0.058722  -0.60349  -0.72603
18  Credit Amount:  -0.33725  -0.00225  -0.00253  -0.00404  -0.04935  -0.65761  -0.77481  -0.4234  -0.8176
19  Savings Account Balance:  0.280419  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333  0.001333
20  Employment Since:  0.418413  0.000513  0.001267  0.001798  0.036009  0.381561  0.439863  0.722547  0.338969
21  Instalment rate:  -0.89546  -0.00726  -0.00835  -0.00989  -0.11209  -0.57348  -0.43845  -0.81373  -0.80658
30  Residence Since:  1.316182  0.014112  0.01465  0.017694  0.15321  1.165015  1.881122  1.165456  0.97931
35  Age:  0.885631  -0.00013  0.001142  0.001063  0.016148  0.243205  -0.03338  0.198731  -0.11161
42  # of Existing Credit Cards:  -0.57952  -0.00221  -0.00358  -0.00191  -0.02947  -0.7029  -0.63172  -0.22857  -0.38767
47  Number of Dependents:  1.217864  0.006797  0.006577  0.010279  0.108277  1.380006  1.209247  0.956158  1.201656
b*:  0.97802  -0.99691  -0.95366  -0.86202  -0.80088  -0.27503  0.15576  0.42523  0.68693
||w||:  3.59426  0.00051  0.14567  0.47972  0.59942  2.65509  3.19144  2.95735  3.10397
Σ C_{y_i} ξ_i:  110.5845  8.0343  8.0343  8.0379  8.5655  10.00  20.00  39.7220  50.5038
Objective Value:  123.503  8.03488  9.38724  14.4660  20.5455  49.2335  72.6676  89.5899  101.208
Strategic Impact
# of Positives Moved:  10  69  69  65  63  52  35  24  19
# of Negatives Moved:  28  0  0  0  0  1  0  1  3
# of Pos. Misclassifications:  0  1  1  1  1  0  0  1  0
# of Neg. Misclassifications:  30  0  0  0  0  1  2  2  5
Total Reservation Cost Used:  3.01754  1035  998.71  929.70  871.55  482.76  318.61  205.60  154.01
Σ C_{y_i} ξ_i (with strategic moves):  302.9685  8.0343  8.0343  8.0379  8.5655  10.00  20.00  39.7220  50.5038
Λ Σ q_i (over y_i = +1):  8.37569  0  1.33162  6.19800  11.62066  32.18402  42.48227  41.12195  41.06958
Objective Value (with strategic moves):  324.2629  8.0348  9.3872  14.4660  20.5455  49.2335  72.6676  89.5899  101.208
Seconds to solve (3.4 GHz Xeon proc.):  0.063  81.661  102.694  45.643  32.408  6.328  8.672  8.953  9.642









The 2-norm results are very similar to their 1-norm counterparts. This is interesting

since the 1-norm problem is much simpler to solve.

The solutions to P4 provide several significant improvements over their non-

strategic counterparts.

First, strategic solutions perform better in terms of the total number of

misclassifications. In both Tables 4-6 and 4-7, we observe a drastic decrease in the

number of negative misclassifications for strategic solutions when compared with their

non-strategic counterparts.

Strategic solutions better separate the positive agents from the negative ones, and


hence their cost of misclassification, $\sum_{i=1}^{\ell} C_{y_i}\,\xi_i$, is lower than the non-strategic results for all values of $\Lambda$ in both tables. They accomplish this by forcing a large number of positive agents to modify their attributes at a significant total cost in effort to these
positive agents to modify their attributes at a significant total costs in effort to these

agents. This can be observed by the increase in number of positive points moved for the

strategic solutions compared to non-strategic solutions.

As discussed after Theorem 1, a principal may want to exchange some margin (i.e., $1/\|w\|$) for lowered moves and thus effort by positive agents. The downside could be a

looser bound on the principal's risk functional of the induced discriminant function and

hence lower confidence in the result.

Comparing the non-strategic and strategic results for $\Lambda = 1$, we see a significant

drop in objective value in both tables emphasizing the high payoff gained by Strategic

Learning.










Furthermore, when strategic results are compared for increasing values of $\Lambda$, we see an increase in the objective values. This is a result of penalizing the objective function more for each movement of positive agents as $\Lambda$ is increased. This forces fewer agents to move and hence causes an increase in the positive misclassification cost. However, depending on the trade-off between an increase in $\Lambda$ and a decrease in the $\sum_{y_i=+1} q_i$ term, we observe fluctuations in the $\Lambda\sum_{y_i=+1} q_i$ value.

Comparing the signs of the various coefficients, we see switches in a few (Credit

Amount and Age) which reflect the change in effect on the classification after agents

adjust.

Table 4-8 below compares the results of non-strategic and strategic solutions for

the full 1,000 point German credit dataset which was not standardized. The 1-norm and

2-norm strategic results are very similar to each other.

A decrease in the number of agents moved for both 1-norm and 2-norm solutions

compared to the non-strategic case is observed. Hence, the reservation cost used for

strategic solutions is substantially lower than their non-strategic counterparts. This shows

that strategic solutions were able to prevent most of the agent movement. This leads to

an improvement in the term $\Lambda\sum_{y_i=+1} q_i$ and also in the strategic objective function.

It should be noted that with $\Lambda = 3.99$, which is very close to $C_{+1} = 4$, the strategic and

non-strategic solutions are quite similar.















Table 4-8. Strategic SVM solutions (P4) for Λ = 3.99 vs. non-strategic solutions for 1,000 instances.
(Columns, left to right: 1-Norm Non-Strategic, 1-Norm Strategic, 2-Norm Non-Strategic, 2-Norm Strategic.)

0  Checking Account Balance:  -0.0006  -0.00023  -0.00062  -0.00027
1  Duration:  -0.02818  -0.03106  -0.02798  -0.0312
18  Credit Amount:  -8.3E-05  -7.7E-05  -8.5E-05  -7.8E-05
19  Savings Account Balance:  0.000539  0.000225  0.000538  0.000223
20  Employment Since:  0.071218  0.067657  0.070059  0.068999
21  Instalment rate:  -0.23377  -0.19916  -0.23657  -0.20034
30  Residence Since:  -0.02751  -0.04517  -0.02634  -0.04759
35  Age:  0.007454  0.009296  0.007888  0.009507
42  # of Existing Credit Cards:  -0.16523  -0.12605  -0.16987  -0.13148
47  Number of Dependents:  -0.00372  -0.09361  -0.00321  -0.08893
b*:  2.61065  3.09950  1.75885  1.87177
||w||:  12.96727  13.09071  2.83717  2.78167
Σ C_{y_i} ξ_i:  2579.97  2524.75  2579.63  2526.27
Objective Value:  2592.94  2606.92  2587.69  2601.57
Strategic Impact
# of Positives Moved:  196  111  212  112
# of Negatives Moved:  86  10  85  7
# of Positive Misclassifications:  115  204  116  205
# of Neg. Misclassifications:  256  259  260  259
Total Reservation Cost Used:  2035.05  787.95  2030.00  775.13
Σ C_{y_i} ξ_i (with strategic moves):  2465.39  2524.75  2460.56  2526.27
Λ Σ q_i:  295.08  69.07  295.08  67.55
Objective Value (with strategic moves):  2763.69  2606.92  2763.69  2601.57
Seconds to solve:  0.438  784.67  1.891  73194.671

Stochastic Versions

The assumption that the principal knows the reservation values and costs (ri and ci)

of each agent is rather limiting. One way to relax this assumption is to assume that the

principal, through experience, knows that there are different types of agents and the

associated distribution function over these types. Consequently, in solving the strategic

problem the principal has to take into account the fact that he/she cannot count on known


$r_i$ and $c_i$ values.


To model this we start by assuming that $\theta = (r, c)$ is a random vector with finite support $s \in \{1,\dots,S\}$ and a discrete density function. We index each agent by his/her type: if an agent is of "type $s$" he/she has costs $c(s)$ and reservation value $r(s)$. An alternative interpretation would be to say that the random vector $\theta(s)$ depends on the agent type $s$. Agents, as usual, solve

$$\min\ c(s)'d_{is} \quad \text{s.t.}\quad w'\left[x_i + D(w)\,d_{is}\right] + b \ge 1,\quad d_{is} \ge 0.$$

Following the same logic as the deterministic case, the following can be determined for each $s$:

$$j_s^* = \arg\min_{j:\,w_j \ne 0} \frac{c(s)_j}{|w_j|},$$

then for

$$z_s^*(w,b) = \frac{\max\left(0,\ 1 - (b + w'x_i)\right)}{|w_{j_s^*}|}$$

we have

$$d_{is}^*(w,b)_j = \begin{cases} z_s^*(w,b) & \text{if } j = j_s^*\ \text{and}\ c(s)_{j_s^*}\, z_s^*(w,b) \le r(s) \\ 0 & \text{otherwise.} \end{cases}$$

For $w$ equal to zero, set $d_{is}^*(0,b) = 0$.

There are many different ways of formulating the principal's stochastic Strategic

Learning problem. One such approach is to model the problem by taking all possible

realizations of θ into account and populating the constraints of the deterministic case for









each s = 1,..., S. This can be interpreted as a worst case formulation of the problem.

Here, the principal's problem is written as:

$$\min_{w,b}\ w'w + \sum_{i=1}^{\ell} C_{y_i} \max_s\ \xi_{is}$$
$$\text{s.t.}\quad y_i\left\{w'\left[x_i + D(w)\,d_{is}^*(w,b)\right] + b\right\} \ge 1 - \xi_{is},\quad i = 1,\dots,\ell\ \text{and}\ s = 1,\dots,S.$$



Another approach might be to use expected values of random variables r(s) and

c(s) to arrive at an "average agent" type using the models discussed in earlier sections

where all agents have the same cost and reservation values. A third approach would be to

use a chance-constrained formulation and replace the original constraints by corresponding chance constraints. Let $0 < \alpha_i < 1$; then the principal's problem becomes

$$\min_{w,b}\ w'w$$
$$\text{s.t.}\quad \text{Prob}\left\{ y_i\left(w'\left[x_i + D(w)\,d_i^*(w,b)\right] + b\right) \ge 1 \right\} \ge \alpha_i,\quad i = 1,\dots,\ell.$$

Here $(1 - \alpha_i)$ represents the allowable risk that $d_i^*(w,b)$ takes on values that would not satisfy the constraints.

The approach we favor is a fourth approach that depends on minimizing the

expected total misclassification cost by hedging against different possible agent types, as shown in the following model. Let $P_s$ be the probability that an agent is of type $s$. Then we solve

P5: $\displaystyle \min_{w,b}\ w'w + \sum_{i=1}^{\ell} C_{y_i} \sum_{s=1}^{S} P_s\,\xi_{is}$
    s.t. $y_i\left\{w'\left[x_i + D(w)\,d_{is}^*(w,b)\right] + b\right\} \ge 1 - \xi_{is},\quad i = 1,\dots,\ell\ \text{and}\ s = 1,\dots,S.$


The counterpart of P5 (as P4 was to P3) is:










P6: $\displaystyle \min_{w,b}\ w'w + \sum_{i=1}^{\ell} C_{y_i}\sum_{s=1}^{S} P_s\,\xi_{is} + \Lambda \sum_{s=1}^{S}\ \sum_{i:\,y_i=+1} q_{is}$

s.t.

effort ($y_i = +1$):
  $w'x_i + b + q_{is} \ge 1 - \xi_{is}$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $M - M V_{is} \ge \xi_{is}$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $M V_{is} \ge q_{is} \ge 0$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $z_{is} \ge q_{is}$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$

effort ($y_i = -1$):
  $-w'x_i - b \ge 1 - \xi_{is}$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $\xi_{is} \ge 2 V_{is}$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $1 - w'x_i - b + M V_{is} \ge z_{is} + \epsilon$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$

max adjustment:
  $\dfrac{r(s)\,(w_j^+ + w_j^-)}{c(s)_j} \le z_{is} \le \dfrac{r(s)\,(w_j^+ + w_j^-)}{c(s)_j} + M H_{isj}$,  $j = 1,\dots,n$, $i = 1,\dots,\ell$, $s = 1,\dots,S$
  $\sum_j H_{isj} = n - 1$,  $i = 1,\dots,\ell$, $s = 1,\dots,S$

absolute value:
  $w = w^+ - w^-$
  $M\,IJ_j \ge w_j^+ \ge 0$,  $j = 1,\dots,n$
  $M - M\,IJ_j \ge w_j^- \ge 0$,  $j = 1,\dots,n$

integrality:
  $IJ_j,\ H_{isj},\ V_{is} \in \{0,1\}$
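
Relative to the deterministic sketch given after P4, P6 only replicates the agent-side variables and constraints over the S types and weights the slack penalties by P_s. A minimal continuation of that earlier PuLP sketch (same assumptions; the two types below are made up) looks like this:

# two assumed agent types: (probability P_s, cost vector c(s), reservation r(s))
types = [(0.7, [2.0, 5.0], 3.0),
         (0.3, [1.0, 4.0], 6.0)]
S = len(types)

m2 = pulp.LpProblem("P6_1norm", pulp.LpMinimize)
xi = {(i, s): pulp.LpVariable(f"xi{i}_{s}", lowBound=0) for i in range(ell) for s in range(S)}
q  = {(i, s): pulp.LpVariable(f"q{i}_{s}", lowBound=0)
      for i in range(ell) if y[i] == 1 for s in range(S)}

# expected misclassification cost (weighted by P_s) plus the positive-agent effort proxy
m2 += pulp.lpSum(wp) + pulp.lpSum(wm) \
      + pulp.lpSum(types[s][0] * C[y[i]] * xi[i, s] for i in range(ell) for s in range(S)) \
      + Lam * pulp.lpSum(q.values())

# the effort, max-adjustment, absolute-value and integrality constraints of the P4 sketch are
# then added once per (i, s) pair, with z, V and H also indexed by s and with r(s), c(s)
# in place of r and c.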


Conclusion and Future Research

In this chapter we studied the effect of strategic behavior in determining linear

discriminant functions. We considered two cases. In the base case, we analyzed the

problem under the assumption that all agents have the same reservation costs. We

showed that the optimal solution with strategic behavior is a shifted, scaled version of the

solution found without strategic behavior.









For the case of equal reservation costs we find that the principal will choose a

discriminant function where negative agents will have no incentive to change their true

attributes but agents who are marginally positive will be forced to alter their attributes.

Thus, roughly speaking, the ones who are "penalized" for engaging in strategic behavior

are not the negative agents but rather the marginal positive agents. We also note that

under strategic behavior the final discriminant function used by the principal will produce

a bigger gap between the two classes of points.

For the general case where all agents have different reservation and cost structures,

we developed mixed integer programming models and applied our results to a credit card

evaluation setting.

An issue of great importance that has not been explored yet is the application of

kernel mappings under strategic behavior. It may be possible to anticipate and cancel the

effects of strategic behavior by applying an appropriate kernel mapping. Of course, for

an agent to anticipate a useful direction of change in some unknown feature space seems

unlikely. Ramifications such as these make this approach daunting.

There are many other avenues to investigate. For instance, it might not be realistic to let each attribute be modified unboundedly, without any constraints on how much it can actually be changed; we model such bounds in Chapter 5.

An interesting direction of research is to relax the assumptions on what the

principal and the agents know. Although we have modeled a stochastic version of the

Strategic Learning problem where we hedged for the uncertainty in the types of agents,

we have not yet incorporated informational uncertainty in terms of the available

knowledge to both parties. As an example, instead of modeling agent behavior for a