APPLICATION OF SINGLE PARTY AND MULTIPLE PARTY DECISION MAKING UNDER RISK AND UNCERTAINTY TO WATER RESOURCES ALLOCATION PROBLEMS

By

GHINA M. YAMOUT

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2005

Copyright 2005 by Ghina Yamout

This document is dedicated to my family: my mother, father, sister, brothers, and husband. I thank you.

ACKNOWLEDGMENTS

At last! The "there" is "here"! I started school at the age of three and finished at the age of (), let's just say by the year 2005! Twenty-four years of schooling (do the math)! As people set high standards for me, I set even higher ones for myself. You start something, you finish it, completely and perfectly; you start school, you finish it: kindergarten through PhD! It might not make sense to many; for me, there was no other way! Goals, trivial ones and less trivial ones, were all but small steps toward an ultimate aspiration that, once seeming so far away, is now unbelievably close. I look back at this journey of self-fulfillment, and I realize that everything happens for a reason. I thank God for what seemed to be "adverse" circumstances at the time and for what still does. May God grant me the blessing of being a good servant of His. This long-longed-for "end" is but a beginning; it is called commencement for a reason! At the onset of my "new" life, I would like to express my utmost gratitude to the many whose guidance and comfort had a hand in getting me where I am today. First, I dedicate this work to my mother and father; their self-sacrifice was my drive. My achievement is theirs. Throughout my life, they made me recognize that where there is a will, there is always a way. It is with this will that I stand here today. I would like to express my deepest gratitude to my advisor, Professor Kirk Hatfield, whose character and energy added considerably to my graduate experience.
I thank him for his unyielding understanding, patience, and faith in me when I was at my best and my worst. It has been and will always be a privilege. I doubt that I will ever be able to convey my appreciation fully, but I owe him my lifelong gratitude for providing me with the opportunity to be part of an exceptional university, the University of Florida, in this exceptional country. A word of advice for new doctoral candidates: listen to the advisor! When he says stop taking courses, stop! There is always "one more" course that needs to be taken! I would like to thank the other members of my committee: Professor Edwin Romeijn, for his unyielding patience and indispensable tutoring, which added considerably to the value of this work; and Dr. Clayton Clark, Professor Warren Viessman, and Professor Scot Smith for their valuable comments. I owe my most loving thanks to the man whose support, encouragement, and persistence were behind the completion of this work, my husband, Husam Jumaa. A woman is so lucky to meet her better half as young as I was, as his touch can bring back the starlight and glow of years ago; for me, that was the first day I met this amazing man, when I was still twenty-two. With him, I have grown and will grow older and build memories, but I will always be 22. His high expectations of me and total devotion to me only better me, to become, one day, as amazing as he is. When people think that their prayers are not being answered, they should look around them; they might not be seeing clearly. I thank God every day for His blessings, because she is a blessing; no lesser word can describe her. Zeina Najjar's unconditional love and guidance, so innate to her, at my worst and my best, are my sunshine in the happiest and the darkest moments.
I will not let words define what she means to me and will instead use Martin Luther King's words: "Occasionally in life there are those moments of unutterable fulfillment which cannot be completely explained by those symbols called words. Their meanings can only be articulated by the inaudible language of the heart." I would also like to lovingly recognize Amal and Wafic Dabbous, my youngest sister and brother, whose genuineness and affection had the deepest effect on my heart and mind. I would also like to express my most affectionate gratitude to my sister, Aya, and brothers, AbdelGhani and Mohammad. I hope I have been and will always be there for them; I hope they forgive me if at any time I haven't. The responsibility of being their older sister and the care I have for them were my biggest motivations. I thank my dearest sister, Aya, for being the older sister at every step of the way. I thank her for opening to me a world of possibilities, for loving me at my best and my worst, for knowing when I am smiling even in the dark, for being my teacher, my attorney, my stylist, even my shrink. I would also like to extend my loving appreciation to my mother-, father-, and brother-in-law, Abla, Wafic, and Mohammad Luay Jumaa, for their watching eyes, indispensable prayers, and much needed advice, which helped alleviate the heaviest obstacles. It is amazing how much comfort the mere realization of the presence of a caring and supporting hand brings. I would like to extend my warmest gratitude to three people very dear to my heart: my uncle, Mohammad ElWali, who answered my calls in the latest, or shall I say earliest, hours; my gentle aunt, Hoda Yamout Kandil, whose rare warm and gracious nature is a model very hard to follow; and my dear departed uncle, Talal Yamout, who saluted me with the word "Dr." before I even thought of becoming one. I would like on this occasion to extend my profound gratitude to another very dear person to my heart, Mrs.
Leila Knio, whose admirable graciousness will be a model for me throughout my life. I thank her for giving me the privilege of being part of her life; rare are the people who mark one's life with unconditional guardianship and love. I would also like to express my warmest thanks to all my friends from the Water Resources and Hydrology group, Nebiyu Tiruneh, Sudheer Satti, Anirudha Guha, Ali Sedighi, Mark Neuman, Nanette Conrey, Qing Suny, and Sherish Bhat, with whom I shared many laughs, debates, exchanges of knowledge, and venting of frustration, making my stay at UF unforgettable. I pray we all stay in touch. I would also like to extend my gratitude to Ms. Sonja Lee, Doretha Ray, and Carol Hipsley, who answered, over and over, every question and concern I had; their welcoming faces and ready assistance played an immense part in easing my transition when I came to the United States and my experience throughout my time at the University of Florida. Thanks also go out to Mr. Anthony Murphy for all of his computer and technical assistance throughout my graduate program. Finally, this work would not have been submitted on time without the tremendous assistance of the editorial office at UF. I also recognize that this research would not have been possible without the financial assistance of the Civil Engineering Alumni Fellowship at the University of Florida. I must also acknowledge the Saint Johns River Water Management District for providing me with the data used in part of this study. In particular, I would like to take the opportunity to extend my gratitude to Mr. Ron Wycoff, private consultant for the district, for generously giving me his time and answering all my concerns. For all your guidance, I wish to express my sincerest appreciation. If I have forgotten somebody, please forgive me; I thank you as well.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 DECISION MAKING UNDER UNCERTAINTY: A COMPARATIVE REVIEW OF METHODS AND APPLICATIONS TO WATER RESOURCES MANAGEMENT
    On the Origin of Risk
    Definition of Risk
    Definition of Risk Management
    Our Definition
    Risk Management Techniques
        Mathematical Notations
        Non-Probability-Based RM Techniques
            Sensitivity analysis
            Decision making criteria
            Analytic hierarchy process or decision matrix
            Utility and game theory
            Multiobjective optimization
        Probability-Based RM Techniques
            Scenario analysis
            Moments and quantiles
            Decision trees
            Stochastic optimization
            Bayesian analysis
            Fuzzy sets
            Information gap
            Downside risk metrics
    The Different RM Methods: A Discussion
        Value-at-Risk and Conditional Value-at-Risk
        Scenario Tree
        Discretization
    Risk in the Water Resources Management Literature
    Conclusion

3 COMPARISON OF RISK MANAGEMENT TECHNIQUES FOR A WATER ALLOCATION PROBLEM WITH UNCERTAIN SUPPLIES, A CASE STUDY: THE SAINT JOHNS RIVER WATER MANAGEMENT DISTRICT
    Model Formulation
        Objective Function
        Decision Variables
        Problem Data
        Constraints
        Deterministic Expected Value Model
        Scenario Model
        Two-Stage Stochastic Model with Recourse
        CVaRα Objective Function Model
        CVaRα Constraint Model
    Scenario Generation
    Case Study Area
        Water Demand
        Water Supply
        Water Cost
        Scenario Generation
    Results and Discussion
        5% Standard Deviation
        10% Standard Deviation
        Analysis
    Conclusions and Recommendations

4 UTILITY, GAME, AND WATER: A REVIEW
    Theory of Preference
        Preference Comparison Relationship
        Expected Utility Theory
            Bernoulli's utility theory
            Linear expected utility theory
            Subjective linear expected utility theory
            Multiattribute expected utility
        Descriptive Limitations of LEUT and SEUT
            Violation of independence
            Violation of transitivity
            Probability judgment
            Non-Archimedean preferences
        Alternatives to Expected Utility Theory
            Linear generalizations
            Nonlinear generalizations
    Strategic Decision Making
        Game Theory
            Mathematical formulations
            Classification of games
            Solution concepts
        Extensions to Standard Game Theory
            Metagame analysis
            Hypergame theory
            Analysis of options
            Conflict analysis
            Drama theory
            Graph model for conflict resolution
            Theory of moves
        Alternatives to Standard Game Theory
            Limited thinking models
            Learning models
            Social preferences models
    Game Theory in Water Resources Management
    Conclusion

5 A MULTIAGENT MULTIATTRIBUTE WATER ALLOCATION GAME MODEL
    Power and Preferences
    Salience Theories
    Issue Linkage Theories
    Equity Theories
    Coalition Formation Theories
        Minimum Resource Theory
        Balance Theory
        Minimum Power Theory
        Bargaining Theory
        Equal Surplus Theory
        Policy-Distance Minimization Theory
        Outcome Grouping Theory
        Option Preferences Theory
        Ordinal Deduction Selection System Theory
        Graph Model Theory
        Triads Theory
    Probability of Coalition Formation
        Size-Probability Model Theory
        Johansen Probability Model Theory
        Central Union Theory
        Willingness and Opportunity Theory
        Cohesion Theory
        Stochastic Communication Structures
    Risk Attitudes
        Pratt-Arrow Model
        Risk Aversion Matrix
        Risk-Value Theory
        Moments Risk-Value Model
        De Mesquita's Risk Model
    Model Development
        Utility Function
        Relative Gain
        Coalition Formation
        Political Uncertainty
        Modified Utility Function
    Hypothetical Application
    Conclusion

6 CONCLUSION

APPENDIX: SJRWMD COSTS DEPRECIATION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

1-1 Possible decision making environment combinations
2-1 The use of DMuU techniques in water resources management
3-1 Standard normal discrete distribution approximation for N=10 and up to the 2nd, 4th, 6th, and 8th moments constraints
3-2 Moments constraints
3-3 Least-squares regression analysis results
3-4 SJRWMD caution area water demand projections
3-5 Supply sources, capacities, and costs
3-6 Scenarios of supply capacities at 5% and 10% standard deviation
4-1 Choice model classification
4-2 Applications of GT in water resources management grouped into area of application
5-1 Pattern of risk attitudes
5-2 Definition of model components
5-3 Player description
5-4 Input data
5-5 Players ranking matrix
5-6 Coalitions
5-7 Coalition combination scenarios
5-8 Measuring the probability of formation of different coalitions
5-9 Input data
5-10 Optimum resources allocation under the different scenarios, with the ratio of gain constraint
5-11 Optimum resources allocation under the different scenarios, without the ratio of gain constraint
A-1 CCI ENR (1908-2005)
A-2 CCI ENR projection and relative change (2000-2030)
A-3 Discounted costs of the SJRWMD proposed projects for the caution area

LIST OF FIGURES

1-1 Dissertation organizational diagram
2-1 Chapter 2 organizational diagram
2-2 Generic decision tree
2-3 A visualization of VaR and CVaR concepts
2-4 Measure of CVaRα, scenario generation
2-5 Scenario tree for a two-stage stochastic problem with recourse
3-1 Chapter 3 organizational diagram
3-2 Discretized standard normal distribution (moments in parentheses, i.e., (qth), are the unmatched moments)
3-3 Discretized standard normal distribution (moments in parentheses, i.e., (qth), are the unmatched moments)
3-4 Moments least-squares regression analysis plots
3-5 Priority water resource caution areas in the SJRWMD, Florida, USA
3-6 Approximate locations of potential alternative water supply projects
3-7 Efficient frontier for α′ = 50, 75, 80, 85, 90, 95, and 99 percent, 5% STD
3-8 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 50%, 5% STD
3-9 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 75%, 5% STD
3-10 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 80%, 5% STD
3-11 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 85%, 5% STD
3-12 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 90%, 5% STD
3-13 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 95%, 5% STD
3-14 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 99%, 5% STD
3-15 Efficient frontier for α′ = 50, 75, 80, 85, 90, 95, and 99 percent, 10% STD
3-16 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 50%, 10% STD
3-17 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 75%, 10% STD
3-18 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 80%, 10% STD
3-19 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 85%, 10% STD
3-20 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 90%, 10% STD
3-21 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 95%, 10% STD
3-22 Change of CVaRα with (A) /, (B) α, and (C) cost for α′ = 99%, 10% STD
3-23 Efficient frontiers for α′ = 50, 75, 80, 85, 90, 95, and 99 percent, (A) 5% STD and (B) 10% STD
3-24 Comparison of CVaRα, CVaR+, CVaR−, and VaRα values calculated using model 3, (A) 5% STD and (B) 10% STD
3-25 Comparison of CVaRα values calculated using the model, 5% and 10% STD
4-1 Chapter 4 organizational diagram
4-2 Expected utility indifference curves
4-3 Fanning-out effect
5-1 Chapter 5 organizational diagram
5-2 Change of the utility function for coalition 1 w.r.t. issue 1 (horizontal axis) and issue 2 (vertical axis) for increasing values of x3 (0-1, top to bottom, left to right)
5-3 Change of the utility function for coalition 2 w.r.t. issue 1 (horizontal axis) and issue 2 (vertical axis) for increasing values of x3 (0-1, top to bottom, left to right)
5-4 Change of the utility function for coalition 3 w.r.t. issue 1 (horizontal axis) and issue 2 (vertical axis) for increasing values of x3 (0-1, top to bottom, left to right)
5-5 Change of the utility function with the relative gain constraint
5-6 Change of the utility function without the relative gain constraint
A-1 CCI ENR historical data

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

APPLICATIONS OF SINGLE PARTY AND MULTIPLE PARTY DECISION MAKING UNDER RISK AND UNCERTAINTY TO WATER RESOURCES ALLOCATION PROBLEMS

By

Ghina Yamout

December 2005

Chair: Kirk Hatfield
Cochair: Edwin Romeijn
Major Department: Civil and Coastal Engineering

Decision theory refers to the analysis, formalization, and prediction, through mathematical models, of optimal and real decision-making; it involves the process of selecting perceived solutions, actions, and outcomes to a given problem from a set of possible alternatives.
Depending on the extent of possible quantification, the presence of uncertainty, the number of decision makers, and the number of objectives, decision theory is classified as quantitative or qualitative, deterministic or stochastic, single or multiple party, and single or multiobjective. The management of water resources systems involves unavoidable natural and social conditions of risk and uncertainty, as well as multiple competing or conflicting parties and objectives; these introduce the risk of high economic and social costs due to wrong decisions and necessitate the formulation of models that adequately represent a given situation by incorporating all factors affecting it. Hence, the adequate modeling of such systems should incorporate risk and uncertainty; decision theory under risk and uncertainty is called decision analysis or risk management. Risk management approaches may be broadly categorized as non-probability based and probability based techniques. Non-probability based techniques include sensitivity analysis, decision criteria, analytic hierarchy, and game theory; probability based techniques include scenario analysis, moments, decision trees, expected value, stochastic optimization, Bayesian and fuzzy analysis, and downside risk measures such as the Value-at-Risk and Conditional Value-at-Risk metrics. Many of these methods, though very useful, suffer from critical shortcomings: sensitivity and scenario analysis provide only some intuition of risk; expected value fails to highlight extreme-event consequences; decision trees and hierarchical approaches fail to generate robust and efficient solutions in highly uncertain environments; central moments do not account for fat-tailed distributions and penalize positive and negative deviations from the mean equally; recourse does not provide a means to control risk; and Value-at-Risk does not provide information about the extent and distribution of the losses that exceed it, and is not coherent.
Game theory, used in situations of multiple party decision making, suffers from several systematic violations, such as the common consequence, preference reversal, and framing effects. To account for some of these shortcomings, several extensions and alternatives have been suggested such as the conditionalValueatRisk and behavioral game theory. xviii CHAPTER 1 INTRODUCTION Decision theory is an interdisciplinary area of study, related to and of interest to practitioners in many fields such as mathematics, statistics, economics, philosophy, management, sociology, political science, and psychology. Its main concern is the analysis, formalization, and prediction, through conceptual, physical, and mathematical models, of optimal and real decisionmaking, defined as the process of selection of a perceived solution, action, and corresponding outcome, to a given problem, from a set of possible alternatives, in a given situation. A situation is usually described by the extent of possible quantification, the presence of uncertainty, the number of decision makers, and the number of issues or objectives (Hipel et al., 1993; Radford et al., 1994); these four factors result in sixteen combinations, depicted in Table 11. Water resources systems management involves unavoidable natural and social conditions of risk and uncertainty and multiple competing or conflicting parties and objectives, which introduce the risks of high economic and social costs due to wrong decisions, necessitating the formulation of models that adequately represent a given situation by incorporating all the factors affecting it (Haimes, 2004). Hence, this research is focused on the formal representation of multiple objective situations involving risk and uncertainty, for single party and multiple parties decisionmaking (the bolded sections of Table 11). The diagram in Figure 11 previews the plan of this dissertation. Chapter 2 reviews the concepts of uncertainty, risk, and probability. Table 11. 
Possible decision-making environment combinations. Table 1-1 enumerates the sixteen combinations of the four factors: uncertainty (absent or present), quantification (qualitative or quantitative), objectives (single or multiple), and decision makers (single or multiple); the original check-mark matrix is not reproduced here. Figure 1-1 lays out the plan of the dissertation: Chapter 1, plan of the dissertation; Chapter 2, decision making under uncertainty: a comparative review of methods and applications to water resources management; Chapter 3, comparison of single decision-maker risk management techniques using a water allocation case study; Chapter 4, utility theory, game theory, and water resources management applications; Chapter 5, multi-agent multi-attribute water allocation game model: development and application; and Chapter 6, conclusions and future research recommendations. Figure 1-1. Dissertation organizational diagram. The chapter also compares the different probability- and non-probability-based analytical techniques used in risk management, focusing on the conditional value-at-risk method, CVaR. The chapter also presents the different methods used for scenario generation and the representation of uncertainty. The chapter concludes with an extensive review of the application of risk management techniques to water resources problems. Chapter 3 applies and compares different single-party risk management techniques, presented in Chapter 2, to a water resources management problem in which risk is quantified as cost. These methods are the expected value, the scenario model, two-stage stochastic programming with recourse, and CVaR; they were built into a mixed-integer fixed-cost linear programming framework. Uncertainty was introduced via water supplies, and results were compared for two discrete distributions with equal means and different standard deviations, developed using one of the scenario generation methods presented in Chapter 2.
The models were applied to a case study, using the Saint Johns River Water Management District (SJRWMD) Priority Water Resource Caution Areas (PWRCA) in East-Central Florida (ECF) as a study area. Chapter 4 presents a review of the development of utility theory and game theory as theories of individual and strategic decision making under risk and uncertainty. The chapter starts with a summary of the formal conception of expected utility theory, followed by its critique as a predictive tool for human choice in decision-making situations and an overview of its alternatives, and continues with an examination of standard game theory, its main taxonomy, and its solution concepts, setting the stage for behavioral game theory. The chapter concludes with an extensive review of the application of game theory to water resources decision-making problems. Chapter 5 presents the main concepts of research in the allocation of multiple resources among multiple competing or conflicting parties, such as preferences, risk, equity, salience, and power. Subsequently, the chapter proposes an applied modeling tool for common-pool resources conflict resolution that combines these essential concepts in an n-party game-theoretic framework. Specifically, these concepts are utility, ideal position, issue linkage, equity, salience, risk propensity, conflict level, and political uncertainty. The chapter concludes with an illustration of this model using a hypothetical conflict situation over water, land, and financial resources among three different parties. CHAPTER 2 DECISION MAKING UNDER UNCERTAINTY: A COMPARATIVE REVIEW OF METHODS AND APPLICATIONS TO WATER RESOURCES MANAGEMENT Decision Theory (DT) is a subset of Operations Research (OR) and Systems Analysis (SA);1 it is a body of knowledge and related analytical techniques, of differing degrees of formality, designed to help a decision-maker choose among a set of alternatives in light of their possible consequences.
Decision Analysis (DA) is the subset of DT that refers to the discipline of Decision-Making under Uncertainty (DMuU) (Haimes, 2004; Winston, 1994). The process of Decision-Making under Uncertainty is also equivalent to another process, namely Risk Management (RM) (Haimes, 2004). Decision Analysis, Decision-Making under Uncertainty, and Risk Management are used interchangeably in the rest of the text. Although DMuU and RM refer to the same process, their designations employ terms, namely uncertainty and risk, which historically have generated a great deal of argument. In the next sections we clarify the concepts of uncertainty, risk, and probability in the framework of RM. Subsequently, we present the general structure of the latter. These overviews are not aimed at presenting the field-specific definitions and applications of risk and its management; rather, they are meant to provide a summary of the general definitions. Next, we describe and compare the different probability- and non-probability-based analytical techniques used in RM, focusing on the conditional value-at-risk method. As a major issue in DMuU is the representation of uncertainty, we also present the different methods used for scenario generation. We conclude this chapter with an extensive review of the application of RM techniques to water resources problems. The diagram in Figure 2-1 exhibits the plan of this chapter. [Footnote 1: Originally, OR and SA referred, respectively, to the use of quantitative techniques as a scientific approach to decision making and to the analysis of the complex interdependent elements of a system in a holistic, interdisciplinary manner, both to aid in decision-making. Currently, the analytical tools and evaluative techniques that the two fields utilize overlap; as a result, these terms are sometimes used interchangeably (Winston, 1994).]
Figure 2-1. Chapter 2 organizational diagram: risk, origin and definition; risk management, definition and structure; risk management techniques, comparison of probability- and non-probability-based tools; scenario generation methods; Conditional Value-at-Risk, formal definition; and application of RM to water resources, review. On the Origin of Risk The first recorded practice of risk analysis, qualitative in nature, dates as far back as early Mesopotamia, about 3200 B.C., in the Tigris-Euphrates valley, where the Ashipu served as consultants for difficult decisions in ancient Babylonia; the Ashipu created ledgers of alternatives, their corresponding outcomes, and their favorability (Oppenheim, 1977). Evidence of games of chance has been found at ancient Egyptian, Sumerian, and Assyrian archeological sites, where the talus, the predecessor of dice, was used. Marcus Aurelius was regularly accompanied by his master of games (Covello and Mumpower, 1986). Quantitative risk analysis can be traced to early religious ideas concerning the probability of an afterlife; Plato addressed it in Phaedo in the fourth century B.C. Arnobius the Elder, a major church figure in fourth-century A.D. North Africa, proposed a two-by-two matrix to analyze the alternatives of God's existence/nonexistence versus accepting Christianity/remaining a pagan (Covello and Mumpower, 1986). In 1518, in response to a cash flow problem, the Catholic Church sanctioned usury as long as there was risk on the part of the lender (this ruling was rescinded in 1586, to be re-sanctioned later in 1830) (Grier, 1981). The formal mathematical theory of probability, however, did not appear until the time of Pascal, who introduced probability theory in 1657.
Following in Pascal's steps, the late seventeenth century witnessed a surge of related intellectual activity by authors such as Arbuthnot, who argued that the probabilities of an event's causes can be calculated; Graunt, in 1662, and Halley, in 1693, who published life expectancy tables; and Hutchinson, who studied the trade-offs between probability and utility in risky situations. In the early eighteenth century, in 1738, Cramer and Bernoulli proposed solutions for the Saint Petersburg paradox.2 In 1792, Laplace analyzed the probability of death as a function of smallpox vaccination (Covello and Mumpower, 1986). In the nineteenth century, Von Bortkiewicz examined ten-year records of Prussian soldiers' mortality to study the event of death by horse kicks (Campbell, 1980). Before the century was out, with the need to quantify risk, authors, mainly in the fields of economics, finance, and accounting, had begun to explore the relationship between uncertainty, probability, and risk. The next section shows how that task was revealed to be not as simple and straightforward a matter as it appears (McGoun, 1995). [Footnote 2: In probability theory and decision theory, the St. Petersburg paradox exhibits a random variable whose value is probably very small yet which has an infinite expected value. This poses a situation where decision theory may superficially appear to recommend a course of action that no rational person would be willing to take; that appearance evaporates when utilities are taken into account. The paradox is named for Daniel Bernoulli's original solution, published in 1738 in the Commentaries of the Imperial Academy of Science of Saint Petersburg.] Definition of Risk Historically, the concept of risk has been far from easy to define.
Its comprehension and quantification have challenged and confused professionals including philosophers, psychologists, economists, social scientists, physical scientists, natural scientists, mathematicians, and engineers (Haimes, 2004). The association between uncertainty, probability, and risk was a matter of great debate in the late nineteenth and early twentieth centuries (McGoun, 1995). Haynes (1895) argued that for many risks, historical relative-frequency statistics are unreliable or nonexistent, and that risk exists even when statistics do not. Ross (1896) distinguished between variation, as the unquantifiable descriptor of possible outcomes with no statistics, and uncertainty as the consequence of this variation; he argued that the latter is equivalent to risk. Willett (1901) differentiated among probability or chance, uncertainty, and risk: uncertainty is the degree of rational ambivalence between two alternatives and also the deviation of a probability from its normal value; risk, related to both uncertainty and probability, is the quantification of uncertainty in the form of mean absolute deviation. Willett also defined risk and uncertainty as the objective and subjective aspects of apparent variability, where the latter is the psychological effect of the former. Fisher (1906) described probability as a measure of ignorance; as such it is subjective and, in many cases, undefined; it is information necessary to assess risk, but it is not a measure of risk (McGoun, 1995). Lavington (1912) and Pigou (1920) were the first to explicitly define risk, as measurable, as the dispersion of a relative frequency distribution. Knight (1921) defined risk as "measurable uncertainty," where objective probability exists, and uncertainty, or "unmeasurable uncertainty," as the case where no quantitative distribution, or only subjective probability, exists.
Lavington (1925) defined risk as the probability of a loss and uncertainty as the confidence in that probability. Florence (1929) elicited three values associated with uncertainty and risk: the value of the probability itself, the meaning of that value, and its quality or precision as objective statistics or subjective confidence. Fisher (1930) rejected the probabilistic measure of risk, which he took as a synonym of uncertainty or lack of knowledge; by 1930, the probabilistic measurement of risk had been rejected (McGoun, 1995). It was with Fisher (1930) and Hicks (1931), in economic research, that the probabilistic measure of risk returned. Makower and Marschak (Makower and Marschak, 1938; Marschak, 1938) continued the movement toward an objective probabilistic measure of risk. In 1944, Domar and Musgrave (1944) acknowledged the skepticism toward the probabilistic measurement of risk; however, they recognized that, given the need to quantify values and their associated risks and the absence of a satisfactory alternative approach to the subject, this method should be adopted. Lawrence (1976) defined risk as a measure of the probability and severity of adverse effects. He also distinguished between risk and safety: whereas measuring risk is an empirical, quantitative, and scientific activity, judging safety is judging the acceptability of risk, a normative, qualitative, political activity. His definition was adopted by Haimes (2004), who stated that risk is a complex composition of two components: (1) "real" potential adverse effects and consequences, and (2) an "imagined" mathematical intangible construct of probability. The Principles, Standards, and Procedures (PSP) published in 1980 by the U.S. Water Resources Council (USWRC, 1980) made a clear distinction among risk, uncertainty, imprecision, and variability, as follows. In a situation of risk, the potential outcomes can be described using a reasonably well-known probability distribution.
In situations of uncertainty, the potential outcomes cannot be described in terms of objectively known probability distributions or subjective probabilities. In situations of imprecision, the potential outcomes cannot be described in terms of objectively known probability distributions, but can be estimated by subjective probabilities. Finally, variability is the result of inherent fluctuations or differences in the quantity concerned. The PSP identified two major sources of risk and uncertainty: (1) errors in measuring the variables of complex natural, social, and economic situations, and (2) the unpredictability of future events that are subject to random influences. Kaplan and Garrick (1981) defined risk as a set R = {⟨S_i, L_i, X_i⟩}, where S_i, L_i, and X_i denote the risk scenario i, its likelihood, and its damage vector, respectively. Subsequently, Kaplan (1991; 1993) added a subscript c, writing R = {⟨S_i, L_i, X_i⟩}_c, to indicate that the set of scenarios should be complete. He also added the idea of a "success" or "as planned" scenario S_0; the risk of a scenario S_i is then visualized as the deviation from S_0. This idea matured into the theory of scenario structuring (TSS) (Kaplan et al., 2001; Kaplan et al., 1999). Haimes (2004) refined this risk definition as R_P = {⟨S_i, L_i, X_i⟩}_P, where R_P is an approximation to R based on the partition P of the underlying risk space, stating that the scenarios S_i are finite in number, disjoint, and complete. Lave (1986) defined a hazard as some undesirable event that might occur; the probability of occurrence of a hazard is how frequently this hazard would be expected to occur. He defined risk as the expected loss, or the sum, over all possible hazards, of the product of each hazard's loss and its probability of occurrence. Morgan and Henrion (1990) defined probability as a formal quantification of uncertainty.
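As a minimal sketch of the Kaplan-Garrick triplet definition above (the scenario names, probabilities, and damages are invented for illustration, and a scalar damage stands in for the damage vector X_i), the risk set and Lave's expected-loss summary of it can be encoded directly:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One element <S_i, L_i, X_i> of the Kaplan-Garrick risk set."""
    description: str   # S_i: what can go wrong
    likelihood: float  # L_i: probability of the scenario
    damage: float      # X_i: damage (a scalar here; a vector in general)

# A complete, disjoint set of scenarios (probabilities sum to 1),
# including the "as planned" success scenario S_0 with zero damage.
risk_set = [
    RiskScenario("as planned (S_0)", 0.90, 0.0),
    RiskScenario("minor supply shortfall", 0.08, 10.0),
    RiskScenario("severe drought", 0.02, 200.0),
]

assert abs(sum(s.likelihood for s in risk_set) - 1.0) < 1e-9

# Lave's (1986) expected-loss summary of the same set:
expected_loss = sum(s.likelihood * s.damage for s in risk_set)
assert abs(expected_loss - 4.8) < 1e-9  # 0.08*10 + 0.02*200
```

Note how the same set of triplets supports both views: the full set describes the risk, while the single expected-loss number summarizes (and, per the critiques above, obscures the extreme-event structure of) it.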
Morgan and Henrion distinguished between the classical, objective, or frequentist view and the Bayesian, subjective, or personalistic view of probability; the former defines the probability of an event occurring in a particular trial as the frequency with which it occurs in a long sequence of similar trials, while the latter defines the probability of an event as the degree of belief a person has that it will occur, given all the relevant information known to that person at the time, so that it is a function not only of the event but also of the state of information. Note that, regardless of the view, probability assignments must be consistent with the axioms of probability. In his book Technical Risk Management, Michaels (1996) distinguished among the terms hazard, peril, and risk. A hazard is a condition or action, following a decision, which may result in perilous conditions. A peril is the undesirable event resulting from a hazard. The peril probability is the probability of occurrence of that peril. The number of possible hazard-peril combinations is an indication of the complexity of the system. In a system with n = 4 hazard factors, such as product, process, intrinsic, and extrinsic hazard factors, and four impacts, such as on quality, functionality, affordability, and profitability, there exist 4 × Σ_{r=1}^{n} n! / (r! (n − r)!), in this case sixty, hazard-peril combinations. Peril recovery is the corrective-action cost and time needed to recover from that peril. In that framework, Michaels (1996) defined risk as the uncertainty surrounding the loss from a given peril, and risk cost and risk time as the products of the corrective-action cost and time, respectively, for recovery from a peril and the probability of that peril; they are probabilistic measures of the risk to the cost and time, respectively, of corrective actions. Risk exposure is the sum of the risk costs and risk times for a given system. Risk avoidance or reduction is the action taken to reduce risk exposure.
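The combination count can be checked directly; the reading of the formula used here (each of the four impacts paired with every non-empty subset of the hazard factors) is my interpretation of Michaels' expression:

```python
from math import comb

def hazard_peril_combinations(n_factors: int, n_impacts: int) -> int:
    """Count hazard-peril combinations as n_impacts * sum_{r=1}^{n} C(n, r),
    i.e., each impact paired with any non-empty subset of the hazard factors."""
    return n_impacts * sum(comb(n_factors, r) for r in range(1, n_factors + 1))

# Four hazard factors and four impacts give 4 * (2**4 - 1) = 60 combinations,
# matching the "sixty" quoted in the text.
assert hazard_peril_combinations(4, 4) == 60
```

The sum of binomial coefficients over all non-empty subset sizes collapses to 2**n − 1, which is why the count grows quickly with the number of hazard factors.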
Risk recovery is the corrective action taken to offset perturbations caused by a materialized peril. A risk determinant factor is a quantified attribute, or risk determinant, that serves as a measure of the factors contributing to risk exposure. Risk metrics are a system of risk determinant factors quantifying risk exposure. Michaels also categorized risk as (1) objective or subjective, where the former can be described by statistics and the latter is a reflection of attitudes and states of mind, subject to perceptions; and (2) speculative or pure, where in the former there is uncertainty about both hazards and perils, while in the latter there is uncertainty only as to whether a hazard leading to a peril will occur, not as to whether the resulting peril produces a loss. Holton (2004a), in the context of financial and economic analysis, stated on his Contingency Evaluation website that risk has true meaning only when it refers to the possibility of incurring loss, mainly financial, directly or indirectly. In his book (Holton, 2003) and publications (Holton, 1997; Holton, 2004b), he defined uncertainty as ignorance, a personal experience, and risk as exposure to uncertainty. As such, risk has two components, uncertainty and exposure to it; if both are not present concurrently, there is no risk. Holton then argued that since ignorance is a personal experience, risk is necessarily subjective. Probability is a metric of uncertainty; at best, it quantifies perceived uncertainty. There are many more efforts to define risk; a further review of definitions, however, is superfluous for the purpose of this work. In the next section, we present some frameworks of risk management. Definition of Risk Management There are almost as many definitions of Risk Management as there are management disciplines.
Everybody makes decisions: scholars and analysts in the fields of economics, finance, and accounting; psychology and the social sciences; biology, toxicology, and medicine; mathematics and computer science; and engineering have all addressed the field of risk management. What all disciplines have in common, though, is the definition of RM as a feedback process consisting of several steps. The following definitions are based on the corresponding ones, where available, presented in the previous section. Kaplan and Garrick (1981) defined risk assessment as the process of identifying what can go wrong, the likelihood of it going wrong, and the consequences. Haimes (1991) used this definition to define risk management as a process that builds on risk assessment and identifies the available management alternatives and solutions, their costs, benefits, and risk trade-offs, and the impacts of the management decisions on future options. It requires the synthesis of the empirical and the normative, the quantitative and the qualitative, and the objective and the subjective. Lave (1986) defined the risk management process as a cyclical succession of (1) risk identification, or the identification of hazards and their associated risks; (2) risk assessment, or the identification of the quantitative magnitudes of the hazards; (3) management options, or the identification of goals; (4) decision analysis, or the identification of alternatives; and (5) implementation and monitoring. Michaels (1996) defined RM as the executive function of controlling hazards and the consequential perils that cause some kind of loss. Its aim is the reduction of risk exposure rather than recovery; hence it stresses risk avoidance as the first line of defense and risk recovery as a backup. He divided RM into three concurrent processes: (1) risk identification, (2) risk quantification, and (3) risk control.
The first step includes determining the scope of the investigation, establishing the baseline model, and identifying the hazards and perils. The second consists of deriving the risk hierarchy, selecting the risk metrics and formulation, establishing a risk model, calculating risk exposure, and estimating the contingency reserve. The third includes establishing and funding a risk organization, propagating best practices, implementing audits, initiating motivational programs, and rewarding performance. Haimes (2004) defined risk assessment and management as two overlapping processes, using two perspectives: qualitative-normative and quantitative-empirical. In his qualitative-normative perspective, Haimes defined risk assessment as the set of five logical, systemic, and well-defined activities of (1) risk identification; (2) risk modeling, quantification, and measurement; (3) risk evaluation; (4) risk acceptance and avoidance; and (5) risk management. Haimes distinguished risk identification, the first step of RM, as the process of identifying the sources and nature of risk and the uncertainty associated with it; this stage attempts to uncover and describe all risk-based events that might occur, be they of natural (hydrologic, meteorological, and/or environmental) or man-made (demographic, economic, technological, institutional, and/or political) causes. The second step, risk modeling, quantification, and measurement, involves assessing the likelihood of occurrence of adverse events through objective or subjective probabilities and modeling the causal relationships among the different sources of risk, or adverse events, and their impacts. In other words, it involves the quantification of the input and output relationships of the random and decision variables and their relationships to the state variables, objective functions, and constraints. In the third step of RM, risk evaluation, various policy options are formulated, developed, and optimized.
Trade-offs among risks, benefits, and costs are generated and evaluated. The fourth step, risk acceptance and avoidance, is the decision-making step, where the level of acceptability of risk is determined by evaluating the considerations that fall beyond the modeling and quantification process; it answers the question "how safe is safe enough?" The fifth and final step, risk management, is the execution step, where the chosen policy option is implemented. In his quantitative-empirical perspective, Haimes defined risk assessment as the set of three major, though overlapping, activities: (1) information measurement, (2) model quantification and analysis, and (3) decision-making. Information measurement includes data collection and processing. Model quantification and analysis includes the quantification of risk and other objectives, the generation of Pareto-optimal solutions and their trade-offs, and the conduct of impact and sensitivity analyses. Finally, decision-making involves the interaction between analysts and decision-makers and the subjective judgment used in the selection of preferred policies. Haimes also defined total risk management as the process that harmonizes risk management with overall system management; it addresses hardware, software, human, and organizational failures involving all aspects of the system's life cycle: planning, design, construction, operation, and management. Finally, he defined risk-based decision-making as a decision-making process that accounts for uncertainties, through some process, in the formulation of policy options. Our Definition For the purpose of this dissertation, we do not dwell on the philosophical concerns associated with uncertainty, probability, and risk, mainly the concepts of their existence and the extent of their objectivity. We start with Holton's (1997; 2003; 2004a; 2004b) distinction among the terms metric, measure, and measurement. A metric is the designation of a tool.
An operation or algorithm that supports a metric is called a measure. The value obtained from applying a measure is a measurement. Hence, a measure is used to obtain a measurement of a metric; there may be many measures for one metric. We associate uncertainty with ignorance. We define stochasticity as a special type of uncertainty associated with randomness. For practicality, we classify uncertainty into two main groups, natural and man-made; natural uncertainty is associated with a natural system's components, while man-made uncertainty is associated with a man-generated system's components. Uncertainty is quantified using probability. We adopt the classical objective theory of statistics; we assume that probability exists and can be quantified. The methods of its generation, through historical information or mathematical algorithms, and their accuracy and precision are outside the scope of this work. Probability is a metric of uncertainty. We associate risk with loss: any type of loss resulting from a decision-making policy or action. Loss, or risk, is hence represented using risk metrics. The choice of a risk metric and measure depends on the problem at hand and the decision-maker's objectives and priorities. The description of risk is accomplished through measurement of the risk metric and its associated statistics. We define risk management, RM, as the processes of risk identification, risk estimation, risk evaluation, and risk monitoring. Risk identification consists of uncovering and describing the natural and man-made risk-based events that might occur. Risk estimation refers to the quantification of these events, their probabilities of occurrence, and their causal relationships. In the third step of RM, risk evaluation, various policy options are formulated, developed, and optimized, and trade-offs among risks, benefits, and costs are generated and evaluated; this definition is based on Haimes (2004). Finally, risk monitoring is the continuous repetition of the first three steps.
RM is a feedback process. Risk Management Techniques "To manage risk, one must measure it" is an adage that Haimes (2004) uses in his book Risk Modeling, Assessment, and Management. Public interest in the field of RM has expanded significantly during the last two decades, as an effective and comprehensive procedure that complements and supplements the management of almost all aspects of our lives. As federal and state legislators and regulatory agencies have been addressing the importance of the assessment and management of risk, industrial and government organizations in many management disciplines, such as financial management, health care, human safety, manufacturing, the environment, and physical infrastructure (e.g., water resources, transportation, and power generation), have all started incorporating risk analysis in their decision-making processes. In parallel, the scholarly community has witnessed an unprecedented release of articles covering the development of theory, methodology, and practical applications (Haimes, 2004). What are the methods, how are they distinguished, and how are they categorized? This section is devoted to the presentation of the main classes of RM techniques. Note that it is not meant to be an exhaustive reference for every available tool, sub-tool, and application developed in every discipline, but a comprehensive categorized overview of RM tools. As we undertake the task of classification, we are faced with the multiplicity of possible ways in which this task may be approached, depending on our interests and objectives. For the purpose of this dissertation, we base our classification, at the first level, on the concept of probability. Therefore, we classify risk-based decision-making methodologies into probability-based and non-probability-based techniques.
Note that the presented classes of methods are not mutually exclusive; in other words, in many instances their concepts may overlap, and multiple methods may be incorporated into one model, depending, as always, on the problem at hand and the decision-maker's objectives. We start this section by defining the general mathematical notation we use. Next, we describe non-probabilistic and probabilistic risk management techniques. The non-probabilistic methods presented are sensitivity analysis, decision-making criteria, the decision matrix, multi-objective optimization, and game theory. The probabilistic techniques presented are scenario analysis, moments and quantiles, decision trees, stochastic optimization, downside risk metrics, and utility theory. Mathematical Notation An uncertain parameter Z may be continuous or discrete. A continuous parameter can assume all values in a specified interval. A discrete parameter assumes different values with different associated probabilities. Either way, the uncertainty of Z is described via its statistics; such statistics are its mean, variance or standard deviation, and, at best, its probability density function, pdf, and cumulative distribution function, cdf. The pdf and cdf are represented by the functions f(Z) and F(Z), which are formalized differently for continuous and discrete parameters. In the following sections, we consider only cases of discrete parameters; we assume that continuous ones may be discretized. The occurrence of each value Z_j of Z, with probability p(s_j), is represented by a scenario s_j, where j = 1, 2, ..., J denotes the scenario's name or number. We denote by a_i the decision or action alternatives adopted by the decision-maker, where i = 1, 2, ..., I is the alternative's name or number. We also define the pair (a_i, s_j) as the outcome of the combination of the decision a_i and the scenario s_j. The risk resulting from a pair (a_i, s_j) is r_ij.
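The assumption that continuous parameters may be discretized into scenarios can be sketched as follows; this is a plain equal-probability quantile discretization, not a method prescribed by the text, and the normally distributed water-supply parameters are invented for illustration:

```python
from statistics import NormalDist

def discretize(dist, n_scenarios):
    """Equal-probability discretization of a continuous parameter Z:
    scenario s_j takes the midpoint quantile of the j-th probability slice,
    so every scenario has probability p(s_j) = 1 / J."""
    p = 1.0 / n_scenarios
    values = [dist.inv_cdf((j + 0.5) * p) for j in range(n_scenarios)]
    return values, [p] * n_scenarios

# A hypothetical uncertain water supply Z ~ Normal(mean=100, sd=15)
# reduced to J = 5 scenarios.
values, probs = discretize(NormalDist(mu=100.0, sigma=15.0), n_scenarios=5)

assert abs(sum(probs) - 1.0) < 1e-12
# By symmetry, the discretized mean reproduces the continuous mean.
assert abs(sum(p * z for p, z in zip(probs, values)) - 100.0) < 1e-9
```

More careful scenario generation methods (for instance, moment matching) are among those reviewed later in the chapter; this sketch only shows how a continuous Z becomes the discrete scenario set (Z_j, p(s_j)) used in the notation above.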
Finally, the parameter value Z_j, the corresponding scenario s_j, and the risk or loss r_ij share the same p(s_j), f(Z), and F(Z). Non-Probability-Based RM Techniques Although the management of risk generally connotes the quantification of risk through reliance on probability and statistics (Haimes, 2004), several risk-based decision-making methodologies do not require knowledge of probabilities. These methods include sensitivity analysis, decision-making criteria, the decision matrix, multi-objective optimization, and game theory; they are described in the following sections. Sensitivity analysis Sensitivity, or what-if, analysis is the process of varying model input parameters over a reasonable range (the range of uncertainty in the values of the model parameters) and observing the relative change in model response. The purpose of this type of analysis is to demonstrate the sensitivity of the model simulations to uncertainty in the values of the model input data. The sensitivity of one model parameter relative to other parameters is also demonstrated. Sensitivity analysis is also beneficial in determining the direction of future data collection activities: data for which the model is relatively sensitive would require further characterization, as opposed to data for which the model is relatively insensitive (Morgan and Henrion, 1990; Winston, 1994). Decision-making criteria Decision-making criteria are methods for handling risk and uncertainty in an optimization formulation without adhering to probability (Haimes, 2004; Winston, 1994). The three most common criteria are the pessimistic rule, the optimistic rule, and the Hurwicz rule. Pessimistic rule. Also called the maximin or minimax criterion because it consists of maximizing the minimum gain or minimizing the maximum loss; the rationale is that, by applying this rule, a decision-maker will at least realize the minimum gain or avoid the maximum loss. Its formulation is min_i max_j r_ij. Optimistic rule.
Also called the maximax or minimin criterion because it consists of maximizing the maximum gain or minimizing the minimum loss. Its formulation is $\min_i \min_j r_{ij}$.

Hurwicz rule. This rule is a compromise between the two extreme criteria through an index α, where 0 ≤ α ≤ 1, that specifies the degree of the decision-maker's optimism: the smaller the α, the greater the optimism. It is the linear combination of the minimax and minimin criteria, formulated as $\min_i \left[ \alpha \max_j r_{ij} + (1-\alpha) \min_j r_{ij} \right]$.

Analytic hierarchy process or decision matrix

The Analytic Hierarchy Process, AHP, is also called the decision matrix method, for the evident reason that it is based on a matrix ranking decisions against their corresponding attributes, such as risk. It is a semi-quantitative decision-making tool for situations where the attributes are not amenable to explicit quantification. The attributes are assigned weighting factors, and the decision option with the highest weighted sum of attributes is considered the best solution. Varying the assigned attribute weights serves as a sensitivity analysis (Haimes, 2004; Winston, 1994).

Utility and game theory

Utility is used to represent individual preferences and therefore to predict choice behavior. The theory of games uses utility to model strategic interactions between competing and conflicting decision-makers (Heap, 2004; Myerson, 1991). Both theories, their limitations, and alternatives are discussed in detail in Chapter 4.

Multiobjective optimization

Multiobjective optimization, MO, also known as Multi-Criteria Decision Making, MCDM, refers to optimization problems with several, possibly conflicting or competing, objectives. Objectives are weighted according to their priority. There are several methods for solving MO problems.
The most commonly used method is goal programming, in which the objectives are given goals ranked by weighting factors, and the problem is reduced to a single objective: the weighted minimization of the deviations from the assigned goals. Another method is the weighted-sum approach, in which the objectives are assigned weighting factors and combined into one objective, forming a single optimization problem. A third method is hierarchical optimization, in which the objectives are ranked in descending order of importance and each objective is then optimized individually, subject to a constraint that does not allow the optimum of the new function to deviate by more than a prescribed fraction from the optimum of the previous function. Other methods are the trade-off or ε-constraint method, the global criterion method, the distance function method, and min-max optimization (Haimes, 2004; Winston, 1994).

Probability-Based RM Techniques

This section describes the following probability-based RM techniques: scenario analysis, moments and quantiles, decision trees, stochastic optimization, downside risk metrics, and utility theory.

Scenario analysis

Scenario analysis combines sensitivity analysis with the expected value metric: the uncertain parameter is assigned different values, corresponding to different scenarios with associated probabilities, and the expected value of the scenario results is calculated (Haimes, 2004; Winston, 1994).

Moments and quantiles

The expected value operator E, mean, or first moment is a central metric that, for a discrete parameter, multiplies the parameter's values (such as losses) under the different scenarios by their corresponding probabilities of occurrence and then sums these products.
For example, in an optimization framework, the expected value of loss over all policy options is minimized; formally, for a discrete problem, $\min_i \sum_{j=1}^{J} p(s_j)\, r_{ij}$. The variance σ², or second central moment, and the standard deviation σ are measures of the dispersion of a parameter's values around its mean. For example, the variance of a discrete loss is $\sigma^2 = \sum_{j=1}^{J} \left( r_j - E(r) \right)^2 p(s_j) = E(r^2) - \left[ E(r) \right]^2$. A quantile is the generic term for any fraction that divides the values of a parameter, arranged in order of magnitude, into two specific parts. For example, the 90th percentile of the loss is the value at which F(Z) equals 90%, or 0.9; in other words, 90% of the losses lie below the value of the 90th percentile.

Decision trees

Decision trees are among the most used tools in risk-based decision-making; they rely on both a graphical, descriptive representation and an analytical, probability-based one (Haimes, 2004; Winston, 1994). The basic components of a decision tree are decision nodes, designated by squares; chance nodes, designated by circles; and consequences, designated by rectangles (Figure 2-2). Branches emanating from a decision node represent the various decisions or actions a_i to be investigated. Branches emanating from a chance node represent the various scenarios s_j with their associated probabilities p(s_j); at their ends are the consequences μ_ij associated with the action/scenario pair (a_i, s_j) (Figure 2-2).

Figure 2-2. Generic decision tree (adopted from Haimes, 2004)

Stochastic optimization

The theory and applications of Stochastic Programming, SP, appeared in the 1950s, when authors such as Beale, Dantzig, Charnes, and Cooper (Beale, 1955; Charnes and Cooper, 1959; Dantzig, 1955) recognized the need to incorporate uncertainty into Linear Programming, LP.
Stochastic optimization, SO, refers to optimization in the presence of uncertain parameters, with the uncertainty quantified statistically by continuous or discrete probability distributions. Depending on the way the uncertainty is expressed and modeled, SP models can be categorized as recourse problems, SPR, or chance-constrained problems, CCP; these methods are briefly explained below (Birge and Louveaux, 1997; Di Domenica et al., 2003).

Recourse optimization. Recourse optimization, RO, is also referred to as multistage optimization, MSO, or dynamic optimization, DO. Recourse is the ability to take corrective actions after an uncertain event has taken place. An example is the two-stage recourse problem; in simple terms, it involves choosing a variable to control what happens at the present time and taking some corrective, or recourse, action after an uncertain event occurs in the future.

Chance-constrained optimization. Chance-constrained optimization problems, CCP, or probabilistically constrained optimization problems, PCP, are optimization problems that involve statistical terms in their objectives and/or constraints.

Bayesian analysis

Bayesian analysis addresses uncertainty caused by incomplete understanding or knowledge. One technique is the Bayesian network, also known as a belief network, causal network, Bayesian net, qualitative Markov network, influence diagram, or constraint network. Bayesian networks use a graphical structure to represent the complex causal chain linking decisions and consequences via a sequence of conditional relationships: variables are represented by round nodes and the dependence between two variables by an arrow; dependence is quantified by a conditional probability distribution for the node at the head of the arrow, based on Bayes' formula.
The graphical network constitutes a description of the probabilistic relationships among the system's variables (Batchelor and Cain, 1999; Borsuk et al., 2004; Bromley et al., 2005).

Fuzzy sets

Fuzzy set theory was suggested (Zadeh, 1965) to deal with decision situations involving risk and uncertainty without using probabilities. It deals with situations characterized by imprecise information described by membership functions (Hatfield and Hipel, 1999).

Information gap

Information gap models, also known as convex models, define uncertainty as an information gap between what is known and what needs to be known for the decision-making process; their aim is to quantify and reduce this gap (Ben-Haim, 1997; Hatfield and Hipel, 1999).

Downside risk metrics

Under this category fall the second-, first-, and zero-order lower partial moments, LPM, the value-at-risk, and the conditional value-at-risk metrics. In the literature, these metrics are generally referred to as measures (Albrecht, 2003). Note that an LPM of order n is computed at some fixed target q and defined as the nth moment below q. Developed by Bawa (1975) and studied by Fishburn (1977), LPMs measure risk by a probability-weighted mean of deviations below the specified target level q; the higher the order n, the higher the implied risk aversion.

Second-order LPM or semivariance. The second-order LPM, or SLPM, is also referred to as the semivariance, SV; it describes the downside risk computed as the average of the squared deviations below the target. Formally, $\mathrm{SLPM} = \sum_{j=1}^{J} \max(q - r_j,\, 0)^2\, p(s_j)$.

First-order LPM. The first-order LPM, or FLPM, describes the downside risk computed as the average of the deviations below the target. Formally, $\mathrm{FLPM} = \sum_{j=1}^{J} \max(q - r_j,\, 0)\, p(s_j)$. It corresponds to risk-neutral behavior.

Zero-order LPM. The zero-order LPM, or ZLPM, describes the downside risk computed as the sum of the probabilities of values below the target; it coincides with the cumulative probability of q.
Formally, $\mathrm{ZLPM} = \sum_{j:\, r_j \le q} p(s_j) = F(q)$.

Value-at-Risk. Value-at-risk is denoted by VaR_α. In simple terms, VaR_α is a quantile. If the cumulative probability of q is denoted F(q) = p, then VaR_α is the inverse of the ZLPM, $\mathrm{VaR}_\alpha = F^{-1}(p) = q$, and is defined as the maximum potential loss at a confidence level α = p. A detailed and more rigorous discussion of VaR_α is presented later in this chapter.

Conditional Value-at-Risk. Conditional value-at-risk is denoted by CVaR_α; it is equivalent to the expected shortfall, ES. Generally, it measures the expected value of the losses exceeding VaR_α: at a confidence level α, $\mathrm{CVaR}_\alpha = E(r \mid r \ge \mathrm{VaR}_\alpha)$. A detailed and more rigorous discussion of CVaR_α is presented later in this chapter.

The Different RM Methods: A Discussion

In the past, risk was investigated using a variety of ad hoc tools, such as sensitivity and scenario analysis. Although these techniques allow the observation of model response to changes in an uncertain parameter using a deterministic model, they provide only some intuition of risk. Risk was also commonly quantified through the mathematical expected value concept. Although the expected value provides a valuable measure of risk, it fails to highlight extreme-event consequences: it favors events of low consequence and high probability over adverse events of high consequence and low probability, regardless of the latter's potentially catastrophic and irreversible impacts. From the perspective of public policy, low-probability events like dam failure, floods, water contamination, or water shortage cannot be ignored; yet this is exactly what the use of the expected value would ultimately lead to. The precommensuration by the analyst of these low-probability, high-damage events with high-probability, low-damage events into one expectation function markedly distorts the relative importance of these events as they are viewed, assessed, and evaluated by the decision-makers (Haimes, 2004).
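This commensuration problem can be made concrete with a small sketch that uses the LPM definitions above; the two discrete loss distributions and the target q are illustrative numbers, not taken from the text:

```python
def expected(losses, probs):
    # expected loss: sum of p(s_j) * r_j over the scenarios
    return sum(p * r for r, p in zip(losses, probs))

def lpm(losses, probs, q, n):
    # nth-order lower partial moment about target q (deviations below q);
    # the zero-order case coincides with the cdf evaluated at q
    if n == 0:
        return sum(p for r, p in zip(losses, probs) if r <= q)
    return sum(p * max(q - r, 0) ** n for r, p in zip(losses, probs))

# Two illustrative discrete loss distributions with the same expected loss
a_losses, a_probs = [0, 10], [0.9, 0.1]      # moderate, fairly likely loss
b_losses, b_probs = [0, 100], [0.99, 0.01]   # rare but catastrophic loss

# The expectation cannot tell them apart...
assert expected(a_losses, a_probs) == expected(b_losses, b_probs) == 1.0

# ...although the worst case of b is ten times worse than that of a,
assert max(b_losses) == 10 * max(a_losses)

# and the zero-order LPM at a target of q = 5 already distinguishes them
assert lpm(a_losses, a_probs, 5, 0) == 0.9
assert lpm(b_losses, b_probs, 5, 0) == 0.99
```

Both distributions report the same expected loss of 1.0 unit, while only the tail-aware quantities reveal the low-probability, high-consequence event hidden in the second one.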
Other methods, developed to surpass the drawbacks of these quasi-deterministic techniques, relied on decision trees and hierarchical approaches in which uncertainty is introduced via discrete probabilities of the uncertain parameters. These methods fail to generate robust and efficient solutions in highly uncertain environments with a large number of dynamic and correlated stochastic factors and multiple types of risk exposure. The explicit introduction of statistical central moments, such as the variance and standard deviation, which have well-established calculation methods, into stochastic simulation and/or optimization approaches, such as chance-constrained programming, allowed for some control of uncertainty and the associated risks. Central moments, however, do not account for fat-tailed distributions and penalize positive and negative deviations from the mean equally (Cheng et al., 2003). The introduction of the recourse and multistage concepts into stochastic optimization allows the decision-making process to be separated so as to accommodate the information, and associated uncertainties, available at different time steps. Recourse, however useful a concept in practical decision-making applications, provides no means to control risk, most importantly downside risk, the risk associated with low-probability, high-loss events. In the past ten years, a new method, value-at-risk, VaR_α, introduced by the leading bank J.P. Morgan, became popular in industry regulation. Unlike the earlier methods, VaR_α provides a downside risk measure with an associated probability. VaR_α, however, has several shortcomings. The reduction of the risk information to this single number may lead to misleading interpretations of results.
VaR_α does not provide any information about the extent and distribution of the losses that exceed it: for the same VaR_α, we can have very different distribution shapes with different associated maximum losses. Hence, it is incapable of distinguishing situations where the losses beyond it are only a little bit worse from situations where they could well be overwhelming. In addition, recent research on the axiomatic characterization of risk metrics revealed that VaR_α is not coherent. The coherence concept was first introduced by Artzner, Delbaen, Eber, and Heath (Artzner et al., 1999), who defined a coherent measure of risk as one that satisfies the following four axioms:

1. Subadditivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$
2. Positive homogeneity: if $\lambda \ge 0$, $\rho(\lambda X) = \lambda \rho(X)$
3. Monotonicity: if $X \le Y$, $\rho(X) \ge \rho(Y)$
4. Translation invariance: $\rho(X + w) = \rho(X) - w$

where ρ is a risk metric, X and Y are risky positions, and w is a sure amount. A major drawback of VaR_α is that it does not satisfy axiom 1, i.e., it is not (with a few exceptions) subadditive. In practical terms, this signifies that the total risk associated with a certain project may be larger than the sum of the individual risks resulting from its different sources (Cheng et al., 2003). In addition, VaR_α is not convex and hence may have many local extremes, which makes it unstable and difficult to handle mathematically. An alternative measure, developed to overcome the limitations associated with VaR_α, is the conditional value-at-risk, CVaR_α (Rockafellar and Uryasev, 2000; Rockafellar and Uryasev, 2002). Rockafellar and Uryasev defined expectation-bounded risk measures as those satisfying axioms 1, 2, 4, and the following axiom 5:

5. $\rho(X) > E(-X)$ if X is nonconstant, and $\rho(X) = E(-X)$ if X is constant.

If axiom 3, monotonicity, is also satisfied, the risk measure is both coherent and expectation-bounded. CVaR_α is both coherent and expectation-bounded.
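The failure of subadditivity can be seen in a small numeric sketch. The classic construction uses two independent positions, each losing 100 units with probability 0.04; the numbers are illustrative, not from the text:

```python
from itertools import product

def var(dist, alpha):
    # VaR as the smallest loss whose cumulative probability reaches alpha;
    # dist is a dict mapping loss values to probabilities
    cum = 0.0
    for loss in sorted(dist):
        cum += dist[loss]
        if cum >= alpha - 1e-12:
            return loss
    return max(dist)

# One position: lose 100 with probability 0.04, otherwise nothing
pos = {0: 0.96, 100: 0.04}

# The sum of two independent such positions
port = {}
for (l1, p1), (l2, p2) in product(pos.items(), pos.items()):
    port[l1 + l2] = port.get(l1 + l2, 0.0) + p1 * p2

alpha = 0.95
assert var(pos, alpha) == 0     # each position alone looks riskless at 95%
assert var(port, alpha) == 100  # P(portfolio loss >= 100) = 1 - 0.96**2 > 0.05

# Subadditivity rho(X + Y) <= rho(X) + rho(Y) fails here: 100 > 0 + 0
```

For these same distributions, CVaR at the same confidence level would satisfy the subadditivity inequality, consistent with its coherence.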
It is a simple representation of risk that accounts for the losses beyond VaR_α, making it more conservative than VaR_α. CVaR_α is also stable, as it has integral characteristics: it is continuous and consistent with respect to the confidence level α. CVaR_α is also a subadditive, convex function with respect to the decision variables, allowing the construction of efficient optimization algorithms; it can be optimized using linear programming techniques, which makes it computationally efficient.

Value-at-Risk and Conditional Value-at-Risk

To set the ground for the formal definition of CVaR_α, we start with the formal definition of VaR_α. VaR_α is a quantile; it has three components: a time period, a confidence level, and a loss amount. It answers the question: with a given confidence level α, what is our maximum loss, VaR_α, over a specified time period T? Note that there is always a probability 1 - α that the actual loss will be larger. What does this mean? In reference to Figure 2-3, which presents the probability and cumulative distribution functions of two loss functions (solid and broken lines), we can read that with a confidence α (60 percent), we expect that our worst loss over time T will not exceed VaR_α (5.5 loss units, for both loss functions); there is a probability 1 - α (40 percent) that this measure may be exceeded, in the right tail of the distribution (losses greater than 5.5 units; shaded area). Increasing the confidence level increases VaR_α by moving into this right tail; for a confidence level α of 95 percent, VaR_α corresponds to 8 and 9.5 units for the broken-line and solid-line loss functions, respectively. In reference to Figure 2-3, CVaR_α quantifies the losses in the right tail of the distribution, the shaded area.
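This reading can be mimicked numerically; a minimal sketch on an illustrative simulated loss sample (not the distributions plotted in Figure 2-3):

```python
import random

random.seed(1)
# Illustrative sample of 10,000 simulated losses over some period T
losses = sorted(random.gauss(5, 2) for _ in range(10_000))

alpha = 0.95
k = int(alpha * len(losses))
var_95 = losses[k]                            # empirical 95% quantile: VaR
cvar_95 = sum(losses[k:]) / len(losses[k:])   # mean of the losses beyond VaR

# CVaR averages the right tail, so it is at least as conservative as VaR
assert cvar_95 > var_95
```

Raising α moves the quantile into the right tail and both quantities increase, just as described for Figure 2-3.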
Most importantly for applications, CVaR_α can be expressed by a minimization formula that can be incorporated into optimization problems designed to minimize risk or shape it within bounds, such as the minimization of CVaR_α subject to a constraint on loss, the minimization of loss subject to a constraint on CVaR_α, and the maximization of a utility function that balances CVaR_α against loss. But how do these concepts translate mathematically? And how is this risk measure, CVaR_α, calculated and controlled?

Figure 2-3. A visualization of the VaR and CVaR concepts

Note that the following mathematical formulation is limited to losses with discrete distribution functions; those with continuous distribution functions can be discretized. For the mathematical formulation using losses with continuous distributions, refer to the literature (Rockafellar and Uryasev, 2000; Rockafellar and Uryasev, 2002). Let $L(x, \xi)$ be a loss function depending on a decision vector x and a stochastic vector ξ, and let $\Psi(x, \zeta)$ be its cumulative distribution function. Assume that the behavior of the stochastic parameter can be represented by a discrete probability distribution function, from which a scenario model can be constructed. Index the scenarios s = 1, ..., S corresponding to the realizations ξ_s, with corresponding probabilities p_s, such that the losses are listed in increasing order $L(x, \xi_1) \le \ldots \le L(x, \xi_S)$. In this setting, we can define the following terms:

* VaR_α is the value of L(x, ξ) corresponding to the confidence level α. Formally, $\mathrm{VaR}_\alpha = \min \{\zeta : \Psi(x, \zeta) \ge \alpha\} = L(x, \xi_{s_\alpha})$.

* CVaR_α^+, or upper CVaR_α, is the expected value of L(x, ξ) strictly exceeding VaR_α; it is also called the mean excess loss and the expected shortfall. Formally, for equally probable scenarios, $\mathrm{CVaR}_\alpha^+ = E[L(x,\xi) \mid L(x,\xi) > \mathrm{VaR}_\alpha] = \frac{1}{S - s_\alpha} \sum_{s = s_\alpha + 1}^{S} L(x, \xi_s)$.

* CVaR_α^-, or lower CVaR_α, is the expected value of L(x, ξ) weakly exceeding VaR_α, i.e., of the values of L(x, ξ) that equal or exceed VaR_α; it is also called the tail VaR. Formally, for equally probable scenarios, $\mathrm{CVaR}_\alpha^- = E[L(x,\xi) \mid L(x,\xi) \ge \mathrm{VaR}_\alpha] = \frac{1}{S - s_\alpha + 1} \sum_{s = s_\alpha}^{S} L(x, \xi_s)$.

* Ψ_α is the probability that L(x, ξ) does not exceed VaR_α. Formally, $\Psi_\alpha = \Psi(x, \mathrm{VaR}_\alpha)$, where $\alpha \le \Psi_\alpha \le 1$.

* λ is a weighting factor. Formally, $\lambda = (\Psi_\alpha - \alpha)/(1 - \alpha)$, where $\alpha \le \Psi_\alpha \le 1$ and $0 \le \lambda \le 1$.

* Finally, CVaR_α is the λ-weighted average of VaR_α and CVaR_α^+. Formally, $\mathrm{CVaR}_\alpha = \lambda\, \mathrm{VaR}_\alpha + (1 - \lambda)\, \mathrm{CVaR}_\alpha^+$.

Having defined all the terms, we can distinguish four cases in the calculation of CVaR_α. These cases are demonstrated in Figures 2-4(a) through 2-4(d). The examples provided correspond to cases with 6 scenarios, s = 1, ..., 6, and 4 scenarios, s = 1, ..., 4, with equal probabilities $p_1 = \ldots = p_6 = 1/6$ and $p_1 = \ldots = p_4 = 1/4$, respectively:

1. α corresponds to the cumulative probability of one of the scenarios s_α: $\Psi_\alpha = \alpha$, $\lambda = 0$, $\mathrm{VaR}_\alpha = L(x, \xi_{s_\alpha})$, and $\mathrm{VaR}_\alpha < \mathrm{CVaR}_\alpha^- < \mathrm{CVaR}_\alpha = \mathrm{CVaR}_\alpha^+$.

2. α does not correspond to the cumulative probability of one of the scenarios: $\Psi(\mathrm{VaR}_\alpha) > \alpha$, $\lambda > 0$, $\mathrm{VaR}_\alpha = L(x, \xi_{s_\alpha})$, and $\mathrm{VaR}_\alpha < \mathrm{CVaR}_\alpha^- < \mathrm{CVaR}_\alpha < \mathrm{CVaR}_\alpha^+$.

3. α does not correspond to the cumulative probability of one of the scenarios and is greater than that of the next-to-last scenario: $\Psi(\mathrm{VaR}_\alpha) = 1 > \alpha$, $\lambda = 1 > 0$, $\mathrm{VaR}_\alpha = L(x, \xi_S)$, and $\mathrm{VaR}_\alpha = \mathrm{CVaR}_\alpha^- = \mathrm{CVaR}_\alpha$, with $\mathrm{CVaR}_\alpha^+$ undefined.

4. α corresponds to the cumulative probability of the last scenario: $\Psi_\alpha = 1 = \alpha$, λ is undefined, $\mathrm{VaR}_\alpha = L(x, \xi_S)$, and $\mathrm{VaR}_\alpha = \mathrm{CVaR}_\alpha^- = \mathrm{CVaR}_\alpha$, with $\mathrm{CVaR}_\alpha^+$ undefined.

A major issue in decision-making under uncertainty is the representation of the underlying uncertainty. Usually, we are faced with either a continuous distribution or a large amount of data, making the problem too complex or too large to solve regardless of the algorithm or computing capacity.
Hence, we need to pass from the continuous distribution, or data, to a discrete distribution with a small enough number of realizations, or scenarios, for the stochastic program to be solvable, yet a large enough number of scenarios to represent the underlying continuous distribution or data as closely as possible (Dupacova et al., 2000b; Dupacova et al., 2003).

Scenario Tree

The process of creating this discrete distribution is called scenario generation; it results in a scenario tree (Di Domenica et al., 2003; Dupacova et al., 2000b; Hoyland et al., 2003; Hoyland and Wallace, 2001). Formally, the uncertainty in the model is represented by the parameter ξ with a probability density function, pdf, f(ξ) and a corresponding cumulative distribution function, cdf, F(ξ). The true pdf f(ξ) is approximated by a discrete probability function, or mass distribution function, mdf, denoted P(ξ), concentrated on a finite number of scenarios s = 1, ..., S corresponding to the stochastic parameter ξ_s, with corresponding probabilities $p_s = P(\xi_s)$, such that $\sum_{s=1}^{S} p_s = 1$. The scenario tree for a two-stage stochastic problem with recourse is illustrated in Figure 2-5.

Figure 2-4. Measure of CVaR_α in the four cases: (a) Ψ_α = α = 4/6, λ = 0; (b) Ψ_α = 8/12 > α = 7/12, λ = 1/5; (c) Ψ_α = 1 > α = 7/8, λ = 1, CVaR_α^+ undefined; (d) Ψ_α = α = 1, λ undefined, CVaR_α^+ undefined
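The four cases can be checked with a short implementation of the λ-weighted definition; the equally probable loss values below stand in for the illustrative f-values of Figure 2-4:

```python
def var_cvar(losses, probs, alpha):
    # lambda-weighted CVaR for a discrete loss distribution:
    # CVaR = lam*VaR + (1 - lam)*CVaR_plus, lam = (Psi - alpha)/(1 - alpha);
    # when no loss strictly exceeds VaR, CVaR = VaR (cases (c) and (d))
    pairs = sorted(zip(losses, probs))
    cum = 0.0
    for k, (loss, p) in enumerate(pairs):
        cum += p
        if cum >= alpha - 1e-12:
            psi = cum                        # Psi_alpha
            tail = pairs[k + 1:]             # losses strictly beyond VaR
            tail_p = sum(q for _, q in tail)
            if tail_p == 0:
                return loss, loss            # CVaR+ undefined, CVaR = VaR
            cvar_plus = sum(l * q for l, q in tail) / tail_p
            lam = (psi - alpha) / (1 - alpha)
            return loss, lam * loss + (1 - lam) * cvar_plus
    raise ValueError("alpha must lie in (0, 1]")

six, p6 = [1, 2, 3, 4, 5, 6], [1 / 6] * 6
four, p4 = [1, 2, 3, 4], [1 / 4] * 4

v, c = var_cvar(six, p6, 4 / 6)    # case (a): Psi = alpha, lam = 0
assert v == 4 and abs(c - 5.5) < 1e-9

v, c = var_cvar(six, p6, 7 / 12)   # case (b): lam = 1/5, CVaR = 1/5 VaR + 4/5 CVaR+
assert v == 4 and abs(c - 5.2) < 1e-9

v, c = var_cvar(four, p4, 7 / 8)   # case (c): lam = 1, CVaR = VaR
assert (v, c) == (4, 4)

v, c = var_cvar(four, p4, 1.0)     # case (d): lam undefined, CVaR = VaR
assert (v, c) == (4, 4)
```

With loss values 1 through 6, case (b) reproduces the figure's combination 1/5 f4 + 2/5 f5 + 2/5 f6 = 5.2.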
Scenario generation

The process of scenario generation is done through the discretization of a continuous process, the aggregation of a discrete process, or internal sampling. In the following paragraphs we provide an overview of the first process, i.e., the methods used for the discretization, or quantization, of continuous distributions.

Figure 2-5. Scenario tree for a two-stage stochastic problem with recourse (first stage t = 1, ..., k; second stage t = k+1, ..., T; branches s = 1, ..., S)

Aggregation, or reduction, processes consist of deleting scenarios or data from an already existing large collection. An overview of the reduction methods is outside the scope of this work; for examples, refer to the literature (Chen et al., 1997; Consigli and Dempster, 1996; Dupacova, 1996; Dupacova et al., 2000a; Dupacova et al., 2003; Wang, 1995). Internal sampling methods sample the scenarios during the solution procedure, without using a pre-generated scenario tree; some of these methods are stochastic and L-shaped decompositions and stochastic quasi-gradient methods. The reader is referred to the literature (Casey and Sen, 2002; Dantzig and Infanger, 1992; Dempster and Thompson, 1999; Ermoliev and Gaivoronski, 1992; Higle and Sen, 1991; Infanger, 1992; Infanger, 1994).

Discretization

Discretization, or quantization, of a continuous distribution function is the process of constructing a discrete distribution function from this continuous distribution. Several methods have been developed; they can be classified into four main groups: Monte Carlo simulation, bracket methods, moment-matching methods, and optimization methods (Dagpunar, 1988; Hoyland et al., 2003; Pfeifer et al., 1991). One of the most widely used techniques to generate discrete values from a continuous distribution is Monte Carlo simulation, which draws randomly from the distribution of a parameter.
Monte Carlo simulation requires a sequence of random numbers, usually provided by random number generators, RNG. RNG may be broadly classified as mixed linear congruential, multiplicative linear congruential, and general linear congruential generators, in addition to Fibonacci, Tausworthe, shuffled, and portable generators. For univariate distributions, some general methods for generating random numbers are the inversion, composition, stochastic model, envelope rejection, band rejection, ratio-of-uniforms, Forsythe, alias rejection, and polynomial sampling methods. The applicability and degree of suitability of each method vary with the type of distribution (Beaumont, 1986; Dagpunar, 1988; Pfeifer et al., 1991). The simplest discrete approximations are the bracket methods. There are two traditionally used bracket methods: the bracket-median and bracket-mean methods. In the bracket-median approximation, the cdf scale is divided into a number of equal intervals, or brackets, and the median of each interval is assigned the probability of that interval. The error in calculating the moments can be substantial if only a few intervals are used; the most commonly used variant is the five-point equal-probability bracket-median approximation. The bracket-mean method is similar to the bracket-median method except that the intervals are represented by their means rather than their medians (Clemen, 1991; Hoyland and Wallace, 2001; Miller and Rice, 1983; Smith, 1993; Taguchi, 1978; Zaino and D'Errico, 1989a).
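The two bracket methods can be sketched for a standard normal distribution, using `NormalDist` from the Python standard library to stand in for the underlying cdf; the choice of the normal and of five brackets is illustrative:

```python
import math
from statistics import NormalDist

dist = NormalDist()   # standard normal, mean 0, standard deviation 1
k = 5                 # five equal-probability brackets

# Bracket-median: each interval is represented by its median quantile
medians = [dist.inv_cdf((i + 0.5) / k) for i in range(k)]
m_mean = sum(medians) / k
m_var = sum(x * x for x in medians) / k - m_mean ** 2

# Bracket-mean: each interval is represented by its conditional mean,
# (phi(a) - phi(b)) / (F(b) - F(a)) for a standard normal on [a, b]
phi = lambda x: 0.0 if math.isinf(x) else math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
edges = [-math.inf] + [dist.inv_cdf(i / k) for i in range(1, k)] + [math.inf]
means = [(phi(a) - phi(b)) * k for a, b in zip(edges, edges[1:])]
b_mean = sum(means) / k
b_var = sum(x * x for x in means) / k - b_mean ** 2

# Both variants match the mean by symmetry but underestimate the variance,
# consistent with the comparison studies cited later in the text
assert abs(m_mean) < 1e-9 and abs(b_mean) < 1e-9
assert m_var < 1.0 and b_var < 1.0
```

With five brackets, the bracket-median variance comes out near 0.77 and the bracket-mean variance near 0.90, both short of the true value of 1.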
Another type of approximation is the moment-matching approximation. Generally, an n-point moment-matching discrete distribution approximation, nPDDA, consists of n values and their corresponding probabilities of occurrence, chosen to approximate the pdf of a continuous parameter (Di Domenica et al., 2003; Kaut and Wallace, 2003). Usually, the n values are specified fractiles from the cdf of the uncertain parameter, with specified probabilities chosen to estimate the moments of the pdf well. The standard type of nPDDA is the three-point approximation, 3PDDA, since at least three points are needed to represent the underlying pdf well, while the number of scenario tree paths increases exponentially with n. In 3PDDA, the pdf is substituted by a three-point mdf. 3PDDA provide a convenient and simple way to approximate a pdf; in addition, they can be constructed to match the first three moments of a pdf exactly (Keefer, 1994). Several 3PDDA have been developed, based on different methods, such as the Pearson-Tukey, PT (Pearson and Tukey, 1965); the Extended Pearson-Tukey, EPT (Keefer and Bodily, 1983); the Brown-Kahr-Peterson, BKP (Brown et al., 1974); the Swanson-Megill, SM (Megill, 1977); the Extended Swanson-Megill, ESM (Keefer and Bodily, 1983); the Miller-Rice One-Step, MRO (Miller and Rice, 1983); the McNamee-Celona Shortcut, MCS (McNamee and Celona, 1987); the Zaino-D'Errico Taguchi, ZDT (D'Errico and Zaino, 1988); and the Zaino-D'Errico Improved, ZDI (Zaino and D'Errico, 1989b) approximations. Miller and Rice (1983) introduced the use of the Gaussian quadrature technique for approximating an n-point mdf. The result is an n-point discrete distribution that matches the first 2n-1 moments of the underlying continuous distribution. The values and probabilities for many distributions are obtained as solutions to polynomials; they are tabulated for different distributions in many references (Abramowitz, 1965; Beyer, 1978; Stroud and Secrest, 1966). Smith (1993) developed another n-point moment-matching approximation of the mdf using the Gaussian quadrature technique; like that of Miller and Rice, the result is an n-point discrete distribution that matches the first 2n-1 moments of the underlying continuous distribution. A characteristic of this method is that it incorporates the extreme values of the latter.
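As a concrete instance of a 3PDDA, the Extended Pearson-Tukey approximation, as reported in the decision-analysis literature, places probabilities 0.185, 0.630, and 0.185 on the 5th, 50th, and 95th fractiles; a sketch for a standard normal:

```python
from statistics import NormalDist

dist = NormalDist()   # standard normal as the illustrative underlying pdf

# Extended Pearson-Tukey fractiles and probabilities (Keefer and Bodily, 1983)
fractiles = [0.05, 0.50, 0.95]
weights = [0.185, 0.630, 0.185]
points = [dist.inv_cdf(f) for f in fractiles]

mean = sum(w * x for w, x in zip(weights, points))
var = sum(w * x * x for w, x in zip(weights, points)) - mean ** 2

assert abs(mean) < 1e-9        # symmetric fractiles reproduce the mean exactly
assert abs(var - 1.0) < 0.01   # the unit variance of N(0, 1) is recovered closely
```

The close variance match for the normal case is consistent with the comparison results reported later for the EPT method.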
Hoyland, Kaut, and Wallace (2003) developed a new moment-matching approximation by applying a least-squares model to minimize the distance between the generated and target first four moments and correlations for multivariate problems. Another general method to construct a discrete distribution is optimal discretization. Hoyland and Wallace (2001) suggested a nonconvex optimization problem that minimizes a measure of the distance between the moments of the constructed distribution and those of the underlying distribution; the model is rerun heuristically from different starting points until a local minimum is obtained. Other optimization techniques were developed by Pflug (2001). The accuracy of a discrete approximation of a probability distribution is measured by the extent to which the moments of the approximation match those of the original distribution. Several authors have undertaken the task of comparing the performance of the previously presented methods for different underlying distributions. Miller and Rice compared their Gaussian-quadrature-based method to the bracket methods for uniform, normal, beta, and exponential distributions. Their method resulted in smaller approximation errors in the mean, variance, skew, and kurtosis; in addition, the error decreased as the number of points in the discrete approximation increased. The bracket methods resulted in the underestimation of almost all moments. Keefer and Bodily compared several two-point, three-point, five-point, and bracket approximations in estimating the mean, the variance, and the cdf for beta and lognormal distributions. They showed that different methods yield different approximation errors depending on the performance measures and types of distributions. The best performance was observed for the EPT method, followed by the ESM method; these methods, however, perform poorly in cases of extremely skewed or peaked distributions.
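The Gaussian-quadrature construction described above can be reproduced with NumPy's probabilists' Hermite rule (the use of NumPy here is an assumption, not part of the original methods); an n-point rule matches the first 2n-1 moments of the standard normal:

```python
import numpy as np

n = 3
# Probabilists' Gauss-Hermite nodes and weights (weight function exp(-x^2/2))
nodes, weights = np.polynomial.hermite_e.hermegauss(n)
probs = weights / weights.sum()   # normalize into a discrete distribution

# For n = 3 this gives points -sqrt(3), 0, sqrt(3) with probabilities 1/6, 2/3, 1/6
moments = [float(np.sum(probs * nodes**k)) for k in range(1, 5)]

assert abs(moments[0]) < 1e-9       # E[x]   = 0
assert abs(moments[1] - 1) < 1e-9   # E[x^2] = 1
assert abs(moments[2]) < 1e-9       # E[x^3] = 0
assert abs(moments[3] - 3) < 1e-9   # E[x^4] = 3, matched since 4 <= 2n-1 = 5
```

Matching five moments with only three points illustrates why quadrature-based discretizations outperform the bracket methods in the comparisons cited here.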
Zaino and D'Errico performed a Monte Carlo simulation to compare different bracket and three-point approximations. They showed that all the methods perform comparably well in estimating the mean and the variance and that the ZDI was superior in estimating higher-order moments. Smith compared the bracket-median, bracket-mean, EPT, and his moment-matching methods for normal, lognormal, beta, and gamma distributions. He concluded that the bracket-mean method accurately approximates the mean but generally underestimates all the other moments (Miller and Rice, 1983) and that the EPT approximation accurately estimates the mean and the variance. Keefer (1994) compared the performance of six three-point approximations in estimating the mean, the variance, and the certainty equivalent, at different levels of risk aversion, for beta and lognormal distributions. He demonstrated that while 3PDDA methods can exactly match the first three moments of the pdf, they do not approximate the certainty equivalent accurately, except for the EPT and ZDI approximations. Keefer concluded that the choice among the different approximations should depend on the trade-off between the approximation accuracy and the reliability required; for example, these methods' accuracy in representing extremely skewed or peaked pdfs can be very poor. These authors argued that most of the errors associated with these discrete approximations can be reduced by taking more points; however, the tree branches grow in proportion to the number of points, n, raised to the power of the number of uncertain parameters being discretized (Hoyland and Wallace, 2001; Smith, 1993). It is also argued that the errors generated by the use of these approximations are acceptable, on the premise that very little information is available about the actual distribution anyway.
Risk in the Water Resources Management Literature

In the water resources management literature, a large body of research has emphasized the need to account for uncertainty and dynamism. Authors have used methods such as two-stage programming with recourse, chance-constrained programming, dynamic and goal programming, fuzzy analysis, genetic algorithms, neural networks, Bayesian methods, probabilistic analysis, and scenario and sensitivity analysis, to name a few, to manage reservoir operation, groundwater pumping, groundwater contamination, water quality, conjunctive supply, irrigation, etc. (Table 2-1). Most of these studies considered expected values of the quantity of interest; only a few accounted for the risks of low-probability events. Rouhani (1985) minimized the mean square error of the differences between measured and predicted groundwater head values in the design of monitoring networks. Asefa, Kemblowski, Urroz, McKee, and Khalil (2004) used support vector machines to minimize a bound on the generalization risk. Feyen and Gorelick (2004) inspected the effect of uncertainty in spatially variable hydraulic conductivity on optimal groundwater production schemes via a multiple-realization approach. Wang, Yuan, and Zhang (2003) applied reliability and risk analysis to reservoir operation and flood analysis using Lagrange multipliers. Sasikumar and Mujumdar (2000) used fuzzy sets of low water quality to manage a river system. Ziari, McCarl, and Stockle (1995) introduced a variance term into a two-stage stochastic model to manage irrigation scheduling and crop mix. Others used scenario or sensitivity analysis to model hydrologic uncertainties (refer to Table 2-1). These methods provide estimates of risk but no means of controlling or managing it, other than through trial and error (Watkins and McKinney, 1997). In the next chapter, we propose the application of CVaR_α to water resources management problems.
Conclusion

The process of risk management may be viewed through many lenses depending on the discipline and objectives. In this chapter we introduced the fundamentals of risk and its management. We reviewed the different definitions of risk and risk management and provided the definition adopted in this work. We then categorized and briefly described the various general risk management approaches, with emphasis on the value-at-risk and conditional value-at-risk concepts. The approaches were compared and their advantages and drawbacks were outlined. As representing uncertainty is as important as modeling it, we also described and compared different scenario generation techniques. Finally, we presented a review of the various risk management modeling approaches in the field of water resources management. In the following chapter, we present an application of the recommended approaches to a case of water resources management.

Table 2-1. The use of DMuU techniques in water resources management

Application: Reservoir operation
- Two-stage (Ferrero et al., 1998; Huang and Loucks, 2000; Loucks, 1968; Wang and Adams, 1986)
- Chance-constrained (Abrishamshi et al., 1991; Askew, 1974; Azaiez et al., 2005; Ouarda and Labadie, 2001; ReVelle et al., 1969)
- Dynamic (Ben Alaya et al., 2003; Burt, 1964; Chaves et al., 2003; El Awar et al., 1998; Estalrich and Buras, 1991; Karamouz and Mousavi, 2003; Kelman et al., 1990; Mousavi et al., 2004b; Nandalal and Sakthivadivel, 2002; Philbrick and Kitanidis, 1999; Stedinger and Loucks, 1984; Stedinger et al., 1984; Trezos and Yeh, 1987; Wang et al., 2003)
- Fuzzy sets (Chang et al., 1997; Chang et al., 1996; Chaves et al., 2004; Hasebe and Nagayama, 2002; Maqsood et al., 2005; Maqsood et al., 2004; Mousavi et al., 2004a)
- Bayesian networks (Wood, 1978)
- Multistage stochastic programming (Pereira and Pinto, 1985; Pereira and Pinto, 1991; Watkins et al., 2000)
- Optimal control (Georgakakos and Marks, 1987; Hooper et al., 1991)
- Neural networks and genetic algorithms (Akter and Simonovic, 2004; Hasebe and Nagayama, 2002; Ponnambalam et al., 2003)

Application: Groundwater management
- Two-stage stochastic (Wagner et al., 1992)
- Chance-constrained programming (Chan, 1994; Eheart and Valocchi, 1993; Hantush and Marino, 1989; Morgan et al., 1993; Tung, 1986; Wagner, 1999; Wagner and Gorelick, 1989)
- Dynamic (Andricevic, 1990; Andricevic and Kitanidis, 1990; McCormick and Powell, 2003; Provencher and Burt, 1994; Whiffen and Shoemaker, 1993)
- Optimal control (Georgakakos and Vlasta, 1991; Whiffen and Shoemaker, 1993)
- Neural networks and genetic algorithms (Hilton and Culver, 2005; Ranjithan et al., 1993)
- Scenario and sensitivity analysis (Aguado et al., 1977; Burt, 1967; Feyen and Gorelick, 2004; Flores et al., 1975; Gorelick, 1982; Gorelick, 1987; Hamed et al., 1995; Kaunas and Haimes, 1985; Maddock, 1974; Mao and Ren, 2004)
- Bayesian networks (Batchelor and Cain, 1999)
- Fuzzy sets (Bogardi et al., 1983; Dou et al., 1999)
- Other (Asefa et al., 2004; Bell and Binning, 2004; Rouhani, 1985; Tiedeman and Gorelick, 1993; Wagner and Gorelick, 1987)

Table 2-1.
Continued.

Applications: water quality, floodplain, eutrophication, water transfer, estuary, conjunctive management, lake/wetland, irrigation, planning, hydrologic cycle, and runoff.

Optimization techniques:
- Chance-constrained (Fujiwara et al., 1988; Huang, 1998)
- Goal programming (Al-Zahrani and Ahmad, 2004)
- Genetic algorithm (He et al., 2004)
- Bayesian networks (Ricci et al., 2003; Varis, 1998; Varis and Kuikka, 1999)
- Neural networks (Boger, 1992)
- Fuzzy sets (Baffaut and Chameau, 1990; Chang et al., 2001; Chaves et al., 2004; Jowitt, 1984; Julien, 1994; Koo and Shin, 1986; Lee and Chang, 2005; Lee and Wen, 1997; Liou et al., 2003; Liou and Lo, 2005; Ning and Chang, 2004; Sasikumar and Mujumdar, 2000)
- Scenario and sensitivity analysis (Chu et al., 2004; Kawachi and Maeda, 2004a; Kawachi and Maeda, 2004b; Mao and Ren, 2004; Vemula et al., 2004)
- Two-stage (Lund, 2002)
- Fuzzy sets (Esogbue et al., 1992)
- Neural networks (Sahoo et al., 2005)
- Bayesian networks (Despic and Simonovic, 2000)
- Simple recourse (Somlyody and Wets, 1988)
- Bayesian networks (Borsuk et al., 2004)
- Two-stage (Lund and Israel, 1995)
- Optimal control (Zhao and Mays, 1995)
- Chance-constrained (Nieswand and Granstrom, 1971)
- Evolutionary annealing (Rozos et al., 2004)
- Sensitivity analysis (Escudero, 2000; Jenkins and Lund, 2000)
- Mean/variance (Maddock, 1974)
- Two-stage (Cai and Rosegrant, 2004; Watkins and McKinney, 1998; Ziari et al., 1995)
- Dynamic (Bergez et al., 2004)
- Two-stage (Lund and Israel, 1995)
- Bayesian networks (Varis and Kuikka, 1999)
- Fuzzy sets (Hobbs, 1997)
- Chance-constrained (Dupacova et al., 1991; Loucks, 1976)
- Bayesian networks (Batchelor and Cain, 1999)
- Fuzzy sets (Suresh and Mujumdar, 2004)
- Scenario and sensitivity analysis (Pallottino et al., 2005)
- Goal programming (Sutardi et al., 1995)
- Bayesian networks (Bromley et al., 2005)
- Fuzzy sets (Alley et al., 1979; Babovic et al., 2002; Bender and Simonovic, 2000; Chen and Fu, 2005; Faye et al., 2005; Slowinski, 1986; Sutardi et al., 1995; Virjee and Gaskin, 2005; Yi and Zhang, 1989)
- Information gap (Hipel and Ben-Haim, 1999)
- Fuzzy sets (Cheng et al., 2002)

CHAPTER 3
COMPARISON OF RISK MANAGEMENT TECHNIQUES FOR A WATER ALLOCATION PROBLEM WITH UNCERTAIN SUPPLIES. A CASE STUDY: THE SAINT JOHNS RIVER WATER MANAGEMENT DISTRICT

In this chapter, we applied and compared different risk management techniques on a water resources management problem in which risk is quantified as cost. These methods are the expected value method, the scenario model, two-stage stochastic programming with recourse, and CVaRα. They were built into a mixed integer fixed-cost linear programming framework. Five models were developed: (1) a deterministic expected value model, (2) a scenario analysis model, (3) a two-stage stochastic model with recourse, (4) a CVaRα objective function model, and (5) a CVaRα constraint model. Uncertainty was introduced via water supplies. Assuming a continuous normal distribution for the allowable withdrawals, two discrete distributions with equal expected values (means) and different standard deviations were developed based on the method of Miller and Rice (1983). The two different dispersion parameters were assumed for the additional assessment of the effect of extreme events on the results. The result is nine model formulations. To compare the performance of the different formulations, the models were applied to a case study using the Saint Johns River Water Management District (SJRWMD) Priority Water Resource Caution Areas (PWRCA) in East-Central Florida (ECF) as the study area. Figure 3-1 exhibits the plan of this chapter: model formulation (definition of decision variables, state variables, constraints, and the different formulations); scenario definition (description of the uncertainty representation process); study area (definition of the water allocation case study); results and analysis (presentation and comparison of results from the different models); and conclusions (summary of the main findings).

Figure 3-1.
Chapter 3 organizational diagram.

Model Formulation

Five models were developed: (1) a deterministic expected value model; (2) a stochastic single-stage scenario model; (3) a two-stage stochastic model with recourse, which is the basis of (4) a CVaRα objective function model and (5) a CVaRα constraint model. Uncertainty was introduced via water supplies. Two discrete distributions of supplies, with equal expected values (means), were developed by assuming an underlying continuous standard normal distribution; for additional assessment of the effect of extreme events on the results, two different dispersion parameters (variances) were assumed, one for each distribution, leading to the notations (a) and (b). The result is 10 (2 x 5) model formulations, denoted 1(a), 1(b), 2(a), 2(b), 3(a), 3(b), 4(a), 4(b), 5(a), and 5(b). Note that since the expected value of supplies is the same for both assumed distributions, models 1(a) and 1(b) are equivalent and hence are denoted by 1, reducing the total number of formulations to nine. This section presents the different model formulations: their decision variables, problem data, objective function, and constraints.

Objective Function

The problem at hand is a fixed-cost problem. Given several options (i,j,k), each of which corresponds to one of m supply sources i (i = 1, ..., m), one of n locations j (j = 1, ..., n), and one of l capacity levels k (k = 1, ..., l), the decision makers have to decide in which of the options to invest and in which time period t of the planning horizon (t = 1, ..., T), in a way that incurs minimum cost over the horizon while satisfying the problem's constraints. Once an investment is made, the corresponding option will be available for the remainder of the planning horizon.
The objective is to minimize the sum of two main terms: (1) the total fixed costs, FC, consisting of the initial investments, or capital costs, CC, incurred to make the chosen options available from the chosen times, plus the corresponding yearly operation and maintenance costs, OMC; and (2) the total variable costs, VC, consisting of the continuous operational costs, ContC, of withdrawing water from each option after it is made available, plus the total penalty costs, PC, penalizing unsatisfied demand (equivalently, the alternative source cost, AC, of using an alternative source):

FC = CC + OMC = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} C_{ijkt} X_{ijkt} + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}

VC = ContC + PC = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} c_{ijkt} x_{ijkt} + \sum_{t=1}^{T} p_t \left( D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \right) = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt} + \sum_{t=1}^{T} p_t D_t

Note that the last term of VC is a constant and is irrelevant to the optimization.

Decision Variables

* X_{ijkt} denotes the investment decision variable for option (i,j,k) at time t: it is a binary variable that takes the value 1 if option (i,j,k) is made available at time t and its corresponding investment is made, and 0 otherwise.
* x_{ijkt} denotes how much water is withdrawn from option (i,j,k) in period t.

Problem Data

* S_{ijkt} is the total capacity of option (i,j,k) in period t.
* W_{it} is the total capacity of source i in period t.
* C_{ijkt} is the fixed cost of making option (i,j,k) available at time t; it is a one-time investment cost. For example, one could have C_{ijkt} = a^{t-1} C_{ijk}, where C_{ijk} is the nominal fixed cost of making option (i,j,k) available and a ∈ (0,1) is a discount factor representing the time value of money.
* O_{ijkt} is the O&M cost incurred every period t starting at the time option (i,j,k) is made available; it is a yearly cost.
* c_{ijkt} is the unit cost of supplying water from option (i,j,k) in period t. For example, one could have c_{ijkt} = a^{t-1} c_{ijk}, where c_{ijk} is the nominal unit cost of withdrawing water from option (i,j,k) and a ∈ (0,1) is a discount factor representing the time value of money.
* p_t is the unit cost of not supplying water in period t. For example, one could have p_t = a^{t-1} p, where p is the nominal penalty/alternative cost and a ∈ (0,1) is a discount factor representing the time value of money. The penalty/alternative cost could either be the unit cost of acquiring water from an alternative supply, or a penalty cost indicating that shortages are undesirable.

Constraints

The problem is subject to different sets of constraints depending on the formulation; all the constraints used are described below.

1. For all i,j,k,t: x_{ijkt} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}, which states that water can be withdrawn from option (i,j,k) in period t only if option (i,j,k) was made available before or at time t.
2. For all t: D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \ge 0, which states that the total water made available in period t from all options should not exceed the total water demand in period t. Note that any shortage will be penalized; alternatively, \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \ge D_t if no shortage is allowed.
3. For all i,t: \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \le W_{it}, which states that for each source i, the maximum allowable withdrawal in period t should not be exceeded.
4. For all i,j: \sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1, which ensures that for each source i and location j, only one capacity k can be chosen; moreover, investment option (i,j,k) will be built at most once.
5. For all i,j,k,t: X_{ijkt} \in \{0,1\}, the binary constraints on the investment choice variables.
6. For all i,j,k,t: x_{ijkt} \ge 0, the non-negativity constraints on the quantities of water withdrawn.
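To illustrate the fixed-cost structure defined by the objective and constraints above, the following toy sketch enumerates investment plans for a single source-location with two capacity levels over two periods. All numbers are illustrative, not district data, and enumeration stands in for the mixed integer solver used in the actual models.

```python
# Illustrative data: one source, one location, two capacity levels, two periods.
T = 2
D = [10, 10]                                    # demand per period
options = {1: dict(cap=8, fixed=50.0, om=5.0, unit=1.0),
           2: dict(cap=12, fixed=80.0, om=8.0, unit=1.0)}
p = 6.0                                         # penalty per unit of unmet demand

def plan_cost(plan):
    """plan is None or (k, t_build); at most one option may be built (constraint 4)."""
    cost = 0.0
    for t in range(1, T + 1):
        if plan is not None and t >= plan[1]:   # constraint 1: withdraw only if built
            o = options[plan[0]]
            x = min(o["cap"], D[t - 1])         # constraint 2: never exceed demand
            cost += o["om"] + o["unit"] * x + p * (D[t - 1] - x)
        else:
            cost += p * D[t - 1]                # all demand unmet, fully penalized
    if plan is not None:
        cost += options[plan[0]]["fixed"]       # one-time capital cost
    return cost

plans = [None] + [(k, t) for k in options for t in range(1, T + 1)]
best = min(plans, key=plan_cost)
best_cost = plan_cost(best)   # building capacity 8 in period 1 is cheapest here
```

Even in this tiny instance the fixed-cost trade-off appears: the larger option eliminates shortages but its capital and O&M costs outweigh the avoided penalties.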
Summarizing, the entire optimization model now reads as follows for each formulation.

Deterministic Expected Value Model

The deterministic expected value model, model 1, treats the uncertainty of supplies by averaging them into one number, the expected value of allowed withdrawals, \bar{W}_{it}, resulting in the following formulation:

\min \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}

subject to

x_{ijkt} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \ge 0   for all t = 1,...,T
\sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt} \le \bar{W}_{it}   for all i = 1,...,m; t = 1,...,T
\sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1   for all i = 1,...,m; j = 1,...,n
X_{ijkt} \in \{0,1\}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
x_{ijkt} \ge 0   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T

Scenario Model

The scenario model, model 2, is equivalent to a single-stage deterministic model. Unlike the expected value method, which considers only the expected value of supplies, the uncertain supplies are represented by different independent scenarios s = 1,...,S; the result is the minimization of the expected value of the objective function over the scenario set:

\min E_s \left[ \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right]

or, treating each scenario independently,

E_s \left[ \min \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right]

subject to

x_{ijkt}^{s} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S
D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \ge 0   for all t = 1,...,T; s = 1,...,S
\sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \le W_{it}^{s}   for all i = 1,...,m; t = 1,...,T; s = 1,...,S
\sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1   for all i = 1,...,m; j = 1,...,n
X_{ijkt} \in \{0,1\}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
x_{ijkt}^{s} \ge 0   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S

Two-Stage Stochastic Model with Recourse

The two-stage stochastic model, model 3, like the scenario model, represents uncertainties by a scenario set.
Unlike the scenario model, however, in this model the scenarios are linked by a set of variables referred to as the first-stage variables; the first-stage variables are the same for all the scenarios and hence are scenario independent. The expected value operator is applied only to the terms involving the rest of the variables, referred to as the second-stage variables; the second-stage variables are scenario dependent and hence differ for each scenario. The result is a fixed-cost problem with recourse. In other words, the model allows the decision maker to make two sets of decisions: (1) a first-stage, uncertainty-independent decision, consisting in this case of the fixed costs of making supplies available, and (2) a second-stage, uncertainty-dependent decision, consisting in this case of the variable costs of allocating the resources from the supplies made available by the first decision. This process allows the decision maker to postpone allocation decisions until information about the uncertainties is revealed. The model is formulated below.

\min \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + E_s \left[ \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right]

subject to

x_{ijkt}^{s} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S
D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \ge 0   for all t = 1,...,T; s = 1,...,S
\sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \le W_{it}^{s}   for all i = 1,...,m; t = 1,...,T; s = 1,...,S
\sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1   for all i = 1,...,m; j = 1,...,n
X_{ijkt} \in \{0,1\}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
x_{ijkt}^{s} \ge 0   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S

CVaRα Objective Function Model

The CVaRα objective function model, model 4, is based on the two-stage recourse model. In this case, however, the CVaRα operator is applied to the extreme events of the high-cost scenarios only, corresponding to a cumulative probability greater than α; high-cost events are associated with high-risk events of water shortage.
\min \; \mathrm{CVaR}_{\alpha} \left[ \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right]

subject to

x_{ijkt}^{s} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S
D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \ge 0   for all t = 1,...,T; s = 1,...,S
\sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \le W_{it}^{s}   for all i = 1,...,m; t = 1,...,T; s = 1,...,S
\sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1   for all i = 1,...,m; j = 1,...,n
X_{ijkt} \in \{0,1\}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
x_{ijkt}^{s} \ge 0   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S

CVaRα Constraint Model

The CVaRα constraint model, model 5, is also based on the two-stage recourse model. In this case, however, the CVaRα of the high-risk events is restrained to a value no greater than a bound β rather than minimized, while the total two-stage cost is minimized:

\min \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + E_s \left[ \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right]

subject to

\mathrm{CVaR}_{\alpha} \left[ \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} \left( C_{ijkt} X_{ijkt} + O_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau} \right) + \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} \sum_{t=1}^{T} (c_{ijkt} - p_t) x_{ijkt}^{s} \right] \le \beta

x_{ijkt}^{s} \le S_{ijkt} \sum_{\tau=1}^{t} X_{ijk\tau}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S
D_t - \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \ge 0   for all t = 1,...,T; s = 1,...,S
\sum_{j=1}^{n} \sum_{k=1}^{l} x_{ijkt}^{s} \le W_{it}^{s}   for all i = 1,...,m; t = 1,...,T; s = 1,...,S
\sum_{k=1}^{l} \sum_{t=1}^{T} X_{ijkt} \le 1   for all i = 1,...,m; j = 1,...,n
X_{ijkt} \in \{0,1\}   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T
x_{ijkt}^{s} \ge 0   for all i = 1,...,m; j = 1,...,n; k = 1,...,l; t = 1,...,T; s = 1,...,S

Scenario Generation

As noted in Chapter 2, a major issue in stochastic optimization is the accurate representation of the underlying uncertainties. In the case where those uncertainties are represented by continuous distributions, this may be done by scenario generation through discretization. The various discretization methods were discussed in Chapter 2.
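Before turning to discretization, the CVaRα operator used in models 4 and 5 can be illustrated for a discrete scenario set. The sketch below evaluates the Rockafellar-Uryasev expression z + (1-α)⁻¹ Σ_s p_s max(cost_s − z, 0) at each scenario cost; its minimum over z equals CVaRα for a discrete distribution, and the minimizing z is VaRα. The scenario costs are illustrative.

```python
def cvar(costs, probs, alpha):
    """CVaR_alpha of a discrete cost distribution via the
    Rockafellar-Uryasev formula; the minimizing z is VaR_alpha."""
    def ru(z):
        return z + sum(p * max(c - z, 0.0)
                       for c, p in zip(costs, probs)) / (1.0 - alpha)
    # For a discrete distribution the optimum is attained at a scenario cost.
    return min(ru(z) for z in costs)

# Five equally likely scenario costs (illustrative only):
costs = [10.0, 20.0, 30.0, 40.0, 50.0]
probs = [0.2] * 5

c80 = cvar(costs, probs, 0.80)  # mean of the worst 20% of outcomes: 50
c60 = cvar(costs, probs, 0.60)  # mean of the worst 40% of outcomes: 45
```

In model 4 this quantity is minimized directly; in model 5 the same quantity appears on the constraint side, CVaRα ≤ β.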
Discretization allows the approximation of an uncertain parameter ξ, with underlying continuous distribution f(ξ), by a discrete probability function, or probability mass function, denoted P(ξ), concentrated on a finite number of scenarios s = 1,...,S corresponding to the stochastic parameter values ξ_s, with corresponding probabilities p_s = P(ξ_s) such that \sum_{s=1}^{S} p_s = 1. In this case, ξ corresponds to the available water supply from the different projects, denoted W in the model formulation section presented previously.

We chose to use, in an optimization framework, the method developed by Miller and Rice (1983), who suggested a moment-matching approximation that allows the construction of an n-point discrete distribution, consisting of n probability-value pairs for the uncertain parameter, chosen to represent the pdf of a continuous parameter by matching 2n-1 of its moments. Their method is based on the Gaussian quadrature technique of numerical integration, which approximates the integral of a function by a weighted sum of polynomial evaluations. The values z_s, for different total numbers of pairs (i.e., orders of polynomial approximation) n and for many weight functions, can easily be obtained, as the roots of the corresponding polynomials, from tables in the published literature (Abramowitz, 1965; Beyer, 1978; Stroud and Secrest, 1966). For example, for a standard normal distribution we used Table 5, page 218, in Stroud and Secrest, which tabulates the abscissas for integrals of the form \int e^{-x^2} f(x)\,dx; hence, to obtain the values z_s for a standard normal distribution, the table values were multiplied by \sqrt{2}. The probabilities corresponding to these values are then obtained as the solution of N linear equations, found by substituting the values into the set of equations for the moments of the approximate discrete distribution. Following the model formulation notation, these N equations are of the form

\mu(q) = \sum_{s=1}^{S} p_s \xi_s^{q}

where q \in \{0, 1, 2, ..., 2N-1\} is the moment order. For example, \mu(q) is the sum of probabilities, the mean, and the variance for q equal to 0, 1, and 2, respectively; these equal 1, 0, and 1, respectively, for a standard normal distribution.

We chose to represent our uncertain parameter by continuous normal distributions with equal means and different standard deviations, to simulate different cases of parameter scattering and extreme events. The normal distribution is applicable to a very wide range of phenomena that occur in nature, industry, and research and is the most widely used distribution in statistics. Physical measurements in meteorological and hydrological experiments, as well as manufactured parts, are often adequately represented by normal distributions. In addition, the normal distribution is under many conditions a good approximation to other distributions. It is also the asymptotic form of the sum of random variables or parameters under a wide range of conditions, provided the underlying phenomena are additive (DeGroot, 2002; Evans et al., 2000; Walpole, 1989).

We present here the results for standard normal distributions based on 10 pairs, corresponding to 10 scenarios (Tables 3-1 and 3-2). The values ξ_s can be transformed for a specific normal distribution by a simple transformation using the specific mean, μ, and standard deviation, σ: for a normal distribution, ξ_s = σ z_s + μ, where z_s are the values obtained for the standard normal distribution. Note that for a standard normal distribution, which is symmetric around its mean of 0, i.e., with ξ_{s+n/2} = -ξ_s if N is even, the sum is zero for all odd q. In addition, this symmetry allows the reduction of the number of linear equations to n/2, as p_{s+n/2} = p_s. In the case of 10 scenarios, N = 10, nineteen moments may be matched. Since N in this case is even, all odd moments are equal to zero, and the linear system reduces to 10 equations with 10 unknowns, corresponding to the 0th, 2nd, 4th, 6th, 8th, 10th, 12th, 14th, 16th, and 18th moments.
As p_{s+n/2} = p_s, the unknown probabilities are reduced to 5, and for practical reasons we restrict ourselves to the first even-moment equations:

p_1 + p_2 + p_3 + p_4 + p_5 = 1/2
p_1 \xi_1^2 + p_2 \xi_2^2 + p_3 \xi_3^2 + p_4 \xi_4^2 + p_5 \xi_5^2 = 1/2
p_1 \xi_1^4 + p_2 \xi_2^4 + p_3 \xi_3^4 + p_4 \xi_4^4 + p_5 \xi_5^4 = 3/2
p_1 \xi_1^6 + p_2 \xi_2^6 + p_3 \xi_3^6 + p_4 \xi_4^6 + p_5 \xi_5^6 = 15/2
p_1 \xi_1^8 + p_2 \xi_2^8 + p_3 \xi_3^8 + p_4 \xi_4^8 + p_5 \xi_5^8 = 105/2

The qth moments were calculated from the distribution's moment generating function, M(y) = e^{y^2/2} for -\infty < y < +\infty for a standard normal distribution, where the qth moment equals the qth derivative of M(y) at zero, M^{(q)}(0); for details refer to the literature (Beaumont, 1986; DeGroot, 2002). This system was solved using the Excel What'sBest optimization package, where the sum of probabilities was minimized subject to the sets of the first two, three, four, and five linear equations as constraints. Additional constraints were imposed setting higher probabilities for values closer to the mean. The resulting sets of probabilities for matching up to the 2nd, 4th, 6th, and 8th moments, a total of four minimization problems, are presented in Table 3-1. Note that matching the 4th moment was a source of infeasibility in the minimization problems; hence, its corresponding constraint was not satisfied, i.e., it was relaxed, in all four minimizations. Matching up to the 8th moment resulted in violation of the 6th moment too. The moments from each minimization, in comparison with the moments of the continuous standard normal distribution, are presented in Table 3-2. The resulting histograms and smoothed distributions are compared in Figures 3-2 and 3-3. Visual comparison of the results reveals that constraining all the moments results in the best discrete approximation. To confirm this observation, a least-squares regression analysis was run on the results. Analysis of the regression findings in Table 3-3 and Figure 3-4 verified our observation.
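The construction above, fixing the quadrature abscissas and solving linear moment equations for the probabilities, can be sketched in a few lines. The nodes below are the positive ξ_s values from Table 3-1 (√2 times the Gauss-Hermite abscissas); a small Gaussian-elimination solver then recovers probabilities that reproduce the targeted even moments to round-off, since for these nodes the linear system has an exact solution (the Gauss-Hermite weights).

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Positive nodes xi_s (sqrt(2) times the Gauss-Hermite abscissas for n = 10).
nodes = [0.48494, 1.46599, 2.48433, 3.58182, 4.85946]
# Half of the even N(0,1) moments m_0, m_2, m_4, m_6, m_8 = 1, 1, 3, 15, 105,
# since symmetry (p_{s+n/2} = p_s) halves the system.
rhs = [0.5, 0.5, 1.5, 7.5, 52.5]
A = [[x ** (2 * q) for x in nodes] for q in range(5)]
p = solve(A, rhs)

total = 2 * sum(p)                                    # should be 1
m4 = 2 * sum(pi * x ** 4 for pi, x in zip(p, nodes))  # should be 3
m8 = 2 * sum(pi * x ** 8 for pi, x in zip(p, nodes))  # should be 105
```

This direct solve satisfies all five moment equations simultaneously; the relaxations reported above arise from the extra ordering constraints imposed in the spreadsheet minimizations, not from the moment system itself.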
Constraining up to the 8th moment resulted in the best fit, with the highest r^2, the slope closest to unity, the intercept closest to zero, and the smallest corresponding errors.

Case Study Area

Groundwater serves as the primary water source for most of the State of Florida. Faced with continuous growth, local utilities throughout the state are working on identifying alternative viable and economical sources of potable water. Recognizing the need to develop new sources and to plan well in advance, the Saint Johns River Water Management District (SJRWMD), one of five water management districts in Florida (Figure 3-5), initiated in the year 2000 several water supply plans that (1) identified the limits of fresh groundwater in Priority Water Resource Caution Areas (PWRCA) (Figure 3-5), which are areas where existing and anticipated sources of water and conservation efforts are not adequate, and (2) identified alternative water resource options and development projects with cost data and likely project users; the district had also (3) initiated the Alternative Water Supply Construction Cost Sharing Program in 1996 to provide cooperative funding for the construction of alternative water supply facilities (Vergara, 2004; Wilkening, 2004; Wycoff and Parks, 2005).

Water Demand

The total water demand of the SJRWMD PWRCA is projected to increase linearly to 830 MGD by the year 2025 (Table 3-4). This demand consists of public supply, domestic, agricultural and recreational irrigation, commercial, industrial, institutional, and power generation water needs (Wilkening, 2004; Wycoff, 2005).

Table 3-1.
Standard normal discrete distribution approximation for N = 10 under the 2nd-, 4th-, 6th-, and 8th-moment constraints. Column pairs give the pdf and cdf for each constraint set: (A) 0th, 2nd; (B) 0th, 2nd, 4th; (C) 0th, 2nd, 4th, 6th; (D) 0th, 2nd, 4th, 6th, and 8th.

z_s       ξ_s      | pdf (A)  cdf (A)  | pdf (B)  cdf (B)  | pdf (C)  cdf (C)  | pdf (D)  cdf (D)
-3.43616  -4.85946 | 0.001534 0.001534 | 0.024640 0.024640 | 0.000000 0.000000 | 0.000000 0.000000
-2.53273  -3.58182 | 0.001534 0.003067 | 0.024640 0.049280 | 0.031523 0.031523 | 0.016867 0.016867
-1.75668  -2.48433 | 0.001534 0.004601 | 0.024640 0.073920 | 0.053653 0.085176 | 0.068881 0.085748
-1.03661  -1.46599 | 0.247699 0.252301 | 0.024640 0.098559 | 0.053653 0.138829 | 0.068881 0.154629
-0.34290  -0.48494 | 0.247699 0.500000 | 0.401441 0.500000 | 0.361171 0.500000 | 0.345371 0.500000
 0.34290   0.48494 | 0.247699 0.747699 | 0.401441 0.901441 | 0.361171 0.861171 | 0.345371 0.845371
 1.03661   1.46599 | 0.247699 0.995399 | 0.024640 0.926080 | 0.053653 0.914824 | 0.068881 0.914252
 1.75668   2.48433 | 0.001534 0.996933 | 0.024640 0.950720 | 0.053653 0.968477 | 0.068881 0.983133
 2.53273   3.58182 | 0.001534 0.998466 | 0.024640 0.975360 | 0.031523 1.000000 | 0.016867 1.000000
 3.43616   4.85946 | 0.001534 1.000000 | 0.024640 1.000000 | 0.000000 1.000000 | 0.000000 1.000000

Table 3-2. Moment constraints versus discrete approximation moments (= marks a satisfied constraint; ≠ marks a relaxed or violated one)

q | N(0,1) half-moment | (A) 0th, 2nd | (B) 0th, 2nd, 4th | (C) 0th, 2nd, 4th, 6th | (D) 0th, 2nd, 4th, 6th, 8th
0 |   0.5 |  0.500 (=) |   0.500 (=) |  0.500 (=) |  0.500 (=)
2 |   0.5 |  0.500 (=) |   0.500 (=) |  0.500 (=) |  0.500 (=)
4 |   1.5 |  0.656 (≠) |   1.197 (≠) |  0.936 (≠) |  0.871 (≠)
6 |   7.5 |  2.324     |  18.870     |  7.500 (=) |  5.737 (≠)
8 |  52.5 | 26.258     | 382.538     | 79.716     | 52.500 (=)

Table 3-3. Least-squares regression analysis results

Moment constraints            | 0th, 2nd | 0th, 2nd, 4th | 0th, 2nd, 4th, 6th | 0th, 2nd, 4th, 6th, 8th
Slope                         | 0.501    |  7.464        | 1.538              | 1.008
Intercept                     | 0.218    | 12.580        | 1.398              | 0.575
Standard error of slope       | 0.0179   |  0.361        | 0.0397             | 0.0192
Standard error of intercept   | 0.425    |  8.558        | 0.943              | 0.454
Correlation coefficient r^2   | 0.996    |  0.993        | 0.998              | 0.999
Standard error on ordinate    | 0.808    | 16.266        | 1.793              | 0.864

Figure 3-2. Discretized standard normal distribution, pdf and cdf (moments in parentheses, i.e., (qth), are the unmatched moments); panels show moments matched 0th-2nd; 0th-2nd and (4th); 0th-2nd, (4th), and 6th; and 0th-2nd, (4th), 6th, and (8th).

Figure 3-3. Discretized standard normal distribution (moments in parentheses, i.e., (qth), are the unmatched moments); same panel layout as Figure 3-2.

Figure 3-4. Moments least-squares regression analysis plots.

Figure 3-5. Priority water resource caution areas in the SJRWMD, Florida, USA (Vergara, 2004; Wilkening, 2004).

Table 3-4.
SJRWMD caution area water demand projections (Wilkening, 2004)

Year  Demand (MGD)
2010  676
2011  686
2012  697
2013  707
2014  717
2015  727
2016  738
2017  748
2018  758
2019  768
2020  779
2021  789
2022  799
2023  809
2024  820
2025  830

Water Supply

The demand is currently supplied from Floridan Aquifer groundwater. It is projected that the aquifer's capacity for the caution areas, 670 mgd, will be reached before the year 2010 (Wilkening, 2004; Wycoff, 2005). With that in mind, the SJRWMD identified three main potential alternative sources of water, with a total capacity of 335 mgd: (1) 175 mgd from the Saint Johns River basin (SJR) at seven locations; (2) 100 mgd from the Lower Ocklawaha River (LOR) [3] at one location; and (3) 60 mgd from collocated seawater (CSW) at three locations. These rates correspond to the maximum allowable withdrawals from the sources; they are assumed to be the source of uncertainty in the models (Vergara, 2004; Wilkening, 2004; Wycoff and Parks, 2005).

[3] Note that although 100 mgd may be withdrawn from the Ocklawaha River, a project capacity of 21.5 mgd has been suggested, since this source is at a remote location with respect to the priority caution areas of the SJRWMD, rendering the transmission costs from this source prohibitive.

In sum, the SJRWMD identified 11 alternative water supply projects; the approximate locations of these projects are shown in Figure 3-6. Various project development scenarios were examined by the district to provide examples of the water supply quantities and costs associated with each project. The project details are presented in Table 3-5. Each of these projects has a maximum allowed withdrawal, which is the total water that can be withdrawn from the source subject to constraints such as upstream and downstream water levels, sea and riverine ecology, and aesthetics. Cost estimates were provided for several possible average capacities at each location; only one of these capacities may be chosen at each location.
The maximum capacities are design capacities rather than demand capacities. The design, permitting, and construction of new source facilities will likely take 5 to 10 years (Vergara, 2004; Wycoff and Parks, 2005).

Figure 3-6. Approximate locations of potential alternative water supply projects (Vergara, 2004; Wilkening, 2004).

Water Cost

Table 3-5 presents a summary of the estimated total capital, O&M, and unit costs of water for the different project locations and capacities. The method followed for these estimates is briefly summarized in this section. Note that the last row in Table 3-5 stands for the existing groundwater supplies; hence, they are not part of the proposed projects and have no associated fixed cost, a unit cost (to one significant figure) of $1/1,000 gallons, and an O&M cost of $0.2/1,000 gallons, based on year 2000 dollars (Wycoff, 2005). Total capital cost is the sum of construction cost, non-construction capital cost, land cost, and land acquisition cost. The operation and maintenance (O&M) cost is the estimated annual cost of operating and maintaining the water supply project when operated at average day capacity. The equivalent annual cost is the total annual life-cycle cost of the water supply project based on facility service life and the time value of money; expressed in dollars per year, it accounts for the total capital cost and O&M costs with the facility operating at average day design capacity. Finally, the unit production cost is the equivalent annual cost divided by annual water production, expressed in dollars per 1,000 gallons produced (Wycoff and Parks, 2005). These costs are in year 2003 dollars.
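The unit-cost arithmetic just described, and the index-based escalation used next, can be sketched as follows. The 6 percent discount rate and 30-year service life are illustrative assumptions only (the rate and life actually used by the district are not stated here); the capital and O&M figures are taken from the first row of Table 3-5.

```python
def equivalent_annual_cost(capital, om_per_year, rate=0.06, life_years=30):
    """Annualize the one-time capital cost with a capital recovery
    factor, then add the yearly O&M cost."""
    crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
    return capital * crf + om_per_year

def unit_production_cost(eac, avg_mgd):
    """Dollars per 1,000 gallons at average-day capacity
    (1 mgd = 1,000 kgal per day)."""
    return eac / (avg_mgd * 1000 * 365)

def escalate(cost, cci_from, cci_to):
    """Move a cost between years by the ratio of construction cost indices."""
    return cost * cci_to / cci_from

# Near SR 520/528 project: $189M capital, $7.56M/yr O&M, 20 mgd average.
eac = equivalent_annual_cost(189e6, 7.56e6)
unit = unit_production_cost(eac, 20)  # about $2.9/kgal under these assumptions
```

The result is close to, but not exactly, the tabulated $3.03/1,000 gallons, since the district's actual service life and discount rate differ from the values assumed here.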
They were converted to future-year dollar values using the Construction Cost Index (CCI) (Michaels, 1996); results are tabulated in Appendix A. The CCI is estimated by Engineering News-Record (ENR) on a monthly basis and represents the underlying trends of construction costs in the USA. It is determined by several factors such as labor and materials (ENR, 2005). Table A-1 lists historical yearly averages of the CCI for the years 1908-2005. Figure A-1 plots these values to obtain a best fit of the year-CCI relationship. Using the equation of the best fit, projections of the CCI were calculated for the years 2000-2025; the CCI was also estimated from the equation for the years 2000-2005 for consistency.

The CCI is used as a measure of the change of costs between different years. This change, ΔCCI, for consecutive years is estimated using Equation 2; it can also be calculated for nonconsecutive years, with t' < t, using Equation 3.

ΔCCI_t = [(CCI_t − CCI_{t−1}) / CCI_{t−1}] × 100    (2)

ΔCCI_{t',t} = [(CCI_t − CCI_{t'}) / CCI_{t'}] × 100    (3)

To estimate the value of a cost at time t, C_t, the cost at time t', C_{t'}, is multiplied by ΔCCI_{t',t}, with t' < t (Equation 4).

C_t = ΔCCI_{t',t} × C_{t'}    (4)

Scenario Generation

The scenarios were defined around the uncertainty in supply. The previously described projects are assigned deterministic expected values of water supply, designated as average withdrawal capacities. Assuming a normal distribution, two mass distribution functions, (a) and (b), each with 10 scenarios, were defined for each supply, using two different standard deviations, 5 and 10 percent of the mean of that supply (Table 3-6). This was based on the Miller and Rice (1983) moment-matching method in an optimal discretization framework. Both methods were discussed earlier in this chapter. Table 3-5.
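The CCI escalation in Equations 2-4 can be sketched as follows. We interpret the multiplier applied to the year-t' cost as the index ratio CCI_t / CCI_t' (the ratio form of Equation 4); the CCI values below are illustrative placeholders, not the Appendix A series:

```python
# Sketch of CCI-based cost escalation (Equations 2-4). The CCI values here
# are illustrative placeholders, not the Appendix A best-fit projections.
CCI = {2003: 6694.0, 2005: 7446.0}  # illustrative ENR annual averages

def pct_change(t_prime, t):
    """Percent change in CCI between years t' and t (Equation 3)."""
    return (CCI[t] - CCI[t_prime]) / CCI[t_prime] * 100.0

def escalate(cost_t_prime, t_prime, t):
    """Cost in year-t dollars from a year-t' cost (Equation 4, ratio form)."""
    return cost_t_prime * CCI[t] / CCI[t_prime]
```

For example, escalate(100.0, 2003, 2005) scales a $100 (year 2003) cost by the 2003-to-2005 index ratio.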
Supply sources, capacities, and costs4

Source (Maximum Allowed Withdrawals) (i); System; Location (j); Capacity Avg/Max (mgd); k; Capital Cost ($M); O&M Cost ($M/yr); Unit Production Cost ($/1,000 gallons)

Saint Johns River Basin (175 mgd), i = 1 -- Saint Johns River:
  Near SR 520/528 (j = 1)     20 / 30        k = 1    189    7.56    3.03
  Near SR 50 (j = 2)          10 / 15        k = 1     91    3.81    3.00
  Near Lake Monroe (j = 3)    50 / 75        k = 1    457   18.71    2.93
                              30 / 45        k = 2    238   11.29    2.74
                              20 / 30        k = 3    217    7.56    3.27
                              9.6 / 14.4     k = 4     84    3.67    2.94
  Near Lake Monroe (j = 4)    10 / 15        k = 1     81    3.80    2.80
                              50 / 75        k = 2    372   18.80    2.63
                              100 / 150      k = 3    714   37.20    2.55
  Near DeLand (j = 5)         20 / 30        k = 1    210    7.56    3.22
                              10 / 15        k = 2    105    3.80    3.25
                              50 / 75        k = 3    447   18.80    2.91
                              100 / 150      k = 4    871   37.20    2.84
  Near Lake George (j = 6)    33 / 49.5      k = 1    386   12.40    3.41
Saint Johns River Basin, i = 1 -- Taylor Creek Reservoir:
  Near Cocoa (j = 7)          10 / 15        k = 1     55    2.20    1.66
                              25 / 37.5      k = 2    134    6.00    1.68
Lower Ocklawaha River (100 mgd), i = 2 -- Lower Ocklawaha River:
  Lower Putnam (j = 1)        21.5 / 32.25   k = 1    255    5.45    2.94
Collocated Seawater (60 mgd), i = 3 -- Indian River Lagoon:
  FP&L Cape Canaveral
  Power Plant (j = 1)         10 / 15        k = 1     90    5.00    3.33
                              20 / 30        k = 2    180    9.40    3.23
                              30 / 45        k = 3    274   13.60    3.20
  Reliant Power Plant (j = 2) 10 / 15        k = 1     90    4.50    3.20
                              20 / 30        k = 2    177    8.40    3.07
                              30 / 45        k = 3    268   12.10    3.28
Collocated Seawater, i = 3 -- Intracoastal Waterway:
  Near New Smyrna
  Beach (j = 3)               5 / 7.5        k = 1     83    3.10    5.06
                              10 / 15        k = 2    121    5.20    3.99
                              15 / 22.5      k = 3    159    7.60    3.61
Floridan Aquifer5             670                            48.91

4 (Vergara, 2004; Wilkening, 2004)
5 Wycoff, R. (2005). "Phone Interview." Consultant, Saint Johns River Water Management District.

Table 3-6.
Scenarios of supply capacities at 5% and 10% STD

(a) 5% STD: σ = 8.75, 1.075, 3, and 33.5, respectively
Scenario s   Source 1 (175.0)   Source 2 (21.5)   Source 3 (60.0)   GW (670)
1            132.4797           16.27608          45.42161          507.208
2            143.659            17.64954          49.25453          550.0089
3            153.2621           18.82935          52.54702          586.7751
4            162.1726           19.92406          55.60203          620.8894
5            170.7568           20.97869          58.54519          653.7547
6            179.2432           22.02131          61.45481          686.2453
7            187.8274           23.07594          64.39797          719.1106
8            196.7379           24.17065          67.45298          753.2249
9            206.341            25.35046          70.74547          789.9911
10           217.5203           26.72392          74.57839          832.792

(b) 10% STD: σ = 17.5, 2.15, 6, and 67, respectively
Scenario s   Source 1 (175.0)   Source 2 (21.5)   Source 3 (60.0)   GW (670)
1            89.9594            11.05215          30.84322          344.416
2            112.3181           13.79908          38.50906          430.0178
3            131.5243           16.1587           45.09404          503.5502
4            149.3452           18.34812          51.20407          571.7787
5            166.5136           20.45739          57.09039          637.5093
6            183.4864           22.54261          62.90961          702.4907
7            200.6548           24.65188          68.79593          768.2213
8            218.4757           26.8413           74.90596          836.4498
9            237.6819           29.20092          81.49094          909.9822
10           260.0406           31.94785          89.15678          995.584

Results and Discussion

As explained in the previous sections, five model formulations were considered: the expected value of supplies formulation (model 1), the scenario or expected value of costs model (model 2), the two-stage stochastic model with recourse (model 3), the CVaR_α minimization model (model 4), and the CVaR_α constraint model (model 5). As the objective of this work was to demonstrate the tradeoffs between cost and risk, measured as CVaR_α, the next sections focus on the Pareto efficient frontier. The significance of this method is that it provides, for a given CVaR_α (or cost) value, the minimum cost (or CVaR_α) that can be obtained without exceeding that CVaR_α (or cost) value. Model 5 was run for constraints corresponding to the confidence levels α′ = 50, 75, 80, 85, 90, 95, and 99 percent; the models are designated 5-α′, i.e., 5-50 through 5-99, respectively.
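Generating discrete supply scenarios from a normal distribution can be sketched as follows. This uses a simple equal-probability quantile rule as a stand-in for the Miller and Rice (1983) moment-matching discretization used here, so it will not reproduce the Table 3-6 values exactly:

```python
# Sketch of discretizing a normal supply distribution into n scenarios using
# an equal-probability quantile rule (a stand-in for the moment-matching
# optimal discretization; it will place the points differently).
from statistics import NormalDist

def discretize(mean, std, n=10):
    """n equal-probability scenarios: quantiles at bin midpoints (s - 0.5)/n."""
    dist = NormalDist(mean, std)
    return [dist.inv_cdf((s - 0.5) / n) for s in range(1, n + 1)]

# e.g., supply source 1 at 5% STD: discretize(175.0, 8.75)
```

The resulting scenario values are symmetric about the mean and increase monotonically, as in Table 3-6.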
In addition, each of the models 5-α′ was run at different values of β, designated 1, ..., b, ..., B in increasing order; the values of β ranged from the smallest feasible value to the value at and after which no change was observed. Different runs are referred to by the model number, the confidence level of the constraint, and the number of the β value used in the constraint, i.e., 5-α′-b. The Pareto efficient frontier for a given confidence level α was generated by plotting a point, with total cost on the abscissa and CVaR_α on the ordinate, for each model run, or value of β; there exists a different frontier for each confidence level α and a set of frontiers for each confidence level α′. To compare the solutions of models 1, 2, and 3, the cost solutions from these models were added as points to the efficient frontier plots; this shows whether these solutions are efficient in the Pareto sense, i.e., undominated. Note that model 3 can be obtained from model 5 by deleting the CVaR_α constraint or setting β → ∞; hence, the solution to model 3 corresponds to one of the endpoints of the Pareto efficient frontier. Also note that model 4 minimizes the CVaR_α of the total cost and does not control costs that are below the confidence level at which CVaR_α is minimized; as a result, the two-stage total cost result of this model does not possess any practical significance. This model, however, finds the smallest β for which a feasible solution can be obtained, providing the other endpoint of the Pareto efficient frontier; hence, model 4 is used to find this point and not as a model by itself. The models were run for two normal distributions of allowable withdrawals, W, with equal means and different standard deviations, namely 5 and 10 percent. In this section, we present and compare the different model results within and across both distributions, (a) and (b).
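The dominance test behind "efficient in the Pareto sense" can be sketched as follows (a hypothetical helper, not from the dissertation, assuming both objectives, cost and CVaR_α, are minimized):

```python
# Hypothetical helper: a (cost, CVaR) point is Pareto-dominated if some other
# point is at least as good in both objectives (both minimized) and strictly
# better in at least one. Undominated points lie on the efficient frontier.
def dominated(point, others):
    c, r = point
    return any(fc <= c and fr <= r and (fc < c or fr < r) for fc, fr in others)
```

A model 1, 2, or 3 solution plotted against a frontier is efficient exactly when this test returns False for the frontier's points.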
5% Standard Deviation

The results for distribution (a), corresponding to 5 percent standard deviations, are summarized in Figures 3-7 to 3-14. Figure 3-7 presents plots of the change in cost with β = CVaR_α′, where α′ is the confidence level α of the constraint; these plots correspond to the efficient frontiers at different α′. The figure demonstrates that, for all values of α′, the cost increases as β decreases; in other words, a tighter constraint results in an increase in cost. In addition, intuitively, a higher value of α′ results in a higher range of β. The range of β consists of a lower limit, below which the model is no longer feasible, and an upper limit, beyond which the solution is independent of the constraint. The lower limit coincides with the minimized CVaR_α, for α = α′, of model 4; the upper limit corresponds to CVaR_α, for α = α′, calculated from the solution of model 3. The cost solutions obtained from models 1, 2, and 3, i.e., the expected value solution, the individual scenarios, and the two-stage stochastic solutions, are plotted in the graphs at values of β equal to the cost for model 1 and the model 2 scenarios, and at β = CVaR_α, for α corresponding to α′, for model 3. Note that the efficient frontier delineates a risk-return space corresponding to the lowest risk for a given level of return (cost), or the best possible return (minimum cost) for a given level of risk; points below the concave frontier line correspond to inefficient or suboptimal solutions, and points above the frontier are infeasible. Figure 3-7 shows that, for all α′, (1) the expected value solution cannot be achieved for any level of risk, (2) only two of the scenarios of model 2, corresponding to high risk values, are feasible but inefficient, and (3) the two-stage stochastic solution is an efficient solution corresponding to the lowest possible cost and a high level of risk.
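Recalculating CVaR_α from a discrete set of scenario costs can be sketched as the average of the worst (1 − α) tail. This assumes equal-probability scenarios and is the standard empirical estimator, not necessarily the exact implementation used in these models:

```python
# Sketch of empirical CVaR for equal-probability scenario costs: the mean of
# the worst ceil((1 - alpha) * n) costs. Standard estimator; not necessarily
# the dissertation's exact implementation.
import math

def cvar(costs, alpha):
    """Mean of the worst (1 - alpha) fraction of scenario costs."""
    worst_first = sorted(costs, reverse=True)   # highest costs first
    k = max(1, math.ceil((1.0 - alpha) * len(worst_first)))
    return sum(worst_first[:k]) / k
```

With 10 scenarios, cvar(costs, 0.90) is simply the single worst-case cost, which is why higher confidence levels produce higher recalculated CVaR_α values in Figures 3-8 to 3-14.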
Figures 3-8 to 3-14 demonstrate the dependency of CVaR_α, calculated at different confidence levels, on (A) α, (B) β, and (C) cost, for each model 5-α′ simulated: 50, 75, 80, 85, 90, 95, and 99 percent, respectively. For each α′, the model's CVaR_α was recalculated and plotted for the different β values; for example, if the model was run for 10 different constraint values β, 10 lines were plotted, monitoring the change of the recalculated solution CVaR_α with the change in the confidence level, the constraint, and the cost solution. The results were consistent with each other and with theory for all plots: a higher confidence level corresponds to a higher solution CVaR_α, a tighter constraint corresponds to a lower solution CVaR_α, and a higher cost corresponds to a higher solution CVaR_α.

Figure 3-7. Efficient frontier for α′ = 50, 75, 80, 85, 90, 95, and 99 percent, 5% STD.

Figure 3-8. Change of CVaR_α with (A) β, (B) α, and (C) cost for α′ = 50%, 5% STD.

Figure 3-9. Change of CVaR_α with (A) β, (B) α, and (C) cost for α′ = 75%, 5% STD.
Figure 3-10. Change of CVaR_α with (A) β, (B) α, and (C) cost for α′ = 80%, 5% STD.

Figure 3-11. Change of CVaR_α with (A) β, (B) α, and (C) cost for α′ = 85%, 5% STD.

Figure 3-12. Change of CVaR_α with (A) β, (B) α, and (C) cost for α′ = 90%, 5% STD.