Citation
Modeling Typha Domingensis in an Everglades Wetland

Material Information

Title:
Modeling Typha Domingensis in an Everglades Wetland
Creator:
Lagerwall,Gareth L
Place of Publication:
[Gainesville, Fla.]
Publisher:
University of Florida
Publication Date:
Language:
english
Physical Description:
1 online resource (269 p.)

Thesis/Dissertation Information

Degree:
Doctorate ( Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Agricultural and Biological Engineering
Committee Chair:
Kiker, Gregory
Committee Co-Chair:
Munoz-Carpena, Rafael
Committee Members:
Hatfield, Kirk
James, Andrew L
Wang, Naiming
Graduation Date:
8/6/2011

Subjects

Subjects / Keywords:
Ecological modeling ( jstor )
Everglades ( jstor )
Mathematical independent variables ( jstor )
Mathematical variables ( jstor )
Modeling ( jstor )
Phosphorus ( jstor )
Simulations ( jstor )
Soil science ( jstor )
Spatial models ( jstor )
Water depth ( jstor )
Agricultural and Biological Engineering -- Dissertations, Academic -- UF
ecology -- everglades -- modeling -- typha
Miami metropolitan area ( local )
Genre:
Electronic Thesis or Dissertation
born-digital ( sobekcm )
Agricultural and Biological Engineering thesis, Ph.D.

Notes

Abstract:
The regional simulation model (RSM), developed by the South Florida Water Management District (SFWMD), was originally coupled with the transport and reaction simulation engine (TARSE) in order to model phosphorus dynamics in an Everglades wetland in Southern Florida, USA. The dynamic nature and user-defined inputs and interactions of this coupled model allowed for adapting it towards modeling ecology. Specifically, it was applied towards modeling Typha domingensis (Southern Cattail, or more generally, cattail) densities across Water Conservation Area 2A (WCA2A). In order to address the issues of complexity, uncertainty, and sensitivity (i.e. how complex can a model be made in order to reduce uncertainty, while maintaining a relatively low level of sensitivity/instability), five levels of increasing algorithmic complexity were used. The two main factors determining cattail density are water depth and soil phosphorus concentration, and these were thus used to inform the levels of complexity. A simple logistic function was used as the Level 1 complexity to model cattail density. Water depth was used to influence the logistic function in the Level 2 complexity. Water depth, along with soil phosphorus concentration, was used to influence the logistic function in the Level 3 complexity. An inter-species inhibition factor, in the form of a Level 1 Cladium jamaicense (sawgrass) modeled density, was used along with water depth and soil phosphorus concentration to influence the logistic function in the Level 4 complexity. Lastly, an inter-species feedback mechanism was implemented in the Level 5 complexity, which is essentially the Level 4 complexity but with the cattail density negatively influencing the sawgrass density. Vegetation maps for the years 1991, 1995, and 2003 were used for initialization and comparison of model output during training (1991-1995), testing 1 (1991-2003) and testing 2 (1995-2003) simulations.
The growth rate value, which influences the logistic function throughout all the levels of complexity, was calibrated to 6.7×10^-9 g/g·s during the training simulation. The difference between model output and historical data was calculated, along with the Moran's I statistic for spatial correlation and an abundance-area curve for comparing regional density distribution, and it was determined that the Level 4 and Level 5 complexities were best suited for matching the historical data. Spatial uncertainty, through the use of sequential indicator simulation, was used to inform a global uncertainty and sensitivity analysis (GUSA). The variance-based Sobol method was used to conduct the GUSA, and it was determined here too that the Level 4 complexity was best suited to model cattail densities in the region, providing the best balance between complexity, uncertainty and sensitivity. Finally, based on the previous two findings, the Level 4 and Level 5 complexities were used to determine the impact of alternate management scenarios on the area. Scenarios included high, medium, and low, as well as annually alternating (high and low), water depths and soil phosphorus concentrations. A GUSA was conducted on these management scenarios to determine their influence relative to the other, uncontrollable factors such as the growth and death rates. As with the previous GUSA, the depth was a highly influential parameter, with initial cattail and sawgrass densities coming into play largely through their interaction effects. Time series of select management scenarios were plotted, and it was determined that expansive cattail growth required a high soil phosphorus concentration. Also, in order to prevent cattail densities increasing significantly, it was determined that a high water depth should be used in combination with a low soil phosphorus concentration.
This last statement requires a caveat, in that a high water depth will likely kill most other vegetation species, including sawgrass, which is undesirable. In summary, managing cattail expansion in this region is a complex task, requiring close management and monitoring of water depth and soil phosphorus concentration, and possibly other factors not considered in these model complexities. However, this modeling framework, with user-definable complexities and management scenarios, can be considered a useful tool for analyzing many more alternatives to aid management decisions in the future. Lastly, this is a unique, spatially distributed, deterministic, ecological model, providing cattail density values across WCA2A. Provided adequate data, this coupled RSM/TARSE model, along with the groups of analyses conducted, could be applied towards simulating other vegetation species in other habitats. ( en )
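The tiered complexity scheme described in the abstract is built around a single logistic growth core whose rate is scaled by environmental factors. The following is an illustrative sketch only: the function name, parameter names, numeric values, and the multiplicative form of the factors are assumptions for demonstration, not the dissertation's actual formulation or calibrated values.

```python
def logistic_step(density, growth_rate, carrying_capacity, dt,
                  depth_factor=1.0, phosphorus_factor=1.0, inhibition_factor=1.0):
    """One explicit-Euler step of logistic growth, with the growth rate
    scaled by environmental factors (all factors = 1.0 reduces to Level 1)."""
    effective_rate = growth_rate * depth_factor * phosphorus_factor * inhibition_factor
    return density + effective_rate * density * (1.0 - density / carrying_capacity) * dt


# Level 1: plain logistic growth (no environmental influence).
# Level 2: additionally pass a depth_factor derived from water depth.
# Level 3: pass depth_factor and phosphorus_factor.
# Level 4: also pass an inhibition_factor derived from a Level 1 sawgrass density.
# Level 5: as Level 4, but sawgrass is stepped too, inhibited by cattail density.
density = 10.0
for _ in range(500):
    density = logistic_step(density, growth_rate=0.1, carrying_capacity=1000.0, dt=1.0)
# density has now converged close to the carrying capacity
```

With all factors at their default of 1.0 this is the Level 1 case; the higher levels differ only in which factors are computed from the simulated water depth, soil phosphorus, and companion-species density.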
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2011.
Local:
Adviser: Kiker, Gregory.
Local:
Co-adviser: Munoz-Carpena, Rafael.
Statement of Responsibility:
by Gareth L Lagerwall.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Lagerwall,Gareth L. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
872113184 ( OCLC )
Classification:
LD1780 2011 ( lcc )

Downloads

This item has the following downloads:


Full Text

PAGE 1

1 MODELING TYPHA DOMINGENSIS IN AN EVERGLADES WETLAND By GARETH LYNTON LAGERWALL A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2011

PAGE 2

2 © 2011 Gareth Lynton Lagerwall

PAGE 3

3 To My Parents and Brother: Gary, Rene, and Dane Lagerwall

PAGE 4

4 ACKNOWLEDGMENTS I would like to thank the chair and members of my supervisory committee, Gregory Kiker, Rafael Muñoz-Carpena, Andrew James, Naiming Wang, and Kirk Hatfield, for the roles that they played and the assistance that they provided; the University of Florida, the Water Resources Research Center and the South Florida Water Management District for providing an interesting project and funding to match; my colleagues, especially Stuart Muller, Oscar Perez, William Pelletier, Zuzanna Zajac, and Matteo Convertino; Susan Risko, for filling my life with joy; and finally all of my family and friends who stood by my side, kept me motivated, and lifted me up when I was down. Again, thank you.

PAGE 5

5 TABLE OF CONTENTS

page

ACKNOWLEDGMENTS  4
LIST OF TABLES  8
LIST OF FIGURES  9
LIST OF ABBREVIATIONS  12

CHAPTER

1 INTRODUCTION  17

2 A SPATIALLY DISTRIBUTED, DETERMINISTIC, APPROACH TO MODELING TYPHA DOMINGENSIS (CATTAIL) IN AN EVERGLADES WETLAND  23
    Background  23
    Materials and Methods  29
        The Regional Simulation Model (RSM)  29
        Simulating Transport and Reactions Using TARSE  30
        Model Application  32
        Test Site  35
        Initial Conditions, Boundary Conditions, and Time Series Data  36
        Statistical Analysis of Simulated and Monitored Biomass  39
        Model Training and Testing  41
    Results  42
    Discussion  45

3 GLOBAL UNCERTAINTY AND SENSITIVITY ANALYSIS OF A SPATIALLY DISTRIBUTED ECOLOGICAL MODEL  61
    Background  61
    Materials and Methods  63
        Test Site  64
        The Variance Based Method of Sobol  65
        Parameter Distribution Functions  67
        Method of Operation  71
    Results  72
    Discussion  73

4 ACCOUNTING FOR THE IMPACT OF MANAGEMENT SCENARIOS ON AN EVERGLADES WETLAND  85
    Background  85

PAGE 6

6   Materials and Methods  87
        Hydrology Management Scenarios  87
        Soil Phosphorus Management Scenarios  88
        Global Uncertainty And Sensitivity Analysis (GUSA)  89
        Representative Statistic  90
        Time Series Analysis  90
    Results  91
    Discussion  93

5 BEYOND CATTAILS: LIMITATIONS, CURRENT AND FUTURE RESEARCH  110
    Background  110
    Limitations (Short Term)  110
    Limitations (Long Term)  111
    Extra Research  112

6 CONCLUSION  118

APPENDIX

A SAMPLE XML INPUT FILES  120
    RSM Input File  120
    TARSE Input File  123

B ANALYSIS SCRIPTS  135
    Main.py  135
    RunHSE.py  138
    Analysis.py  139
    MoransI.py  151
    Abundance.py  165
    TimeSeries.py  177

C MANAGEMENT TIME SERIES SCRIPTS FOR LEVEL4  190
    CreateXMLs_mgmt.py  190
    TSInputs.sam  194
    pythonTSmgmt.py  196
    pypostplot_1.py  202

D SEQUENTIAL INDICATOR SIMULATION FILES  205
    SISIM.par  205
    Addcoord.par  207
    MainMapPy.py  208
    MapProcessing_files_MAIN_2011_03_13.py  210
    MappingScript  216

PAGE 7

7 E MANAGEMENT TIME SERIES PLOTS FOR LEVEL 4  217

F MANAGEMENT TIME SERIES PLOTS FOR LEVEL 5  222

G GUSA INPUT FILES  227
    SimlabInput.fac  227
    SimlabOut.sam (portion)  234
    CreateXMLs.py  235
    wqBASE  238
    runBASE  249
    HSEBatch  251
    pythonHSE.py  252

LIST OF REFERENCES  260

BIOGRAPHICAL SKETCH  269

PAGE 8

8 LIST OF TABLES

Table  page
2-1 Cattail Class and Density Values for Formatting Data Maps  47
2-2 Summary of Nash-Sutcliffe values comparing model and observed data for the Moran's I and Abundance-Area statistics  48
2-3 Parameter description for the increasing levels of complexity studied  49
3-1 Probability distributions of model input factors used in global uncertainty and sensitivity analysis  75
3-2 Summary table of Sobol first order sensitivity indices of delta mean (DM) for all 5 levels of complexity (L1 through L5)  76
3-3 Summary table of Sobol total order sensitivity indices of delta mean (DM) for all 5 levels of complexity (L1 through L5)  77
4-1 Probability distributions of model input factors used in global uncertainty and sensitivity analysis  96
4-2 Management distributions for soil phosphorus, depth and sawgrass parameters  97
4-3 Example of management scenarios used for time series analysis. This table repeats for medium and high initial sawgrass densities.  98
4-4 Values associated with management scenarios  99
4-5 Summary table of Sobol first order sensitivity indices of delta mean (DM) for the Region (R) and NE, CE and SW zones  100
4-6 Summary table of Sobol total order sensitivity indices of delta mean (DM) for the Region (R) and NE, CE and SW zones  101

PAGE 9

9 LIST OF FIGURES

Figure  page
2-1 Test site, water conservation area 2A (WCA2A)  50
2-2 Formatting of cattail input maps  51
2-3 Sawgrass input maps for the years 1991, 1995, and 2003 respectively  52
2-4 Results for a) Training (1991-1995) b) Testing 1 (1991-2003) c) Testing 2 (1995-2003) simulations for the Level 1,2,3,4,5 complexities  53
2-5 Regional and Zonal trends for a) Training b) Testing 1 c) Testing 2 simulation periods, for all five levels of complexity  54
2-6 Regional statistics for Training period (1991-1995) and all five levels of complexity  55
2-7 Regional statistics for Testing 1 period (1991-2003) and all five levels of complexity  56
2-8 Regional statistics for Testing 2 period (1995-2003) and all five levels of complexity  57
2-9 Nash-Sutcliffe summary of statistics. A graphical representation of Table 2-2. The Level 4 and 5 complexities perform consistently well.  58
2-10 Classified difference maps for a) Training (1991-1995) b) Testing 1 (1991-2003) c) Testing 2 (1995-2003) simulations  59
2-11 Classified difference summary  60
3-1 Test site, water conservation area 2A (WCA2A)  78
3-2 Uncertainties of delta mean (DM) for a) Level 1 through e) Level 5 respectively. In frequency format.  79
3-3 Uncertainties of delta mean (DM) for a) Level 1 through e) Level 5 respectively. In cumulative distribution format.  80
3-5 Individual first order and total order sensitivities of delta mean (DM) for all 5 levels of complexity  82
3-6 Individual first order (a) and total order (b) sensitivities of delta mean (DM) for all 5 levels of complexity  83
3-7 Combined theoretical total uncertainty, sensitivity, and complexity interactions, in order to determine the most relevant model complexity  84

PAGE 10

10 4-1 Test site, water conservation area 2A (WCA2A)  102
4-2 Water conservation area 2A (WCA2A) triangular mesh, with numbered cells, and NE, CE and SW zones  103
4-3 Model uncertainty plots in frequency format for a) Regional b) North East Zone c) Central Zone d) South West Zone  104
4-4 Model uncertainty plots in cumulative distribution format for a) Regional b) North East Zone c) Central Zone d) South West Zone  105
4-5 Model uncertainty plots in cumulative distribution a) and frequency b) format for Level 5 complexity, regional DM  106
4-6 Model 95% confidence interval, from uncertainty plots  107
4-7 Model sensitivity for first order (a) and total order (b) of parameters CatGF, SawGF, DepthM, PhosphorusM, Sawgrass, Cattail  108
4-8 Time series analysis of Level 4  109
5-1 UML diagram of TARSE showing included random walk and free movement modules  115
5-2 Flow chart representing functioning of mass flow (FreeMove) algorithm  116
5-3 UML diagram representing inclusion of random walk algorithm within TARSE  117
E-1 Regional mean density trend for Level 4 complexity applied to various management scenarios, with low initial sawgrass density  217
E-2 Regional mean density trend for Level 4 complexity applied to various management scenarios, with medium initial sawgrass density  218
E-3 Regional mean density trend for Level 4 complexity applied to various management scenarios, with high initial sawgrass density  219
E-4 Complete table of management scenarios for Level 4 complexity, with final regional mean cattail densities above 400 g/m2  220
E-5 Complete table of management scenarios for Level 4 complexity, with final mean cattail densities below 200 g/m2  221
F-1 Regional mean density trend for Level 5 complexity applied to various management scenarios, with low initial sawgrass density  222
F-2 Regional mean density trend for Level 5 complexity applied to various management scenarios, with medium initial sawgrass density  223

PAGE 11

11 F-3 Regional mean density trend for Level 5 complexity applied to various management scenarios, with high initial sawgrass density  224
F-4 Complete table of management scenarios for Level 5 complexity, with final regional mean cattail densities above 400 g/m2  225
F-5 Complete table of management scenarios for Level 4 complexity, with final mean cattail densities below 200 g/m2  226

PAGE 12

12 LIST OF ABBREVIATIONS

ADRE  Advection Dispersion Reaction Equation
ARC GIS  Arc Geographic Information System
ATLSS  Across Trophic Level System Simulation
CATGF  Cattail Growth Factor
CCDF  Conditional Cumulative Distribution Function
CERP  Comprehensive Everglades Restoration Plan
DepthF  Depth Factor
DepthMgmt  Depth Management factor
DM  Delta Mean
ELM  Everglades Landscape Model
FAST  Fourier Amplitude Sensitivity Test
FTLOADDS  Flow and Transport in a Linked Overland Aquifer Density Dependent System
GF  Growth Factor
GSLIB  GeoStatistical Library
GUSA  Global Uncertainty and Sensitivity Analysis
HSE  Hydrologic Simulation Engine
HSI  Habitat Suitability Index
K  Carrying Capacity
k  Number of Parameters
L1  Level 1 complexity algorithm
L2  Level 2 complexity algorithm
L3  Level 3 complexity algorithm
L4  Level 4 complexity algorithm
L5  Level 5 complexity algorithm

PAGE 13

13 L1_DM  Level 1 Delta Mean
MSE  Management Simulation Engine
ODE  One Dimensional Equation
OOP  Object Oriented Programming
P  population size
PDF  Probability Distribution Function
PMgmt  Soil Phosphorus Management factor
phosphorusF  Soil Phosphorus Concentration Factor
RTE  Ecological implementation of the coupled RSM and TARSE model
RSM  Regional Simulation Model
SA  Sensitivity Analysis
SAWGF  Sawgrass Growth Factor
SFWMD  South Florida Water Management District
SFWMM  South Florida Water Management Model
SS  Sequential Simulation
SISIM  Sequential Indicator Simulation program
STA  Stormwater Treatment Area
TARSE  Transport And Reaction Simulation Engine
UA  Uncertainty Analysis
UFHPC  University of Florida High Performance Computing Center
VFSMOD  Vegetative Filter Strip Model
WCA2A  Water Conservation Area 2A
WQ  Water Quality
WQE  Water Quality Engine
XML  eXtended Markup Language

PAGE 14

14 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

MODELING TYPHA DOMINGENSIS IN AN EVERGLADES WETLAND

By Gareth Lynton Lagerwall

August 2011

Chair: Gregory A. Kiker
Major: Agricultural and Biological Engineering

The regional simulation model (RSM), developed by the South Florida Water Management District (SFWMD), was originally coupled with the transport and reaction simulation engine (TARSE) in order to model phosphorus dynamics in an Everglades wetland in Southern Florida, USA. The dynamic nature and user-defined inputs and interactions of this coupled model allowed for adapting it towards modeling ecology. Specifically, it was applied towards modeling Typha domingensis (Southern Cattail, or more generally, cattail) densities across Water Conservation Area 2A (WCA2A). In order to address the issues of complexity, uncertainty, and sensitivity (i.e. how complex can a model be made in order to reduce uncertainty, while maintaining a relatively low level of sensitivity/instability), five levels of increasing algorithmic complexity were used. The two main factors determining cattail density are water depth and soil phosphorus concentration, and these were thus used to inform the levels of complexity. A simple logistic function was used as the Level 1 complexity to model cattail density. Water depth was used to influence the logistic function in the Level 2 complexity. Water depth, along with soil phosphorus concentration, was used to influence the logistic function in the Level 3 complexity. An inter-species inhibition factor in the form of a Level 1 Cladium

PAGE 15

15 jamaicense (sawgrass) modeled density was used along with water depth and soil phosphorus concentration to influence the logistic function in the Level 4 complexity. Lastly, an inter-species feedback mechanism was implemented in the Level 5 complexity, which is essentially the Level 4 complexity but with the cattail density negatively influencing the sawgrass density. Vegetation maps for the years 1991, 1995, and 2003 were used for initialization and comparison of model output during training (1991-1995), testing 1 (1991-2003) and testing 2 (1995-2003) simulations. The growth rate value, which influences the logistic function throughout all the levels of complexity, was calibrated to 6.7×10^-9 g/g·s during the training simulation. The difference between model output and historical data was calculated, along with the Moran's I statistic for spatial correlation and an abundance-area curve for comparing regional density distribution, and it was determined that the Level 4 and Level 5 complexities were best suited for matching the historical data. Spatial uncertainty, through the use of sequential indicator simulation, was used to inform a global uncertainty and sensitivity analysis (GUSA). The variance-based Sobol method was used to conduct the GUSA, and it was determined here too that the Level 4 complexity was best suited to model cattail densities in the region, providing the best balance between complexity, uncertainty and sensitivity. Finally, based on the previous two findings, the Level 4 and Level 5 complexities were used to determine the impact of alternate management scenarios on the area. Scenarios included high, medium, and low, as well as annually alternating (high and low), water depths and soil phosphorus concentrations. A GUSA was conducted on these management scenarios to determine their influence relative to the other uncontrollable factors such as the growth and death rates. As with the previous GUSA,

PAGE 16

16 the depth was a highly influential parameter, with initial cattail and sawgrass densities coming into play largely through their interaction effects. Time series of select management scenarios were plotted, and it was determined that expansive cattail growth required a high soil phosphorus concentration. Also, in order to prevent cattail densities increasing significantly, it was determined that a high water depth should be used in combination with a low soil phosphorus concentration. This last statement requires a caveat, in that a high water depth will likely kill most other vegetation species, including sawgrass, which is undesirable. In summary, it is a complex task to manage the cattail expansion in this region, requiring the close management and monitoring of water depth and soil phosphorus concentration, and possibly other factors not considered in these model complexities. However, this modeling framework, with user-definable complexities and management scenarios, can be considered a useful tool for analyzing many more alternatives which could be used to aid management decisions in the future. Lastly, this is a unique, spatially distributed, deterministic, ecological model, providing cattail density values across WCA2A. Provided adequate data, this coupled RSM/TARSE model, along with the groups of analyses conducted, could be applied towards simulating other vegetation species in other habitats.
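The Moran's I statistic used above to compare the spatial correlation of simulated and historical density maps can be stated compactly. The sketch below is illustrative only: the function name, the binary rook-neighbour weight matrix, and the toy cell values are assumptions for demonstration, not the dissertation's actual MoransI.py implementation.

```python
def morans_i(values, weights):
    """Global Moran's I: near +1 -> clustered, near 0 -> random, negative -> dispersed.

    values  : list of cell densities (one entry per mesh cell)
    weights : n x n spatial weight matrix, weights[i][j] > 0 when cells i and j
              are neighbours, zero elsewhere (including the diagonal).
    """
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]                     # deviations from the mean
    w_total = sum(sum(row) for row in weights)         # sum of all weights
    cross = sum(weights[i][j] * z[i] * z[j]
                for i in range(n) for j in range(n))   # spatially weighted covariance
    variance = sum(d * d for d in z)
    return (n / w_total) * cross / variance


# Four cells in a row; adjacent cells are neighbours. High densities cluster on
# the left, so Moran's I is positive; alternating values give negative I.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = morans_i([1.0, 1.0, 0.0, 0.0], w)    # 1/3, positive autocorrelation
alternating = morans_i([1.0, 0.0, 1.0, 0.0], w)  # -1.0, dispersed pattern
```

Computing the statistic on both the simulated and the historical cattail map, and comparing the two values, gives a scalar measure of how well the model reproduces the observed spatial pattern.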


CHAPTER 1 INTRODUCTION

The Everglades wetland ecosystem of south Florida, USA, is an intensely managed system for water quantity, quality and ecological processes. As early as the Central and South Florida Project, the Everglades were channelized in order to aid in flood protection and provide arable land for agriculture (Perez, 2006). Today, almost all the water in south Florida passes through at least one canal before entering the surrounding ocean (Layzer, 2006). This management has had a negative impact on the environment, with wetland areas being reduced by up to 50%, and a variety of wildlife species becoming threatened. Certain bird populations have been reduced by 90%, and other species such as Trichechus manatus latirostris (Florida manatee), Puma concolor coryii (Florida panther), Ammodramus maritimus mirabilis (Cape Sable Seaside Sparrow), and Tantilla oolitica (Rim Rock Crowned Snake), are at risk of extinction (Brown et al., 2006). The Comprehensive Everglades Restoration Plan (CERP) was approved with the Water Resources Development Act of 2000, with the express goal of restoring some of the ecosystem's former extent and functioning. The focus of CERP has been on improved water quantity and water quality management; the assumption is that if the quantity and quality are adequate, the ecology will follow suit. There is, however, an increasing concentration on the ecological impacts of various management decisions, and these efforts center on improving species diversity and protecting existing habitats (USACE, 2010b).


In addition to the changes in hydrology, continuous mining, agriculture and urbanization activities have resulted in invasive and exotic plants becoming established in place of the original vegetation, altering habitats and often forming mono-crop (single species dominated) ecosystems (Tarboton et al., 2004). One of these species in particular, Typha domingensis (cattail), has been labeled as an indicator species, or species of concern. Cattail is a native Everglades monocotyledonous vegetation species, typically occurring as sparse complements alongside Cladium jamaicense (sawgrass) stands. Cattails have become invasive, forming mono-crop stands, and since the 1980s the areas covered by cattail stands in Water Conservation Area 2A (WCA2A) have doubled, expanding southward into the sawgrass marshes (Willard, 2010). As a result, cattail distribution is now used to determine the effectiveness of various water management decisions. In response to these hydrological and ecological challenges, computer models have been developed to aid scientists and managers to explore different system responses. These models aid our understanding of a complex system such as the Everglades, and allow us to evaluate different ecological outcomes of management decisions before the more costly task of their implementation (Fitz et al., 2010). There are a number of fixed-form ecological models currently in use across the Everglades region. Of these, the Across Trophic Level System Simulation (ATLSS) (Gross, 1996) and Everglades Landscape Model (ELM) (Fitz & Trimble, 2006b) are probably the most well known. These, and most other models available for modeling cattail in the Everglades, are qualitative in their algorithmic structure; that is, they instantaneously switch between one species assemblage or habitat to another, given rule-based conditions. The


majority of these current ecological models are also stochastic, that is, they are based on probabilities of change and a degree of noise or randomness. They generally run as post-process ecological models, using hydrological data output by other models such as the South Florida Water Management Model (SFWMM) (Fitz et al., 2010). A combined effort between the University of Florida, the South Florida Water Management District (SFWMD) and the US Geological Survey created the Transport and Reaction Simulation Engine (TARSE) (Jawitz et al., 2008), which was originally designed to run in-line with the SFWMD-developed Regional Simulation Model (RSM) (SFWMD, 2005c) to simulate soil phosphorus dynamics in the Everglades system. The object-oriented programming (OOP) structure of this coupled hydrologic/water quality model, along with the user-definable inputs and interactions, allowed for the extension of this model beyond its original purpose into ecological processes and features. This extension of the TARSE model system allows for a more deterministic representation of cattail densities and spread, which has important ramifications for management objectives in this southern Florida system. The extension of TARSE into ecological processes and objects allows a more nuanced view of ecosystem processes away from simplistic presence/absence predictions. As cattail dynamics are better understood from a scientific and systems modeling perspective, cattail density can be managed to a point where habitat diversity is maintained without the threat of cattail dominance and subsequent habitat degradation. That is, the complete eradication of cattails is unnecessary, provided that there is an adequate amount of management and control. With these hydrological and ecological processes in mind, the main goal of this research project is to determine, through the use of this uniquely coupled RSM/TARSE


model, the impact of various water management and water quality control measures on the density and distribution dynamics of Typha domingensis (Southern Cattail) throughout WCA2A. This modeling effort could subsequently be scaled up to determine cattail dynamics across additional areas in the Everglades region. In order to attain this goal, three main objectives were decided upon, and are presented in the following three chapters in the form of individual journal articles. The first objective was to reproduce historical density distributions with the coupled RSM/TARSE model. In order to address the issues of complexity, uncertainty, and sensitivity (i.e. how complex can a model be made in order to reduce uncertainty, while maintaining a relatively low level of sensitivity/instability), discussed by Muller (2011), five levels of increasing algorithmic complexity were used to model the cattail dynamics across WCA2A. The two main factors determining cattail density are water depth and soil phosphorus concentration (Newman et al., 1998), and were thus used to inform the different levels of complexity. Given the general lack of mechanistic growth models available for cattails, a logistic function (Reed & Berkson, 1929) was used as the Level 1 complexity to model cattail density. Water depth was used to influence the logistic function in the Level 2 complexity. Water depth along with soil phosphorus concentration were used to influence the logistic function in the Level 3 complexity. An inter-species inhibition factor in the form of a Level 1 (logistic function) Cladium jamaicense (sawgrass) modeled density was used along with water depth and soil phosphorus concentration to influence the logistic function in the Level 4 complexity. Finally, a Level 5 complexity was used to represent an inter-species feedback mechanism (as in Hsu et al. (2000)), whereby the cattail density of a Level 4 type


cattail model had a negative impact on the growth of a Level 1 type sawgrass model. Vegetation maps for the years 1991, 1995, and 2003 were used for initialization and comparison of model output during training (1991-1995), testing 1 (1991-2003) and testing 2 (1995-2003) simulations. The second objective was to determine the sensitivity of model output to various input factors, through a global uncertainty and sensitivity analysis (GUSA). The variance-based method of Sobol (Sobol, 2001) was used for the GUSA. Spatial uncertainty in the initial data maps was addressed through the use of a sequential indicator simulation (SIS) (Goovaerts, 1996), whereby 250 alternate maps, all obeying observed data and spatial correlation, were produced. Other parameters used in the GUSA include the sawgrass initial density (uniformly distributed across WCA2A), the growth rate for both cattail and sawgrass, a water depth value, and soil phosphorus concentration. The statistic used for the GUSA output was the change in regional mean density (DM) (model output minus initial value), which signals an increase or decrease in mean density over time. The third objective was to determine the impact of potential management decisions on the density and distribution of cattail throughout the WCA2A region, using insight gained from model development and GUSA results from the previous two objectives. The Level 4 complexity algorithm, with a water depth, soil phosphorus concentration and sawgrass density influenced logistic function, was used to simulate cattail densities through to the year 2030. Management scenarios included high, medium, low, and annually alternating (high and low) values for water depth, soil phosphorus concentration and initial sawgrass density respectively. The trends in mean


regional density were plotted, and the best management scenario selected for controlling cattail densities. These three objectives and chapters work together to explore the challenge of developing ecological algorithms within complex ecosystems. They focus on the challenges of ecological model development, testing and verification with monitored data, and finally explore potential management options for control and ecosystem restoration.
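The change-in-regional-mean-density statistic used as the GUSA output in the second objective can be sketched as follows. This is a minimal illustration, assuming density maps are stored as flat per-element lists; the function names and the four-element example maps are hypothetical, standing in for the 250 SIS realizations used in the actual analysis.

```python
def regional_mean(density_map):
    """Mean density (g/m^2) over all mesh elements."""
    return sum(density_map) / len(density_map)

def dm_statistic(final_map, initial_map):
    """DM = regional mean of model output minus regional mean of the
    initial condition; a positive value signals a net density increase."""
    return regional_mean(final_map) - regional_mean(initial_map)

# Example with two hypothetical 4-element maps:
initial = [10.0, 50.0, 100.0, 40.0]
final = [30.0, 80.0, 120.0, 70.0]
print(dm_statistic(final, initial))  # 25.0
```

Evaluating DM across the set of SIS-generated initial maps yields a distribution of outcomes, which is what the variance-based Sobol method decomposes into parameter sensitivities.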


CHAPTER 2 A SPATIALLY DISTRIBUTED, DETERMINISTIC APPROACH TO MODELING TYPHA DOMINGENSIS (CATTAIL) IN AN EVERGLADES WETLAND

Background

The Everglades, commonly known as the River of Grass (Layzer, 2006), in southern Florida, USA, once covered some 28,500 km². This wetland ecosystem was sustained by the Kissimmee River, flowing through Lake Okeechobee and southwards as a shallow, slow-moving sheet of water flowing freely to the estuaries of Biscayne Bay, Ten Thousand Islands, and Florida Bay. The channelization of the Everglades around 1948 saw the reduction of the original wetland areas by up to 50%, with dependent wildlife affected even more. Local bird populations, such as herons, ibises and egrets, have been reduced by 90%, while other populations such as Trichechus manatus latirostris (Florida manatee), Puma concolor coryii (Florida panther), Ammodramus maritimus mirabilis (Cape Sable Seaside Sparrow), and Tantilla oolitica (Rim Rock Crowned Snake), are at risk of disappearing completely (Brown et al., 2006). In addition to the changes in hydrology, continuous mining, agriculture and urbanization activities have resulted in invasive and exotic plants becoming established in place of the original vegetation, altering habitats and often forming mono-crop stands (single species environment) (Odum et al., 2000). The Comprehensive Everglades Restoration Plan (CERP) was implemented around 2000 (USACE, 2010a) with the goal of restoring some of the ecosystem's former extent and functioning. The main focus of CERP has been on improved water quantity and water quality management, with the assumption that if the water quantity and quality are adequate, the ecology will follow suit. There is, however, an increasing concentration on the ecological impacts of various management


decisions, and these efforts center on improving species diversity and protecting existing habitats (USACE, 2010b). In an effort to achieve these goals, the storm water treatment areas (STA) were constructed just south of the Everglades agricultural area (EAA), in order to filter out phosphorus from the water before releasing it downstream. These treatment areas also provide wildlife habitat, and the water flows from them into the Everglades proper (Guardo et al., 1995). The emergent wetland species Typha domingensis (cattail) is a native Everglades monocotyledonous macrophyte, typically occurring as sparse complements alongside Cladium jamaicense (sawgrass) stands. The two species have significantly different morphology, growth, and life history characteristics (Miao & Sklar, 1998), and this has enabled the cattail to expand prolifically under the altered conditions in the Everglades. In the 1980s, the area covered by cattail stands in WCA2A doubled, expanding southward into the sawgrass marshes (Willard, 2010). Cattails have hence been labeled as an indicator species, or species of concern, and their distribution is used to determine the effectiveness of various water management decisions. Their expansion has been studied extensively (Miao 2004, Wu et al. 1997 and Newman et al. 1998), and it has been determined that there are four main external factors that affect their expansion: water depth, hydroperiod, soil phosphorus concentration, and disturbance (Newman et al., 1998). It was determined by Grace (1989) that the optimum depth at which cattail grows is between 24 cm and 95 cm, with a hydroperiod of 180 days to 280 days, according to Wetzel (2001). Cattails have been found to be invading the natural sawgrass habitats of


WCA2A along a soil phosphorus gradient running from the north-west (high concentrations) to the south-east (low concentrations), and Urban et al. (1993) mentions that, provided adequate water depth, soil phosphorus concentration is the next most important factor in deciding cattail expansion/invasion. In creating their water quality model for simulating soil phosphorus concentrations downstream of the Everglades STA, Walker & Kadlec (1996) determined that the lower bound soil phosphorus concentration for the optimum growth of cattail was 540 mg/kg. Fires, and other disturbances such as hurricanes, were also found to affect the colonization of areas by cattails by altering local topography and nutrient concentrations (Newman et al., 1998). Models aid our understanding of a complex system such as the Everglades, and allow us to evaluate different ecological outcomes of management decisions before the more costly task of their implementation (Fitz et al., 2010). To ensure numerical efficiency, most spatially distributed models have their equations, laws and assumptions fixed in the program code, with changes in the functioning coming through extensive code re-writes and careful testing. Free-form models such as STELLA (Costanza & Voinov, 2001), QnD (Kiker & Linkov 2006 and Kiker et al. 2006) and the Kepler system (Ludascher et al., 2006) are generally written using an OOP language such as C++ (Stroustrup, 2000) or Java (Arnold & Gosling, 1998), as opposed to a linear language such as FORTRAN (Cary et al., 1998). When interacting with free-form models and their algorithms, designers do not interact directly with the program code. Rather, they influence objects through placing data, storage and logical structures


into either a graphical user interface (STELLA, Kepler) or within a meta-code structure such as the eXtensible Markup Language (XML) (Harold, 1998). There are a number of fixed-form ecological models currently in use across the Everglades region. Of these, the Across Trophic Level System Simulation (ATLSS) (Gross, 1996) and Everglades Landscape Model (ELM) (Fitz & Trimble, 2006b) are probably the most well known. These, and most other models available for modeling cattail in the Everglades, are all qualitative, that is, switching between one species and another. The majority of these current ecological models are also stochastic, that is, based on probabilities and a degree of randomness. They generally run as post-process models, using hydrological data output by other models such as the South Florida Water Management Model (SFWMM) (Fitz et al., 2010). The ATLSS vegetation succession model is used to determine the succession of one habitat type to another (e.g. sawgrass to cattail). The ATLSS model simulates with an annual time step on square 500 m cells, and uses a stochastic cellular automata model to switch between vegetation types. Currently there is no way to determine vegetation densities within vegetation types (Duke-Sylvester, 2005). The ELM model uses a counter to switch between species by accumulating days of water level and soil phosphorus concentration above certain limits. The model then switches between species based on their preferred hydroperiod and historical soil phosphorus concentrations (Fitz & Trimble, 2006a). The ELM model is the only currently available simulation tool for evaluating water quality across the Everglades landscape (Fitz et al., 2010), and does not simulate detailed ecological features. Another modeling effort by Wu et al. (1997) used Markov chain probabilities to switch between Cladium and Typha species. This model was in fact used to inform the


ATLSS nutrient and fire disturbance model (Wetzel, 2003). Again, this is a stochastic, species-specific, presence/absence type model. A modeling effort by Tarboton et al. (2004) developed a set of habitat suitability indices (HSI) for evaluating water management alternatives. These HSIs provided a range of probabilities for a particular species occurring across the landscape, and were based predominantly on local hydrological conditions such as depth (maximum, minimum, and mean), hydroperiod, velocity, and flow direction. Given that water quantity (depth) and quality (soil phosphorus concentration) affect the growth and distribution of cattail (and other plants), there is a need to integrate these components in order to determine the more detailed biological outcomes of an Everglades ecological model. There is also a need for a quantitative model to provide continuous density values for specific vegetation, rather than simply presence/absence information, and for an adaptable ecological modeling engine. The coupled RSM/TARSE model (henceforth referred to as RTE), implemented towards modeling ecological features within the southern Florida landscape and presented in this paper, is a spatially distributed, free-form model simulating cattail biomass distribution and dynamics across WCA2A. Using the RTE model to couple vegetation dynamics with phosphorus dynamics has been alluded to by Jawitz et al. (2008), Muller (2010) and Perez-Ovilla (2010) during their respective TARSE-influenced water quality simulations. Zajac (2010) used vegetation types to influence Manning's n and evapotranspiration coefficients. The vegetation types informed initial conditions, but not changing vegetation distribution and densities over time. There is therefore a definite need for the RTE model, which allows one to quantitatively model a vegetation species


and determine the ecological impact of various management scenarios falling under the CERP initiative. This new engine would accommodate varying algorithms or new species, as available data or new knowledge become available. It would allow for interactions and feedback effects within species as well as between different species, and other environmental factors. The objectives of this paper are to test and apply a new spatially distributed, deterministic, free-form (user-definable), quantitative ecological model of cattail dynamics. An advantage of this free-form modeling approach is that multiple ecological algorithms of differing complexity can be implemented and tested simultaneously. In order to test the influence of increasing complexity on reducing uncertainty in model output (Lindenschmidt, 2006), five levels of increasing complexity were selected to model the cattail densities. Following the methodology used by Jawitz et al. (2008), a simple logistic function (Keen & Spain, 1992) formed the base of the complexities, with water depth and soil phosphorus concentration (the two most important factors influencing cattail growth according to Newman et al. 1998) and sawgrass interaction influencing the higher levels of complexity. The entire WCA2A vegetation dataset (1991, 1995, and 2003), obtained from Rutchey et al. (2008), was chronologically divided into model training and testing sections. Training of the model was conducted for the years 1991-1995, where the growth factor (found in equation (2-3)) was fitted to the Level 1 complexity. Model testing was conducted on the two time periods of 1991-2003 (testing 1) and 1995-2003 (testing 2) respectively, with the testing 2 time period being equivalent to a blind test (different initial conditions). The 1991 and 1995 vegetation maps were used to initialize the training, testing 1 and testing 2 simulations respectively.
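The growth-factor fitting described above can be sketched as a simple one-parameter search. This is an illustrative sketch only: the closed-form logistic solution, the candidate values, the synthetic "observed" density and the absolute-error measure are assumptions, not the dissertation's actual calibration procedure against the 1995 map.

```python
import math

def logistic_density(p0, gf, k, t):
    """Closed-form solution of the logistic growth dP/dt = GF*P*(1 - P/K),
    starting from density p0 at t = 0."""
    return k / (1.0 + (k / p0 - 1.0) * math.exp(-gf * t))

def fit_growth_factor(p0, k, t, observed, candidates):
    """Grid search: return the candidate GF whose predicted density at
    time t lies closest to the observed density."""
    return min(candidates,
               key=lambda gf: abs(logistic_density(p0, gf, k, t) - observed))

# Hypothetical example: recover a known growth factor from a synthetic
# "observed" end-of-training mean density.
p0, k, years = 50.0, 1240.0, 4.0
observed = logistic_density(p0, 0.8, k, years)
best = fit_growth_factor(p0, k, years, observed, [0.2, 0.5, 0.8, 1.1])
print(best)  # 0.8
```

In practice the error would be accumulated over all mesh elements against the 1995 vegetation map rather than a single regional value, but the structure of the search is the same.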


Model output from the training, testing 1 and testing 2 simulations was compared with the 1995 and 2003 vegetation maps respectively. Model output was compared to observed data and the most accurate level of complexity thus determined.

Materials and Methods

In order to achieve the objectives mentioned above, it was necessary to use both hydrological and water quality data to inform the ecological cattail model. To this end, it was decided to use the Regional Simulation Model (RSM), which was developed by the South Florida Water Management District (SFWMD) to replace the popular SFWMM, coupled with the Transport and Reaction Simulation Engine (TARSE), to provide the base structure for modeling cattail dynamics across the test site.

The Regional Simulation Model (RSM)

Developed by the SFWMD, the RSM simulates hydrology over the South Florida region. It is often thought of as the successor to the successful SFWMM, referred to as the 2-by-2 model for its 2-mile resolution (SFWMD, 2005a). RSM operates over a variable triangular mesh grid, as opposed to the 3.22 km (2-mile) square grid of the SFWMM, which enables higher resolution in areas of concern as well as the ability to delineate canals (SFWMD, 2005c). The RSM uses a weighted implicit finite volume method to simulate two-dimensional diffusional flow, and hence implicitly simulates ground water flow and surface water flow (SFWMD, 2005c). The OOP design structure of RSM allows for the abstraction and modularity of various components (SFWMD, 2005b). A result of this is that there are two engines that comprise the RSM: the Hydrologic Simulation Engine (HSE) and the Management Simulation Engine (MSE). The HSE simulates all the hydrological processes, while the MSE simulates various


management or control regimes. These two engines interact at runtime to provide an accurate representation of the hydrodynamics of the region (SFWMD, 2005c).

Simulating Transport and Reactions Using TARSE

TARSE was recently developed to simulate water quality (WQ) components within the RSM model for areas in the Everglades system (Jawitz et al., 2008). The TARSE model was designed to be as generic as possible, in order to simulate multiple water quality components with a simple change in the input file. It was first implemented as another engine to be incorporated within the RSM framework, along with the HSE and MSE, called the Water Quality Engine (WQE). Due to its structure, the WQE does not simulate hydrology and requires a hydrologic driver to feed it values of flow and depth every time step (SFWMD, 2008c). TARSE has since been decoupled from RSM, and implemented with other hydrologic drivers such as the flow and transport in a linked overland-aquifer density dependent system (FTLOADDS) (Wang et al., 2007; Muller, 2010) and VFSMOD (Munoz-Carpena et al., 1999; Perez-Ovilla, 2010). TARSE solves the advection-dispersion-reaction equation (ADRE) over an unstructured triangular mesh (James & Jawitz, 2007). The ADRE is represented by equation (2-1), and every term is a function of a two-dimensional spatial coordinate x, with components (x1, x2), and time, t.

∂(φhc)/∂t + ∇·(uhc) = ∇·(φhD∇c) + h(f1 − f2)    (2-1)

where t is time [T], c(x,t) is the concentration [M/L³], φ(x,t) is the porosity of the medium (which may be 1 for surface water) [L³/L³], h(x,t) is the water depth [L] or thickness of the saturated zone in groundwater flow, u(x,t) is the specific discharge [L/T] of water (either surface or ground water), and D = D(u(x,t)) is the dispersion tensor (a


function of u). f1(x,t) is a source rate [M/L³T] with associated concentration c1, and f2(x,t) is a first-order decay rate [M/L³T]. The density [M/L³] of the water is assumed to be constant. The basis of TARSE involves transfers (e.g. settling, diffusion, growth) between various stores, such as soil, water column solutes, pore water solutes, macrophytes, and suspended solids. The specifics of these stores, and the transfers between them, are user-definable in the XML input file (Jawitz et al., 2008). TARSE equations are composed of pre-equations, equations, and post-equations. Pre- and post-equations are used for implementing conditional (if-then-else) statements as part of pre- and post-processing around the main processing in the equations. For example, pre-processing could be used to determine if the current water depth [m] is above the threshold for cattail optimum growth, and reduce the depth influence factor accordingly. If the depth is less than the optimum growing depth, then the influence factor decreases accordingly. The logic just described is represented by equation (2-2), as described by Grace (1989), where the cattail optimum depth is 70 cm. The main equations are structured as ordinary differential equations (ODE) (SFWMD, 2008b). An example, presented by equation (2-3), is the logistic equation used for the Level 1 complexity.
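The pre-equation logic just described can be sketched as a simple habitat-suitability function. This is an illustrative sketch only: the text specifies a factor of 1 at the 70 cm optimum that decreases linearly on either side, so the linear tent shape follows the description, but the 140 cm point at which the factor reaches zero is an assumption for illustration, not a value from equation (2-2).

```python
def depth_factor(depth_cm, optimum_cm=70.0, zero_at_cm=140.0):
    """Hypothetical water-depth HSI for the pre-equation step: 1.0 at the
    optimum growing depth, decreasing linearly toward 0.0 as the depth
    rises to an assumed 140 cm or falls to 0 cm."""
    f = 1.0 - abs(depth_cm - optimum_cm) / (zero_at_cm - optimum_cm)
    return max(0.0, f)  # clamp so the factor never goes negative

# The factor peaks at the optimum and falls off symmetrically:
print(depth_factor(70.0))   # 1.0
print(depth_factor(35.0))   # 0.5
print(depth_factor(200.0))  # 0.0
```

In TARSE itself this conditional logic lives in the XML-defined pre-equations, evaluated each time step before the main growth ODEs.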


The RSM/TARSE coupling represents possibly the first time that a free-form dynamic system model has been integrated with a fixed-form, spatially distributed, hydrologic model (Muller, 2010). This unique coupling, with user-defined interactions operating across a spatially distributed domain, lends itself to simulating ecological behaviors (growth, death, movement, and feeding) as well as the original WQ interactions. The model cannot simulate ecological movement at the time of this research. Attempts to include this type of non-ADRE movement are discussed in Chapter 5.

Model Application

In order to test the influence of increasing complexity on reducing uncertainty in model output (Lindenschmidt, 2006), five levels of increasing complexity were selected to model the cattail densities. Following the methodology used by Jawitz et al. (2008), a logistic function (Keen & Spain, 1992) was used for the most basic, Level 1 complexity, due to its density-dependent growth and rapid (exponential) early stages of growth. The logistic function is represented in equation (2-3):

dP/dt = GF · P · (1 − P/K)    (2-3)

where P is the population density [M/L²], t is time [T], GF is a constant growth rate [T⁻¹], and K is the carrying capacity, or maximum population density [M/L²]. Level 2 is a water depth influenced Level 1 complexity, seen in equation (2-4). A water depth factor (habitat suitability index) ranging from 0 to 1 is multiplied with the carrying capacity in the logistic function. The depth factor decreases linearly from 1 as the current depth either rises above or drops below the optimum (70 cm) growing depth. This depth factor can be seen in equation (2-2).
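As a concrete illustration of the Level 1 and Level 2 growth dynamics described above, the logistic function can be advanced with a simple forward-Euler update. The function name, time step and growth-rate value below are illustrative assumptions; the habitat-suitability multiplier on the carrying capacity follows the Level 2 description in the text.

```python
def logistic_step(p, gf, k, dt, hsi=1.0):
    """One forward-Euler step of dP/dt = GF*P*(1 - P/(hsi*K)).
    With hsi = 1.0 this is the Level 1 logistic model; passing a depth
    factor in (0, 1] gives the Level 2 behaviour of a depth-reduced
    carrying capacity."""
    return p + dt * gf * p * (1.0 - p / (hsi * k))

# Density rises sigmoidally toward the carrying capacity
# (1240 g/m^2 is the reported maximum cattail density):
p = 10.0
for _ in range(2000):
    p = logistic_step(p, gf=0.05, k=1240.0, dt=1.0)
print(round(p))  # 1240
```

With a depth factor of, say, 0.5 the same update saturates at half the nominal carrying capacity, which is exactly how the Level 2 complexity suppresses cattail in unfavourable depths.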


dP/dt = GF · P · (1 − P/(DepthF · K))    (2-4)

where P is the population density [M/L²], t is time [T], GF is a constant growth rate [T⁻¹], DepthF is the water depth factor [L/L], and K is the carrying capacity, or maximum population density [M/L²]. Level 3 is a soil phosphorus influenced Level 2 complexity, seen in equation (2-5), with the soil phosphorus factor being incorporated in a similar fashion to the depth factor.

dP/dt = GF · P · (1 − P/(DepthF · phosphorusF · K))    (2-5)

where P is the population density [M/L²], t is time [T], GF is a constant growth rate [T⁻¹], DepthF is the water depth factor [L/L], phosphorusF is the soil phosphorus factor [M/L³/M/L³], and K is the carrying capacity, or maximum population density [M/L²]. The soil phosphorus factor behaves like a logistic function, increasing from zero to one as soil phosphorus concentration increases from 200 mg/kg to 1800 mg/kg, as described by Walker & Kadlec (1996), and can be seen in equation (2-6), where phosphorusF is the soil phosphorus HSI, ranging from 0 to 1, and phosphorus is the current soil phosphorus concentration (mg/kg). Level 4 builds on a Level 3 complexity with an added sawgrass interaction factor, much like the soil phosphorus and depth factors. It decreases linearly from 1 to 0.16 as sawgrass densities increase from 0 to 1958 g/m² (Doren et al., 1999), which is their reported maximum density (Miao & Sklar, 1998). The sawgrass is set to grow according


to a Level 1 complexity as in equation (2-4); thus the Level 4 complexity is represented by equation (2-7).

dP/dt = GF · P · (1 − P/(DepthF · phosphorusF · sawgrassF · K))    (2-7)

where P is the population density [M/L²], t is time [T], GF is a constant growth rate [T⁻¹], DepthF is the water depth factor [L/L], phosphorusF is the soil phosphorus factor [M/L³/M/L³], sawgrassF is the sawgrass influence factor [M/L²/M/L²], and K is the carrying capacity, or maximum population density [M/L²]. The sawgrass factor varies according to equation (2-8).

sawgrassF = 1 − 0.84 · (sawgrass / K_SAW)    (2-8)

where sawgrassF is the sawgrass HSI ranging from 0 to 1, sawgrass is the current sawgrass density, and K_SAW is the sawgrass carrying capacity. The Level 5 complexity is the same as Level 4, with a density dependent influence on the Level 1 sawgrass model, which is represented by equations (2-9) and (2-10) respectively.

dP/dt = GF · P · (1 − P/(cattailF · K))    (2-9)

where P is the population density [M/L²], t is time [T], GF is a constant growth rate [T⁻¹], cattailF is the cattail factor ranging from 0 to 1, and K is the carrying capacity, or maximum population density [M/L²]. In equation (2-10), cattailF is the cattail HSI ranging from 0 to 1, cattail is the current cattail density,


35 The depth, soil phosphorus, and sawgrass interaction factors are all calculated using the pre equations, similar to that presented in equation (2 2 ) These factors are then incorporated into the main growth equations presented in equations (2 4 ) (2 5 ) and (2 7 ) representing levels of complexity 2 through 5 respectively In TARSE, components are listed as either mobile or stabile. Mobile components are moved in the water using the ADRE equations, while the stabile components do not move, and only undergo the reaction part of the ADRE. Given the complexities associated with simulating wind borne or water borne transportation of seeds and rhizome expansion which is another mode of expansion noted by Miao (2 004) all mesh elements were initialized (seeded) with cattail, with areas originally not containing cattail being seeded with the minimum value of 1 0 g(dry weight)/m 2 This assumption represent s the presence of a seed bank, providing cattail the opportuni ty of colonizing an area as soon as conditions become favorable. Vegetation then is modeled as a stabile component, with no means for dispersal. Also, a s a result of this inability for dispersal the maximum influence that the aforementioned factors such a s phosphorus F sawgrassF and cattailF can have, has been limited so that they reduce the cattail population to 1% of its maximum density. Test Site The test site used for ecological model development and testing was the WCA2A, seen in Figure 2 1 WCA2A is a 547 km 2 managed wetland just south of Lake Okeechobee, Fl, and accounts for about 6.5% of the total area of the Everglades It came into existence in 1961 with the construction of the L35 B canal, and receives inflow from the Storm water Treatment Areas ( o downstream water conservation areas, and eventually into the Everglades N ational


Park (Urban et al., 1993). According to Rivero et al. (2007), the region has an average annual temperature of 20 °C, and precipitation between 1175 mm and 1550 mm. The elevation range in WCA2A is between 2.0 m and 3.6 m above sea level, which generates a slow sheet flow from the NW to the SW of the region. The hydrology is controlled by the SFWMD at a number of inlet and outlet structures (green squares in Figure 2-1) along the surrounding canals (blue lines in Figure 2-1). The landscape is composed of dominant sawgrass marshes, shrub and tree island communities, and invasive cattail communities (van der Valk & Rosburg, 1997). WCA2A has been used extensively as a research site by the SFWMD, with extensive trial and monitoring programs for a number of biogeochemical components, especially soil phosphorus and vegetative structure (Rivero et al., 2007). The triangular mesh grid used for simulation is also displayed in Figure 2-1, with the green border cells used for numerical stability of the hydrological RSM component. An overview of the HSE setup for WCA2A, which provides the hydrological operating conditions, can be found in SFWMD (2008d).

Initial Conditions, Boundary Conditions, and Time Series Data

Cattail vegetation maps are used for the initial conditions, as well as for comparing model output with measured data. Hydrological time series are used for initial and boundary conditions along the surrounding canals. Using RSM, the hydrological boundary conditions are converted into depth values across the domain, which are then used as inputs to the Level 2 complexity algorithm. Soil phosphorus concentration maps provide initial conditions and an influence factor for the Level 3 complexity algorithm. Sawgrass vegetation maps are used as initial conditions for the Level 1 complexity sawgrass model, which serves as an influence factor for the Level 4 complexity cattail algorithm. The following sections provide additional detail towards these model inputs.
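The interaction factors described in the Model Application section can be sketched together with a Level 4 growth update as follows. The linear sawgrass factor follows equation (2-8) directly; the logistic midpoint and steepness used for the phosphorus factor are illustrative assumptions, since equation (2-6) fixes only its 0-to-1 range between 200 mg/kg and 1800 mg/kg; all function names are hypothetical.

```python
import math

def phosphorus_factor(p_mgkg, p_low=200.0, p_high=1800.0):
    """Hypothetical soil-phosphorus HSI: a logistic curve rising from ~0 at
    200 mg/kg to ~1 at 1800 mg/kg. The midpoint and steepness are assumed
    for illustration."""
    mid = 0.5 * (p_low + p_high)          # assumed midpoint: 1000 mg/kg
    steep = 10.0 / (p_high - p_low)       # assumed steepness
    return 1.0 / (1.0 + math.exp(-steep * (p_mgkg - mid)))

def sawgrass_factor(sawgrass, k_saw=1958.0):
    """Sawgrass HSI of equation (2-8): decreases linearly from 1.0 at zero
    sawgrass to 0.16 at the sawgrass carrying capacity (1958 g/m^2)."""
    return 1.0 - 0.84 * (sawgrass / k_saw)

def level4_step(p, gf, k, depth_f, phos_f, saw_f, dt):
    """Forward-Euler update of the Level 4 growth equation (2-7): the three
    factors jointly scale the effective carrying capacity."""
    k_eff = depth_f * phos_f * saw_f * k
    return p + dt * gf * p * (1.0 - p / k_eff)
```

A Level 5 simulation would pair this with an analogous cattail-factor update on the sawgrass population each time step, closing the inter-species feedback loop.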


Hydrological time series

The hydrology of WCA2A is controlled primarily by the operation of control points along the S10 and L35-B canals. The hydrology data were obtained from the SFWMD, who use WCA2A as a test site for the RSM. The input dataset consisted of a daily time series of hydraulic head values (m) at the inlet and outlet control structures of WCA2A (represented by the green squares in Figure 2-1) for the years 1979 to 2000 (Wang, 2009). The time series have since been updated to 2008 for all control structures, using data collected from the DBHYDRO website (SFWMD, 2009).

Soil phosphorus

A gradient of soil phosphorus exists along WCA2A, with a high concentration near the inlets at the north, and a low concentration at the outlets in the south. This soil phosphorus gradient has been widely documented and studied (DeBusk et al., 1994). Given the unavailability of spatial soil phosphorus data beyond map classifications (Grunwald, 2010), soil phosphorus input maps were created by overlaying the WCA2A mesh on the existing maps obtained from Grunwald et al. (2004) and Grunwald et al. (2008). The soil phosphorus map of 1990 was used for the model training period of 1991-1995, while the soil phosphorus map of 2003 was used for the testing 1 (1991-2003) and testing 2 (1995-2003) simulation periods respectively. Due to the poor quality of these soil phosphorus input maps, and the inability of TARSE (still in development) to adequately simulate phosphorus dynamics in the WCA2A region, the soil phosphorus concentration itself was not simulated; i.e., the static soil phosphorus concentration provided by the input maps was used to inform the model throughout the simulation period.


Cattail and sawgrass

Vegetation maps for WCA2A were obtained for the years 1991, 1995 (Rutchey, 2011), and 2003 (Wang, 2009), all of which were used in Rutchey et al. (2008). These maps provided density (g/m2) distributions across the test site for cattail. Based on the negative correlation between sawgrass and cattail reported by Doren et al. (1999) and Richardson et al. (2008), and confirmed by various other vegetation maps of the area, namely 1991 (Jensen et al., 1995), 1995 (SFWMD, 1995), 1999 (SFWMD, 1999), and 2003 (Wang, 2009), sawgrass was assigned densities according to a negative correlation with the cattail maps mentioned earlier; i.e., high sawgrass density values (1600 g/m2) were assigned to regions with typically low cattail density values, and low sawgrass density values (600 g/m2) were assigned to regions with high cattail density values. The program ArcGIS (Ormsby et al., 2001) was used to create a uniform raster format with a cell size of 0.93 m2 for all the maps. The vegetation class values were converted to density values according to Table 2-1. The input file was created by overlaying the mesh grid of 385 triangles (510 triangles total, which includes a row of triangles along the border) on the rasterized vegetation map, and calculating the mean of all the raster cell density values within each triangular element. This new aggregated map is used to create the input file. A graphical overview of this process for the data maps can be seen in Figure 2-2. The final sawgrass maps are viewable in Figure 2-3. The maximum densities of 1240 g/m2 for cattail and 1958 g/m2 for sawgrass are reported by Miao & Sklar (1998). An overview of the parameter descriptions for the increasing levels of complexity can be found in Table 2-3.
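The overlay-and-average step can be sketched as follows. This is a simplified stand-in for the ArcGIS workflow: a barycentric point-in-triangle test over synthetic raster data, not the actual GIS processing used in the study.

```python
# Average all raster cell values whose centers fall inside each triangular
# mesh element, producing one input density per element. Triangle and raster
# data below are synthetic illustrations.

def point_in_triangle(p, a, b, c):
    """Sign test on the three edge cross products: True if p is inside (a, b, c)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

def mean_density_per_triangle(triangles, cell_centers, cell_values):
    """Return the mean raster value within each triangle (the element input value)."""
    means = []
    for (a, b, c) in triangles:
        vals = [v for p, v in zip(cell_centers, cell_values)
                if point_in_triangle(p, a, b, c)]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    return means

# Synthetic example: one triangle and four raster cell centers
tri = [((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))]
centers = [(0.5, 0.5), (1.0, 1.0), (3.5, 3.5), (0.5, 1.5)]
values = [1000.0, 600.0, 200.0, 200.0]  # class densities from Table 2-1
element_inputs = mean_density_per_triangle(tri, centers, values)
```

Here the cell at (3.5, 3.5) falls outside the triangle, so the element value is the mean of the remaining three cells.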


Statistical Analysis of Simulated and Monitored Biomass

Besides a side-by-side visual comparison of the model output, there were three sets of statistical analysis techniques used to compare the model results and the raw data. All comparisons were accompanied by a Nash-Sutcliffe coefficient (McCuen et al., 2006), represented by equation (2-11), which provides a single number summarizing how the model statistics compare to the observed data. The coefficient is a comparison of model results with the mean of the data.

E_f = 1 - \frac{\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \qquad (2-11)

where E_f is the Nash-Sutcliffe coefficient; \hat{y}_i is the predicted variable; y_i is the observed variable; \bar{y} is the mean of the observed variable; and n is the sample size. A Nash-Sutcliffe value of 1 means that the model completely matches the data, while a value of 0 means that the model performs no better than the mean of the data. Any value less than 0 is a poor representation of the data.

A direct comparison between model output and the data was performed through the use of a classified difference technique (Kiker, 1998). Since the data maps were initialized with a minimum density of 10 g/m2 to account for movement between triangular elements that is not simulated in this model application, a difference between model output and the data value falling within 20 g/m2 was classified as a match. This is loosely based on the fact that Miao & Sklar (1998) reported a roughly 10% error in measurement of the maximum density of 1240 g/m2. So, for example, if the data value was 10 g/m2 (representing a typically non-cattail region) and the model output was 12 g/m2, with a difference of 2 g/m2 (falling within the 20 g/m2 range), then the element would be classified as a match. The second class of differences lies within the 200 g/m2 range, which is the value assigned to the low cattail density class during the formatting and creation of the input data maps. This 200 g/m2 range is also half the range between the successively higher cattail density classes. The third class of differences lies within 400 g/m2, which can be thought of as a data class difference (e.g., between low and medium densities), or also as being within 40% of the maximum possible difference (the maximum data density is set at 1000 g/m2). Finally, any difference above the 400 g/m2 threshold is placed in the fourth class of differences, and represents a significant misrepresentation of the data by the model.

A box-and-whiskers plot (Ott & Longnecker, 2004) was created with all model element values compared with their corresponding data element values. The desired figure is a plot with the means and ranges corresponding to the associated data ranges. The box-and-whiskers plots cover the entire range of possible values, from 0 to 1240 g/m2.

Moran's I statistic (Cliff & Ord, 1970; Paradis, 2010) was used to determine the spatial autocorrelation between cells separated by an increasing distance, as represented by equation (2-12).

I = \frac{n}{W} \cdot \frac{\sum_{i} \sum_{j} w_{ij} (x_i - \bar{x})(x_j - \bar{x})}{\sum_{i} (x_i - \bar{x})^2} \qquad (2-12)

where x_i is the current cell value; x_j is the value of a cell separated from it by the given distance; \bar{x} is the mean; w_{ij} is 1 if cells i and j fall within the given distance of each other and 0 otherwise; and W is the number of cells surrounding the current one found within the given distance. These values are plotted against an increasing distance, as in Marani et al. (2006), to determine the trend in spatial autocorrelation across the entire region.

A landscape-scale abundance-area plot (Martin, 1980; Michalski & Peres, 2007) was used to measure the average change in density across the test site. One hundred randomly distributed cells were used as base cells. From these, the densities of all cells falling within a given radius are summed. This total is then divided by the number of base cells, and plotted against the area of circles with an increasing radius, as in Martin (1980).

A trend of the regional mean density was plotted with a daily timestep for a visual comparison of the trends between the different levels of complexity. This was repeated for the individual levels of complexity, and for selected zones (elements) within the region, for a more detailed view of the effect of external parameters on different areas of the region. Elements 209, 244, and 380, located in the northeast, central, and southwest, were selected as representative elements for typically high, medium, and low cattail densities respectively. These elements are marked by red squares in Figure 2-1.

Model Training and Testing

There were three time periods over which the model was simulated, using the available data maps of 1991, 1995, and 2003. Training was performed for the time period 1991 to 1995, using the Level 1 complexity to establish the growth rate (6.7*10^-9 g/g s); any differences in the results from the other levels will be due solely to the effect of their included external parameters. It is therefore expected that the results of the other levels of complexity will not be as accurate as the Level 1 complexity for this time period. Testing of the model was performed for the time period of 1991 to 2003 (through 1995). This provides an extended forecast based on the original calibration time period and initial data. Finally, the 1995 to 2003 time period was used as a blind test of the model, using different initial conditions and determining its ability to accurately predict the density distribution of the 2003 cattail map.

Results

From the cattail maps of Figure 2-2 and those in Rutchey et al. (2008), a trend in cattail distribution over the years is observable. It appears that cattail density and distribution increase from 1991 to 1995. From 1995 to 2003 the general distribution continues to increase, but with more dispersed patches of high-density cattail. Figure 2-4 shows the model output maps for the different simulation periods, and all five levels of complexity, compared to the final data maps on the left. These density maps have had their values aggregated into eight classes for visual comparison only. For the training (1991 to 1995) time period, the Level 1, Level 4, and Level 5 complexities appear to have the most similar results to the observed 1995 data map on the left. It was expected that the Level 1 output would provide the best match for this time period, since it was the level used for calibration over this period. For the two testing simulation periods the Level 1 complexity clearly overestimates the historical data. The Level 2 complexity consistently underpredicts the historical values, while Levels 3 and 4 tend to get progressively better. The Level 5 complexity shows no significant improvement over Level 4. A better depiction of these trends is found in the classified difference maps of Figure 2-10, with a summary plot in Figure 2-11 showing the percent of triangular elements falling within each class, for all five levels of complexity and simulation periods. Upon further inspection of these figures, the Level 4 and Level 5 complexities consistently outperform the other levels of complexity, with either the highest percentage of combined classes 0 (<20 g/m2) and 1 (<200 g/m2), or the lowest percentage of combined classes 2 (<400 g/m2) and 3 (>400 g/m2).

Figure 2-5 shows a time series plot for the five levels of complexity across all three simulation periods. It provides added insight into the trends of the model, without relying purely on the end points. The plots are for the regional mean density (R), in red, and elements 209 (blue), 244 (green), and 380 (cyan) respectively. For the 1991 to 1995 simulation period, the Level 1 complexity has a smooth, slowly increasing trend for all observed points. The Level 1 regional trend ends directly on the data density, which is expected since this is the period over which Level 1 was calibrated. The southwest (element 380) and central (element 244) trends overpredict the data points. During the same time period, the Level 2 complexity drops significantly. This effect is due solely to the influence of water depth on the cattail density, and is most noticeable in the higher (element 209) and central (element 244) trends. The Level 3 complexity is able to effectively obtain the value for element 380, but severely underpredicts the other two elements. The regional trend is therefore affected with a similar underprediction. This trend, specifically for element 244, is similar to that of the Level 2 trends, and one can surmise that the water depth remains a significant influencing factor for this level. Considering the Level 4 complexity over the same time period, the regional trend manages to reach the final data point, with element 380 overpredicting its value and elements 209 and 244 underpredicting their points. The Level 5 complexity has essentially the same trends as the Level 4 complexity. These trends and observations are essentially replicated in the subsequent simulation periods, with the Level 1 complexity overpredicting all the data points, Level 2 underpredicting all data points, Level 3 improving the predictive capability, and Levels 4 and 5 improving still more. A look at the accompanying statistics will help to better inform these initial observations and form a conclusion.

The statistics and comparison time series for the calibration period 1991 to 1995 can be found in Figure 2-6. The regional mean time series plot for all five levels of complexity can be found in a). As mentioned earlier when discussing the individual trends, Level 1 reaches the data point, with the other levels falling below. Examining the abundance-area plot in b), Levels 1, 4, and 5 are the closest approximations of the data statistic, which is represented by the black line. Considering the boxplot in c) enables a comparison of the spread of model densities with that of the data; the spread of Levels 2 and 3 is much reduced when compared to Levels 1, 4, and 5. Lastly, the Moran's I trends follow the same basic trend as the data (represented by the black line). Levels 2 and 3 are slightly lower initially, but they are all zero by the 60000 ft mark. This distance corresponds approximately to the width of the region, and the total distance of 120000 ft corresponds to the longest north-south distance of the region. It is believed that the statistic drops to zero by the 60000 ft mark due to overlapping and boundary effects, and that this elevates the Nash-Sutcliffe coefficient for all levels of complexity in this statistic.

Figures 2-7 and 2-8 display the same three statistics and regional mean density trends for the other two simulation periods, namely 1991 to 2003 and 1995 to 2003. Overall, the trends are the same as those observed for the 1991 to 1995 time period. Considering the box plots, one notices that the Level 4 and 5 complexities provide an almost identical density distribution to the data, only with slightly elevated minimums. Otherwise, Levels 4 and 5 tend to slightly overpredict, while Level 3 slightly underpredicts the data trends and statistics. Level 2 consistently underpredicts the trends and statistics. A summary of all these statistics is provided by the Nash-Sutcliffe coefficients in Table 2-2, and can be visually compared in Figure 2-9, with the 1-to-1 comparisons located in a), the box plots located in b), and the abundance-area located in c). From this figure it can be noted that the Level 4 and 5 complexities, which include depth, soil phosphorus, and sawgrass interactions, consistently perform better than the other levels of complexity.

Discussion

The methods of modeling cattail for ecological models currently in use were compared, their similarities and differences were noted, and a knowledge gap was identified in determining the spatial distribution of cattail in the Everglades. A coupled free-form/fixed-form model was introduced to address this problem. An added benefit of the free-form nature of the RSM/TARSE coupled model is the user-definable equations of interaction, which can be modified as data and/or new theories become available. This new ecological implementation of the model (RTE) was successfully applied towards modeling cattail dynamics across the WCA2A test site for the training (1991 to 1995), testing (1991 to 2003), and blind test (1995 to 2003) simulation periods. Five algorithms, with increasing complexity, were used to match the historical data. Upon analysis of the performance of these different levels, it can be concluded that the Level 4 and 5 complexities, which include depth, soil phosphorus, and sawgrass interaction parameters, are the most suitable models for matching the historical data. This is in agreement with the likes of Newman et al. (1998) and Miao & Sklar (1998), where water depth and soil phosphorus concentration are the most important factors aiding in cattail expansion, and it includes an interaction parameter with sawgrass, which is of interest in that region. Limitations of the model include the element/triangle size, which has a range of 0.5 km2 to 1.7 km2 (Wang, 2009). Even though this is a relatively fine grid size, there is still considerable heterogeneity within each cell (Zajac, 2010).


Table 2-1. Cattail class and density values for formatting data maps.

Vegetation Class           | Cattail Density (g/m2) | Sawgrass Density (g/m2)
1 - High Density Cattail   | 1000                   | 10
2 - Medium Density Cattail | 600                    | 600
3 - Low Density Cattail    | 200                    | 1000
4 - Other                  | 10                     | 1600
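The class-to-density conversion of Table 2-1 amounts to a simple lookup; a minimal sketch:

```python
# Mapping from Table 2-1 vegetation classes to cattail and sawgrass densities
# (g/m2), as used when formatting the raster input maps. Note the negative
# correlation: high cattail density pairs with low sawgrass density.
CLASS_DENSITY = {
    1: {"cattail": 1000, "sawgrass": 10},    # high-density cattail
    2: {"cattail": 600,  "sawgrass": 600},   # medium-density cattail
    3: {"cattail": 200,  "sawgrass": 1000},  # low-density cattail
    4: {"cattail": 10,   "sawgrass": 1600},  # other
}

def densities(vegetation_class):
    """Return (cattail, sawgrass) density for a Table 2-1 class code."""
    entry = CLASS_DENSITY[vegetation_class]
    return entry["cattail"], entry["sawgrass"]
```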


Table 2-2. Summary of Nash-Sutcliffe values comparing model and observed data for the 1-to-1, box plot, and abundance-area statistics of the training (1991-1995), testing 1 (1991-2003), and testing 2 (1995-2003) simulations (represented by Figures 2-6, 2-7, and 2-8 respectively), for Levels 1 through 5.

YEAR      | LEVEL | 1 TO 1 | BOX PLOT | ABUNDANCE
1991-1995 | 1     | 0.74   | 0.98     | 0.98
          | 2     | 0.13   | 0.99     | 0.94
          | 3     | 0.49   | 0.95     | 0.23
          | 4     | 0.74   | 0.98     | 0.96
          | 5     | 0.74   | 0.98     | 0.96
1991-2003 | 1     | 0.75   | 0.97     | -1.89
          | 2     | 0.02   | 0.86     | 0.35
          | 3     | 0.23   | 0.98     | 0.44
          | 4     | 0.49   | 0.98     | 0.77
          | 5     | 0.49   | 0.98     | 0.76
1995-2003 | 1     | 0.95   | 0.99     | 0.80
          | 2     | 0.14   | 0.94     | 0.29
          | 3     | 0.36   | 0.97     | 0.51
          | 4     | 0.39   | 0.99     | 0.77
          | 5     | 0.39   | 0.99     | 0.77
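The Nash-Sutcliffe coefficient of equation (2-11), summarized in this table, can be computed directly; a minimal implementation:

```python
# Nash-Sutcliffe coefficient of equation (2-11): one minus the ratio of the
# squared model error to the variance of the observations about their mean.
def nash_sutcliffe(predicted, observed):
    mean_obs = sum(observed) / len(observed)
    sse = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [10.0, 200.0, 600.0, 1000.0]            # observed densities, g/m2
perfect = nash_sutcliffe(obs, obs)            # 1.0: model matches the data
mean_only = nash_sutcliffe([452.5] * 4, obs)  # 0.0: no better than the mean
```

The two extreme cases correspond to the interpretation given in the text: a value of 1 for a perfect match, 0 for a model no better than the observed mean, and negative values for a poor representation of the data.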


Table 2-3. Parameter descriptions for the increasing levels of complexity studied.

Parameter   | Parameter Description                   | Levels influenced | Affected variables | Parameter Equation/Logic
Cattail     | Cattail density                         | 1,2,3,4,5         | Cattail            | Population density
CATGF       | Cattail growth rate                     | 1,2,3,4,5         | Cattail            | Rate of increase of population
DepthF      | Water depth influence                   | 2,3,4,5           | Cattail            | Carrying capacity; Equation 2-2
phosphorusF | Soil phosphorus concentration influence | 3,4,5             | Cattail            | Carrying capacity; Equation 2-6
Sawgrass    | Sawgrass density                        | 4,5               | Sawgrass, Cattail  | Carrying capacity (cattail); population density
SAWGF       | Sawgrass growth rate                    | 4,5               | Sawgrass           | Rate of increase of population
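For reference, the Moran's I statistic of equation (2-12), used in the spatial autocorrelation comparisons above, can be sketched for a single distance band as follows. The one-dimensional example and its 0/1 neighbour indicator w are synthetic illustrations.

```python
import numpy as np

# Moran's I of equation (2-12) for one distance band: x holds the cell
# values, w[i][j] is 1 when cells i and j fall within the given separation
# distance of each other, 0 otherwise.
def morans_i(x, w):
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = len(x)
    z = x - x.mean()                 # deviations from the mean
    total_w = w.sum()                # W: number of neighbour pairs
    num = (w * np.outer(z, z)).sum() # sum of w_ij * (x_i - xbar)(x_j - xbar)
    den = (z ** 2).sum()
    return (n / total_w) * (num / den)

# Five cells in a row; neighbours are the immediately adjacent cells.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = [[1 if abs(i - j) == 1 else 0 for j in range(5)] for i in range(5)]
result = morans_i(x, w)  # → 0.5, positive autocorrelation for a smooth ramp
```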


Figure 2-1. Test site, Water Conservation Area 2A (WCA2A), in the northern Everglades. Green squares represent inlet and outlet control structures; blue lines represent canal structures. Triangles represent the mesh used for simulation, with green triangles representing the border cells used in the central difference method. The red squares fall on zonal elements 209, 244, and 380.


Figure 2-2. Formatting of cattail input maps: rasterized raw data on the left, overlaid with the WCA2A triangular mesh in the middle, and the final triangular-mesh cattail input map on the right.


Figure 2-3. Sawgrass input maps for the years 1991, 1995, and 2003 respectively.


Figure 2-4. Results for a) training (1991-1995), b) testing 1 (1991-2003), and c) testing 2 (1995-2003) simulations for the Level 1, 2, 3, 4, and 5 complexities. The data maps to which these results are compared are in the first (Data) column. Densities have been aggregated into eight classes for visual comparison only.


Figure 2-5. Regional and zonal trends for a) training, b) testing 1, and c) testing 2 simulation periods, for all five levels of complexity. The points at the beginning and end of the trends represent the observed data densities.


Figure 2-6. Regional statistics for the training period (1991-1995) and all five levels of complexity: a) regional mean trend (red dots represent initial and final data values), b) abundance-area (the black line represents the data), c) box plot.


Figure 2-7. Regional statistics for the testing 1 period (1991-2003) and all five levels of complexity: a) regional mean trend (red dots represent initial and final data values), b) abundance-area (the black line represents the data), c) box plot.


Figure 2-8. Regional statistics for the testing 2 period (1995-2003) and all five levels of complexity: a) regional mean trend (red dots represent initial and final data values), b) abundance-area (the black line represents the data), c) box plot.


Figure 2-9. Nash-Sutcliffe summary of statistics: a graphical representation of Table 2-2. The Level 4 and 5 complexities perform consistently well.


Figure 2-10. Classified difference maps for a) training (1991-1995), b) testing 1 (1991-2003), and c) testing 2 (1995-2003) simulations for the Level 1, 2, 3, 4, and 5 complexities. The classified differences of the data maps to which these results are compared are in the first (Data) column.


Figure 2-11. Classified difference summary: percent of cells occurring within each class, for all levels of complexity and time periods. a) Training (1991-1995), b) testing 1 (1991-2003), c) testing 2 (1995-2003).


CHAPTER 3
GLOBAL UNCERTAINTY AND SENSITIVITY ANALYSIS OF A SPATIALLY DISTRIBUTED ECOLOGICAL MODEL

Background

Models aid our understanding of a complex system, and allow us to evaluate different scenarios of management decisions before the more costly task of their implementation (Fitz et al., 2010). A number of ecological models have been implemented across the Everglades region of south Florida, USA, e.g. the Everglades landscape model (ELM) (Fitz & Trimble, 2006) and the across-trophic-level system simulation (ATLSS) (Gross, 1996). A recent addition to the list is the use of the hydrological regional simulation model (RSM) (SFWMD, 2005a), coupled with the transport and reaction simulation engine (TARSE) (Jawitz et al., 2008) and implemented towards modeling vegetation (Typha domingensis, cattail) dynamics across a wetland in the Everglades, WCA2A (Lagerwall et al., 2011). This coupled ecological model (known as RTE), like all models, has issues of complexity, sensitivity, and uncertainty, which Muller (2011) refers to as the relevance trilemma. The objective of this paper is to explore this relevance trilemma. Specifically: What are the most important factors to consider when using this ecological model? How much does increasing complexity affect model performance? How does uncertainty in the spatial input affect model output? How do all of these factors interact, and how sensitive are they to change? These questions are addressed by applying a unique global uncertainty and sensitivity analysis (GUSA) for spatially distributed models, similar to that presented by Zajac (2010). Formal uncertainty and sensitivity analysis (UA/SA) provides insight into model behavior and reliability, and is used to increase confidence in model predictions.


Sensitivity and uncertainty are closely linked. Saltelli et al. (2004) describe sensitivity analysis as the study of how model outputs respond to variations in the model inputs (pg. 3), and uncertainty analysis as quantifying the overall uncertainty associated with the response as a result of uncertainties in the model input (pg. 4). Where standard UA/SA involves systematically changing the value of a single parameter while holding everything else constant, GUSA involves the creation of input datasets with values obtained randomly from a probability distribution function (PDF) for all variables. This method discards the assumed importance of a few parameters, and allows one to screen the entire set of input parameters (van Griensven et al., 2006). Spatial heterogeneity is a common theme in ecological modeling and a strong source of uncertainty; however, there is little incorporation of spatial uncertainty in GUSA (Crosetto et al., 2000). Zajac (2010) developed a two-step procedure, based on the geostatistical technique of sequential indicator simulation (SIS) and the variance-based method of Sobol (Sobol, 2001), to incorporate spatial uncertainty into the GUSA. The procedure follows roughly thus: the SIS is used to create multiple alternate maps based on the original sample points (this is different from kriging (Krige 1951, via Cressie 1990), which creates a singular map); the multiple maps produced by the SIS are then incorporated into the Sobol method by being referenced as a discrete range of integers representing each individual map.

In order to test the influence of increasing complexity on reducing uncertainty in model output (Lindenschmidt, 2006), five levels of increasing complexity were selected to model the cattail densities. Following the methodology used by Jawitz et al. (2008), a simple logistic function (Keen & Spain, 1992) formed the base of the complexities, with water depth and soil phosphorus concentration (the two most important factors influencing cattail growth, according to Newman et al., 1998) and sawgrass interaction influencing the higher levels of complexity. These levels of complexity were analyzed as separate cases, and their analysis results are displayed separately for comparative purposes. An objective of this paper, besides testing the influence of increasing algorithmic complexity for relevance, will also be to test the importance of the various parameters (growth rate, initial density distribution, water depth, soil phosphorus concentration, and inter-species inhibition) affecting these levels.

Materials and Methods

Typha domingensis (cattail) is a native Everglades monocotyledonous macrophyte, typically occurring as a sparse complement alongside Cladium jamaicense (sawgrass) stands. The two species have significantly different morphology, growth, and life history characteristics (Miao & Sklar, 1998), enabling the cattail to expand prolifically under altered conditions. Cattail stands in WCA2A doubled, expanding southward into the sawgrass marshes (Willard, 2010). Cattails have hence been labeled as an indicator species of habitat well-being, or species of concern, and their distribution is used to determine the effectiveness of various water management decisions. Their expansion has been studied extensively, and it has been determined that there are four main external factors that affect their growth: water depth, hydroperiod, soil phosphorus concentration, and disturbance (Newman et al., 1998).


Test Site

The test site used for ecological model development and testing was WCA2A, seen in Figure 3-1. WCA2A is a 547 km2 managed wetland just south of Lake Okeechobee, FL, and accounts for about 6.5% of the total area of the Everglades. Established in 1961 with the construction of the L35-B canal, WCA2A receives inflow from the Stormwater Treatment Areas before discharging into the downstream water conservation areas, and eventually into the Everglades National Park (Urban et al., 1993). According to Rivero et al. (2007), the elevation range in WCA2A is between 2.0 m and 3.6 m above sea level, which generates a slow sheet flow from the northeast to the southwest of the region. The hydrology is controlled by the south Florida water management district (SFWMD) at a number of inlet and outlet structures (green squares in Figure 3-1) along the surrounding canals (blue lines in Figure 3-1). The landscape is composed of dominant sawgrass marshes, shrub and tree island communities, and invasive cattail communities (van der Valk & Rosburg, 1997). WCA2A has been used extensively as a research site by the SFWMD, with many trial and monitoring programs for a number of biogeochemical components, especially soil phosphorus and vegetative structure (Rivero et al., 2007). The triangular mesh grid used for simulation is also displayed in Figure 3-1, with the green border cells used for numerical stability of the hydrological RSM component. WCA2A was previously used as a test site by Lagerwall et al. (2011) and Zajac (2010), and hence was a good candidate for use in this analysis. The mesh and associated HSE model setup files were developed and provided by the SFWMD. An overview of the HSE setup for WCA2A, which provides the hydrological operating conditions, can be found in SFWMD (2008d).


The Variance-Based Method of Sobol

Variance-based methods for GUSA, such as the extended Fourier amplitude sensitivity test (FAST) (Cukier et al., 1973) or Sobol (Sobol, 2001), provide a quantitative measure of the output variance with respect to the variance associated with the input parameters. These sensitivity indices are described in terms of direct (first-order) and interaction (second- and higher-order) effects of the input parameters (Saltelli et al., 2004). The Sobol method is not hindered by discrete (non-continuous) inputs, and is thus appropriate for analyzing alternate realizations of input maps (Lilburne & Tarantola, 2009). The first-order sensitivity indices are calculated as the ratio of the variance associated with the input variable to the total variance of the model output, while the total-effect sensitivity is calculated as the ratio of the total variance (first-order plus all interactions) associated with the input variable to the total variance of the model output. A succinct methodology for calculating these indices is provided by Lilburne & Tarantola (2009), and follows thus: Select an integer N to represent the sample size; Saltelli et al. (2004) suggest using the highest number possible (upwards of 500), with the caveat that larger sample sizes come at an increased computational cost. Next, generate a matrix of size (N, 2k), where k is the number of input parameters and N is the sample size. Split this matrix into two matrices A and B, each of size (N, k). Define a matrix D_i, which is the same as matrix A except with the i-th column obtained from matrix B, and define a matrix C_i, which is the same as B except with the i-th column coming from A. Finally, compute the model output for all the input values in A, B, C_i, and D_i; this is represented by equation (3-1).

y_A = f(A), \quad y_B = f(B), \quad y_{C_i} = f(C_i), \quad y_{D_i} = f(D_i) \qquad (3-1)
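The matrix construction described above can be sketched as follows; ordinary pseudo-random sampling stands in here for the quasi-random generator used by SIMLAB, so this is an illustrative sketch rather than the study's actual sampling code.

```python
import numpy as np

# Build the Sobol sample matrices: an (N, 2k) sample is split into A and B;
# C_i is B with column i taken from A, and D_i is A with column i taken
# from B, as described in the methodology above.
def resample(base, other, i):
    """Copy of `base` with column i replaced by column i of `other`."""
    out = base.copy()
    out[:, i] = other[:, i]
    return out

def sobol_matrices(n, k, rng):
    m = rng.random((n, 2 * k))                  # the (N, 2k) sample matrix
    a, b = m[:, :k].copy(), m[:, k:].copy()
    c = [resample(b, a, i) for i in range(k)]   # C_i: B with column i from A
    d = [resample(a, b, i) for i in range(k)]   # D_i: A with column i from B
    return a, b, c, d

n, k = 1024, 6    # six inputs; (2k + 2) * N = 14 * 1024 = 14336 model runs
a, b, c, d = sobol_matrices(n, k, np.random.default_rng(0))
```

With k = 6 parameters, N = 1024 reproduces the (2k+2)N = 14336 distinct model simulations reported below.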


The mathematical representations of the first-order and total sensitivity indices are given in equations (3-2) and (3-3) respectively. With a set of (2k+2)N simulations it is possible to estimate the first-order and total sensitivity indices for each input (Lilburne & Tarantola, 2009). For the simulations conducted in this paper, a total of 14336 distinct model simulations were required.

S_i = \frac{\frac{1}{N}\sum_{j=1}^{N} y_A^{(j)} y_{C_i}^{(j)} - f_0^2}{\frac{1}{N}\sum_{j=1}^{N} \left(y_A^{(j)}\right)^2 - f_0^2} \qquad (3-2)

S_{Ti} = 1 - \frac{\frac{1}{N}\sum_{j=1}^{N} y_A^{(j)} y_{D_i}^{(j)} - f_0^2}{\frac{1}{N}\sum_{j=1}^{N} \left(y_A^{(j)}\right)^2 - f_0^2} \qquad (3-3)

where the term f_0^2 is described by equation (3-4):

f_0^2 = \left(\frac{1}{N}\sum_{j=1}^{N} y_A^{(j)}\right)^2 \qquad (3-4)

The program SIMLAB (Saltelli et al., 2004) was used to perform the GUSA, creating the 14336 permutations of input parameters and comparing these to the model output scalar/statistic of the respective data sets. The statistic used to represent the model output during this analysis was the change in regional mean density, Delta Mean (DM). This statistic was calculated for the five levels of complexity associated with each simulation. The levels of complexity are considered as alternative scenarios in the final analysis, and not as variable parameters, in order to compare them side by side.
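Given the model outputs for A, B, C_i, and D_i, the estimators of equations (3-2) through (3-4) can be sketched as below. The additive test function and its sample size are illustrative assumptions; for y = x0 + 2*x1 with independent uniform inputs, the exact first-order and total indices for x0 both equal 0.2.

```python
import numpy as np

# Estimators of equations (3-2)-(3-4): f0^2 is the squared mean of y_A, the
# output variance is mean(y_A^2) - f0^2, and the products with y_Ci and y_Di
# give the first-order and total effects for input i.
def sobol_indices(y_a, y_ci, y_di):
    f0_sq = y_a.mean() ** 2                              # equation (3-4)
    var = (y_a ** 2).mean() - f0_sq
    s_first = ((y_a * y_ci).mean() - f0_sq) / var        # equation (3-2)
    s_total = 1.0 - ((y_a * y_di).mean() - f0_sq) / var  # equation (3-3)
    return s_first, s_total

rng = np.random.default_rng(1)
n = 200_000
a, b = rng.random((n, 2)), rng.random((n, 2))
f = lambda m: m[:, 0] + 2.0 * m[:, 1]        # additive test model
c0 = np.column_stack([a[:, 0], b[:, 1]])     # C_0: column 0 from A, rest from B
d0 = np.column_stack([b[:, 0], a[:, 1]])     # D_0: column 0 from B, rest from A
s0, st0 = sobol_indices(f(a), f(c0), f(d0))  # both near 0.2 for this model
```

Because the test model is purely additive, the first-order and total indices coincide; a gap between them would indicate interaction effects.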


Parameter Distribution Functions

As mentioned above, there are six parameters that were used in the GUSA. These parameters include the growth rates for both cattail and sawgrass; water depth; soil phosphorus concentration; initial sawgrass density; and the initial cattail density/distribution obtained from the SIS.

Hydrological parameter

The hydrology of WCA2A is controlled primarily by the operation of control points along the S10 and L35-B canals. Being surrounded by canals, it is possible to completely control the volume of water in the test site. The depth parameter used for the GUSA was a uniform distribution varying between 0 m and 3 m, with the optimum depth for cattail growth lying between 0.24 m and 0.95 m (Grace, 1989). For the purpose of the GUSA, the depth was assumed constant across the entire region.

Soil phosphorus parameter

A gradient of soil phosphorus exists along WCA2A, with a high concentration near the inlets at the north, and a low concentration at the outlets in the south. This soil phosphorus gradient has been widely documented and studied (DeBusk et al., 1994). Due to best management practices on the farmlands upstream, as well as altered flow/canal regimes, this gradient is now diminishing and shifting, as documented by Grunwald et al. (2008). For the purpose of the GUSA, the soil phosphorus concentration was assumed constant across the entire region for each simulation. The soil phosphorus concentration values used in the GUSA were obtained from a uniform distribution ranging from 0 ppm to 1000 ppm, with the threshold value for a switch to cattail dominance occurring around 540 mg/kg (Walker & Kadlec, 1996).
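Sampling these input distributions can be sketched with the standard library as follows. The triangular growth-rate distribution is the one described in the cattail and sawgrass parameter section that follows; its bounds here are taken from that description.

```python
import random

# Draws from the GUSA input distributions: depth ~ Uniform(0 m, 3 m) and soil
# phosphorus ~ Uniform(0 ppm, 1000 ppm) as described above; the growth rate
# uses the triangular distribution reported for the cattail and sawgrass
# parameters (bounds 1e-9 to 1e-6 g/g s, mode 1e-7).
def sample_inputs(rng):
    return {
        "depth_m": rng.uniform(0.0, 3.0),
        "soil_p_ppm": rng.uniform(0.0, 1000.0),
        "growth_rate_g_per_g_s": rng.triangular(1e-9, 1e-6, 1e-7),
    }

sample = sample_inputs(random.Random(42))
```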


Cattail and sawgrass parameters

The main species of concern in this study is cattail. Sawgrass is used only in the fourth and fifth levels of complexity (discussed below), to provide an inter-species interaction component. The growth of both cattail and sawgrass is governed by the logistic equation (Reed & Berkson, 1929), with a growth rate (GF) as in Equation 3-5:

dB/dt = GF * B * (1 - B/B_max)    (3-5)

where B is the species density and B_max the maximum obtainable density. As reported by Miao et al. (2000), the maximum growth rate for cattail is 2.31*10^-7 g/g·s. The GF values used for both species were selected from a triangular distribution ranging from 1.0*10^-6 g/g·s to 1.0*10^-9 g/g·s, with a peak at 1.0*10^-7 g/g·s. Both species also require initial density distributions. Because sawgrass density and distribution are not the main focus of this study, and sawgrass is implicated only in the Level 4 and 5 complexity algorithms (discussed below), a uniform distribution of this species across the region was assumed. The initial values used for the GUSA were obtained from a uniform distribution ranging from 0 g/m² to 1958 g/m², the reported maximum density for this species (Miao & Sklar, 1998).

Initial vegetation maps

In the previous study by Lagerwall et al. (2011), vegetation maps were obtained for the years 1991, 1995 (Rutchey, 2011) and 2003 (Wang, 2009), all of which were used in Rutchey et al. (2008). These maps show much spatial heterogeneity, and it is this heterogeneity that is tested here. The ground-truthed data points used in the creation of the 2003
vegetation map (mentioned above) were used as the initial points informing the sequential indicator simulation discussed below.

Sequential indicator simulation

Probably the two most popular methods of determining spatial uncertainty are kriging (Krige, 1951, via Cressie, 1990) and sequential simulation (SS) (Goovaerts, 1996). Within these there are two main sub-methods: Gaussian and indicator (Goovaerts, 2001). In order to include the spatial uncertainty of the vegetation data in the GUSA, multiple alternate maps are required as initial conditions. This requirement favors the use of SS. Also, the initial vegetation densities are considered in terms of differing, discrete classes (cattail densities), which favors the indicator method. However, due to the continuous range of possible densities, the indicator method was implemented using a continuous distribution function as opposed to a categorical data function. Rossi et al. (1993) offer a simple analogy between SS and an ill-defined, incomplete jigsaw puzzle: the data points can be thought of as puzzle pieces with known locations. Then, provided an infinite supply of other pieces, it is possible to construct multiple, alternate realizations of the complete puzzle, all adhering to the observed, raw data. A more detailed and complete description can be found in Goovaerts (2001), and proceeds as follows. Define a random path visiting each undefined node in the area exactly once. Using available information from the collected data, such as location, class histograms, and spatial variability, collectively known as the conditional cumulative distribution function (CCDF) for the region, determine the value of the unknown point. Finally, update the CCDF to include this newly simulated point, and use this new CCDF to inform the next point. It is suggested that this be repeated at least 100 times; in fact, for this GUSA, 250 alternate maps were created this way, using the GSLIB SISIM routines (Deutsch &
Journel, 1992). The GF variables (for cattail and sawgrass), along with the initial density distribution maps for cattail and sawgrass, and the depth and soil phosphorus variables, provide the six parameters used in the GUSA.

Levels of complexity

Five levels of complexity are used within this paper as alternate scenarios. They are the same levels as discussed in Lagerwall et al. (2011), and consist of:

Level 1 is a purely logistic function, influenced by the current density and the GF variable.

Level 2 is a depth-influenced Level 1 function, with the depth factor negatively impacting the carrying capacity as the water depth rises above or drops below the optimum range, which reduces the maximum obtainable density.

Level 3 adds soil phosphorus concentration to Level 2, with an averaged soil phosphorus and depth factor negatively influencing the carrying capacity as the soil phosphorus concentration drops below the threshold.

Level 4 adds inter-species interaction to the Level 3 complexity, with increasing sawgrass density negatively influencing the carrying capacity as an average with the depth and soil phosphorus factors.

Level 5 includes a feedback mechanism over the Level 4 complexity, whereby cattail density negatively affects the sawgrass density in the same manner as the sawgrass affects the cattail density in the Level 4 complexity.

In TARSE, components are listed as either mobile or stabile. Mobile components are moved in the water using the advection-dispersion-reaction equation (ADRE), while stabile components do not move and undergo only the reaction part of the ADRE. Given the complexities associated with wind-borne or water-borne transportation of seeds, and with rhizome expansion (which is another mode of expansion
noted by Miao (2004)), all mesh elements were initialized (seeded) with cattail, with areas originally not containing cattail being seeded with the minimum (low) class value of 200 g/m². Realistically, this can represent the presence of a seed bank, giving cattail the opportunity to colonize an area as soon as conditions become favorable. Vegetation is then modeled as a stabile component, with no means of dispersal.

Method of Operation

The ground-truthed data points that were used for the 2003 vegetation data map were analyzed, re-classified into equivalent cattail density values of 200 g/m², 600 g/m², and 1000 g/m² respectively, as discussed by Lagerwall et al. (2011), and respective histograms and variogram statistics were calculated for each vegetation class. Using SIS, 250 alternate map realizations, each with approximately 30,000 square 50 m raster grids, were created to match the raw data points above. The triangular mesh was overlaid on these maps, and the average cattail density within each triangle was assigned to that triangular element. Final formatting was performed to account for WCA2A canal structures, and the corresponding TARSE input file was thus created. The GUSA input file was created using SIMLAB; it contained 14336 rows, representing individual model simulations, with six corresponding columns and their associated parameter values. These simulations were split into batches of 50 and run simultaneously on the University of Florida High Performance Computing (UFHPC) center (UFL, 2010). Each model was run for approximately thirty years, from 2003 until 2030, on a daily time step. The DM statistic was compiled for all simulations in an order corresponding to the line of the input file that produced them. This compiled statistics file was then entered into SIMLAB, and the Sobol GUSA conducted. All of the input files associated with this GUSA are available in Appendix G. The parameter probability distribution ranges and
impacted levels of complexity can be found in Table 3-1. Sobol first-order and total-order sensitivities can be found in Tables 3-2 and 3-3 respectively.

Results

The frequency distribution uncertainty plots of DM for individual levels are shown in Figure 3-2, with their respective cumulative distributions in Figure 3-3. Considering these figures, the Level 1 complexity has a narrow distribution around a high DM value; i.e., the final regional mean density for Level 1 is consistently higher than the initial density. This is expected due to the lack of external influence factors: the range in output observed is due solely to variations in the growth rate and initial density distribution. For the other levels of complexity there is a definite bimodality to the distribution. This bimodality is most apparent in the Level 2 complexity, and tends towards a more uniform distribution through Levels 3, 4, and 5. This bimodality is expected when considering the historical cattail density distributions of the region, as there are clearly regions with high densities and regions with low densities. These uncertainty distributions are compared side by side in Figure 3-4 according to a 95% confidence interval. As noted previously, the Level 1 complexity has an exceptionally narrow distribution, which relates to a narrow confidence interval. However, as the complexity increases from Level 2, to 3, to 4, there is a significant reduction in the confidence interval, with an increase at Level 5. This reduction, and subsequent increase, in uncertainty equates to an increase in precision and more confidence in the model results. This trend indicates that Level 4 is the most precise level of complexity, and closely resembles the uncertainty-sensitivity-complexity figure (Figure 3-7) compiled by Muller (2010), after Hanna (1988) and Snowling & Kramer (2001).
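The level structure compared in these results can be illustrated with a minimal single-cell sketch, in which Levels 2 through 4 scale the logistic carrying capacity by depth, phosphorus, and sawgrass factors. The factor forms (linear ramps, simple averaging) and the parameter values are assumptions for illustration only; the actual RSM/TARSE (RTE) formulations may differ.

```python
def depth_factor(depth, opt_lo=0.24, opt_hi=0.95, max_d=3.0):
    """1.0 inside the optimum depth range, ramping linearly to 0 outside it."""
    if opt_lo <= depth <= opt_hi:
        return 1.0
    if depth < opt_lo:
        return max(depth / opt_lo, 0.0)
    return max(1.0 - (depth - opt_hi) / (max_d - opt_hi), 0.0)

def p_factor(soil_p, threshold=540.0):
    """1.0 at or above the cattail-dominance threshold, ramping to 0 below."""
    return min(soil_p / threshold, 1.0)

def cattail_step(density, gf, level, depth, soil_p, sawgrass,
                 k_max=1000.0, saw_max=1958.0, dt=86400.0):
    """One daily logistic step; Levels 2-4 reduce the carrying capacity."""
    if level == 1:
        k = k_max
    elif level == 2:
        k = k_max * depth_factor(depth)
    elif level == 3:
        k = k_max * 0.5 * (depth_factor(depth) + p_factor(soil_p))
    else:  # Level 4 (Level 5 adds a feedback of cattail on sawgrass)
        k = k_max * (depth_factor(depth) + p_factor(soil_p)
                     + (1.0 - sawgrass / saw_max)) / 3.0
    k = max(k, 1.0e-6)                      # guard against zero capacity
    new = density + gf * density * (1.0 - density / k) * dt
    return max(new, 0.0)

# Thirty years of daily steps from the minimum seeded density of 200 g/m^2
d1 = d2 = 200.0
for _ in range(30 * 365):
    d1 = cattail_step(d1, 1.0e-7, 1, 0.5, 600.0, 0.0)  # Level 1: grows to k_max
    d2 = cattail_step(d2, 1.0e-7, 2, 2.0, 600.0, 0.0)  # Level 2: deep water caps growth
```

The sketch reproduces the qualitative Level 1 result above: with no external factors, density climbs toward the maximum, while a sub-optimal depth in Level 2 caps it at a reduced carrying capacity.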


The sensitivities of the different levels of complexity to the various parameters are displayed in Figure 3-5 in a pie chart format, and in Figure 3-6 in a scatter plot format. The first-order (direct effects) sensitivities are the same as the total-order sensitivities, which means that there are no interaction effects occurring in these models, at least with regard to the DM statistic. This is expected, since the only possible true interaction occurs in Level 5 with the sawgrass feedback, which is likely averaged out and far outweighed by the other, more dominant parameters such as water depth and soil phosphorus concentration. The most dominant parameter is water depth, i.e. the model is most sensitive to changes in water depth, with other sensitive parameters being soil phosphorus concentration and initial density distribution. The sensitivities for Levels 3, 4, and 5 are similar. With the Level 4 complexity, however, one obtains an increase in model complexity for no significant increase in sensitivity or instability (uncertainty); the risk of over-parameterization in Level 4 remains low.

Discussion

A global uncertainty and sensitivity analysis was conducted on the WCA2A region, using different models with increasing levels of algorithmic complexity as alternate scenarios to be compared side by side. The variance-based method of Sobol was used toward this end, informed by the six parameters of cattail growth rate, cattail initial density distribution, sawgrass growth rate, sawgrass initial density, water depth, and soil phosphorus concentration. The initial cattail density distributions were created using a sequential indicator simulation based on field-verified data points of the map, and entered into the GUSA in the form of 250 different maps, all adhering to the original data. The model was run for approximately 30 years, and the change in regional mean density was calculated and used in the analysis. Using frequency distributions and
confidence intervals, along with sensitivity analyses, it was determined that increased complexity (across the five levels used) resulted in a reduction in uncertainty for no significant change in sensitivities. This holds only for levels up to Level 4, with the Level 5 complexity increasing the uncertainty. In terms of solving the relevance trilemma mentioned in the introduction (balancing complexity, uncertainty, and sensitivity), the Level 4 complexity model/algorithm, with depth, soil phosphorus, and sawgrass interaction parameters, is best suited to this task.
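The confidence-interval comparison used in this conclusion amounts to taking the band between the 2.5th and 97.5th percentiles of each level's DM distribution and comparing band widths. A minimal sketch follows; the normal samples here are synthetic placeholders standing in for the 14336 GUSA outputs, not actual model results.

```python
import numpy as np

rng = np.random.default_rng(1)
dm_by_level = {
    1: rng.normal(300.0, 5.0, 14336),   # narrow, like Level 1's tight output
    4: rng.normal(100.0, 80.0, 14336),  # wide, standing in for a bimodal level
}
# 95% interval = the band between the 2.5th and 97.5th percentiles
ci = {lvl: np.percentile(dm, [2.5, 97.5]) for lvl, dm in dm_by_level.items()}
width = {lvl: hi - lo for lvl, (lo, hi) in ci.items()}
```

A narrower band (as for the placeholder Level 1) corresponds to lower output uncertainty and hence greater model precision.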


Table 3-1. Probability distributions of model input factors used in the global uncertainty and sensitivity analysis. The last five columns indicate whether the parameter is present in Levels 1 through 5.

Parameter definition                   | Symbol     | Distribution              | Units | L1  | L2  | L3  | L4  | L5
Cattail initial densities              | Cattail    | D(1 to 250) SIS maps      | -     | yes | yes | yes | yes | yes
Cattail growth rate                    | CATGF      | T(1.0E-6, 1.0E-7, 1.0E-9) | g/g·s | yes | yes | yes | yes | yes
Regional water depth                   | Depth      | U(0, 10)                  | ft    | no  | yes | yes | yes | yes
Regional soil phosphorus concentration | Phosphorus | U(0, 1000)                | mg/kg | no  | no  | yes | yes | yes
Sawgrass initial densities             | Sawgrass   | U(0, 1958)                | g/m²  | no  | no  | no  | yes | yes
Sawgrass growth rate                   | SAWGF      | T(1.0E-6, 1.0E-7, 1.0E-9) | g/g·s | no  | no  | no  | yes | yes


Table 3-2. Summary table of Sobol first-order sensitivity indices of delta mean (DM) for all 5 levels of complexity (L1 through L5).

Parameter  | L1_DM | L2_DM | L3_DM | L4_DM | L5_DM
CATGF      | 0.008 | 0.002 | 0.003 | 0.003 | 0.003
SAWGF      | 0     | 0     | 0     | 0.002 | 0.002
Depth      | 0     | 0.982 | 0.871 | 0.838 | 0.859
Phosphorus | 0     | 0     | 0.088 | 0.085 | 0.087
Sawgrass   | 0     | 0     | 0     | 0.003 | 0.003
Cattail    | 0.992 | 0.016 | 0.038 | 0.069 | 0.046


Table 3-3. Summary table of Sobol total-order sensitivity indices of delta mean (DM) for all 5 levels of complexity (L1 through L5).

Parameter  | L1_DM | L2_DM | L3_DM | L4_DM | L5_DM
CATGF      | 0.01  | 0.005 | 0.017 | 0.006 | 0.006
SAWGF      | 0     | 0     | 0     | 0.007 | 0.007
Depth      | 0     | 0.992 | 0.889 | 0.864 | 0.882
Phosphorus | 0     | 0     | 0.102 | 0.088 | 0.09
Sawgrass   | 0     | 0     | 0     | 0.006 | 0.007
Cattail    | 1.043 | 0.019 | 0.041 | 0.073 | 0.05


Figure 3-1. Test site, Water Conservation Area 2A (WCA2A), in the northern Everglades. Green squares represent inlet and outlet control structures; blue lines represent canal structures. Triangles represent the mesh used for simulation, with green triangles representing the border cells used in the central difference method.


Figure 3-2. Uncertainties of delta mean (DM) for a) Level 1 through e) Level 5 respectively, in frequency format.


Figure 3-3. Uncertainties of delta mean (DM) for a) Level 1 through e) Level 5 respectively, in cumulative distribution format.


Figure 3-4. Model 95% confidence intervals of delta mean (DM) for 1) Level 1 through 5) Level 5 respectively, based on the uncertainties in Figures 3-2 and 3-3.


Figure 3-5. Individual first-order and total-order sensitivities of delta mean (DM) for all 5 levels of complexity, a) Level 1 through e) Level 5. The pie charts provide an alternate visualization of Tables 3-2 and 3-3.


Figure 3-6. Individual first-order (a) and total-order (b) sensitivities of delta mean (DM) for all 5 levels of complexity. The scatter plot provides an alternate visualization to Figure 3-5 and Tables 3-2 and 3-3.


Figure 3-7. Combined theoretical total uncertainty, sensitivity, and complexity interactions used to determine the most relevant model complexity. From Muller (2010), after Hanna (1988) and Snowling & Kramer (2001).


CHAPTER 4
ACCOUNTING FOR THE IMPACT OF MANAGEMENT SCENARIOS ON AN EVERGLADES WETLAND

Background

The Everglades wetland ecosystem of south Florida, USA, is an intensely managed system. Beginning with the Swamp and Overflow Act (Glennon, 2002), and again in 1948 with the Central and South Florida Project, the Everglades were channelized in order to aid in flood protection and provide arable land for agriculture (Gunderson et al., 2001). Today, almost all the water in south Florida passes through at least one canal before entering the surrounding ocean (Layzer, 2006). This has had a negative impact on the environment, with wetland areas being reduced by up to 50% and wildlife species becoming threatened. Certain bird populations have been reduced by 90%, and other species such as Trichechus manatus latirostris (Florida manatee), Puma concolor coryi (Florida panther), Ammodramus maritimus mirabilis (Cape Sable seaside sparrow), and Tantilla oolitica (rim rock crowned snake) are at risk of extinction (Brown et al., 2006). The Comprehensive Everglades Restoration Plan (CERP) was approved with the Water Resources Development Act of 2000, with the express goal of restoring some of the Everglades' original extent and ecosystem functioning (USACE, 2010a). The main focus of CERP has been on improved water quantity and water quality management; the assumption is that if the quantity and quality are adequate, the ecology will follow suit. There is, however, an increasing concentration on the ecological impacts of various management decisions, and these efforts center on improving species diversity and protecting existing habitats (USACE, 2010b).


In addition to the changes in hydrology, continuous mining, agriculture, and urbanization activities have resulted in invasive and exotic plants becoming established in place of the original vegetation, altering habitats and often forming mono-crop stands (single-species environments) (Odum et al., 2000). One of these species in particular, Typha domingensis (cattail), has been labeled as an indicator species, or species of concern. Cattail is a native Everglades monocotyledonous vegetation species, typically occurring as a sparse complement alongside Cladium jamaicense (sawgrass) stands. It has become invasive, and in the 1980s the area covered by cattail stands in WCA2A doubled, expanding southward into the sawgrass marshes (Willard, 2010). Its distribution is now used to determine the effectiveness of various water management decisions. Fitz et al. (2010) note that models allow us to evaluate different scenarios of management decisions before the more costly task of their implementation. The recently coupled RSM/TARSE model (SFWMD, 2008b; Jawitz et al., 2008) was used by Lagerwall et al. (2011) to quantitatively and deterministically model ecology. The ecological implementation of this coupled model (henceforth RTE) was used to model cattail density dynamics across WCA2A. The complexity/uncertainty/sensitivity trilemma mentioned by Muller (2011) was addressed for this model through a global uncertainty and sensitivity analysis (GUSA) with an added component of spatial uncertainty, much like that conducted by Zajac (2010). The results of this GUSA can be found in Lagerwall et al. (2011a). The objective of this chapter is to evaluate the impact of management scenarios as they relate to the cattail density distribution throughout WCA2A, in the Southern
Florida Everglades. This is achieved through applying a GUSA to the management scenarios, as well as through close observation of density trends under various specific scenarios.

Materials and Methods

In the previous two studies by Lagerwall et al. (2011) and Lagerwall et al. (2011a), five different levels of increasing complexity were used to simulate cattail growth. When matching historical data in Lagerwall et al. (2011), the Level 4 and 5 complexities were determined to be the best match (most accurate), with only slightly elevated minimums, but all other statistics and trends matching the data well. After conducting a GUSA on the five levels of complexity in Lagerwall et al. (2011a), it was determined that the Level 4 complexity provided a reduced uncertainty for an insignificant change in sensitivity from the other three levels of complexity (L3, L2, and L5), which implies an increase in model precision without any risk of over-parameterization. Based on these previous results it can be concluded that the most relevant model of the five tested, the one which balances complexity, uncertainty, and sensitivity, is the Level 4 complexity, which includes parameters for water depth, soil phosphorus concentration, and sawgrass density interaction. The Level 5 complexity could be deemed the next most relevant model algorithm, and possibly the more realistic (between L4 and L5) in terms of its included feedback mechanism. Having identified the most relevant models tested, this paper will consider only the Level 4 and 5 complexities to determine the importance of various management scenarios.

Hydrology Management Scenarios

Hydrology is one of the main controlling factors influencing cattail distribution (Newman et al., 1998). The hydrology of WCA2A is controlled primarily by the operation
of control points along the S10 and L35-B canals (seen in Figure 4-1). The mesh and associated HSE model setup files were developed and provided by SFWMD. An overview of the HSE setup for WCA2A, which provides the hydrological operating conditions, can be found in SFWMD (2008d). Because WCA2A is a highly controlled wetland, and depth is a factor that can be relatively well managed, depth was used as a parameter input to the model. Because the model is being run into the future, normal time-series data cannot be used; the depth is therefore set as a uniform value across the region. Scenarios involving the control of depth include a high, medium, and low (or dry) water depth. The optimum growing depth for cattail has been documented as 24-96 cm (Grace, 1989). A high water depth can then be considered as 3 m (as in the previous chapter), a medium depth as 0.5 m, and a low (dry) depth as 0 m. Another management scenario includes an annual alternation between high and dry water levels.

Soil Phosphorus Management Scenarios

A gradient of soil phosphorus exists along WCA2A, with a high concentration near the inlets in the north and a low concentration at the outlets in the south. This soil phosphorus gradient has been widely documented and studied (DeBusk et al., 1994). Soil phosphorus concentration is the second most important external factor affecting the distribution of cattail (Urban et al., 1993), and is another factor that can be managed. Changing soil phosphorus distributions in WCA2A are noted by Grunwald et al. (2004) and Grunwald et al. (2008), and are largely driven by incoming (upstream) water which is high in phosphorus. As Rutchey et al. (2008) have noted, this incoming water phosphorus can be controlled through the use of best management practices. For the purpose of simulation in this paper, soil phosphorus will be applied uniformly across the domain in high, medium, and low concentrations. The uniform distribution is not wholly
realistic, but will suffice to demonstrate the impact of increasing, decreasing, or altering levels of concentration on the current distribution and dynamics of the cattail. The high concentration is 1500 mg/kg, the medium concentration is 600 mg/kg, and the low concentration is the desired 0 mg/kg. Other management scenarios include an annual alternation between high and low soil phosphorus concentrations, as well as a concentration starting from the high value (1500 mg/kg) and decreasing linearly at a constant 3% (45 mg/kg) per annum. With the two alternating water depth and soil phosphorus management scenarios, the alternations are set to occur at the same time, with a high water depth paired with a high soil phosphorus concentration. A final set of management scenarios includes an out-of-sync combination of these alternating variables, where a high water depth corresponds with a low soil phosphorus concentration.

Global Uncertainty and Sensitivity Analysis (GUSA)

The variance-based method of Sobol (Sobol, 2001) was used to conduct the GUSA in order to maintain consistency with Lagerwall et al. (2011a). Also, the Sobol method is one of the few methods that can use discrete input distributions, as was necessary when using alternate initial density values in Lagerwall et al. (2011a), and as is necessary here for realizing alternate management scenarios as well. A succinct overview of the Sobol method is provided by Lilburne & Tarantola (2009). The process involves decomposing the probability distribution function (PDF) of the model output (uncertainty) into sensitivities of specific model input parameters. There were six input parameters used in this analysis: CATGF, SAWGF, Cattail (initial densities), Sawgrass (initial densities), the Depth Management scenarios (DepthMgmt), and the Phosphorus
Management scenarios (PMgmt), as seen in Table 4-1. The initial cattail density maps were the same as those used previously in Lagerwall et al. (2011a), obtained through a sequential indicator simulation using ground-truthed points for the 2003 vegetation map. For this analysis a total of 14336 separate simulations were required. The program SIMLAB (Saltelli et al., 2004) was used to perform the GUSA and compare the parameter input file to the compiled output statistics.

Representative Statistic

The scalar statistic used for the GUSA is the change in regional mean density (DM) that was used previously in Lagerwall et al. (2011a). A zonal analysis was included, and the change in mean density (DM) was calculated for three zones of historically high, medium, and low cattail densities respectively. The northeast (NE) zone covered cells 175 through 180, the central (CE) zone covered cells 280 through 283, and the southwest (SW) zone covered cells 376 through 380. These zones and their respective cell numbers can be seen in Figure 4-2.

Time Series Analysis

A time series of regional mean density was created for a select number of management scenarios in order to gain greater insight into the GUSA results and the system dynamics in general. The GF was set at 6.7*10^-9 g/g·s, after the calibrated values obtained by Lagerwall et al. (2011). Instead of a uniform distribution of initial sawgrass densities, three levels were used, representing high (1500 g/m²), medium (900 g/m²), and low (300 g/m²) densities respectively. The decreasing soil phosphorus concentration management scenario involved a decrease in concentration of 3% (of the initial high concentration) per annum over the 30-year period, for a total decrease of 90 percent. A summary of the management distributions, scenarios, and the
associated levels can be found in Tables 4-2, 4-3, and 4-4 respectively, with the complete version of Table 4-3 in the appendices as Figure E-4.

Results

The uncertainty plots for the regional and zonal model output frequency distributions of Level 4 can be found in Figure 4-3, along with their cumulative distribution versions in Figure 4-4. The bimodal distribution from the GUSA in Lagerwall et al. (2011a) is greatly reduced, and could be said to be a tri-modal or even pent-modal distribution. The difference in this distribution is due solely to the management scenarios used and the reduced range of the associated parameter values. The distributions for the region and the northeast, central, and southwest zones are all similar, and the regional uncertainty statistic is a good average indicator of the other zonal uncertainty statistics. As a result, only the regional distributions for Level 5 are displayed in Figure 4-5. All the distributions can be compared using their 95% confidence intervals, and these are plotted in Figure 4-6. The model sensitivities to the input parameters, both first order and total order, are plotted in Figure 4-7 (also tabulated in Tables 4-5 and 4-6). As with the uncertainty distributions, the regional sensitivities are fairly representative of the zonal sensitivities. Upon further consideration of these plots, there are some evident (although slight) distinctions between the Level 4 and Level 5 sensitivities. The sawgrass initial density decreases (for Level 5) in both the first-order and total-order sensitivities, while the cattail initial density increases (for Level 5) in both plots. The sawgrass growth factor has a reduced sensitivity (for Level 5) in the first-order sensitivities, with an increase in the cattail growth factor total-order sensitivity (for Level 5). The depth factor is still dominant in the first-order sensitivities, although the depth and soil phosphorus factors
are much reduced in the total-order sensitivities when compared to the GUSA in Lagerwall et al. (2011a). Overall, the differences are slight, but they do hint at the increased importance of the initial densities of both cattail and sawgrass, and their respective growth rates, to the sensitivities of the Level 5 complexity model when compared to the Level 4 complexity model. In considering the management time-series plots, those trends with a regional mean density ending over 400 g/m² were considered expansive cattail growth. For Level 4, these plots are found in Figures E-1, E-2, and E-3, and compiled in Figure 4-8. The time series representing expansive growth are highlighted in blue in Figure E-4. The common trend among the expansive growth plots is a high soil phosphorus concentration. For Level 5, these plots are found in Figures F-1, F-2, and F-3. The time series representing expansive growth are highlighted in blue in Figure F-4. The only immediately noticeable common trend among the expansive growth time series is a high soil phosphorus concentration where there are also high initial sawgrass densities. For lower initial sawgrass densities, the Level 5 complexity model tends to dominate across a range of combinations of depth and soil phosphorus concentrations. There is a notable lack of plots whose regional mean density drops close to zero, and this is due to the previously noted (Lagerwall et al., 2011) elevated minimum density predictions of the Level 4 and 5 model complexities. In looking for management solutions that reduce, or at least do not increase, regional cattail densities, one can consider those plots in Figures E-1, E-2, E-3 and F-1, F-2, F-3 which fall below the 200 g/m² mark, or at least level out enough so that further increases are ruled out. These are summarized in Figure E-5 and Figure F-5 for the Level 4 and Level 5
complexities respectively, where the decreasing or stagnant time series are highlighted in red. The common theme was a low soil phosphorus concentration; or, in the case of a low initial sawgrass density (typically representative of areas currently dominated by cattail), a depth above the medium (0.5 m) and a linearly decreasing soil phosphorus concentration were the primary causes of a stagnant cattail density trend. These were, specifically, management scenarios 12 and 13, which represent combinations of high and medium depth with a low initial sawgrass concentration and a linearly decreasing soil phosphorus concentration. These plots level off after about half the simulation period, which equates to roughly 15 years and a threshold soil phosphorus concentration of roughly 750 mg/kg.

Discussion

A GUSA was conducted to test the importance of various management scenarios using the Level 4 and 5 complexity models, as well as the effectiveness of using zonal statistics. Management scenarios included high, medium, and low initial water depths and soil phosphorus concentrations, as well as annually alternating (in-phase and out-of-phase, high-low) water depths and soil phosphorus concentrations, and a steadily decreasing soil phosphorus concentration. A selection of these scenarios, with initial sawgrass densities set as high, medium, or low, were plotted over time to gain further insight into possible management practices and their expected results. From the GUSA, it can be concluded that a regional analysis is an acceptable representation of the various zones within that region, or that the statistics for a zonal analysis can be used as a fair representation of the region (using the current Level 4 and Level 5 complexity models). This is important for data collection and mapping programs, to ensure the most accurate data representation possible. Again, depth is a
highly influential factor when considering management scenarios, with the initial densities of cattail and sawgrass also coming into play. From the management time-series analysis, the high soil phosphorus requirement for expansive growth is consistent with the literature (Newman et al., 1998; Miao & Sklar, 1998), with the depth and initial sawgrass parameters accounting for the observed variation in these plots. The lack of significantly decreasing trends is possibly due to the averaging out of the three influencing parameters as they are calculated in the Level 4 and 5 complexity models. After analyzing the stagnant trends, with final densities either below the 200 g/m² mark or following a level trend below the 400 g/m² mark, the implicit threshold soil phosphorus concentration of roughly 750 mg/kg is of importance, because when it is combined with a relatively high depth (>= 0.5 m) the cattail densities can be relatively well managed. It is possible to find other trends, specifically Level 4 management scenarios 3, 6, 7, 8, 16, 17, 20, and 21, which match the stagnant growth requirements, and see that they too have visibly lower densities and less aggressive trends than the other management scenarios. This is significant because, when one considers the interplay of depth, soil phosphorus, and sawgrass, it is not necessary to reduce soil phosphorus completely, provided depth is relatively high (at least for certain periods). These last statements require a caveat, in that the values provided do not necessarily reflect real-world threshold values. They illustrate only that there exist thresholds of soil phosphorus and water depth which, when managed, can be used to control the cattail population. It must also be noted that a significantly increased water depth will also kill off most other vegetative species, including sawgrass, which is undesirable.
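As a worked check of the threshold arithmetic above: under the linearly decreasing phosphorus scenario (3% of the initial 1500 mg/kg, i.e. 45 mg/kg per annum), the approximate 750 mg/kg threshold is crossed at about 16.7 years, consistent with the "roughly half the simulation period" observation. The threshold value itself is the rough figure inferred in the text, not a measured quantity.

```python
# Time for the decreasing-phosphorus scenario to reach the inferred threshold
p0 = 1500.0         # initial high concentration, mg/kg
rate = 0.03 * p0    # 3% of the initial value per annum = 45 mg/kg per year
threshold = 750.0   # approximate soil P threshold noted in the text, mg/kg
years_to_threshold = (p0 - threshold) / rate   # about 16.7 of the 30 years
```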


It must be noted that, due to the structure of the Level 4 complexity, provided that there is a long enough simulation period, the sawgrass density will achieve its maximum value (because it is based on a purely logistic function), as well as its maximum impact on the cattail density. This is not seen as a major problem in short-term simulation periods as in Lagerwall et al. (2011), but it can become a major bias in longer-term simulations as in Lagerwall et al. (2011a) and here. For this reason, a Level 5 complexity model with its feedback effect, despite its higher uncertainty, could be considered the most applicable model for management purposes due to its more realistic structure. The initial results from this analysis are positive and confirm trends found in the literature (Newman et al., 1998; Miao & Sklar, 1998). It is a complex task to manage the cattail expansion in this region, requiring the close management and monitoring of water depth and soil phosphorus concentration, and possibly other factors not considered in these model complexities. However, this modeling framework, with user-definable complexities and management scenarios, can be considered a useful tool in analyzing many more alternatives which could be used to aid management decisions in the future.
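The structural difference between the two levels can be sketched numerically. This is a simplified illustration of the logistic updates in the Appendix A input file: the growth rates, combined HSI value and time stepping are placeholders, and only the maximum densities (1240 and 1958 g/m2) and the 0.84 inhibition gradient are taken from the model inputs.

```python
# Sketch of the Level 4 vs. Level 5 structural difference (illustrative only).
# Both levels grow cattail logistically toward a carrying capacity scaled by a
# combined habitat suitability index (HSI); Level 5 additionally lets cattail
# density suppress the sawgrass carrying capacity (the feedback mechanism).
CAT_MAX = 1240.0   # maximum cattail density, g/m^2 (Appendix A)
SAW_MAX = 1958.0   # maximum sawgrass density, g/m^2 (Appendix A)

def logistic_step(density, growth_rate, capacity):
    # dD = r * D * (1 - D / K), with K floored to avoid division by zero
    capacity = max(capacity, 0.001)
    return density + growth_rate * density * (1.0 - density / capacity)

def simulate(level, steps=200, r_cat=0.05, r_saw=0.05, com_hsi=0.6, inhibition=0.84):
    cat, saw = 50.0, 500.0
    for _ in range(steps):
        if level == 5:
            # Level 5 feedback: sawgrass capacity shrinks as cattail expands
            saw_hsi = max(1.0 - inhibition * (cat / CAT_MAX), 0.001)
        else:
            saw_hsi = 1.0  # Level 4: sawgrass follows a plain logistic curve
        saw = logistic_step(saw, r_saw, SAW_MAX * saw_hsi)
        # Cattail capacity scaled by the combined depth/phosphorus/sawgrass HSI
        cat = logistic_step(cat, r_cat, CAT_MAX * com_hsi)
    return cat, saw

cat4, saw4 = simulate(level=4)
cat5, saw5 = simulate(level=5)
# With the feedback enabled, sawgrass settles below its Level 4 asymptote
print(saw5 < saw4)  # True
```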


Table 4-1. Probability distributions of model input factors used in global uncertainty and sensitivity analysis

Parameter Definition                      Symbol       Distribution                 Units
Cattail Initial Densities                 Cattail      D(1 to 250) SIS maps         -
Cattail growth rate                       CATGF        T(1.0E-6, 1.0E-7, 1.0E-9)    g/g.s
Regional water depth                      Depth        D(1, 5)                      ft
Regional soil phosphorus concentration    Phosphorus   D(1, 6)                      mg/kg
Sawgrass Initial Densities                Sawgrass     U(0, 1958)                   g/m2
Sawgrass growth rate                      SAWGF        T(1.0E-6, 1.0E-7, 1.0E-9)    g/g.s


Table 4-2. Management distributions for soil phosphorus, depth and sawgrass parameters

Distribution   Depth             Phosphorus                  Sawgrass
1              High              High                        High
2              Medium            Medium                      Medium
3              Low               Low                         Low
4              Even Years High   Even Years High             n/a
5              Even Years Low    Even Years Low              n/a
6              n/a               Decreasing (max 3% p.a.)    n/a


Table 4-3. Example of management scenarios used for time series analysis. This table repeats for medium and high initial sawgrass densities.

Management Scenario             Depth       Soil Phosphorus Concentration   Sawgrass Initial Density
1                               High        High                            Low
2                               Low         Low                             Low
3                               High        Low                             Low
4                               Low         High                            Low
5                               Medium      High                            Low
6                               Medium      Medium                          Low
7                               Medium      Low                             Low
8                               High        Medium                          Low
9                               Low         Medium                          Low
10 (Alternating in sync)        Even High   Even High                       Low
11 (Alternating out of sync)    Even Low    Even High                       Low
12                              High        Linearly Decreasing             Low
13                              Medium      Linearly Decreasing             Low
14                              Low         Linearly Decreasing             Low
15                              Even High   Linearly Decreasing             Low
16                              High        Even High                       Low
17                              Medium      Even High                       Low
18                              Low         Even High                       Low
19                              Even High   High                            Low
20                              Even High   Medium                          Low
21                              Even High   Low                             Low


Table 4-4. Values associated with management scenarios

Level                                 Depth (m)   Soil Phosphorus Concentration (mg/kg)   Sawgrass Initial Density (g/m2)
High                                  3           1500                                    1500
Medium                                0.5         600                                     900
Low                                   0           0                                       300
Alternating (Even High or Even Low)   3 and 0     1500 and 0                              n/a
Linearly Decreasing                   n/a         1500 to 150                             n/a


Table 4-5. Summary table of Sobol first-order sensitivity indices of delta mean (DM) for the Region (R) and NE, CE and SW zones

                          Level 4                            Level 5
             R_DM    NE_DM   CE_DM   SW_DM     R_DM    NE_DM   CE_DM   SW_DM
CATGF        0.001   0.002   0.002   0.001     0.006   0.004   0.006   0.005
SAWGF        0.034   0.032   0.034   0.032     0.048   0.046   0.047   0.046
Depth        0.016   0.016   0.016   0.017     0.010   0.010   0.010   0.010
Phosphorus   0.008   0.007   0.009   0.006     0.013   0.012   0.014   0.012
Sawgrass     0.011   0.009   0.011   0.009     0.013   0.011   0.012   0.011
Cattail      0.021   0.023   0.020   0.024     0.026   0.027   0.025   0.028


Table 4-6. Summary table of Sobol total-order sensitivity indices of delta mean (DM) for the Region (R) and NE, CE and SW zones

                          Level 4                            Level 5
             R_DM    NE_DM   CE_DM   SW_DM     R_DM    NE_DM   CE_DM   SW_DM
CATGF        1.093   1.093   1.091   1.093     1.101   1.102   1.100   1.101
SAWGF        1.018   1.017   1.019   1.015     1.020   1.019   1.020   1.018
Depth        1.043   1.044   1.042   1.046     1.037   1.037   1.035   1.039
Phosphorus   1.038   1.037   1.037   1.040     1.036   1.035   1.035   1.037
Sawgrass     1.001   1.002   1.001   1.002     0.991   0.991   0.991   0.991
Cattail      1.046   1.046   1.045   1.046     1.050   1.050   1.049   1.050
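For reference, indices of the kind reported in Tables 4-5 and 4-6 can be estimated with a brute-force Monte Carlo scheme of the following form. This is an illustrative sketch on a toy two-factor model, not the coupled RSM/TARSE model: it uses the Saltelli radial sampling design with the standard first-order estimator and Jansen's total-order estimator, and the sample count is arbitrary.

```python
# Minimal Monte Carlo sketch of Sobol first- and total-order estimators.
import random

def sobol_indices(f, n_params, n_samples=1 << 16, seed=0):
    rng = random.Random(seed)
    # Two independent sample matrices over the unit hypercube
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [f(r) for r in A]
    fB = [f(r) for r in B]
    allf = fA + fB
    mean = sum(allf) / len(allf)
    var = sum((v - mean) ** 2 for v in allf) / (len(allf) - 1)
    first, total = [], []
    for i in range(n_params):
        fABi = []
        for a, b in zip(A, B):
            row = list(a)
            row[i] = b[i]  # resample only factor i (radial design)
            fABi.append(f(row))
        # First order: S_i = E[f(B) * (f(AB_i) - f(A))] / Var
        first.append(sum(fb * (fi - fa)
                         for fb, fi, fa in zip(fB, fABi, fA)) / n_samples / var)
        # Total order (Jansen): S_Ti = E[(f(A) - f(AB_i))^2] / (2 * Var)
        total.append(sum((fa - fi) ** 2
                         for fa, fi in zip(fA, fABi)) / (2 * n_samples) / var)
    return first, total

# Toy additive model dominated by the first factor (analytic S1=0.8, S2=0.2)
f = lambda x: 4.0 * x[0] + 2.0 * x[1]
S, ST = sobol_indices(f, 2)
print(round(S[0], 2), round(S[1], 2))  # close to the analytic 0.8 and 0.2
```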


Figure 4-1. Test site, Water Conservation Area 2A (WCA2A), in the northern Everglades. Green squares represent inlet and outlet control structures; blue lines represent canal structures. Triangles represent the mesh used for simulation, with green triangles representing the border cells used in the central difference method.


Figure 4-2. Water Conservation Area 2A (WCA2A) triangular mesh, with numbered cells, and NE, CE and SW zones.


Figure 4-3. Model uncertainty plots in frequency format for a) Regional, b) North East Zone, c) Central Zone, and d) South West Zone.


Figure 4-4. Model uncertainty plots in cumulative distribution format for a) Regional, b) North East Zone, c) Central Zone, and d) South West Zone.


Figure 4-5. Model uncertainty plots in cumulative distribution (a) and frequency (b) format for Level 5 complexity regional DM.


Figure 4-6. Model 95% confidence interval, from uncertainty plots in Figures 4-3 and 4-4, for Region (1), High/Northeast (2), Medium/Central (3), and Low/Southwest (4) zones for Level 4 (L4) and Level 5 (L5) model complexities.


Figure 4-7. Model sensitivity for first order (a) and total order (b) of parameters CatGF, SawGF, DepthM, PhosphorusM, Sawgrass, and Cattail, for the entire Region (1 and 5) as well as High/Northeast (2 and 6), Medium/Central (3 and 7), and Low/Southwest (4 and 8) zones for Levels 4 and 5 respectively.


Figure 4-8. Time series analysis of Level 4. Plot of daily regional mean density for 30 years for management scenarios 1 through 63. A red line is drawn horizontally along the 400 g/m2 density, and this is used as the threshold to distinguish expansive cattail growth.


CHAPTER 5
BEYOND CATTAILS: LIMITATIONS, CURRENT AND FUTURE RESEARCH

Background

As with any model undertaking, there are limitations as to what can be achieved within the time frame provided. These limitations can be divided into limitations with the current modeling effort (short term), and limitations with the model structure in general (long term). This chapter discusses these limitations, and describes efforts that have been made to address some of them. In particular, work has been performed with the goal of expanding the coupled RSM/TARSE framework to include ecology in general and not simply cattail vegetation.

Limitations (Short Term)

Limitations of the current modeling effort have been noted periodically throughout this dissertation. The most significant of these limitations are discussed here. According to Obeysekera & Rutchey (1997), the triangle size, averaging 1.1 km2 (Wang, 2009), is large enough to effectively remove all fine-scale, heterogeneous detail. This was observed when aggregating the data maps to create the input maps, and was partially addressed through the use of the mean density occurring within each element. A solution would be to reduce the size of the triangles. However, this would have an impact on the hydrological component of the model, and require a re-parameterization. Also, the computational time would become too costly, especially when expanding to a regional southern Florida scale. Including additional levels of algorithmic complexity would aid in selecting the most relevant model, and minimizing uncertainty. An example would be a Level 2b complexity, where the density-dependent logistic growth function is limited by soil phosphorus concentration alone, and not by water depth. When


considering the inclusion of additional alternate models, a completely different growth algorithm could be included, such as an exponential or von Bertalanffy (Walters & Martell, 2004) growth curve as opposed to the logistic function. Future analyses, besides the global uncertainty and sensitivity analyses considered in previous chapters, could more specifically include a phase-plane analysis to determine the stability of the Level 4 and Level 5 interactions (Hsu et al., 2000), along with more specific/realistic management scenarios.

Limitations (Long Term)

The limitations discussed in this section refer to the coupled RSM/TARSE modeling framework as it applies toward modeling ecology in general, and how it could be improved. A separate ecological library, following a similar design and implementation as TARSE, would greatly enhance the ecological modeling aspect of this coupled framework. This would provide increased code modularity, and a separate ecological input file. This would enable a stand-alone mode for when the ecology is not dependent on nutrients or hydrology, as is the case with the Level 1 (logistic growth) complexity introduced in this document. The separate input file would allow the omission of certain (currently necessary) hydrology-related input parameters, such as longitudinal dispersivity, which are required in the TARSE input file and are largely irrelevant when dealing with ecology. A desired code addition would be a hydrological feedback mechanism. This capability was mentioned by Jawitz et al. (2008), but has not yet been fully implemented. An implementation of this type of feedback mechanism would greatly enhance the possible system dynamics, and arguably provide more realism to the


model. However, this type of mechanism is likely to create issues such as non-identifiability, non-uniqueness, and equifinality (Beven, 2001). The transport provided by TARSE is hydrologically linked. This means that, currently, there is no mechanism to allow for purely ecological movement. Seed dispersal, rhizome spread, bird flight, or panther movement cannot be simulated. Two options to address this shortcoming are under development, and are discussed in the next section.

Extra Research

In order to simulate ecology in general, a movement component is required in which the ecological species has control over its movement, or at the very least, in which the movement is not necessarily directly dependent on the hydrology of the system. Two methods that address this limitation have been considered. They are incomplete and remain in the development stages. These methods are an Eulerian (grid-based) mass flow solution, and an Eulerian-Lagrangian (grid-independent) (Goodwin et al., 2006) random walk solution. The conceptual inclusion of these two algorithms is illustrated by Figure 5-1, where the FreeMove function represents the mass flow Eulerian algorithm, and the RandomWalk function represents the random walk Eulerian-Lagrangian algorithm. A flow chart representing the functioning of the FreeMove function is shown in Figure 5-2. This function was developed with a density-dependent mass flow, or spread, of Typha domingensis (cattail) in mind. With the population spreading through seed dispersal or rhizome expansion to the neighboring cells, based on the density of cattail within the current cell, the population effectively spills over into the neighboring cells. Components are labeled in the XML input file as being able to move. The procedure


follows thus: loop through all the elements within the region. For each element, loop through the components which have been marked as able to move. If the component is able to move, then determine whether movement (mass flow to the neighbor cells) should occur, based on a threshold defined by the user in the input file (density in this case). If the mass is above the threshold, spread the excess proportionately among neighbors whose mass is below the threshold, i.e., more mass is transferred to the neighbor with the lowest mass. Finally, set the mass of the current cell to the threshold value. This last step might seem presumptuous given a scenario where all the neighboring cells have a mass at the threshold value, but it can be explained in terms of a density-dependent die-off associated with the threshold value being reached. Currently, this algorithm has had only limited development and testing, and further development and testing are required before including it as a permanent feature within the TARSE framework. A UML representation of the RandomWalk function as implemented within TARSE is available in Figure 5-3. It was designed after the random walk algorithm presented by Prickett et al. (1981), with the idea of substituting the water velocity term (which drives the movement) for some other gradient, such as distance from a favorable food source, as would be the case, for example, when modelling panther movement. As it stands, the driving function is water velocity, and the particle is moved a distance in line with the velocity (advection), and then it moves a fraction of that distance perpendicular to the velocity (dispersion). The system is initialized by converting element concentrations provided in the input file into representative particles using the AddParticle function. The


velocity associated with the element in which the particle is located is used to inform the advection and dispersion of the particle. The particle may then become associated with another element, whose velocity is then used in the next time step. The number of particles within each element at the end of the model simulation is used to calculate the concentration associated with that element. This algorithm is still in development, and has various issues, including computationally expensive particle location on a triangular grid (Liu et al., 2007), rules governing region boundaries (is the particle removed or deflected back?), the possibility for particle growth (in terms of seeds, once dispersed, growing into plants), and particle clustering (representing a flock of birds with a super-particle).
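In code form, the FreeMove spill rule described above might look as follows. This is a hypothetical Python sketch of the described behavior, not the actual TARSE implementation; in particular, the headroom weighting is one plausible reading of "more mass is transferred to the neighbor with the lowest mass".

```python
# Sketch of the FreeMove mass-flow rule: excess density above a user-defined
# threshold is spread among neighbors below the threshold, weighted toward the
# neighbor with the lowest mass; the source cell is reset to the threshold.
def free_move(density, neighbors, threshold):
    """density: source-cell value; neighbors: dict of cell id -> density."""
    excess = density - threshold
    if excess <= 0:
        return density, neighbors
    below = {c: d for c, d in neighbors.items() if d < threshold}
    if not below:
        # All neighbors at/above threshold: the excess is lost, interpreted
        # as density-dependent die-off at the threshold (as argued in text)
        return threshold, neighbors
    # Weight each receiving neighbor by its remaining headroom, so the
    # lowest-mass neighbor receives the largest share of the excess
    headroom = {c: threshold - d for c, d in below.items()}
    total = sum(headroom.values())
    updated = dict(neighbors)
    for c, h in headroom.items():
        updated[c] += excess * h / total
    return threshold, updated

src, nbrs = free_move(900.0, {"e1": 100.0, "e2": 300.0, "e3": 500.0}, 400.0)
print(src)   # 400.0
print(nbrs)  # e1 receives the largest share (lowest starting mass)
```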


Figure 5-1. UML diagram of TARSE showing the included random walk and free movement modules. [Diagram labels: RSM; RunForTimeStep; Transport (Advection, Dispersion, Reaction); FreeMove (Eulerian); RandomWalk (Lagrangian)]


Figure 5-2. Flow chart representing functioning of mass flow (FreeMove) algorithm. [Chart labels: Loop through free_move components; Movement enabled?; Loop through elements; Density > threshold?; Amount to move; Loop through facets; Find neighbors; Neighbor's value > threshold?; Move weighted portion to neighbors; Set density to threshold]


Figure 5-3. UML diagram representing inclusion of random walk algorithm within TARSE. [Diagram labels: RSM; TARSE; Create RWParticleSystem; Run_RW_ForTimeStep; AddParticle; GetElementVelocity; MoveParticle; ParticleWell; Printing; Anorm; Advection; Dispersion; FindElement; Calls every time step; Object created at initialization; Called at initialization; Particle class]
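The advection-dispersion step that the MoveParticle function performs can be sketched as follows. This is an illustrative stand-in, not the TARSE code: the dispersion fraction and the uniform random step are assumptions for the sketch, and the actual scheme follows Prickett et al. (1981) more closely.

```python
# One random-walk step: advect with the local water velocity, then displace a
# random fraction of the advective distance perpendicular to the velocity.
import random

def move_particle(x, y, vx, vy, dt, disp_frac=0.1, rng=None):
    rng = rng or random.Random(42)  # seeded here only to keep the demo repeatable
    dx, dy = vx * dt, vy * dt       # advection along the velocity vector
    step = rng.uniform(-disp_frac, disp_frac)
    # The perpendicular of (dx, dy) is (-dy, dx); scale it by the random fraction
    return x + dx - dy * step, y + dy + dx * step

x, y = move_particle(0.0, 0.0, vx=1.0, vy=0.0, dt=10.0)
print(x)  # advection dominates x: exactly 10.0 here, since vy = 0
```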


CHAPTER 6
CONCLUSION

In this dissertation, an ecological implementation of the coupled RSM/TARSE model was applied towards modeling cattail dynamics in the WCA2A Everglades wetland. The preceding chapters have addressed the objectives which were laid out in the introduction. Specifically, historic cattail density distributions were simulated using five levels of increasing algorithmic complexity. The levels of complexity built on a density-dependent logistic growth function, and included a water depth influenced level; a water depth and soil phosphorus concentration influenced level; a sawgrass density, water depth and soil phosphorus concentration influenced level; and a level with a cattail density influenced sawgrass growth component. The levels of complexity were assessed using a number of statistical methods in order to determine the most accurate at predicting historical values. A global uncertainty and sensitivity analysis (GUSA) was conducted for the four different model complexities. Spatial uncertainty was accounted for in the GUSA through the inclusion of multiple alternate maps created through a sequential indicator simulation technique. From the GUSA, it was determined that the Level 4 complexity provided the most complexity (better representation of reality), with the least uncertainty, and an insignificant change in sensitivity. As such, considering the performance of the levels in matching historical data, and the GUSA, it was decided that a Level 4 complexity algorithm, which comprises a depth, soil phosphorus concentration and sawgrass interaction influenced logistic growth function, was the most relevant model for approximating reality. This is in accordance with literature that lists water depth and soil phosphorus concentration as the most influential factors for cattail expansion, and the fact that cattail are encroaching on previously sawgrass


dominated areas. The impact of alternative management scenarios was assessed through the use of a GUSA once again, and was enhanced by a time series analysis of selected scenarios. It was determined that soil phosphorus concentration, along with water depth, were highly influential factors to consider in terms of management, and that an elevated depth, along with reduced soil phosphorus concentrations, would aid in reducing cattail densities. Limitations of the current modeling effort, and of the coupled RSM/TARSE framework as it applies towards modeling ecology (RTE) as a whole, were discussed, along with current attempts to address some of these limitations, such as feedback mechanisms and hydrology-independent movement. The modular structure of the input file allows one to simultaneously study multiple theories (levels of complexity) regarding a specific ecosystem or species of concern. Provided adequate data, this model framework could be immediately applied towards modeling other invasive species in the Everglades, such as Melaleuca.


APPENDIX A
SAMPLE XML INPUT FILES

RSM Input File

]> &levee_bc; &wellseep;


&mesh_bc; </mesh_bc> &HPM; &conveyance;
&canal_index; &canal_bc; &segmentSource; &junction_blocks;


&levee; <globalmonitor attr="segmenthead"> ->



TARSE Input File

backward_euler -> runge_kutta
./output/input_store_info.xml
components_in_common
water_column_p 10.0
gw_p 10.0
longitudinal_dispersivity 10.0
transverse_dispersivity 10.0


molecular_diffusion 0.00001
surface_porosity 1.0
subsurface_longitudinal_dispersivity 10.0
subsurface_transverse_dispersivity 10.0
subsurface_molecular_diffusion 0.00001
subsurface_porosity 1.0
k_st 0.0
k_rs 0.0
all all


surface_water
settled_p ../Input/Phosphorus/phosphorus_1990.dat
ecology
stab_cat_L1 ../Input/Cattail/1991_cat_mean.dat
stab_cat_L2 ../Input/Cattail/1991_cat_mean.dat
stab_cat_L3 ../Input/Cattail/1991_cat_mean.dat
stab_cat_L4 ../Input/Cattail/1991_cat_mean.dat
stab_cat_L5 ../Input/Cattail/1991_cat_mean.dat


stab_cat_L1pre ../Input/Cattail/1991_cat_mean.dat
stab_cat_L2pre ../Input/Cattail/1991_cat_mean.dat
stab_cat_L3pre ../Input/Cattail/1991_cat_mean.dat
stab_cat_L4pre ../Input/Cattail/1991_cat_mean.dat
stab_cat_L5pre ../Input/Cattail/1991_cat_mean.dat
stab_saw_L1 ../Input/Sawgrass/1991_saw_mean.dat
stab_saw_L1a ../Input/Sawgrass/1991_saw_mean.dat
stab_saw_L1pre ../Input/Sawgrass/1991_saw_mean.dat
stab_saw_L1apre


../Input/Sawgrass/1991_saw_mean.dat
cat_depth_HSI 1
cat_p_HSI 1
cat_saw_HSI 1
cat_saw_HSIa 1
saw_cat_HSIa 1
cat_com_HSI_L3 1
cat_com_HSI_L4 1
cat_com_HSI_L5 1


cat_inthsi_depth_Hi 1
cat_inthsi_depth_Lo 1
daycount 0.00
yearcount 1.00
cat_grow_factor 6.7e-09
cat_die_factor 7.9e-09
cat_max_dens 1240.00
cat_init_dens ../Input/Cattail/1991_cat_mean.dat
saw_grow_factor 1.0e-09


saw_die_factor 8.0e-08
saw_max_dens 1958.00
saw_init_dens ../Input/Sawgrass/1991_saw_mean.dat
cat_min_depth 0.16
cat_peak_depth 2.30
cat_max_depth 3.77
cat_depth_risingDenom 3.66
cat_depth_fallingDenom 3.6
cat_min_p 200
cat_max_p 1800
cat_p_diff


1034
cat_p_denom 144
cat_saw_max 1
cat_saw_grad -0.84
saw_cat_max 1
saw_cat_grad -0.84
daycount 1/86400
yearcount if(floor((daycount/365))>=yearcount, yearcount+1, yearcount)
cat_inthsi_depth_Hi if(depth>cat_max_depth, 0.01, 1-((depth-cat_peak_depth)/cat_depth_fallingDenom))


cat_inthsi_depth_Lo if(depth>cat_min_depth, 1-((cat_peak_depth-depth)/cat_depth_risingDenom), 0.01)
cat_depth_HSI if(depth>cat_peak_depth, cat_inthsi_depth_Hi, cat_inthsi_depth_Lo)
cat_p_HSI (1+exp(-(settled_p-cat_p_diff)/cat_p_denom))^-1
cat_saw_HSI cat_saw_max+(cat_saw_grad*(stab_saw_L1/saw_max_dens))
cat_saw_HSIa cat_saw_max+(cat_saw_grad*(stab_saw_L1a/saw_max_dens))
saw_cat_HSIa saw_cat_max+(saw_cat_grad*(stab_cat_L5/cat_max_dens))
cat_com_HSI_L3 (cat_depth_HSI+cat_p_HSI)/2
cat_com_HSI_L4 (cat_depth_HSI+cat_p_HSI+cat_saw_HSI)/3
cat_com_HSI_L5 (cat_depth_HSI+cat_p_HSI+cat_saw_HSIa)/3
cat_com_HSI_L3 cat_depth_HSI*cat_p_HSI


cat_com_HSI_L4 cat_depth_HSI*cat_p_HSI*cat_saw_HSI
cat_com_HSI_L5 cat_depth_HSI*cat_p_HSI*cat_saw_HSIa
->
stab_cat_L1 cat_grow_factor*stab_cat_L1*(1-(stab_cat_L1/stab_cat_L1pre))
stab_cat_L1pre cat_max_dens*1
stab_cat_L2 cat_grow_factor*stab_cat_L2*(1-(stab_cat_L2/stab_cat_L2pre))
stab_cat_L2pre if(cat_depth_HSI>0.001, cat_max_dens*cat_depth_HSI, 0.001)
stab_cat_L3 cat_grow_factor*stab_cat_L3*(1-(stab_cat_L3/stab_cat_L3pre))
stab_cat_L3pre if(cat_com_HSI_L3>0.001, cat_max_dens*cat_com_HSI_L3, 0.001)
stab_cat_L4 cat_grow_factor*stab_cat_L4*(1-(stab_cat_L4/stab_cat_L4pre))
stab_cat_L4pre if(cat_com_HSI_L4>0.001, cat_max_dens*cat_com_HSI_L4, 0.001)


stab_cat_L5 cat_grow_factor*stab_cat_L5*(1-(stab_cat_L5/stab_cat_L5pre))
stab_cat_L5pre if(cat_com_HSI_L5>0.001, cat_max_dens*cat_com_HSI_L5, 0.001)
stab_saw_L1 saw_grow_factor*stab_saw_L1*(1-(stab_saw_L1/stab_saw_L1pre))
stab_saw_L1pre saw_max_dens*1
stab_saw_L1a saw_grow_factor*stab_saw_L1a*(1-(stab_saw_L1a/stab_saw_L1apre))
stab_saw_L1apre if(saw_cat_HSIa>0.001, saw_max_dens*saw_cat_HSIa, 0.001)
stab_cat_L1
stab_cat_L2
stab_cat_L3


stab_cat_L4
stab_cat_L5
stab_saw_L1
stab_saw_L1a
->
daycount
yearcount
->
./output/xmlOut_1991to1995.xml



APPENDIX B
ANALYSIS SCRIPTS

Main.py

import sys, os
import RunHSE, FormattingXML, ParsingXML, Analysis, Correlation, MoransI, Abundance, XmlToClasses, Timeseries
flush = sys.stdout.flush()

######################################################
###################RUN HSE############################
######################################################
def run():
    #flush
    os.system('clear')
    #flush
    print 'Running Main'
    #flush
    os.system('rm -rf ./out*')
    os.system('rm -rf ./analysis/')
    print "Running Model"
    flush
    if os.path.exists('./output/'):
        print 'exists clearing previous contents'
        os.system('rm -rf ./output/*')
        RunHSE.runhse()
    else:
        print 'doesnt exist creating directory'
        os.system('mkdir ./output/')
        RunHSE.runhse()

    ######################################################
    #################LOG ANALYSIS#########################
    ######################################################
    print '\nStarting Log'
    padst = Analysis.padst
    loggingfile = open('./log/Anal_Log.dat','w')
    header = 'Logging Analysis Efficiencies\n'
    loggingfile.write(header)
    #title = padst('Model')+'\t'+padst('1 to 1 Box')+'\t'+padst('Moran\'s I')+'\t'+padst('Abundance')+'\n'
    #loggingfile.write(title)
    #(str(n) + '\n')
    loggingfile.close()


    ######################################################
    ###################ANALYSIS###########################
    ######################################################
    #Level 1 Stat
    print "\nPerforming Analysis"
    flush
    if os.path.exists('./analysis/'):
        print 'exists clearing previous contents'
        os.system('rm -rf ./analysis/*')
        Analysis.analysis() #Level 1
    else:
        print 'doesnt exist creating directory'
        os.system('mkdir ./analysis/')
        Analysis.analysis() #Level 1

    ######################################################
    ################CORRELATION II########################
    ######################################################
    #Level 2 Stat
    print "\nCalculating Correlation_ii"
    flush
    if os.path.exists('./analysis/'):
        print 'exists clearing previous contents'
        os.system('rm -rf ./analysis/*semivarii*')
        MoransI.correlation_ii() #Level 2
    else:
        print 'doesnt exist. run analysis first'

    ######################################################
    ##################ABUNDANCE###########################
    ######################################################
    #Level 3 Stat
    print "\nCalculating Abundance Area"
    flush
    if os.path.exists('./analysis/'):
        print 'exists clearing previous contents'
        os.system('rm -rf ./analysis/*abund*')
        Abundance.abundance() #Level 3
    else:
        print 'doesnt exist. run analysis first'

    ######################################################
    ##################TIMESERIES##########################
    ######################################################
    #Extra
    print "\nDetermining TimeSeries"
    flush


    if os.path.exists('./analysis/'):
        print 'exists clearing previous contents'
        os.system('rm -rf ./analysis/*TS*')
        Timeseries.timeseries() #Level 3
    else:
        print 'doesnt exist. run analysis first'

    print "\nFinished Run :)"
    flush

#####
#RUN#
#####
run()


RunHSE.py

#####################
#Automating HSE Runs#
#####################
import string, os

def runhse():
#    cmd = 'find -name "run_seep_p1l3*.xml" -print' #find is a standard unix tool
    cmd = 'find -name "run_19*.xml" -print' #find is a standard unix tool
#    cmd = 'find -name "run_1995to2003.xml" -print' #find is a standard unix tool
    for file in os.popen(cmd).readlines(): #run find command
        num=1
        full_name = file[:-1] #strip '\n'
        print '\t'+full_name
        #run hse with full_name as input
#        os.system('/opt/local/share2/rsm/RSM-WQ/wq_mod/testing/hse '+full_name+' >/dev/null' + '&') #no output and run in background
        os.system('/opt/local/share2/rsm/RSM-WQ/wq_mod/testing/hse '+full_name+' >/dev/null') #run in foreground
        os.system('\n')
#        os.system('\n')


Analysis.py

##########
#Analysis#
##########
import string, os, math, sys, matplotlib.pyplot, numpy, pprint

###########Sorting Dictionary##########
# http://code.activestate.com/recipes/52306 - to sort a dictionary
def sortedDictValues1(adict):
    vector=[]
    vec_key=adict.keys()
    vec_key.sort()
    for i in vec_key:
        tmpv = [i,adict[i]]
        vector.append(tmpv)
    return vector
###########Sorting Dictionary##########

###########padded string conversion##########
def padst(input):
    newin = str(input).ljust(10)
    return newin
###########padded string conversion##########

###MEAN###
def calc_mean(input):
    sum=0
    for i in range(len(input)-1):
        if (input[i+1]<0) or (str(input[i+1])=='nan') or (str(input[i+1])=='inf') or (str(input[i+1])=='-inf'):
            mean=9999
            return mean
        else:
            sum+=input[i+1]
    #print len(input)
    mean = sum/(len(input))
    return mean
###MEAN###

###Variance###
def calc_var(mean, input):
    variance=0
    sum=0
    for i in range(len(input)-1):


140 if(input[i+1]<0) or (str(input[i+1 ])=='nan') or (str(input[i+1])=='inf') or (str(input[i+1])==' inf'): variance=9999 return variance else: sum += (input[i+1] mean)**2 variance = sum/(len(input) 1) return variance ###Variance### ###find, format, parse, calculate### def quickParse(full_name): contents1, contents2, contents3, contents4, contents5, contents6, contents7 = [], [], [], [], [], [], [] vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1, vec_SL1a = [], [], [], [], [], [], [] vegvalstr1, vegvalstr2, vegvalstr3, vegvalstr4, vegvalstr5, vegvalstr6, vegvalstr7 = '', '', '', '', '', '', '' openfile = open(full_name,'r') filebuffer = openfile.readlines() openfile.close() MainCount=0 for line in range(len(fileb uffer) 1,0, 1): #move from the last line up linestr = filebuffer[line] startptr1 = linestr.find('') if (startptr1 != 1): endptr1 = linestr.find('',startptr1) vegvalstr1 = linestr[startptr1+27:endptr1] contents1 = vegvalstr1.split(',') counter1=1 for iter1 in contents1: if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,1 22,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220 ,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339 ,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,4 70,480,481,482,483,484,488,489,490,491,492,493,494 ,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(counter1)+',') == 1): number1 = float(iter1) vec_L1.append(number1) counter1=co unter1+1 MainCount=MainCount+1 startptr2 = linestr.find('') if (startptr2 != 1): endptr2 = linestr.find('',startptr2)


141 vegvalstr2 = linestr[startptr2+27:endptr2] contents2 = vegvalstr2.split(',') counter2=1 for iter2 in contents2: if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,122,123,124,125,126,152,153,154,155,1 83,184,185,186,217,218,219,220 ,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339 ,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,4 91,492,493,494 ,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(counter2)+',') == 1): number2 = float(iter2) vec_L2.append(number2) counter2=counter2+1 MainCount=MainC ount+1 startptr3 = linestr.find('') if (startptr3 != 1): endptr3 = linestr.find('',startptr3) vegvalstr3 = linestr[startptr3+27:endptr3] contents3 = vegvalstr3.split (',') counter3=1 for iter3 in contents3: if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220 ,256,2 57,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339 ,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494 ,495,496,497,498,499,5 00,501,502,505,506,507,508,509,510,".find(','+str(counter3)+',') == 1): number3 = float(iter3) vec_L3.append(number3) counter3=counter3+1 MainCount=MainCount+1 startptr4 = linestr.f ind('') if (startptr4 != 1): endptr4 = linestr.find('',startptr4) vegvalstr4 = linestr[startptr4+27:endptr4] contents4 = vegvalstr4.split(',') counter4=1 for iter4 in contents4: if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220 ,256,257,258,259,260,261,262,298,299,300,3 01,302,303,304,305,335,336,337,338,339


142 ,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494 ,495,496,497,498,499,500,501,502,505,506,507,508,509,510," .find(','+str(counter4)+',') == 1): number4 = float(iter4) vec_L4.append(number4) counter4=counter4+1 MainCount=MainCount+1 startptr5 = linestr.find('') if (startptr5 != 1): endptr5 = linestr.find('',startptr5) vegvalstr5 = linestr[startptr5+27:endptr5] contents5 = vegvalstr5.split(',') counter5=1 for iter5 in contents5: if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220 ,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,3 39 ,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494 ,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(counter5)+',') == 1): number5 = float(iter5) vec_L5.append(number5) counter5=counter5+1 MainCount=MainCount+1 startptr6 = linestr.find('') ###Slight difference here if (startptr6 != 1): endptr6 = linestr.find('',startptr6) vegvalstr6 = linestr[startptr6+27:endptr6] contents6 = vegvalstr6.split(',') counter6=1 for iter6 in contents6: i f (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,9 8,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220 ,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339 ,340,341 ,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457 ,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494 ,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(counter6)+',') == 1): number6 = float(iter6) vec_SL1.append(number6) ###Slight difference here counter6=counter6+1

            MainCount=MainCount+1
        if MainCount==6:
            break
    #print count
    return vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1
###find, format, parse, calculate###

###findDataMean###
def findDataMean(year):
    cmd_dat = 'find ./raw_data/ -name "*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines(): #run find command
        full_nameD = fileD[:-1] #strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        #print dat_match_year, dat_match_type
        if ( (dat_match_type=="cat") and (dat_match_year==year) ) :
            #print "observed CAT data:",nameD,dat_match_year
            vec_datMEAN_cat = []
            file_data_cat = open(full_nameD,'r')
            data_buff_cat = file_data_cat.readlines()
            file_data_cat.close()
            #print data_buff_cat
            CATcounter=1
            for lineC in data_buff_cat:
                if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(CATcounter)+',')== -1):
                    itemsC = lineC.split()
                    data_datC = float(itemsC[1])
                    vec_datMEAN_cat.append(data_datC)
                CATcounter=CATcounter+1
        if ( (dat_match_type=="saw") and (dat_match_year==year) ) :
            #print "observed SAW data:",nameD,dat_match_year
            vec_datMEAN_saw = []
            file_data_saw = open(full_nameD,'r')
            data_buff_saw = file_data_saw.readlines()
            file_data_saw.close()
            SAWcounter=1
            for lineS in data_buff_saw:

                if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(SAWcounter)+',')== -1):
                    itemsS = lineS.split()
                    data_datS = float(itemsS[1])
                    vec_datMEAN_saw.append(data_datS)
                SAWcounter=SAWcounter+1
    return vec_datMEAN_cat, vec_datMEAN_saw
###findDataMean###

###findDataSTDEV###
def findDataSTDEV(year):
    cmd_dat = 'find ./raw_data/ -name "*stdev.csv" -print'
    for fileD in os.popen(cmd_dat).readlines(): #run find command
        full_nameD = fileD[:-1] #strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-4]
        #print nameD
        dat_match_year = full_nameD[11:-14]
        dat_match_type = full_nameD[16:-10]
        #print dat_match_year, dat_match_type
        if ( (dat_match_type=="cat") and (dat_match_year==year) ) :
            #print "observed CAT data:",nameD,dat_match_year
            vec_datSTDEV_cat = []
            file_data_cat = open(full_nameD,'r')
            data_buff_cat = file_data_cat.readlines()
            file_data_cat.close()
            #print data_buff_cat
            CATcounter=1
            for lineC in data_buff_cat:
                if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(CATcounter)+',')== -1):
                    itemsC = lineC.split()
                    data_datC = float(itemsC[1])
                    vec_datSTDEV_cat.append(data_datC)

                CATcounter=CATcounter+1
        if ( (dat_match_type=="saw") and (dat_match_year==year) ) :
            #print "observed SAW data:",nameD,dat_match_year
            vec_datSTDEV_saw = []
            file_data_saw = open(full_nameD,'r')
            data_buff_saw = file_data_saw.readlines()
            file_data_saw.close()
            SAWcounter=1
            for lineS in data_buff_saw:
                if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(SAWcounter)+',')== -1):
                    itemsS = lineS.split()
                    data_datS = float(itemsS[1])
                    vec_datSTDEV_saw.append(data_datS)
                SAWcounter=SAWcounter+1
    #print len(vec_datSTDEV_cat)
    return vec_datSTDEV_cat, vec_datSTDEV_saw
###findDataSTDEV###

###Calculate Difference###
def calcDiff(Invec,datavec):
    retvec=[]
    #print len(Invec[0])
    for vec in Invec:
        #print vec[1]
        diffvec=[]
        difference=0
        counter=0
        #print len(vec), len(datavec)
        for iter in vec:
            difference = float(iter) - float(datavec[counter])
            diffvec.append(difference)
            counter=counter+1
        #calc diff etc here
        #print len(diffvec)
        retvec.append(diffvec)
    return retvec
###Calculate Difference###
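Throughout these scripts, cells outside the WCA2A model domain are skipped by searching a hard-coded comma-delimited string of excluded cell indices with `str.find`. As a rough stand-alone illustration (not part of the dissertation's scripts, and using a short hypothetical exclusion list rather than the real one), the idiom and a clearer set-based equivalent look like this:

```python
# Hypothetical, shortened exclusion list for illustration only; the
# scripts above embed the full WCA2A boundary-cell list in this format.
EXCLUDED = ",1,2,3,4,5,7,8,"

def in_domain(cell_id):
    # Mirrors the listing's test: a cell is kept when the substring
    # ','+str(cell_id)+',' is NOT found (find returns -1).
    return EXCLUDED.find(',' + str(cell_id) + ',') == -1

# Behaviorally equivalent, clearer form with a set:
EXCLUDED_SET = {1, 2, 3, 4, 5, 7, 8}

def in_domain_set(cell_id):
    return cell_id not in EXCLUDED_SET
```

The surrounding commas in the search key prevent, say, cell 1 from matching inside "41," which is why the embedded string both starts and ends with a comma.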

###Calculate ABSDifference###
def calcABSdiff(Invec,datavec):
    retvec=[]
    #print len(Invec)
    for vec in Invec:
        #print vec[1]
        diffvec=[]
        difference=0
        counter=0
        #print len(vec), len(datavec)
        for iter in vec:
            difference = abs(float(iter) - float(datavec[counter]))
            diffvec.append(difference)
            counter=counter+1
        #calc diff etc here
        #print len(diffvec)
        retvec.append(diffvec)
    return retvec
###Calculate ABSDifference###

###Calculate Class Difference###
def classDiff(Invec,datastdev):
    retvec=[]
    #print len(Invec)
    for vec in Invec:
        #print vec[1]
        Cdiffvec=[]
        Cdifference=0
        counter=0
        #print Invec
        #print '\n'
        #print len(vec), len(datastdev)
        for iter in vec:
            #print counter, iter
            if float(iter)<20.0:
                Cdifference=0
                Cdiffvec.append(Cdifference)
            elif float(iter)<200:
                Cdifference=1
                Cdiffvec.append(Cdifference)
            elif float(iter)<400:
                Cdifference=2
                Cdiffvec.append(Cdifference)
            else:
                Cdifference=3
                Cdiffvec.append(Cdifference)

            counter=counter+1
        #print len(Cdiffvec)
        retvec.append(Cdiffvec)
    return retvec
###Calculate Class Difference###

###Nash Sutcliffe###
def NashSutcliffe(Invec, data):
    retvec=[]
    for model in Invec:
        Sxx=0
        Syy=0
        RSQ=0
        mean_data = calc_mean(data)
        for i in range(0,len(data)-1,1):
            #if(data[i]<0) or (str(data[i])=='nan') or (str(data[i])=='inf') or (str(data[i])=='-inf'):
            if(str(model[i])=='nan') or (str(model[i])=='inf') or (str(model[i])=='-inf'):
                #RSQ=-9999
                #return RSQ
                print i, model[i]
                break
            else:
                Sxx+=(data[i]-model[i])**2
                Syy+=(data[i]-mean_data)**2
        RSQ = 1-Sxx/Syy
        retvec.append(RSQ)
    return retvec
###Nash Sutcliffe###

###BoxPlot###
def BoxPlot(datavec,Invec,start_year,match_year):
    boxvector=[]
    boxvector.append(datavec)
    for iter in Invec:
        boxvector.append(iter)
    #print boxvector
    matplotlib.pyplot.clf()
    matplotlib.pyplot.hold(True)
    if len(Invec)==5:
        matplotlib.pyplot.figure().add_subplot(111)
        matplotlib.pyplot.figure().add_subplot(111).boxplot(boxvector)
#        matplotlib.pyplot.axis([0,len(boxvector)+1,-50,1400]) #0to1240
        matplotlib.pyplot.ylim(-50,1500)
        nameM = 'boxplot_'+'cat_'+start_year+'to'+match_year
        matplotlib.pyplot.subplot(111).set_xticklabels(['Data','L1','L2','L3','L4','L5',''])

    else:
        matplotlib.pyplot.figure().add_subplot(111)
        matplotlib.pyplot.figure().add_subplot(111).boxplot(boxvector)
        nameM = 'boxplot_'+'saw_'+start_year+'to'+match_year
#        matplotlib.pyplot.axis([0,len(boxvector)+1,-50,2000]) #0to1958
        matplotlib.pyplot.ylim(-50,2000)
        matplotlib.pyplot.subplot(111).set_xticklabels(['Data','SL1'])
    matplotlib.pyplot.ylabel('Mean Density (g/m^2)')
    matplotlib.pyplot.title(nameM)
    matplotlib.pyplot.savefig('./analysis/'+nameM)
    matplotlib.pyplot.hold(False)
    matplotlib.pyplot.clf()
###BoxPlot###

#######
#BEGIN#
#######
flush = sys.stdout.flush()
def analysis():
    #flush
    ###################COMPARISON###########################
    #./output/xmlOut_1991to1995.xml
    cmd = 'find ./output/ -name "xmlOut_*.xml" -print'
    for file in os.popen(cmd).readlines():
        full_name = file[:-1] #strip '\n'
        filename = full_name[9:]
        match_year = filename[13:-4]
        start_year = filename[7:-10]
        #flush
        #print full_name, filename, match_year
        #flush
        vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1 = quickParse(full_name)
        #print len(vec_L1)
        vec_datMean_cat, vec_datMean_saw = findDataMean(match_year)
        vec_datStdev_cat, vec_datStdev_saw = findDataSTDEV(match_year)
        #print len(vec_datMean_cat)
        #print [vec_L1, vec_L2, vec_L3, vec_L4, vec_L5]
        vec_diff_L1, vec_diff_L2, vec_diff_L3, vec_diff_L4, vec_diff_L5 = calcDiff([vec_L1, vec_L2, vec_L3, vec_L4, vec_L5],vec_datMean_cat)
        vec_diff_SL1 = calcDiff([vec_SL1],vec_datMean_saw)
        vec_ABSdiff_L1, vec_ABSdiff_L2, vec_ABSdiff_L3, vec_ABSdiff_L4, vec_ABSdiff_L5 = calcABSdiff([vec_L1, vec_L2, vec_L3, vec_L4, vec_L5],vec_datMean_cat)
        vec_ABSdiff_SL1 = calcABSdiff([vec_SL1],vec_datMean_saw)[0]

        vec_CdiffL1, vec_CdiffL2, vec_CdiffL3, vec_CdiffL4, vec_CdiffL5 = classDiff([vec_ABSdiff_L1, vec_ABSdiff_L2, vec_ABSdiff_L3, vec_ABSdiff_L4, vec_ABSdiff_L5],vec_datStdev_cat)
        #print vec_ABSdiff_SL1
        vec_CdiffSL1 = classDiff([vec_ABSdiff_SL1],vec_datStdev_saw)[0]
        #print vec_CdiffL2
        ###write classified difference###
        cdifffile = open('./analysis/cdiff.dat','a')
        header = start_year+'to'+match_year+' Classified Difference\n'
        #for iter in [NS_L1, NS_L2, NS_L3, NS_L4, NS_L5, NS_SL1, NS_SL1a]:
        lines = 'L1:\n'+str(vec_CdiffL1)+'\nL2:\n'+str(vec_CdiffL2)+'\nL3:\n'+str(vec_CdiffL3)+'\nL4:\n'+str(vec_CdiffL4)+'\nL5:\n'+str(vec_CdiffL5)+'\nSawL1:\n'+str(vec_CdiffSL1)+'\n\n'
        cdifffile.write(header)
        cdifffile.write(lines)
        cdifffile.close()
        ###write classified difference###
        NS_L1, NS_L2, NS_L3, NS_L4, NS_L5 = NashSutcliffe([vec_L1, vec_L2, vec_L3, vec_L4, vec_L5],vec_datMean_cat)
        NS_SL1 = NashSutcliffe([vec_SL1],vec_datMean_saw)[0]
        print NS_L1, NS_L2, NS_L3, NS_L4, NS_L5
        #print NS_SL1, NS_SL1a
        BoxPlot(vec_datMean_cat,[vec_L1, vec_L2, vec_L3, vec_L4, vec_L5],start_year,match_year)
        BoxPlot(vec_datMean_saw,[vec_SL1],start_year,match_year)
        ####################Log NS##############################
        print 'logging NS for direct comparison'
        #print sortedDictValues1(vec_logging)
        #vec_logging = sortedDictValues1(vec_logging)
        loggingfile = open('./log/Anal_Log.dat','a')
        header = start_year+'to'+match_year+' NS For Direct Comparison\n'
        #for iter in [NS_L1, NS_L2, NS_L3, NS_L4, NS_L5, NS_SL1, NS_SL1a]:
        line = padst('L1:')+padst(NS_L1)+'\t'+padst('L2:')+padst(NS_L2)+'\t'+padst('L3:')+padst(NS_L3)+'\t'+padst('L4:')+padst(NS_L4)+'\t'+padst('L5:')+padst(NS_L5)+'\t'+padst('SL1:')+padst(NS_SL1)+'\n'
        loggingfile.write(header)
        loggingfile.write(line)
        loggingfile.close()
#        ###TimeSeries###
#        TimeSeries(vec_datMean_cat,[vec_L1, vec_L2, vec_L3, vec_L4, vec_L5],match_year)
#        TimeSeries(vec_datMean_saw,[vec_SL1, vec_SL1a],match_year)

#####

#RUN#
#####
#analysis()
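The NashSutcliffe function above accumulates Sxx and Syy in explicit loops over the domain cells. As a rough, stand-alone restatement of the same statistic (not part of the dissertation's scripts; the function name and the skipping of non-finite model values are illustrative assumptions), the efficiency NSE = 1 - Sxx/Syy can be sketched as:

```python
import math

def nash_sutcliffe(model, data):
    """Nash-Sutcliffe efficiency: 1 - Sxx/Syy, where
    Sxx = sum((data - model)^2) over paired values and
    Syy = sum((data - mean(data))^2).  Non-finite model values
    are skipped, loosely mirroring the 'nan'/'inf' guard above."""
    pairs = [(d, m) for d, m in zip(data, model) if math.isfinite(m)]
    mean_d = sum(d for d, _ in pairs) / len(pairs)
    sxx = sum((d - m) ** 2 for d, m in pairs)
    syy = sum((d - mean_d) ** 2 for d, _ in pairs)
    return 1.0 - sxx / syy
```

A perfect model gives NSE = 1, while a model no better than the observed mean gives NSE = 0, which is why the logged values are directly comparable across the five complexity levels.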

MoransI.py

###############
#Semi Variance#
###############
#www.tiem.utk.edu/~sada/help/TH_36.htm
#http://en.wikipedia.org/wiki/Variogram
#gamma(h) = SIGMA_i=1..i=N(h) [(xi-yi)^2]/(2*N(h))
import string, os, sys, matplotlib.pyplot, math, numpy
flush = sys.stdout.flush()

###########Sorting Dictionary########## http://code.activestate.com/recipes/52306 - to sort a dictionary
def sortedDictValues1(adict):
    vector=[]
    vec_key=adict.keys()
    vec_key.sort()
    for i in vec_key:
        tmpv = [i,adict[i]]
        vector.append(tmpv)
    return vector
###########Sorting Dictionary##########

###########paded string conversion##########
def padst(input):
    newin = str(input).ljust(10)
    return newin
###########paded string conversion##########

###MEAN###
def calc_mean(input):
    sum=0
    mean=0
    counter=1
    for i in range(0,len(input)-1,1):
        if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(counter)+',')== -1):
            #if(input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
                #print input[i]
                #RSQ=-9999

                #return RSQ
            #else:
                #print input[i]
            sum+=input[i]
        counter=counter+1
        #print input[i+1]
    mean=sum/385 ###385 cells in domain (NOT 386)
    #print mean
    #mean = sum/(len(input))
    return mean
###MEAN###

###MEAN###
def calc_NSmean(input):
    sum=0
    mean=0
    counter=1
    #print input
    for i in range(len(input)-1):
        #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter)+',')== -1):
            #if(input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
            #    RSQ=-9999
            #    return RSQ
            #else:
        sum+=input[i+1]
        counter=counter+1
        #print input[i+1]
    mean = sum/(len(input))
    return mean
###MEAN###

###Variance###
def calc_var(mean, input):
    variance = 0
    sum=0
    counter=1
    for i in range(0,len(input)-1,1):
        if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter)+',')== -1):
            #if(input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
            #    RSQ=-9999
            #    return RSQ
            #else:
            sum += (input[i]-mean)**2
        counter=counter+1
    variance = sum/(len(input)-1)
    return variance
###Variance###

###findDataMean###
def findDataMean(year):
    cmd_dat = 'find ./raw_data/ -name "*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines(): #run find command
        full_nameD = fileD[:-1] #strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        #print dat_match_year, dat_match_type
        if ( (dat_match_type=="cat") and (dat_match_year==year) ) :
            #print "observed CAT data:",nameD,dat_match_year
            vec_datMEAN_cat = []
            file_data_cat = open(full_nameD,'r')
            data_buff_cat = file_data_cat.readlines()
            file_data_cat.close()
            #print data_buff_cat
            #CATcounter=1
            for lineC in data_buff_cat:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(CATcounter)+',')== -1):
                itemsC = lineC.split()
                data_datC = float(itemsC[1])

                vec_datMEAN_cat.append(data_datC)
                #CATcounter=CATcounter+1
        if ( (dat_match_type=="saw") and (dat_match_year==year) ) :
            #print "observed SAW data:",nameD,dat_match_year
            vec_datMEAN_saw = []
            file_data_saw = open(full_nameD,'r')
            data_buff_saw = file_data_saw.readlines()
            file_data_saw.close()
            #SAWcounter=1
            for lineS in data_buff_saw:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(SAWcounter)+',')== -1):
                itemsS = lineS.split()
                data_datS = float(itemsS[1])
                vec_datMEAN_saw.append(data_datS)
                #SAWcounter=SAWcounter+1
    #print vec_datMEAN_cat
    return vec_datMEAN_cat, vec_datMEAN_saw
###findDataMean###

###find, format, parse, calculate###
def quickParse(full_name):
    contents1, contents2, contents3, contents4, contents5, contents6, contents7 = [], [], [], [], [], [], []
    vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1, vec_SL1a = [], [], [], [], [], [], []
    vegvalstr1, vegvalstr2, vegvalstr3, vegvalstr4, vegvalstr5, vegvalstr6, vegvalstr7 = '', '', '', '', '', '', ''
    openfile = open(full_name,'r')
    filebuffer = openfile.readlines()
    openfile.close()
    MainCount=0
    for line in range(len(filebuffer)-1,0,-1): #move from the last line up
        linestr = filebuffer[line]
        startptr1 = linestr.find('')
        if (startptr1 != -1):
            endptr1 = linestr.find('',startptr1)
            vegvalstr1 = linestr[startptr1+27:endptr1]
            contents1 = vegvalstr1.split(',')
            #counter1=1
            for iter1 in contents1:

                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter1)+',')== -1):
                number1 = float(iter1)
                vec_L1.append(number1)
                #counter1=counter1+1
            MainCount=MainCount+1
        startptr2 = linestr.find('')
        if (startptr2 != -1):
            endptr2 = linestr.find('',startptr2)
            vegvalstr2 = linestr[startptr2+27:endptr2]
            contents2 = vegvalstr2.split(',')
            #counter2=1
            for iter2 in contents2:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter2)+',')== -1):
                number2 = float(iter2)
                vec_L2.append(number2)
                #counter2=counter2+1
            MainCount=MainCount+1
        startptr3 = linestr.find('')
        if (startptr3 != -1):
            endptr3 = linestr.find('',startptr3)
            vegvalstr3 = linestr[startptr3+27:endptr3]
            contents3 = vegvalstr3.split(',')
            #counter3=1
            for iter3 in contents3:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter3)+',')== -1):
                number3 = float(iter3)
                vec_L3.append(number3)
                #counter3=counter3+1

            MainCount=MainCount+1
        startptr4 = linestr.find('')
        if (startptr4 != -1):
            endptr4 = linestr.find('',startptr4)
            vegvalstr4 = linestr[startptr4+27:endptr4]
            contents4 = vegvalstr4.split(',')
            #counter4=1
            for iter4 in contents4:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter4)+',')== -1):
                #print iter4
                number4 = float(iter4)
                vec_L4.append(number4)
                #counter4=counter4+1
            MainCount=MainCount+1
        startptr5 = linestr.find('') ###Slight difference here
        if (startptr5 != -1):
            endptr5 = linestr.find('',startptr5)
            vegvalstr5 = linestr[startptr5+27:endptr5] ###Slight difference here
            contents5 = vegvalstr5.split(',')
            #counter5=1
            for iter5 in contents5:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter5)+',')== -1):
                #print iter5
                number5 = float(iter5)
                vec_L5.append(number5) ###Slight difference here
                #counter5=counter5+1
            MainCount=MainCount+1
        startptr6 = linestr.find('') ###Slight difference here
        if (startptr6 != -1):
            endptr6 = linestr.find('',startptr6)
            vegvalstr6 = linestr[startptr6+27:endptr6]
            contents6 = vegvalstr6.split(',')
            #counter6=1

            for iter6 in contents6:
                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter6)+',')== -1):
                number6 = float(iter6)
                vec_SL1.append(number6) ###Slight difference here
                #counter6=counter6+1
            MainCount=MainCount+1
#        startptr7 = linestr.find('') ###Slight difference here
#        if (startptr7 != -1):
#            endptr7 = linestr.find('',startptr7)
#            vegvalstr7 = linestr[startptr7+28:endptr7] ###Slight difference here
#            contents7 = vegvalstr7.split(',')
#            #counter7=1
#            for iter7 in contents7:
#                #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter7)+',')== -1):
#                number7 = float(iter7)
#                vec_SL1a.append(number7) ###Slight difference here
#                #counter7=counter7+1
#            MainCount=MainCount+1
        if MainCount==6:
            break
    #print count
    return vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1
###find, format, parse, calculate###

###Nash Sutcliffe###
def NashSutcliffe(Invec, data):
    retvec=[]
    #print 'Invec '+str(Invec)+'\n'
    #print len(Invec[0]), len(data)
    for model in Invec:
        Sxx=0
        Syy=0
        RSQ=0

        mean_data = calc_NSmean(data)
        counter=1
        #print mean_data
        #print data #len(model), model #'model '+str(model)
        #print model
        for i in range(0,len(data)-1,1):
            #if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,299,300,301,302,303,304,305,335,336,337,338,339,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(str(counter)+',')== -1):
            #if(model[i]<0) or (str(model[i])=='nan') or (str(model[i])=='inf') or (str(model[i])=='-inf'):
            if(str(model[i])=='nan') or (str(model[i])=='inf') or (str(model[i])=='-inf'):
                #RSQ=-9999
                #retvec.append(RSQ)
                #print i, model[i]
                break
                #return RSQ
                number=0
            else:
                number=model[i]
            #print data[i], number, mean_data
            Sxx+=(data[i]-number)**2
            Syy+=(data[i]-mean_data)**2
            counter=counter+1
        #print Sxx, Syy
        RSQ = 1-Sxx/Syy
        retvec.append(RSQ)
        #print '\n' #retvec
    #print retvec
    return retvec
###Nash Sutcliffe###

###MoransPlot###
def MoransPlot(Invec, data, distance, start_year, year):
    matplotlib.pyplot.clf()
    matplotlib.pyplot.hold(True)
    matplotlib.pyplot.plot(distance,data,'k-',label='data')
    col=['b','g','r','c']
    style=['--','-.','+','o']
    SC=0
    CC=0
    Ncount=1

    for vec in Invec:
        if CC>len(col)-1:
            CC=0
            SC=SC+1
        if SC>len(style)-1:
            SC=0
        mark=col[CC]+style[SC]
        matplotlib.pyplot.plot(distance,vec,mark,label=str(Ncount))
        CC=CC+1
        Ncount=Ncount+1
    matplotlib.pyplot.xlabel('Lag/Distance (ft)')
    matplotlib.pyplot.ylabel('Spatial Autocorrelation')
    matplotlib.pyplot.legend()
#    matplotlib.pyplot.show()
    if len(Invec)==5:
        nameM = 'MoransI_'+'cat_'+start_year+'to'+year
        matplotlib.pyplot.title(nameM)
    else:
        nameM = 'MoransI_'+'saw_'+start_year+'to'+year
        matplotlib.pyplot.title(nameM)
    #matplotlib.pyplot.axis([0,len(distance),0,3])
    matplotlib.pyplot.ylim(0,3)
    matplotlib.pyplot.savefig('./analysis/'+nameM)
    matplotlib.pyplot.hold(False)
    matplotlib.pyplot.clf()
###MoransPlot###

##########################################################
#######################CORRELATION########################
##########################################################
def correlation_ii():
    #print 'Moran\'s I'
    ###Clear previous files
    #vec_logging={}
    os.system('rm -rf ./analysis/*MoransI*')
    ###Loading Centroid Distances###
    file_centroids = open('./raw_data/Centroids.dat','r')
    buff_centroids = file_centroids.readlines()
    file_centroids.close()
    matrix_centroids = [[]]
    for lineC in buff_centroids:
        itemsC = lineC.split()
        matrix_centroids.append(itemsC)

    #print matrix_centroids
    ###Loading Centroid Distances###
    ###Loading Vegetation Data and Model###
    cmd = 'find ./output/ -name "xmlOut_*.xml" -print'
    for file in os.popen(cmd).readlines():
        full_name = file[:-1] #strip '\n'
        filename = full_name[9:]
        match_year = filename[13:-4]
        start_year = filename[7:-10]
        #start_year = filename[7:-10]
        #flush
        #print full_name, filename, start_year, match_year
        #flush
        vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1 = quickParse(full_name)
        #print vec_L1
        vec_datMean_cat, vec_datMean_saw = findDataMean(match_year)
        #vec_datStartMean_cat, vec_datStartMean_saw = findDataMean(start_year)
        ###Loading Vegetation Data and Model###
        ###Calculating Spatial Correlation_begin###
        R_square=0
        max_dist = 121000 #121000 top to bottom
        min_dist = 0
        step = 1000
        model_cat_vector_variogram1 = []
        model_cat_vector_variogram2 = []
        model_cat_vector_variogram3 = []
        model_cat_vector_variogram4 = []
        model_cat_vector_variogram5 = []
        model_saw_vector_variogram1 = []
        model_saw_vector_variogram2 = []
        data_cat_vector_variogram = []
        data_saw_vector_variogram = []
        vector_setdistance = []
        vector_count=[]
        for set_dist in range(min_dist, max_dist, step):
            count = 0
            ###I realize that this can be made a LOT simpler by sending it to an external function, and dynamically initializing vectors etc as needed. As with the NashSutcliffe function###
            ###Although this does drastically reduce the number of loops through the centroid matrix###
            ###Initializing variables###
            model_cat_corr1 = 0
            model_cat_sum1 = 0
            model_cat_variogram1 = 0
            model_cat_corr2 = 0

            model_cat_sum2 = 0
            model_cat_variogram2 = 0
            model_cat_corr3 = 0
            model_cat_sum3 = 0
            model_cat_variogram3 = 0
            model_cat_corr4 = 0
            model_cat_sum4 = 0
            model_cat_variogram4 = 0
            model_cat_corr5 = 0
            model_cat_sum5 = 0
            model_cat_variogram5 = 0
            model_saw_corr1 = 0
            model_saw_sum1 = 0
            model_saw_variogram1 = 0
            model_saw_corr2 = 0
            model_saw_sum2 = 0
            model_saw_variogram2 = 0
            data_cat_corr = 0
            data_cat_sum = 0
            data_cat_variogram = 0
            data_saw_corr = 0
            data_saw_sum = 0
            data_saw_variogram = 0
            ###calculating means and variances of respective datasets###
            #print vec_L1
            mean_modC1 = calc_mean(vec_L1)
            var_modC1 = calc_var(mean_modC1, vec_L1)
            mean_modC2 = calc_mean(vec_L2)
            var_modC2 = calc_var(mean_modC2, vec_L2)
            #print vec_L1 #, mean_modC2, var_modC2
            mean_modC3 = calc_mean(vec_L3)
            var_modC3 = calc_var(mean_modC3, vec_L3)
            mean_modC4 = calc_mean(vec_L4)
            var_modC4 = calc_var(mean_modC4, vec_L4)
            mean_modC4a = calc_mean(vec_L5)
            var_modC4a = calc_var(mean_modC4a, vec_L5)
            mean_modS1 = calc_mean(vec_SL1)
            var_modS1 = calc_var(mean_modS1, vec_SL1)
#            mean_modS1a = calc_mean(vec_SL1a)
#            var_modS1a = calc_var(mean_modS1a, vec_SL1a)
            mean_dataC = calc_mean(vec_datMean_cat)
            var_dataC = calc_var(mean_dataC, vec_datMean_cat)
            mean_dataS = calc_mean(vec_datMean_saw)
            var_dataS = calc_var(mean_dataS, vec_datMean_saw)
            ###HERE IS THE MEAT###
            ###Looping through centroid distances###

            #print len(matrix_centroids)
            for i in range(1, (len(matrix_centroids)-1), 1):
                if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(i)+',')== -1):
                    #print i
                    for j in range(1, (len(matrix_centroids)-1), 1):
                        if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(j)+',')== -1):
                            if j>i:
                                dist=float(matrix_centroids[i][j]) #i[j]
                                if dist <= set_dist:
                                    count=count+1
                                    #print vec_L1[j-1], mean_modC1
                                    model_cat_corr1 = (vec_L1[j-1]-mean_modC1)*(vec_L1[i-1]-mean_modC1)
                                    model_cat_sum1 += model_cat_corr1
                                    model_cat_corr2 = (vec_L2[j-1]-mean_modC2)*(vec_L2[i-1]-mean_modC2)
                                    model_cat_sum2 += model_cat_corr2
                                    model_cat_corr3 = (vec_L3[j-1]-mean_modC3)*(vec_L3[i-1]-mean_modC3)
                                    model_cat_sum3 += model_cat_corr3
                                    model_cat_corr4 = (vec_L4[j-1]-mean_modC4)*(vec_L4[i-1]-mean_modC4)
                                    model_cat_sum4 += model_cat_corr4
                                    model_cat_corr5 = (vec_L5[j-1]-mean_modC4a)*(vec_L5[i-1]-mean_modC4a)
                                    model_cat_sum5 += model_cat_corr5
                                    model_saw_corr1 = (vec_SL1[j-1]-mean_modS1)*(vec_SL1[i-1]-mean_modS1)
                                    model_saw_sum1 += model_saw_corr1
#                                    model_saw_corr2 = (vec_SL1a[j-1]-mean_modS1a)*(vec_SL1a[i-1]-mean_modS1a)
#                                    model_saw_sum2 += model_saw_corr2

                                    data_cat_corr = (vec_datMean_cat[j-1]-mean_dataC)*(vec_datMean_cat[i-1]-mean_dataC)
                                    data_cat_sum += data_cat_corr
                                    data_saw_corr = (vec_datMean_saw[j-1]-mean_dataS)*(vec_datMean_saw[i-1]-mean_dataS)
                                    data_saw_sum += data_saw_corr
            if count>0:
                #print count
                vector_setdistance.append(set_dist)
                vector_count.append(count)
                #print model_cat_sum1, count, var_modC1
                model_cat_variogram1 = model_cat_sum1/(count*var_modC1)
                model_cat_vector_variogram1.append(model_cat_variogram1)
                model_cat_variogram2 = model_cat_sum2/(count*var_modC2)
                model_cat_vector_variogram2.append(model_cat_variogram2)
                model_cat_variogram3 = model_cat_sum3/(count*var_modC3)
                model_cat_vector_variogram3.append(model_cat_variogram3)
                model_cat_variogram4 = model_cat_sum4/(count*var_modC4)
                model_cat_vector_variogram4.append(model_cat_variogram4)
                model_cat_variogram5 = model_cat_sum5/(count*var_modC4a)
                model_cat_vector_variogram5.append(model_cat_variogram5)
                model_saw_variogram1 = model_saw_sum1/(count*var_modS1)
                model_saw_vector_variogram1.append(model_saw_variogram1)
#                model_saw_variogram2 = model_saw_sum2/(count*var_modS1a)
#                model_saw_vector_variogram2.append(model_saw_variogram2)
                data_cat_variogram = data_cat_sum/(count*var_dataC)
                data_cat_vector_variogram.append(data_cat_variogram)
                data_saw_variogram = data_saw_sum/(count*var_dataS)
                data_saw_vector_variogram.append(data_saw_variogram)
        #print len(model_cat_vector_variogram1)
        #print data_cat_vector_variogram
        NS_L1, NS_L2, NS_L3, NS_L4, NS_L5 = NashSutcliffe([model_cat_vector_variogram1, model_cat_vector_variogram2, model_cat_vector_variogram3, model_cat_vector_variogram4, model_cat_vector_variogram5],data_cat_vector_variogram)
        NS_SL1 = NashSutcliffe([model_saw_vector_variogram1], data_saw_vector_variogram)[0]
        print NS_L1, NS_L2, NS_L3, NS_L4, NS_L5
        ###Calculating Spatial Correlation_end###
        ####################Log NS##############################
        print 'logging NS for Moran\'s I'
        #print sortedDictValues1(vec_logging)
        #vec_logging = sortedDictValues1(vec_logging)
        loggingfile = open('./log/Anal_Log.dat','a')
        header = start_year+'to'+match_year+' NS For Moran\'s I\n'

PAGE 164

164 #for iter in [NS_L1, NS_L2, NS_L3, NS_L4, NS_L5, NS_SL1, NS_SL1a]: line = padst('L1:')+padst(NS_L1)+' \ t'+padst('L2:')+padst(NS_L2)+' \ t'+padst('L3:')+padst(NS _L3 )+' \ t'+padst('L4:')+padst(NS_L4)+' \ t'+padst('L5:')+padst(NS_L5)+' \ t'+padst('SL1:')+padst( NS_SL1)+' \ n' loggingfile.write(header) loggingfile.write(line) loggingfile.close() ###Plotting### MoransPlot([model_cat_vector_var iogram1, model_cat_vector_variogram2, model_cat_vector_variogram3, model_cat_vector_variogram4, model_cat_vector_variogram5],data_cat_vector_variogram,vector_setdistance,start_ye ar,match_year) MoransPlot([model_saw_vector_variogram1],data_saw_vecto r_variogram,vector_setdis tance,start_year,match_year) ###Plotting### ##### #RUN# ##### #correlation_ii()
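The routine above accumulates lag-binned cross-products of deviations from the mean and normalizes each bin by `count * variance`. A minimal, stand-alone sketch of the same statistic, using hypothetical toy inputs (simple 2-D coordinates, not the WCA2A centroid matrix or its excluded-cell list):

```python
import math

def correlogram(values, coords, max_lag, step):
    """Normalized spatial correlation per cumulative lag bin:
    sum of (v_i - mean)(v_j - mean) over pairs within the lag,
    divided by (pair count * sample variance)."""
    n = len(values)
    mean = sum(values) / float(n)
    var = sum((v - mean) ** 2 for v in values) / float(n - 1)
    lags, corrs = [], []
    for lag in range(step, max_lag + step, step):
        total, count = 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):  # each unordered pair once (j > i)
                dx = coords[i][0] - coords[j][0]
                dy = coords[i][1] - coords[j][1]
                if math.hypot(dx, dy) <= lag:
                    total += (values[i] - mean) * (values[j] - mean)
                    count += 1
        if count > 0:  # only record bins that contain at least one pair
            lags.append(lag)
            corrs.append(total / (count * var))
    return lags, corrs
```

With two clusters of similar values, short lags pair only within-cluster points, so the first bins come out positive, mirroring how the script's variogram vectors behave at small `set_dist`.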


Abundance.py

###########
#Abundance#
###########
import string, os, sys, matplotlib.pyplot, math, random, numpy

flush = sys.stdout.flush()

# Comma-delimited lookup string of the mesh cell IDs excluded from the
# analysis (boundary cells of the WCA2A mesh). Some of the commented-out
# checks below originally used a variant of this list omitting cells 298
# and 340.
EXCLUDED = (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,"
            "40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,"
            "122,123,124,125,126,152,153,154,155,183,184,185,186,"
            "217,218,219,220,256,257,258,259,260,261,262,"
            "298,299,300,301,302,303,304,305,"
            "335,336,337,338,339,340,341,370,371,372,373,"
            "400,401,402,403,404,405,429,430,431,432,433,"
            "455,456,457,458,459,465,466,467,468,469,470,"
            "480,481,482,483,484,488,489,490,491,492,493,494,"
            "495,496,497,498,499,500,501,502,505,506,507,508,509,510,")

###########Sorting Dictionary##########
# http://code.activestate.com/recipes/52306 - to sort a dictionary
def sortedDictValues1(adict):
    vector=[]
    vec_key=adict.keys()
    vec_key.sort()
    for i in vec_key:
        tmpv = [i,adict[i]]
        vector.append(tmpv)
    return vector
###########Sorting Dictionary##########

###########padded string conversion##########
def padst(input):
    newin = str(input).ljust(10)
    return newin
###########padded string conversion##########

###MEAN###
def calc_NSmean(input):
    sum=0
    mean=0
    counter=1
    for i in range(len(input)-1):
        #if (EXCLUDED.find(str(counter)+',') == -1):
        if (input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
            RSQ=9999
            return RSQ
        else:
            sum+=input[i+1]
            counter=counter+1
            #print input[i+1]
    mean = sum/(len(input))
    return mean
###MEAN###

###Variance###
def calc_var(mean, input):
    variance = 0
    sum=0
    counter=1
    for i in range(len(input)-1):
        #if (EXCLUDED.find(str(counter)+',') == -1):
        if (input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
            RSQ=9999
            return RSQ
        else:
            sum += (input[i+1]-mean)**2
            counter=counter+1
    variance = sum/(len(input)-1)
    return variance
###Variance###

###findDataMean###
def findDataMean(year):
    cmd_dat = 'find ./raw_data/ -name "*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines():   # run find command
        full_nameD = fileD[:-1]   # strip '\n'
        #print full_nameD
        nameD = full_nameD[-11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        #print dat_match_year, dat_match_type
        if ( (dat_match_type=="cat") and (dat_match_year==year) ):
            #print "observed CAT data:",nameD,dat_match_year
            vec_datMEAN_cat = []
            file_data_cat = open(full_nameD,'r')
            data_buff_cat = file_data_cat.readlines()
            file_data_cat.close()
            #print data_buff_cat
            #CATcounter=1
            for lineC in data_buff_cat:
                #if (EXCLUDED.find(str(CATcounter)+',') == -1):
                itemsC = lineC.split()
                data_datC = float(itemsC[1])
                vec_datMEAN_cat.append(data_datC)
                #CATcounter=CATcounter+1
        if ( (dat_match_type=="saw") and (dat_match_year==year) ):
            #print "observed SAW data:",nameD,dat_match_year
            vec_datMEAN_saw = []
            file_data_saw = open(full_nameD,'r')
            data_buff_saw = file_data_saw.readlines()
            file_data_saw.close()
            #SAWcounter=1
            for lineS in data_buff_saw:
                #if (EXCLUDED.find(str(SAWcounter)+',') == -1):
                itemsS = lineS.split()
                data_datS = float(itemsS[1])
                vec_datMEAN_saw.append(data_datS)
                #SAWcounter=SAWcounter+1
    return vec_datMEAN_cat, vec_datMEAN_saw
###findDataMean###

###find, format, parse, calculate###
def quickParse(full_name):
    contents1, contents2, contents3, contents4, contents5, contents6, contents7 = [], [], [], [], [], [], []
    vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1, vec_SL1a = [], [], [], [], [], [], []
    vegvalstr1, vegvalstr2, vegvalstr3, vegvalstr4, vegvalstr5, vegvalstr6, vegvalstr7 = '', '', '', '', '', '', ''
    openfile = open(full_name,'r')
    filebuffer = openfile.readlines()
    openfile.close()
    MainCount=0
    # NOTE: the XML tag literal inside each find() call below (one tag per
    # output variable, about 27 characters long) was lost when this listing
    # was extracted from the PDF; the empty strings mark the gaps.
    for line in range(len(filebuffer)-1, 0, -1):   # move from the last line up
        linestr = filebuffer[line]
        startptr1 = linestr.find('')
        if (startptr1 != -1):
            endptr1 = linestr.find('', startptr1)
            vegvalstr1 = linestr[startptr1+27:endptr1]
            contents1 = vegvalstr1.split(',')
            #print contents1
            #counter1=1
            for iter1 in contents1:
                #if (EXCLUDED.find(','+str(counter1)+',') == -1):
                number1 = float(iter1)
                #print counter1 #number1
                vec_L1.append(number1)
                #counter1=counter1+1
            MainCount=MainCount+1
        startptr2 = linestr.find('')
        if (startptr2 != -1):
            endptr2 = linestr.find('', startptr2)
            vegvalstr2 = linestr[startptr2+27:endptr2]
            contents2 = vegvalstr2.split(',')
            #counter2=1
            for iter2 in contents2:
                #if (EXCLUDED.find(','+str(counter2)+',') == -1):
                number2 = float(iter2)
                vec_L2.append(number2)
                #counter2=counter2+1
            MainCount=MainCount+1
        startptr3 = linestr.find('')
        if (startptr3 != -1):
            endptr3 = linestr.find('', startptr3)
            vegvalstr3 = linestr[startptr3+27:endptr3]
            contents3 = vegvalstr3.split(',')
            #counter3=1
            for iter3 in contents3:
                #if (EXCLUDED.find(','+str(counter3)+',') == -1):
                number3 = float(iter3)
                vec_L3.append(number3)
                #counter3=counter3+1
            MainCount=MainCount+1
        startptr4 = linestr.find('')
        if (startptr4 != -1):
            endptr4 = linestr.find('', startptr4)
            vegvalstr4 = linestr[startptr4+27:endptr4]
            contents4 = vegvalstr4.split(',')
            #counter4=1
            for iter4 in contents4:
                #if (EXCLUDED.find(','+str(counter4)+',') == -1):
                number4 = float(iter4)
                vec_L4.append(number4)
                #counter4=counter4+1
            MainCount=MainCount+1
        startptr5 = linestr.find('')   ###Slight difference here
        if (startptr5 != -1):
            endptr5 = linestr.find('', startptr5)
            vegvalstr5 = linestr[startptr5+27:endptr5]   ###Slight difference here
            contents5 = vegvalstr5.split(',')
            #counter5=1
            for iter5 in contents5:
                #if (EXCLUDED.find(str(counter5)+',') == -1):
                number5 = float(iter5)
                vec_L5.append(number5)   ###Slight difference here
                #counter5=counter5+1
            MainCount=MainCount+1
        startptr6 = linestr.find('')   ###Slight difference here
        if (startptr6 != -1):
            endptr6 = linestr.find('', startptr6)
            vegvalstr6 = linestr[startptr6+27:endptr6]
            contents6 = vegvalstr6.split(',')
            #counter6=1
            for iter6 in contents6:
                #if (EXCLUDED.find(','+str(counter6)+',') == -1):
                number6 = float(iter6)
                vec_SL1.append(number6)   ###Slight difference here
                #counter6=counter6+1
            MainCount=MainCount+1
#        startptr7 = linestr.find('')   ###Slight difference here
#        if (startptr7 != -1):
#            endptr7 = linestr.find('', startptr7)
#            vegvalstr7 = linestr[startptr7+28:endptr7]   ###Slight difference here
#            contents7 = vegvalstr7.split(',')
#            #counter7=1
#            for iter7 in contents7:
#                #if (EXCLUDED.find(str(counter7)+',') == -1):
#                number7 = float(iter7)
#                vec_SL1a.append(number7)   ###Slight difference here
#                #counter7=counter7+1
#            MainCount=MainCount+1
        if MainCount==6:
            break
    #print count
    return vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1
###find, format, parse, calculate###

###Nash Sutcliffe###
def NashSutcliffe(Invec, data):
    retvec=[]
    for model in Invec:
        Sxx=0
        Syy=0
        RSQ=0
        mean_data = calc_NSmean(data)
        counter=1
        for i in range(0, len(data)-1, 1):
            #if (EXCLUDED.find(str(counter)+',') == -1):
            #if(data[i]<0) or (str(data[i])=='nan') or (str(data[i])=='inf') or (str(data[i])=='-inf'):
            if (str(model[i])=='nan') or (str(model[i])=='inf') or (str(model[i])=='-inf'):
                #RSQ=9999
                #return RSQ
                #print i, model[i]
                break
            else:
                Sxx+=(data[i]-model[i])**2
                Syy+=(data[i]-mean_data)**2
                counter=counter+1
        RSQ = 1 - Sxx/Syy
        retvec.append(RSQ)
    return retvec
###Nash Sutcliffe###

###AbundancePlot###
def AbundancePlot(Invec, data, distance, start_year, year):
    matplotlib.pyplot.clf()
    matplotlib.pyplot.hold(True)
    matplotlib.pyplot.plot(distance, data, 'k-', label='data')
    col=['b','g','r','c']
    style=['--','-.','+','o']
    SC=0
    CC=0
    Ncount=1
    for vec in Invec:
        if CC>len(col)-1:
            CC=0
            SC=SC+1
            if SC>len(style)-1:
                SC=0
        mark=col[CC]+style[SC]
        matplotlib.pyplot.plot(distance, vec, mark, label=str(Ncount))
        CC=CC+1
        Ncount=Ncount+1
    matplotlib.pyplot.xlabel('Lag/Distance (ft)')
    matplotlib.pyplot.ylabel('Mean Cumulative Density (g/m2)')
    matplotlib.pyplot.legend()
#    matplotlib.pyplot.show()
    if len(Invec)==5:
        nameM = 'Abundance Area_'+'cat_'+start_year+'to'+year
        matplotlib.pyplot.title(nameM)
        matplotlib.pyplot.ylim(0,100000)
    else:
        nameM = 'Abundance Area_'+'saw_'+start_year+'to'+year
        matplotlib.pyplot.title(nameM)
        matplotlib.pyplot.ylim(0,300000)
#    matplotlib.pyplot.axis([0,len(distance),0,350000])
#    matplotlib.pyplot.xlim(0,len(distance))
    matplotlib.pyplot.savefig('./analysis/'+nameM)
    matplotlib.pyplot.hold(False)
    matplotlib.pyplot.clf()
###AbundancePlot###

##############################################################################
###############################ABUNDANCE AREA#################################
##############################################################################
def abundance():
    # print 'Abundance Area'
    os.system('rm -rf ./analysis/*Abundance*')
    #vec_logging={}
    ###Loading Centroid Distances###
    file_centroids = open('./raw_data/Centroids.dat','r')
    buff_centroids = file_centroids.readlines()
    file_centroids.close()
    matrix_centroids = [[]]
    for lineC in buff_centroids:
        itemsC = lineC.split()
        matrix_centroids.append(itemsC)
    ###Loading Centroid Distances###
    ###Loading Vegetation Data and Model###
    cmd = 'find ./output/ -name "xmlOut_*.xml" -print'
    for file in os.popen(cmd).readlines():
        full_name = file[:-1]   # strip '\n'
        filename = full_name[9:]
        match_year = filename[13:-4]
        start_year = filename[7:-10]
        #print full_name
        vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1 = quickParse(full_name)
        #print len(vec_L1)
        vec_datMean_cat, vec_datMean_saw = findDataMean(match_year)
        ###Loading Vegetation Data and Model###
        ###Selecting Random points###
        vector_random=[]
        num_points = 100
        random.seed()
        newelements=[]
        elements=range(1,511,1)
        for i in elements:
            if (EXCLUDED.find(','+str(i)+',') == -1):
                newelements.append(i)
        vector_random = random.sample(newelements, num_points)
        ###Selecting Random points###
        ###Set Distance###
        max_dist = 121000   # 121000 top to bottom
        min_dist = 0
        step = 1000
        vector_setdistance = []
        vector_area=[]
        area=0
        vector_mod_cat1=[]
        vector_mod_cat2=[]
        vector_mod_cat3=[]
        vector_mod_cat4=[]
        vector_mod_cat5=[]
        vector_mod_saw1=[]
        vector_mod_saw2=[]
        vector_dat_cat=[]
        vector_dat_saw=[]
        for set_dist in range(min_dist, max_dist, step):
            vector_setdistance.append(set_dist)
            area = math.pi*(set_dist**2)
            vector_area.append(area)
            ###I realize that this can be made a LOT simpler by sending it to
            ###an external function, and dynamically initializing vectors etc.
            ###as needed, as with the NashSutcliffe function###
            ###Although this does drastically reduce the number of loops
            ###through the centroid matrix###
            ###Initializing variables###
            current_point=0
            sum_tot_mod_cat1=0
            sum_tot_mod_cat2=0
            sum_tot_mod_cat3=0
            sum_tot_mod_cat4=0
            sum_tot_mod_cat5=0
            sum_tot_mod_saw1=0
            sum_tot_mod_saw2=0
            sum_tot_dat_cat=0
            sum_tot_dat_saw=0
            for iter in range(num_points):
                current_point=vector_random[iter]
                sum_int_mod_cat1=0
                sum_int_mod_cat2=0
                sum_int_mod_cat3=0
                sum_int_mod_cat4=0
                sum_int_mod_cat5=0
                sum_int_mod_saw1=0
                sum_int_mod_saw2=0
                sum_int_dat_cat=0
                sum_int_dat_saw=0
                count=0
                for i in range(1, len(matrix_centroids)-1, 1):
                    if (EXCLUDED.find(','+str(i)+',') == -1):
                        if (int(i)==int(current_point)):
                            for j in range(1, len(matrix_centroids)-1, 1):
                                if (EXCLUDED.find(','+str(j)+',') == -1):
                                    if j>i:
                                        dist=float(matrix_centroids[i][j])   #i[j]
                                        if dist <= set_dist:
                                            #print j, len(vec_L1)
                                            sum_int_mod_cat1 += vec_L1[j-1]
                                            sum_int_mod_cat2 += vec_L2[j-1]
                                            sum_int_mod_cat3 += vec_L3[j-1]
                                            sum_int_mod_cat4 += vec_L4[j-1]
                                            sum_int_mod_cat5 += vec_L5[j-1]
                                            sum_int_mod_saw1 += vec_SL1[j-1]
#                                            sum_int_mod_saw2 += vec_SL1a[j-1]
                                            sum_int_dat_cat += vec_datMean_cat[j-1]
                                            sum_int_dat_saw += vec_datMean_saw[j-1]
                                            count+=1
                if count>0:
                    sum_tot_mod_cat1 += sum_int_mod_cat1
                    sum_tot_mod_cat2 += sum_int_mod_cat2
                    sum_tot_mod_cat3 += sum_int_mod_cat3
                    sum_tot_mod_cat4 += sum_int_mod_cat4
                    sum_tot_mod_cat5 += sum_int_mod_cat5
                    sum_tot_mod_saw1 += sum_int_mod_saw1
#                    sum_tot_mod_saw2 += sum_int_mod_saw2
                    sum_tot_dat_cat += sum_int_dat_cat
                    sum_tot_dat_saw += sum_int_dat_saw
            mean_mod_cat1 = sum_tot_mod_cat1/num_points
            vector_mod_cat1.append(mean_mod_cat1)
            mean_mod_cat2 = sum_tot_mod_cat2/num_points
            vector_mod_cat2.append(mean_mod_cat2)
            mean_mod_cat3 = sum_tot_mod_cat3/num_points
            vector_mod_cat3.append(mean_mod_cat3)
            mean_mod_cat4 = sum_tot_mod_cat4/num_points
            vector_mod_cat4.append(mean_mod_cat4)
            mean_mod_cat5 = sum_tot_mod_cat5/num_points
            vector_mod_cat5.append(mean_mod_cat5)
            mean_mod_saw1 = sum_tot_mod_saw1/num_points
            vector_mod_saw1.append(mean_mod_saw1)
#            mean_mod_saw2 = sum_tot_mod_saw2/num_points
#            vector_mod_saw2.append(mean_mod_saw2)
            mean_dat_cat=sum_tot_dat_cat/num_points
            vector_dat_cat.append(mean_dat_cat)
            mean_dat_saw=sum_tot_dat_saw/num_points
            vector_dat_saw.append(mean_dat_saw)
        NS_L1, NS_L2, NS_L3, NS_L4, NS_L5 = NashSutcliffe([vector_mod_cat1,
            vector_mod_cat2, vector_mod_cat3, vector_mod_cat4,
            vector_mod_cat5], vector_dat_cat)
        NS_SL1 = NashSutcliffe([vector_mod_saw1], vector_dat_saw)[0]
        #print len(vector_mod_saw1)
        print NS_L1, NS_L2, NS_L3, NS_L4, NS_L5
        ####################Log NS##############################
        print 'logging NS for Abundance Area'
        #print sortedDictValues1(vec_logging)
        #vec_logging = sortedDictValues1(vec_logging)
        loggingfile = open('./log/Anal_Log.dat','a')
        header = start_year+'to'+match_year+' NS For Abundance Area\n'
        #for iter in [NS_L1, NS_L2, NS_L3, NS_L4, NS_L5, NS_SL1, NS_SL1a]:
        line = (padst('L1:')+padst(NS_L1)+'\t'+padst('L2:')+padst(NS_L2)+'\t'
                +padst('L3:')+padst(NS_L3)+'\t'+padst('L4:')+padst(NS_L4)+'\t'
                +padst('L5:')+padst(NS_L5)+'\t'+padst('SL1:')+padst(NS_SL1)+'\n')
        loggingfile.write(header)
        loggingfile.write(line)
        loggingfile.close()
        ###Plotting###
        AbundancePlot([vector_mod_cat1, vector_mod_cat2, vector_mod_cat3,
            vector_mod_cat4, vector_mod_cat5], vector_dat_cat,
            vector_setdistance, start_year, match_year)
        AbundancePlot([vector_mod_saw1], vector_dat_saw,
            vector_setdistance, start_year, match_year)
        ###Plotting###

#####
#RUN#
#####
#abundance()
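The NashSutcliffe routine used throughout these scripts implements the standard efficiency E = 1 - Σ(O - M)² / Σ(O - Ō)². A minimal stand-alone version of that formula, without the script's excluded-cell and nan/inf handling, with toy inputs rather than model output:

```python
def nash_sutcliffe(observed, modeled):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than
    predicting the observed mean, negative = worse than the mean."""
    mean_obs = sum(observed) / float(len(observed))
    ss_res = sum((o - m) ** 2 for o, m in zip(observed, modeled))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)            # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A model that simply outputs the observed mean scores exactly 0, which is why the dissertation's level comparisons treat positive NS values as skill relative to that baseline.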


TimeSeries.py

############
#Timeseries#
############
import string, os, math, sys, matplotlib.pyplot, numpy, pprint

# Comma-delimited lookup string of the mesh cell IDs excluded from the
# analysis (boundary cells of the WCA2A mesh).
EXCLUDED = (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,"
            "40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,"
            "122,123,124,125,126,152,153,154,155,183,184,185,186,"
            "217,218,219,220,256,257,258,259,260,261,262,"
            "298,299,300,301,302,303,304,305,"
            "335,336,337,338,339,340,341,370,371,372,373,"
            "400,401,402,403,404,405,429,430,431,432,433,"
            "455,456,457,458,459,465,466,467,468,469,470,"
            "480,481,482,483,484,488,489,490,491,492,493,494,"
            "495,496,497,498,499,500,501,502,505,506,507,508,509,510,")

###########Sorting Dictionary##########
# http://code.activestate.com/recipes/52306 - to sort a dictionary
def sortedDictValues1(adict):
    vector=[]
    vec_key=adict.keys()
    vec_key.sort()
    for i in vec_key:
        tmpv = [i,adict[i]]
        vector.append(tmpv)
    return vector
###########Sorting Dictionary##########

###########padded string conversion##########
def padst(input):
    newin = str(input).ljust(10)
    return newin
###########padded string conversion##########

###MEAN###
def calc_mean(input):
    sum=0
    mean=0
    counter=1
    #print len(input)
    for i in range(0, len(input)-1, 1):
        if (EXCLUDED.find(','+str(counter)+',') == -1):
            #if(input[i]<0) or (str(input[i])=='nan') or (str(input[i])=='inf') or (str(input[i])=='-inf'):
            #    mean=99999
            #    return mean
            #else:
            #print input[i]
            sum=sum+input[i]
        counter=counter+1
    mean = sum/385   ###385 cells in domain (NOT 386)
    #print mean
    return mean
###MEAN###

###find, format, parse, calculate###
def Parse(full_name):
    ### Each vec_* holds four series: [regional mean], [cell 209 (high)],
    ### [cell 244 (medium)], [cell 380 (low)]
    vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1, vec_SL1a = \
        [[],[],[],[]], [[],[],[],[]], [[],[],[],[]], [[],[],[],[]], \
        [[],[],[],[]], [[],[],[],[]], [[],[],[],[]]
    #tempR, temp209, temp244, temp380 = 0,0,0,0
    openfile = open(full_name,'r')
    filebuffer = openfile.readlines()
    openfile.close()
    MainCount=0
    name = full_name[16:-4]
    #print full_name, name
    namestring = name+'.txt'
    testfile = open(namestring,'w')
    # NOTE: the XML tag literal inside each find() call below was lost when
    # this listing was extracted from the PDF; the empty strings mark the gaps.
    for line in range(0, len(filebuffer)-1, 1):
        linestr=''
        linestr = filebuffer[line]
        contents1, contents2, contents3, contents4, contents5, contents6, contents7 = [], [], [], [], [], [], []
        vegvalstr1, vegvalstr2, vegvalstr3, vegvalstr4, vegvalstr5, vegvalstr6, vegvalstr7 = '', '', '', '', '', '', ''
        #print linestr
        startptr1=0
        endptr1=0
        startptr1 = linestr.find('')
        if (startptr1 != -1):
            endptr1 = linestr.find('', startptr1)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr1 = linestr[startptr1+27:endptr1]
            contents1 = vegvalstr1.split(',')
            for iter1 in contents1:
                #print iter1
                number1 = float(iter1)
                vec_temp.append(number1)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_L1[0].append(tempR)
            vec_L1[1].append(temp209)
            vec_L1[2].append(temp244)
            vec_L1[3].append(temp380)
            if len(vec_L1[0])>0:
                string = str(len(vec_L1[0]))+' '+str(vec_L1[0][len(vec_L1[0])-1])+'\n'
                testfile.write(string)
        startptr2=0
        endptr2=0
        startptr2 = linestr.find('')
        if (startptr2 != -1):
            endptr2 = linestr.find('', startptr2)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr2 = linestr[startptr2+27:endptr2]
            contents2 = vegvalstr2.split(',')
            for iter2 in contents2:
                number2 = float(iter2)
                vec_temp.append(number2)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_L2[0].append(tempR)
            vec_L2[1].append(temp209)
            vec_L2[2].append(temp244)
            vec_L2[3].append(temp380)
            #print vec_L1[1][0]
        startptr3=0
        endptr3=0
        startptr3 = linestr.find('')
        if (startptr3 != -1):
            endptr3 = linestr.find('', startptr3)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr3 = linestr[startptr3+27:endptr3]
            contents3 = vegvalstr3.split(',')
            for iter3 in contents3:
                number3 = float(iter3)
                vec_temp.append(number3)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_L3[0].append(tempR)
            vec_L3[1].append(temp209)
            vec_L3[2].append(temp244)
            vec_L3[3].append(temp380)
            #print vec_L1[1][0]
        startptr4=0
        endptr4=0
        startptr4 = linestr.find('')
        if (startptr4 != -1):
            endptr4 = linestr.find('', startptr4)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr4 = linestr[startptr4+27:endptr4]
            contents4 = vegvalstr4.split(',')
            for iter4 in contents4:
                number4 = float(iter4)
                vec_temp.append(number4)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_L4[0].append(tempR)
            vec_L4[1].append(temp209)
            vec_L4[2].append(temp244)
            vec_L4[3].append(temp380)
            #print vec_L1[1][0]
        startptr5 = linestr.find('')   ###Slight difference here
        if (startptr5 != -1):
            endptr5 = linestr.find('', startptr5)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr5 = linestr[startptr5+27:endptr5]   ###Slight difference here
            contents5 = vegvalstr5.split(',')
            for iter5 in contents5:
                number5 = float(iter5)
                vec_temp.append(number5)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_L5[0].append(tempR)
            vec_L5[1].append(temp209)
            vec_L5[2].append(temp244)
            vec_L5[3].append(temp380)
            #print vec_L1[1][0]
        startptr6=0
        endptr6=0
        startptr6 = linestr.find('')   ###Slight difference here
        if (startptr6 != -1):
            endptr6 = linestr.find('', startptr6)
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vegvalstr6 = linestr[startptr6+27:endptr6]
            contents6 = vegvalstr6.split(',')
            for iter6 in contents6:
                number6 = float(iter6)
                vec_temp.append(number6)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vec_SL1[0].append(tempR)
            vec_SL1[1].append(temp209)
            vec_SL1[2].append(temp244)
            vec_SL1[3].append(temp380)
            #print vec_L1[1][0]
#        startptr7 = linestr.find('')   ###Slight difference here
#        if (startptr7 != -1):
#            endptr7 = linestr.find('', startptr7)
#            tempR, temp209, temp244, temp380 = 0,0,0,0
#            vec_temp=[]
#            vegvalstr7 = linestr[startptr7+28:endptr7]   ###Slight difference here
#            contents7 = vegvalstr7.split(',')
#            for iter7 in contents7:
#                number7 = float(iter7)
#                vec_temp.append(number7)
#            tempR = calc_mean(vec_temp)
#            temp209 = vec_temp[209]
#            temp244 = vec_temp[244]
#            temp380 = vec_temp[380]
#            vec_SL1a[0].append(tempR)
#            vec_SL1a[1].append(temp209)
#            vec_SL1a[2].append(temp244)
#            vec_SL1a[3].append(temp380)
#            #print vec_L1[1][0]
    testfile.close()
    return vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1
###find, format, parse, calculate###

###findDataMean###
def findDataMean(start_year, end_year):
    vec_dat_cat, vec_dat_saw = [],[]
    ###CATTAIL START###
    cmd_dat = 'find ./raw_data/ -name "'+start_year+'*cat*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines():   # run find command
        full_nameD = fileD[:-1]   # strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        tempR, temp209, temp244, temp380 = 0,0,0,0
        vec_temp=[]
        vecdattemp=[]
        file_data_cat = open(full_nameD,'r')
        data_buff_cat = file_data_cat.readlines()
        file_data_cat.close()
        #CATcounter=1
        for lineC in data_buff_cat:
            itemsC=lineC.split()
            data_datC=float(itemsC[1])
            vec_temp.append(data_datC)
        tempR = calc_mean(vec_temp)
        #print tempR
        temp209 = vec_temp[209]
        temp244 = vec_temp[244]
        temp380 = vec_temp[380]
        vecdattemp.append(tempR)
        vecdattemp.append(temp209)
        vecdattemp.append(temp244)
        vecdattemp.append(temp380)
        vec_dat_cat.append(vecdattemp)
    ###CATTAIL INTERMEDIATE###
    diff = int(end_year)-int(start_year)
    # The remainder of this condition ("if 8<...") and the find command that
    # locates the intermediate-year file were lost when this listing was
    # extracted from the PDF; the loop body below mirrors the START and END
    # blocks.
    if 8 < diff:
        for fileD in os.popen(cmd_dat).readlines():   # run find command
            full_nameD = fileD[:-1]   # strip '\n'
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vecdattemp=[]
            file_data_cat = open(full_nameD,'r')
            data_buff_cat = file_data_cat.readlines()
            file_data_cat.close()
            #CATcounter=1
            for lineC in data_buff_cat:
                itemsC=lineC.split()
                data_datC=float(itemsC[1])
                vec_temp.append(data_datC)
            tempR = calc_mean(vec_temp)
            #print tempR
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vecdattemp.append(tempR)
            vecdattemp.append(temp209)
            vecdattemp.append(temp244)
            vecdattemp.append(temp380)
            vec_dat_cat.append(vecdattemp)
    ###CATTAIL END###
    cmd_dat = 'find ./raw_data/ -name "'+end_year+'*cat*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines():   # run find command
        full_nameD = fileD[:-1]   # strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        tempR, temp209, temp244, temp380 = 0,0,0,0
        vec_temp=[]
        vecdattemp=[]
        file_data_cat = open(full_nameD,'r')
        data_buff_cat = file_data_cat.readlines()
        file_data_cat.close()
        #CATcounter=1
        for lineC in data_buff_cat:
            itemsC=lineC.split()
            data_datC=float(itemsC[1])
            vec_temp.append(data_datC)
        tempR = calc_mean(vec_temp)
        #print tempR
        temp209 = vec_temp[209]
        temp244 = vec_temp[244]
        temp380 = vec_temp[380]
        vecdattemp.append(tempR)
        vecdattemp.append(temp209)
        vecdattemp.append(temp244)
        vecdattemp.append(temp380)
        vec_dat_cat.append(vecdattemp)
    ###SAW START###
    cmd_dat = 'find ./raw_data/ -name "'+start_year+'*saw*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines():   # run find command
        full_nameD = fileD[:-1]   # strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        tempR, temp209, temp244, temp380 = 0,0,0,0
        vec_temp=[]
        vecdattemp=[]
        file_data_saw = open(full_nameD,'r')
        data_buff_saw = file_data_saw.readlines()
        file_data_saw.close()
        #SAWcounter=1
        for lineC in data_buff_saw:
            itemsC=lineC.split()
            data_datC=float(itemsC[1])
            vec_temp.append(data_datC)
        tempR = calc_mean(vec_temp)
        temp209 = vec_temp[209]
        temp244 = vec_temp[244]
        temp380 = vec_temp[380]
        vecdattemp.append(tempR)
        vecdattemp.append(temp209)
        vecdattemp.append(temp244)
        vecdattemp.append(temp380)
        vec_dat_saw.append(vecdattemp)
    ###SAWGRASS INTERMEDIATE###
    diff = int(end_year)-int(start_year)
    # As above, the remainder of this condition and the intermediate-year
    # find command were lost in extraction.
    if 8 < diff:
        for fileD in os.popen(cmd_dat).readlines():   # run find command
            full_nameD = fileD[:-1]   # strip '\n'
            #print full_nameD
            nameD = full_nameD[11:-9]
            dat_match_year = full_nameD[11:-13]
            dat_match_type = full_nameD[16:-9]
            tempR, temp209, temp244, temp380 = 0,0,0,0
            vec_temp=[]
            vecdattemp=[]
            file_data_saw = open(full_nameD,'r')
            data_buff_saw = file_data_saw.readlines()
            file_data_saw.close()
            #SAWcounter=1
            for lineC in data_buff_saw:
                itemsC=lineC.split()
                data_datC=float(itemsC[1])
                vec_temp.append(data_datC)
            tempR = calc_mean(vec_temp)
            temp209 = vec_temp[209]
            temp244 = vec_temp[244]
            temp380 = vec_temp[380]
            vecdattemp.append(tempR)
            vecdattemp.append(temp209)
            vecdattemp.append(temp244)
            vecdattemp.append(temp380)
            vec_dat_saw.append(vecdattemp)
    ###SAW END###
    cmd_dat = 'find ./raw_data/ -name "'+end_year+'*saw*mean.dat" -print'
    for fileD in os.popen(cmd_dat).readlines():   # run find command
        full_nameD = fileD[:-1]   # strip '\n'
        #print full_nameD
        nameD = full_nameD[11:-9]
        dat_match_year = full_nameD[11:-13]
        dat_match_type = full_nameD[16:-9]
        tempR, temp209, temp244, temp380 = 0,0,0,0
        vec_temp=[]
        vecdattemp=[]
        file_data_saw = open(full_nameD,'r')
        data_buff_saw = file_data_saw.readlines()
        file_data_saw.close()
        #SAWcounter=1
        for lineC in data_buff_saw:
            itemsC=lineC.split()
            data_datC=float(itemsC[1])
            vec_temp.append(data_datC)
        tempR = calc_mean(vec_temp)
        temp209 = vec_temp[209]
        temp244 = vec_temp[244]
        temp380 = vec_temp[380]
        vecdattemp.append(tempR)
        vecdattemp.append(temp209)
        vecdattemp.append(temp244)
        vecdattemp.append(temp380)
        vec_dat_saw.append(vecdattemp)
    #print start_year, end_year, vec_dat_cat
    return vec_dat_cat, vec_dat_saw
###findDataMean###

###TSPlot###
def TSPlot(Modvec, Datvec, start_year, end_year, name):
    matplotlib.pyplot.clf()
    matplotlib.pyplot.hold(True)
    #print start_year, end_year, len(Modvec), len(Datvec)
    #print len(Datvec)
    ###Data TS###
    col=['r','b','g','c']
    style=['o']
    datalabelvalues=['R','209','244','380']
    #print len(Datvec) #, len(Datvec[0])
    for TSiter in range(0, len(Datvec), 1):
        #skipdays=TSiter*365*4
        #print TSiter, (len(Datvec)-1)
        SC=0
        CC=0
        ###Start
        for LABiter in range(0, len(Datvec[TSiter]), 1):
            #print TSiter, LABiter #len(Modvec[0])
            data=[None]*(len(Modvec[0]))
            if TSiter==0:
                data[0] = Datvec[TSiter][LABiter]
                #print data[0] #Datvec[0][LABiter]
            if TSiter==(len(Datvec)-1):
                #print (len(Modvec[0]))
                data[len(Modvec[0])-1] = Datvec[TSiter][LABiter]
                #print data[len(Modvec[0])-1] #Datvec[0][LABiter]
            if len(Datvec)==3:
                #print Datvec
                if TSiter==1:
                    data[(365*4)+1] = Datvec[TSiter][LABiter]
                    #print Datvec[TSiter][LABiter] #data[((365*4)+1)]
            #print data[-1:]   #skipdays
            if CC>len(col)-1:
                CC=0
                SC=SC+1
                if SC>len(style)-1:
                    SC=0
            mark=col[CC]+style[SC]
            #datalabel='Data_'+datalabelvalues[LABiter]
            #matplotlib.pyplot.plot(data,mark,label=datalabel)
            matplotlib.pyplot.plot(data,mark)
            CC=CC+1
    ###MODEL###
    if name.find('Cattail_All') != -1:
        modellabelvalues=['L1','L2','L3','L4','L5']
        ylim=1500
    elif name.find('Sawgrass_All') != -1:
        modellabelvalues=['SL1']
        ylim=2000
    elif name.find('Cattail_Level') != -1:
        modellabelvalues=['R','209','244','380']
        ylim=1500
    elif name.find('Sawgrass_Level') != -1:
        modellabelvalues=['R','209','244','380']
        ylim=2000
    col=['r','b','g','c']
    style=['--','-.','+','o']
    SC=0
    CC=0
    count=0
    for vec in Modvec:
        if CC>len(col)-1:
            CC=0
            SC=SC+1
            if SC>len(style)-1:
                SC=0
        mark=col[CC]+style[SC]
        modellabel='Model_'+modellabelvalues[count]
        matplotlib.pyplot.plot(vec,mark,label=modellabel)
        CC=CC+1
        count=count+1
    ###General###
    matplotlib.pyplot.ylim(-50,ylim)
#    matplotlib.pyplot.xlim(1200,1700)   # just checking initial trend; not to be used for full implementation
    matplotlib.pyplot.xlabel('Time (days)')
    matplotlib.pyplot.ylabel('Density (g/m2)')
    matplotlib.pyplot.legend(loc=9)
    title=name+' '+start_year+' to '+end_year

PAGE 188

188 matplotl ib.pyplot.title(title) # matplotlib.pyplot.show() matplotlib.pyplot.savefig('./analysis/'+title) matplotlib.pyplot.hold(False) matplotlib.pyplot.clf() ###TSPlot### ####### #BEGIN# ####### #flush = sys.stdout.flush() def timeseries(): cm d = 'find ./output/ name "xmlOut_*.xml" print' for file in os.popen(cmd).readlines(): full_name = file[: 1 ] #strip \ n' filename = full_name[9:] end_year = filename[13: 4] start_year = filename[7: 10] #flush print full_name, filename, start_year, end_year #flush vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1 = [],[],[],[],[],[] vec_L1, vec_L2, vec_L3, vec_L4, vec_L5, vec_SL1 = Parse(full_name) #print vec_L1[0] # v ec_datA_cat, vec_datB_cat, vec_datA_saw, vec_datB_saw= [],[],[],[] # vec_datA_cat, vec_datB_cat, vec_datA_saw, vec_datB_saw = findDataMean(start_year, end_year) vec_dat_cat, vec_dat_saw = [],[] vec_dat_cat, vec_dat_saw = findDataMean (start_year, end_year) #print vec_datA_cat, vec_datB_cat #print NS_SL1, NS_SL1a name='Cattail_Level1_Complexity' TSPlot(vec_L1, vec_dat_cat, start_year, end_year, name) name='Cattail_Level2_Complexity' TSPlot (vec_L2, vec_dat_cat, start_year, end_year, name) name='Cattail_Level3_Complexity' TSPlot(vec_L3, vec_dat_cat, start_year, end_year, name) name='Cattail_Level4_Complexity' TSPlot(vec_L4, vec_dat_cat, start_year, end_year, na me) name='Cattail_Level5_Complexity' TSPlot(vec_L5, vec_dat_cat, start_year, end_year, name) name='Sawgrass_Level1_Complexity' TSPlot(vec_SL1, vec_dat_saw, start_year, end_year, name) temp_vec_cat=[] for iter in range(0,len(vec_dat_cat),1): temp_vec=[]

PAGE 189

189 temp_vec.append(vec_dat_cat[iter][0]) temp_vec_cat.append(temp_vec) temp_vec_saw=[] for iter in range(0,len(vec_dat_saw),1): temp_vec=[] temp_vec.append(vec_dat_saw[iter][0]) temp_vec_saw.append(temp_vec) #print temp_vec_cat name='Cattail_All_Complexities' #print vec_L1 TSPlot([vec_L1[0],vec_L2[0],vec_L3[0],vec_L4[0],vec_L5[0]], temp_vec_c at, start_year, end_year, name) name='Sawgrass_All_Complexities' TSPlot([vec_SL1[0]], temp_vec_saw, start_year, end_year, name) ##### #RUN# ##### #timeseries()
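The observation-alignment trick that TSPlot uses can be isolated for clarity: observed densities exist only at the vegetation-map years, so each observation series is padded with None at every simulated day and plotted as isolated markers on the same axis as the daily model trace. The sketch below is a minimal stand-alone version; the day offsets (0, 365*4+1, and the final day) follow the 1991/1995/2003 map years assumed in the script above, and the observation values are placeholders, not model output.

```python
def align_observations(obs, n_days):
    """Place point observations into a length-n_days series of None,
    so matplotlib plots them as isolated markers over a daily trace."""
    data = [None] * n_days
    data[0] = obs[0]                  # first map year at day 0
    if len(obs) == 3:
        data[365 * 4 + 1] = obs[1]    # middle map year, four years in
    data[n_days - 1] = obs[-1]        # last map year at the final day
    return data

# Placeholder observed densities (g/m2) over a 12-year daily simulation.
aligned = align_observations([310.0, 455.0, 620.0], 365 * 12)
print(sum(v is not None for v in aligned))  # -> 3
```

Plotting such a mostly-None list draws nothing at the missing days, which is why the data markers appear only at the map years in the time-series figures.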

APPENDIX C
MANAGEMENT TIME SERIES SCRIPTS FOR LEVEL 4

CreateXMLs_mgmt.py

####################################################################
###run through the (.sam) file and generate xml files###
####################################################################
###Based on given parameters and row number###
###Import relevant libraries###
import os

###read (.sam) file###
cmd = 'find . -name "*.sam" -print'
os.popen('rm -rf ./runmgmt*.xml')
os.popen('rm -rf ./wqmgmt*.xml')
rows=0
for file in os.popen(cmd).readlines():
    fullname=file[:-1]
    openfile = open(fullname,'r')
    samfile=openfile.readlines()
    openfile.close()
    rows = int(samfile[1].split()[0])
#print rows
for rowiter in range(0,rows,1):
    #print samfile[rowiter+4]
    vars = samfile[rowiter+4].split()
    #print vars
    CATGF = vars[0]
    CATDF = vars[1]
    Dmgmt = vars[2]
    Pmgmt = vars[3]
    Sawmgmt = vars[4]
    ###creating new run and wq files###
    NewNameWQ = 'wqmgmt'+str(rowiter+1)+'.xml'
    NewNameRun = 'runmgmt'+str(rowiter+1)+'.xml'
    os.popen('cp ./wqBASEmgmt ./'+NewNameWQ)
    os.popen('cp ./runBASEmgmt ./'+NewNameRun)
    ###Edit new files, changing vars###
    ###SED is a shell command to find and replace strings###
    os.popen('sed -i \'s|WQNUMBER|'+str(rowiter+1)+'|g\' '+NewNameRun)
    #catmap = str(Cat)+'.dat'
    #os.popen('sed -i \'s|CATMAP|'+catmap+'|g\' '+NewNameWQ)
    #os.popen('sed -i \'s|SAWGRASS|'+str(Saw)+'|g\' '+NewNameWQ)
    os.popen('sed -i \'s|CATGROW|'+str(CATGF)+'|g\' '+NewNameWQ)
    os.popen('sed -i \'s|CATDIE|'+str(CATDF)+'|g\' '+NewNameWQ)
    ###Depth Cases###
    if(Dmgmt==str(1)): #ALTD==0
        os.popen('sed -i \'s|SETDEPTH|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTD|0|g\' '+NewNameWQ)
    elif(Dmgmt==str(2)):
        os.popen('sed -i \'s|SETDEPTH|50|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTD|0|g\' '+NewNameWQ)
    elif(Dmgmt==str(3)):
        os.popen('sed -i \'s|SETDEPTH|300|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTD|0|g\' '+NewNameWQ)
    elif(Dmgmt==str(4)): #ALTD==100 even
        os.popen('sed -i \'s|SETDEPTH|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTD|100|g\' '+NewNameWQ)
    elif(Dmgmt==str(5)): #ALTD==100 odd
        os.popen('sed -i \'s|SETDEPTH|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTD|100|g\' '+NewNameWQ)
        os.popen('sed -i \'s|deep, dry|dry, deep|g\' '+NewNameWQ)
    ###Phosphorus Cases###
    if(Pmgmt==str(1)): #ALTP==0
        os.popen('sed -i \'s|SETPHOSPHORUS|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|0|g\' '+NewNameWQ)
    elif(Pmgmt==str(2)):
        os.popen('sed -i \'s|SETPHOSPHORUS|600|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|0|g\' '+NewNameWQ)
    elif(Pmgmt==str(3)):
        os.popen('sed -i \'s|SETPHOSPHORUS|1500|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|0|g\' '+NewNameWQ)
    elif(Pmgmt==str(4)): #ALTP==100 even
        os.popen('sed -i \'s|SETPHOSPHORUS|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|100|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTPP|100|g\' '+NewNameWQ)
    elif(Pmgmt==str(5)): #ALTP==100 odd
        os.popen('sed -i \'s|SETPHOSPHORUS|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|100|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTPP|100|g\' '+NewNameWQ)
        os.popen('sed -i \'s|high, low|low, high|g\' '+NewNameWQ)
    elif(Pmgmt==str(6)):
        os.popen('sed -i \'s|SETPHOSPHORUS|0|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTP|1000|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ALTPP|1000|g\' '+NewNameWQ)
    ###Sawgrass Cases###
    if(Sawmgmt==str(1)):
        os.popen('sed -i \'s|SAWGRASS|'+str(300)+'|g\' '+NewNameWQ)
    elif(Sawmgmt==str(2)):
        os.popen('sed -i \'s|SAWGRASS|'+str(900)+'|g\' '+NewNameWQ)
    elif(Sawmgmt==str(3)):
        os.popen('sed -i \'s|SAWGRASS|'+str(1500)+'|g\' '+NewNameWQ)
    #os.popen('sed -i \'s|SAWGROW|'+str(SAWGF)+'|g\' '+NewNameWQ)
    #os.popen('sed -i \'s|SAWDIE|'+str(SAWDF)+'|g\' '+NewNameWQ)
    #os.popen('sed -i \'s|SETDEPTH|'+str(Depth)+'|g\' '+NewNameWQ)
    #os.popen('sed -i \'s|SETPHOSPHORUS|'+str(Phos)+'|g\' '+NewNameWQ)
    os.popen('sed -i \'s|ROWNUMBER|'+str(rowiter+1)+'|g\' '+NewNameWQ)
    #print vars

################################################################################
###Make Batches and create Jobs###
###Get file names###
###split into batches of given size###
filenames=[]
BATCH=[]
batchsize=1 ###!!!IMPORTANT!!!###
TempFileList=[]
tempfilecount=0
cmd = 'find ./ -name "runmgmt*.xml" -print'
#print rows
#print batchsize
for file in os.popen(cmd).readlines():
    fullname=file[:-1]
    #print file
    namenumber = fullname[5:-4]
    #print namenumber
    filenames.append(fullname)
    TempFileList.append(fullname)
    if len(TempFileList)>=batchsize:
        skip = batchsize*tempfilecount
        BATCH.append(filenames[skip:])
        tempfilecount=tempfilecount+1
        del TempFileList[:]
finalskip=batchsize*tempfilecount
BATCH.append(filenames[finalskip:])
del TempFileList[:]
del filenames[:]
#print '\nBATCH: '+str(BATCH)+'\n'

#########################################################################
###create batch jobs###
#print len(BATCH)
#ScriptName = 'HSEBatch_mgmt'
cmdClearSh = 'rm -rf *.sh'
cmdClearOutJob = 'rm -rf ./outjob/*'
cmdClearOutput = 'rm -rf ./output/*'
os.popen(cmdClearSh)
os.popen(cmdClearOutJob)
os.popen(cmdClearOutput)
#print len(BATCH)
#BATCH.sort()
for batchiter in range(1,len(BATCH),1):
    #newName = ScriptName+str(batchiter+1)+'.sh'
    #cmdCopy = 'cp -rf ./'+ScriptName+' ./'+newName
    #os.popen(cmdCopy)
    #print batchiter#newName
    #appendfile = open(newName,'a')
    for fileiter in BATCH[batchiter-1]:
        print fileiter
        jobstring='python pythonTSmgmt.py '+fileiter
        os.popen(jobstring)
        #appendfile.write(jobstring)
    #appendfile.close()
    #submitstring='qsub '+newName
    #os.popen(submitstring)
    #os.popen('qstat -u gareth83')
    print 'Finished Submitting '+str(batchiter)+' Jobs'
########################################################################
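The sed calls in the script above do simple placeholder substitution in copied XML templates. The same effect can be had portably in Python with str.replace, which avoids quoting pitfalls when shelling out. The sketch below uses the placeholder tokens from the script (CATGROW, CATDIE); the template text itself is a stand-in, not the real wqBASEmgmt file.

```python
# Stand-in template fragment; the real file is the copied wqBASEmgmt XML.
template = "<param name='growth'>CATGROW</param><param name='death'>CATDIE</param>"

def fill_template(text, substitutions):
    """Replace each placeholder token with its value,
    equivalent to sed 's|TOKEN|value|g' on the whole file."""
    for token, value in substitutions.items():
        text = text.replace(token, str(value))
    return text

filled = fill_template(template, {'CATGROW': 6.7e-09, 'CATDIE': 10})
print(filled)
# -> <param name='growth'>6.7e-09</param><param name='death'>10</param>
```

Reading the template once, substituting in memory, and writing the result also removes the need to copy the base file before editing it in place.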

TSInputs.sam

0
21 #63 total if including Sawgrass initial densities
5
0
6.7e-09 10 3 3 1 #1 H H L
6.7e-09 10 1 1 1 #2 L L L
6.7e-09 10 3 1 1 #3 H L L
6.7e-09 10 1 3 1 #4 L H L
6.7e-09 10 2 3 1 #5 M H L
6.7e-09 10 2 2 1 #6 M M L
6.7e-09 10 2 1 1 #7 M L L
6.7e-09 10 3 2 1 #8 H M L
6.7e-09 10 1 2 1 #9 L M L
6.7e-09 10 4 4 1 #10 EH EH L
6.7e-09 10 5 4 1 #11 EL EH L
6.7e-09 10 3 6 1 #12 H D L
6.7e-09 10 2 6 1 #13 M D L
6.7e-09 10 1 6 1 #14 L D L
6.7e-09 10 4 6 1 #15 EH D L
6.7e-09 10 3 4 1 #16 H EH L
6.7e-09 10 2 4 1 #17 M EH L
6.7e-09 10 1 4 1 #18 L EH L
6.7e-09 10 4 3 1 #19 EH H L
6.7e-09 10 4 2 1 #20 EH M L
6.7e-09 10 4 1 1 #21 EH L L
6.7e-09 10 3 3 2 # H H M
6.7e-09 10 1 1 2 # L L M
6.7e-09 10 3 1 2 # H L M
6.7e-09 10 1 3 2 # L H M
6.7e-09 10 2 3 2 # M H M
6.7e-09 10 2 2 2 # M M M
6.7e-09 10 2 1 2 # M L M
6.7e-09 10 3 2 2 # H M M
6.7e-09 10 1 2 2 # L M M
6.7e-09 10 4 4 2 # EH EH M
6.7e-09 10 5 4 2 # EL EH M
6.7e-09 10 3 6 2 # H D M
6.7e-09 10 2 6 2 # M D M
6.7e-09 10 1 6 2 # L D M
6.7e-09 10 4 6 2 # EH D M
6.7e-09 10 3 4 2 # H EH M
6.7e-09 10 2 4 2 # M EH M
6.7e-09 10 1 4 2 # L EH M
6.7e-09 10 4 3 2 # EH H M
6.7e-09 10 4 2 2 # EH M M
6.7e-09 10 4 1 2 # EH L M
6.7e-09 10 3 3 3 # H H H
6.7e-09 10 1 1 3 # L L H
6.7e-09 10 3 1 3 # H L H
6.7e-09 10 1 3 3 # L H H
6.7e-09 10 2 3 3 # M H H
6.7e-09 10 2 2 3 # M M H
6.7e-09 10 2 1 3 # M L H
6.7e-09 10 3 2 3 # H M H
6.7e-09 10 1 2 3 # L M H
6.7e-09 10 4 4 3 # EH EH H
6.7e-09 10 5 4 3 # EL EH H
6.7e-09 10 3 6 3 # H D H
6.7e-09 10 2 6 3 # M D H
6.7e-09 10 1 6 3 # L D H
6.7e-09 10 4 6 3 # EH D H
6.7e-09 10 3 4 3 # H EH H
6.7e-09 10 2 4 3 # M EH H
6.7e-09 10 1 4 3 # L EH H
6.7e-09 10 4 3 3 # EH H H
6.7e-09 10 4 2 3 # EH M H
6.7e-09 10 4 1 3 # EH L H

pythonTSmgmt.py

###import necessary libraries###
import os, sys, math, matplotlib.pyplot

#################################################################
###Border cells of the WCA2A mesh, excluded from regional statistics###
BORDER_CELLS = ",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,"

###Function for calculating the regional mean (interior cells only)###
def RMeanFunction(strlist):
    sum=0
    meanF=0
    counter=1
    for iter in strlist:
        if (BORDER_CELLS.find(','+str(counter)+',')==-1):
            #if iter=='inf' or iter=='-inf' or float(iter)<0.001:
            #    number=0
            #else:
            number=float(iter)
            sum=sum+number
        counter=counter+1
    meanF=sum/385 #385 interior cells
    return meanF

###Function for calculating the mean of a local subregion###
def MeanFunction(strlist):
    sum=0
    meanF=0
    counter=1
    for iter in strlist:
        #if iter=='inf' or iter=='-inf' or float(iter)<0.001:
        #    number=0
        #else:
        number=float(iter)
        sum=sum+number
        counter=counter+1
    meanF=sum/(counter-1) #counter ends one past the number of values
    return meanF

###Function for calculating standard deviation###
def RSTDEVFunction(strlist, meanF):
    var=0
    sum=0
    square=0
    counter=1
    for iter in strlist:
        if (BORDER_CELLS.find(','+str(counter)+',')==-1):
            number=float(iter)
            #print number, meanF
            square = math.pow((number-meanF),2)
            sum=sum+square
        counter=counter+1
    var=math.sqrt(sum/385)
    return var

def STDEVFunction(strlist, meanF):
    var=0
    sum=0
    square=0
    counter=1
    for iter in strlist:
        number=float(iter)
        #print number, meanF
        square = math.pow((number-meanF),2)
        sum=sum+square
        counter=counter+1
    var=math.sqrt(sum/(counter-1))
    return var

########################################################################
###Run hse based on file input as argument at runtime###
filename = sys.argv[1][2:]
filenumber = filename[7:-4] #[7:-4] for normal. [9:-4] for batch. don't know why???
#print filename
#print filenumber
xmlfilename = str(filenumber)+'.xml'
xmlfilebuffer=[]
firsttime_begin=0
firsttime_end=0
lasttime_begin=0
lasttime_end=0
InitialValues=""
FinalValues=""
#print xmlfilename
if (os.popen('../hse '+filename)):
    ###do post processing here. cut file size down###
    #print filename+' '+filenumber
    xmlfile = open('./output/'+xmlfilename,'r')
    xmlfilebuffer = xmlfile.readlines()
    xmlfile.close()
    os.popen('rm -rf ./output/'+xmlfilename)
    STAT_ARR=[]
    temp_arr = []
    R_DM=[]
    R_SDEV=[]
    NE_DM=[]
    NE_SDEV=[]
    CE_DM=[]
    CE_SDEV=[]
    SW_DM=[]
    SW_SDEV=[]
    statfile = open('./output/'+filenumber+'.dat','w')
    for line in xmlfilebuffer:
        tempmean=0
        tempstdev=0
        temp_NE_mean=0
        temp_NE_stdev=0
        temp_CE_mean=0
        temp_CE_stdev=0
        temp_SW_mean=0
        temp_SW_stdev=0
        splitline = line.split()
        if len(splitline)>0:
            #print splitline[0]
            if splitline[0][0]=='<': #line holding the comma-separated element values
                start=line.find('>')
                end=line.rfind('<')
                values = line[start+1:end].split(', ') #ignore 1st and last value, should be zero anyway
                #print values
                temp_arr=[]
                ###Regional###
                tempmean = RMeanFunction(values)
                temp_arr.append(tempmean)
                R_DM.append(tempmean)
                tempstdev = RSTDEVFunction(values, tempmean)
                temp_arr.append(tempstdev)
                R_SDEV.append(tempstdev)
                ###NE (High)###
                temp_NE_mean = MeanFunction(values[174:179])
                temp_arr.append(temp_NE_mean)
                NE_DM.append(temp_NE_mean)
                temp_NE_stdev = STDEVFunction(values[174:179],temp_NE_mean)
                temp_arr.append(temp_NE_stdev)
                NE_SDEV.append(temp_NE_stdev)
                ###Central (Med)###
                temp_CE_mean = MeanFunction(values[279:282])
                temp_arr.append(temp_CE_mean)
                CE_DM.append(temp_CE_mean)
                temp_CE_stdev = STDEVFunction(values[279:282],temp_CE_mean)
                temp_arr.append(temp_CE_stdev)
                CE_SDEV.append(temp_CE_stdev)
                ###SW (Low)###
                temp_SW_mean = MeanFunction(values[375:379])
                temp_arr.append(temp_SW_mean)
                SW_DM.append(temp_SW_mean)
                temp_SW_stdev = STDEVFunction(values[375:379],temp_SW_mean)
                temp_arr.append(temp_SW_stdev)
                SW_SDEV.append(temp_SW_stdev)
                #print temp_arr
                STAT_ARR.append(temp_arr)
                tempstr = ''.join(str(temp_arr))
                #linestr = linestr+''.join(tempstr)
                #print linestr
                statfile.write(tempstr)
                statfile.write('\n')
                #print temp_arr
    #print STAT_ARR
    ####################################################################
    ###Write stats to file###
    #print statfile
    #for iter in range(0,len(STAT_ARR),1):
    ###PRINTING###
    #for i in STAT_ARR:
    #    R_DM.append(i[0])
    #    #print R_DM[0]
    #    R_SDEV.append(i[1])
    #    NE_DM.append(i[2])
    #    NE_SDEV.append(i[3])
    #    CE_DM.append(i[4])
    #    CE_SDEV.append(i[5])
    #    SW_DM.append(i[6])
    #    SW_SDEV.append(i[7])
    ##print R_DM#len(STAT_ARR)
    matplotlib.pyplot.hold(True)
    matplotlib.pyplot.plot(range(1,len(R_DM)+1),R_DM,'r+')
    #matplotlib.pyplot.plot(range(1,len(R_SDEV)+1),R_SDEV,'r-')
    #matplotlib.pyplot.plot(range(1,len(NE_DM)+1),NE_DM,'b+')
    #matplotlib.pyplot.plot(range(1,len(NE_SDEV)+1),NE_SDEV,'b-')
    #matplotlib.pyplot.plot(range(1,len(CE_DM)+1),CE_DM,'y+')
    #matplotlib.pyplot.plot(range(1,len(CE_SDEV)+1),CE_SDEV,'y-')
    #matplotlib.pyplot.plot(range(1,len(SW_DM)+1),SW_DM,'g+')
    #matplotlib.pyplot.plot(range(1,len(SW_SDEV)+1),SW_SDEV,'g-')
    matplotlib.pyplot.xlabel('time (days)')
    matplotlib.pyplot.ylabel('Mean Density (g/m^2)')
    matplotlib.pyplot.axis([1,(len(R_DM)+1),0,1240])
    matplotlib.pyplot.title('Time Series '+filenumber)
    #matplotlib.pyplot.show()
    matplotlib.pyplot.savefig('./output/'+filenumber+'.png')
    matplotlib.pyplot.clf()
    matplotlib.pyplot.hold(False)
    statfile.close()
    del STAT_ARR
    print 'done'
else:
    print 'failed'
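The regional statistics in pythonTSmgmt.py exclude WCA2A border cells by substring-searching a long comma-delimited string of element numbers. A set of integers performs the same membership test in O(1) and is easier to maintain. The sketch below is a stand-alone illustration of that pattern; BORDER here is a small stand-in, not the full border-cell list from the script.

```python
import math

# Stand-in border list; the real script excludes roughly 125 of 510 elements.
BORDER = {1, 2, 3, 4, 5}

def regional_mean_std(values, border, n_interior):
    """Mean and population standard deviation over interior cells only,
    mirroring the fixed /385 divisor used in the script above."""
    total = 0.0
    for idx, v in enumerate(values, start=1):  # element numbers are 1-based
        if idx not in border:
            total += float(v)
    mean = total / n_interior
    sq = sum((float(v) - mean) ** 2
             for idx, v in enumerate(values, start=1) if idx not in border)
    return mean, math.sqrt(sq / n_interior)

# Eight cells, five of them border cells, three interior cells of density 10.
m, s = regional_mean_std([10.0] * 8, BORDER, n_interior=3)
print(m, s)  # -> 10.0 0.0
```

Using a set also avoids the subtle failure mode of the string search, where an element number missing its surrounding commas silently matches nothing.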

pypostplot_1.py

import os, matplotlib.pyplot, matplotlib, numpy

#os.popen('rm -rf ./runmgmt*.xml')
#os.popen('rm -rf ./wqmgmt*.xml')
#rows=0
R_DM=[]
R_SDEV=[]
NE_DM=[]
NE_SDEV=[]
CE_DM=[]
CE_SDEV=[]
SW_DM=[]
SW_SDEV=[]
col=['b','g','r','c','m','y','k']
mark=['-','-.',':',',']
j=0
k=0
matplotlib.pyplot.hold(True)
for i in range(1,22,1):
    cmd = 'find ./output/ -name "'+str(i)+'.dat" -print'
    for file in os.popen(cmd).readlines():
        fullname=file[:-1]
        namenumber=fullname[9:-4]
        print namenumber
        openfile = open(fullname,'r')
        samfile=openfile.readlines()
        openfile.close()
        list_rdm=[]
        list_rsdev=[]
        list_nedm=[]
        list_nesdev=[]
        list_swdm=[]
        list_swsdev=[]
        for row in samfile:
            line=row[1:-3].split(', ')
            #print line
            temp_rdm=float(line[0])
            list_rdm.append(temp_rdm)
            #print temp_rdm
            temp_rsdev=float(line[1])
            list_rsdev.append(temp_rsdev)
            temp_nedm=float(line[2])
            list_nedm.append(temp_nedm)
            temp_nesdev=float(line[3])
            list_nesdev.append(temp_nesdev)
            temp_cedm=float(line[4])
            temp_cesdev=float(line[5])
            temp_swdm=float(line[6])
            list_swdm.append(temp_swdm)
            temp_swsdev=float(line[7])
            list_swsdev.append(temp_swsdev)
        R_DM.append(list_rdm)
        R_SDEV.append(list_rsdev)
        NE_DM.append(list_nedm)
        NE_SDEV.append(list_nesdev)
        SW_DM.append(list_swdm)
        SW_SDEV.append(list_swsdev)
        if j>(len(col)-1):
            j=0
            #k=k+1
        if k>(len(mark)-1):
            k=0
        colmark=col[j]+mark[k]
        j=j+1
        k=k+1
        #matplotlib.pyplot.plot(range(1,len(list_rdm)+1),list_rdm,colmark,label=namenumber)
        #print R_DM
        #print row
for i in range(0,len(R_DM)):
    #print i
    #j=i
    if j>(len(col)-1):
        j=0
        #k=k+1
    if k>(len(mark)-1):
        k=0
    colmark=col[j]+mark[k]
    j=j+1
    k=k+1
    matplotlib.pyplot.plot(range(1,len(R_DM[i])+1),R_DM[i],colmark,label=str(i+1))
    #matplotlib.pyplot.plot(range(1,len(NE_DM[i])+1),NE_DM[i],colmark)
    #matplotlib.pyplot.plot(range(1,len(SW_DM[i])+1),SW_DM[i],colmark)
matplotlib.pyplot.xlabel('time (days)')
matplotlib.pyplot.ylabel('Mean Density (g/m^2)')
matplotlib.pyplot.title('Density Timeseries')
#matplotlib.pyplot.legend()
matplotlib.pyplot.legend(loc=(1.0,0.0))
matplotlib.pyplot.axis([1,len(list_rdm)+1,0,1240])
#matplotlib.pyplot.show()
matplotlib.pyplot.savefig('./output/R_DM_total_altLEG.png')
matplotlib.pyplot.clf()
matplotlib.pyplot.hold(False)

APPENDIX D
SEQUENTIAL INDICATOR SIMULATION FILES

SISIM.par

Parameters for SISIM
********************

START OF PARAMETERS:
1                               1=continuous(cdf), 0=categorical(pdf)
6                               number thresholds/categories
1 2 3 4 5 6                     thresholds / categories
0.04 0.73 0.01 0.12 0.09 0.01   global cdf / pdf
rawdata.dat                     file with data
2 3 0 1                         columns for X,Y,Z, and variable
direct.ik                       file with soft indicator input
1 2 0 3 4 5 6 7                 columns for X,Y,Z, and indicators
0                               Markov-Bayes simulation (0=no,1=yes)
0.61 0.54 0.56 0.53 0.29        calibration B(z) values
-1.0 1.0e21                     trimming limits
1.0 1240.0                      minimum and maximum data value
1 1.0                           lower tail option and parameter
1 372.0                         middle option and parameter
1 1240.0                        upper tail option and parameter
cluster.dat                     file with tabulated values
3 0                             columns for variable, weight
0                               debugging level: 0,1,2,3
sisim.dbg                       file for debugging output
sisim.out                       file for simulation output
250                             number of realizations
472 546353.20 50.0              nx,xmn,xsiz
636 2895789.95 50.0             ny,ymn,ysiz
1 1.0 10.0                      nz,zmn,zsiz
69069                           random number seed
12                              maximum original data for each kriging
10                              maximum previous nodes for each kriging
1                               maximum soft indicator nodes for kriging
1                               assign data to nodes? (0=no,1=yes)
1 3                             multiple grid search? (0=no,1=yes),num
0                               maximum per octant (0=not used)
5000.0 5000.0 100.0             maximum search radii
0.0 0.0 0.0                     angles for search ellipsoid
51 51 11                        size of covariance lookup table
0 2.5                           0=full IK, 1=median approx. (cutoff)
0                               0=SK, 1=OK
1 0.015                         One: nst, nugget effect
1 0.01 0.0 0.0 0.0              it,cc,ang1,ang2,ang3
9000.0 9000.0 9000.0            a_hmax, a_hmin, a_vert
1 15                            Two: nst, nugget effect
1 3 0.0 0.0 0.0                 it,cc,ang1,ang2,ang3
3120.0 3120.0 3120.0            a_hmax, a_hmin, a_vert
1 100                           Three: nst, nugget effect
1 500 0.0 0.0 0.0               it,cc,ang1,ang2,ang3
11880.0 11880.0 11880.0         a_hmax, a_hmin, a_vert
1 10000                         Four: nst, nugget effect
1 5000 0.0 0.0 0.0              it,cc,ang1,ang2,ang3
1320.0 1320.0 1320.0            a_hmax, a_hmin, a_vert
1 50000                         Five: nst, nugget effect
1 10000 0.0 0.0 0.0             it,cc,ang1,ang2,ang3
1440.0 1440.0 1440.0            a_hmax, a_hmin, a_vert
1 2000                          Six: nst, nugget effect
1 4000 0.0 0.0 0.0              it,cc,ang1,ang2,ang3
3000.0 3000.0 3000.0            a_hmax, a_hmin, a_vert

Addcoord.par

Parameters for ADDCOORD
***********************

START OF PARAMETERS:
sisim.out                       file with data
addcoordiii.out                 file for output
iii                             realization number
472 546353.20 50                nx,xmn,xsiz
636 2895789.95 50               ny,ymn,ysiz
1 0.5 1.0                       nz,zmn,zsiz

MainMapPy.py

#################################################
###Create multiple batchfiles to submit to hpc###
#################################################
###import relevant libraries###
import os

##################################################
###Get file names###
###split into batches of given size###
filenames=[]
BATCH=[]
batchsize=10
TempFileList=[]
tempfilecount=0
cmd = 'find ./input/ -name "coord*.csv" -print'
for file in os.popen(cmd).readlines():
    fullname=file[:-1]
    namenumber = fullname[13:-4]
    filenames.append(fullname)
    TempFileList.append(fullname)
    if len(TempFileList)>=batchsize:
        skip = 10*tempfilecount
        BATCH.append(filenames[skip:])
        tempfilecount=tempfilecount+1
        del TempFileList[:]
finalskip=10*tempfilecount
BATCH.append(filenames[finalskip:])
del TempFileList[:]
del filenames[:]
#print '\nBATCH: '+str(BATCH)+'\n'

#################################################
###create batch jobs###
#print len(BATCH)
ScriptName = 'MappingScript'
cmdClearSh = 'rm -rf *.sh'
cmdClearOutMap = 'rm -rf ./outmap/*'
cmdClearOutJob = 'rm -rf ./outjob/*'
os.popen(cmdClearSh)
os.popen(cmdClearOutMap)
os.popen(cmdClearOutJob)
for batchiter in range(0,len(BATCH),1):
    newName = ScriptName+str(batchiter+1)+'.sh'
    cmdCopy = 'cp -rf ./'+ScriptName+' ./'+newName
    os.popen(cmdCopy)
    #print newName
    appendfile = open(newName,'a')
    for fileiter in BATCH[batchiter]:
        jobstring='python MapProcessing_files_MAIN_2011_03_13.py '+fileiter+'\n'
        appendfile.write(jobstring)
    appendfile.close()
    submitstring='qsub '+newName
    os.popen(submitstring)
    os.popen('qstat -u gareth83')
    print 'Finished Submitting '+str(batchiter+1)+' Jobs'
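The batching in MainMapPy.py works because the filenames[skip:] slice is taken at the exact moment the list length reaches a multiple of the batch size, which is easy to break when editing. A stand-alone chunking helper, sketched below with hypothetical file names, expresses the same intent directly and yields the same non-overlapping batches.

```python
def make_batches(filenames, batchsize):
    """Split a file list into consecutive batches of at most batchsize items."""
    return [filenames[i:i + batchsize]
            for i in range(0, len(filenames), batchsize)]

# Hypothetical coord*.csv names standing in for the real find output.
batches = make_batches(['coord1.csv', 'coord2.csv', 'coord3.csv'], 2)
print(batches)  # -> [['coord1.csv', 'coord2.csv'], ['coord3.csv']]
```

A helper like this also removes the trailing empty batch that the original loop can append when the file count is an exact multiple of the batch size.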

MapProcessing_files_MAIN_2011_03_13.py

#####################################################################################
###Create element map input files based on SIS raster grid overlaid with mesh file###
#####################################################################################
###import relevant libraries###
import numpy, os, sys

################################################################################
###Open mesh file, read elements###
openfile = open('./mesh.xml','r')
meshfile = openfile.readlines()
openfile.close()
VERTICES = []
ELEMENTS = []
TempVert = []
vertexlist = []
###Loop through buffer line by line###
for line in meshfile:
    #vertexstring = line[1:7]
    elementstring = line[1:17]
    xbegin=0
    xend=0
    ybegin=0
    yend=0
    listbegin=0
    listend=0
    ###if line matches, extract vertex coordinates and append to list of vertices###
    #if vertexstring == "vertex":
    #    xbegin = line.find('')+3
    #    xend = line.find('')
    #    Xcoord = float(line[xbegin:xend])
    #    ybegin = line.find('')+3
    #    yend = line.find('')
    #    Ycoord = float(line[ybegin:yend])
    #    TempVert = numpy.array([Xcoord,Ycoord])
    #    VERTICES.append(TempVert)
    ###if line matches, extract vertex numbers and append to list of elements###
    if elementstring == "element_vertices":
        listbegin = line.find('>')+1
        listend = line.rfind('<')
        vertexlist = line[listbegin:listend].split(', ')
        ELEMENTS.append(vertexlist)
del meshfile[:]

################################################################################
###Open Nodes file, read vertices###
openfile = open('./ProjectedNodes_mod.csv','r')
nodefile = openfile.readlines()
openfile.close()
###Loop through buffer line by line###
for line in nodefile:
    linevector = line.split(',')
    Xcoord=float(linevector[1])
    Ycoord=float(linevector[2])
    TempVert = numpy.array([Xcoord, Ycoord])
    VERTICES.append(TempVert)

################################################################################
###Barycentric detection method from http://www.blackpawn.com/texts/pointinpoly/default.html###
###Barycentric constants###
A=[]
B=[]
C=[]
v0=[]
v1=[]
dot00=0
dot01=0
dot11=0
invDenom=0
BaryElmConstants=[]
for ElementVertices in ELEMENTS:
    A = VERTICES[int(ElementVertices[0])-1]
    B = VERTICES[int(ElementVertices[1])-1]
    C = VERTICES[int(ElementVertices[2])-1]
    v0 = C - A
    v1 = B - A
    dot00 = numpy.dot(v0,v0)
    dot01 = numpy.dot(v0,v1)
    dot11 = numpy.dot(v1,v1)
    invDenom = 1/((dot00*dot11)-(dot01*dot01))
    BaryElmConstants.append([A,B,C,v0,v1,dot00,dot01,dot11,invDenom])
del ELEMENTS[:]
del VERTICES[:]

def BaryTest(pointvector,baryelementvector):
    P=pointvector[0:2]
    A=baryelementvector[0]
    B=baryelementvector[1]
    C=baryelementvector[2]
    v0=baryelementvector[3]
    v1=baryelementvector[4]
    v2=P - A
    dot00=baryelementvector[5]
    dot01=baryelementvector[6]
    dot02=numpy.dot(v0,v2)
    dot11=baryelementvector[7]
    dot12=numpy.dot(v1,v2)
    invDenom=baryelementvector[8]
    U = ((dot11*dot02)-(dot01*dot12))*invDenom
    V = ((dot00*dot12)-(dot01*dot02))*invDenom
    #print P,A,B,C,v0,v1,v2,dot00,dot01,dot02,dot11,dot12,invDenom,U,V
    #print U,V,(U+V)
    if ( (U>0) and (V>0) and (U+V<1) ):
        #print 'TRUE'
        return 1 #true
    else:
        #print 'false'
        return 0 #false

################################################################################
###Get map files###
###arg is the filename inputted as cmd argument###
###eg python MapProcessing_files_MAIN.py filename###
for arg in sys.argv:
    #print arg
    cmd = 'find ./input/ -wholename "'+arg+'" -print'
    for file in os.popen(cmd).readlines():
        fullname=file[:-1]
        namenumber = fullname[13:-4]
        #print fullname
        #os.popen('qsub MappingScript')
        #sleep(10)

        ###Open map file, read###
        openfile = open(fullname,'r')
        mapfile = openfile.readlines()
        openfile.close()
        linevec=[]
        classvalue = 0
        xcoord = 0
        ycoord = 0
        POINTS = []
        TempPoint = []
        ###Loop through buffer line by line###
        ###Each line is a new raster point falling within an element###
        for line in mapfile[1:]:
            linevec = line.split(',')
            xcoord = float(linevec[0])
            ycoord = float(linevec[1])
            classvalue = float(linevec[3]) ###ADJUSTED FOR CONTINUOUS DATA###
            #print classvalue
            TempPoint = [xcoord, ycoord, classvalue]
            #print TempPoint
            POINTS.append(TempPoint)
        #print POINTS
        del mapfile[:]

        ###check if point lies within element###
        ###ADJUSTING FOR CONTINUOUS DATA###
        ###Writing data to file for TARSE input###
        ElementClass = numpy.zeros([510,8])
        #ElementClass=[[0]*510]
        #print ElementClass
        #assigned=0
        #result=0
        #print POINTS[100][2]
        writefile = open('./outmap/'+namenumber+'.dat','w')
        elementcount=1
        for iter in range(0,len(BaryElmConstants),1):
            sum=0
            count=0
            mean=0
            pointcount=0
            ###DONT include border cells###
            if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510,".find(','+str(elementcount)+',')==-1):
                for point in POINTS:
                    result = BaryTest(point,BaryElmConstants[iter])
                    if(result==1):
                        sum=sum+POINTS[pointcount][2]
                        count=count+1
                    pointcount=pointcount+1
                ###check for boundary elements###
                if count!=0:
                    mean=sum/count
                else:
                    mean=0
            else:
                mean=0
            inputstring = str(elementcount)+'\t'+str(mean)+'\n'
            writefile.write(inputstring)
            elementcount=elementcount+1

        ###String required at the end of the file. accounts for canals###
        endstr="\
308018 0\n\
308019 0\n\
308020 0\n\
308021 0\n\
308050 0\n\
308051 0\n\
308053 0\n\
308057 0\n\
308059 0\n\
308061 0\n\
308065 0\n\
308068 0\n\
308070 0\n\
308071 0\n\
308537 0\n\
308538 0\n\
308539 0\n\
308540 0\n\
308541 0\n\
308542 0\n\
308543 0\n\
308544 0\n\
308545 0\n\
308548 0\n\
308556 0\n\
308557 0\n\
308564 0\n\
308565 0\n\
308577 0\n\
308630 0\n\
308634 0\n\
308636 0\n\
308640 0\n\
308644 0\n\
308645 0\n\
308657 0\n\
308670 0\n\
308681 0\n\
310476 0\n\
310479 0\n\
310480 0\n\
310481 0\n\
310859 0\n\
310860 0\n\
310866 0\n\
310868 0\n\
310887 0\n\
310888 0\n\
311498 0\n\
311499 0\n\
312753 0\n\
312756 0\n"
        writefile.write(endstr)
        writefile.close()

################################################################################
os.system('\n')
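The core of the mapping script is the barycentric point-in-triangle test that BaryTest implements, following the blackpawn.com "Point in Triangle" write-up: a point P lies inside triangle ABC when its barycentric coordinates u and v satisfy u > 0, v > 0, and u + v < 1. The sketch below is a self-contained version without the precomputed-constants caching; the unit right triangle and test points are illustrative only.

```python
import numpy as np

def point_in_triangle(p, a, b, c):
    """Return True if point p lies strictly inside triangle (a, b, c),
    using the barycentric-coordinate test."""
    v0, v1, v2 = c - a, b - a, p - a
    dot00, dot01, dot02 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v0, v2)
    dot11, dot12 = np.dot(v1, v1), np.dot(v1, v2)
    inv_denom = 1.0 / (dot00 * dot11 - dot01 * dot01)
    u = (dot11 * dot02 - dot01 * dot12) * inv_denom
    v = (dot00 * dot12 - dot01 * dot02) * inv_denom
    return u > 0 and v > 0 and u + v < 1

# Illustrative unit right triangle and two probe points.
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(point_in_triangle(np.array([0.25, 0.25]), a, b, c))  # -> True
print(point_in_triangle(np.array([2.0, 2.0]), a, b, c))    # -> False
```

Caching dot00, dot01, dot11, and inv_denom per element, as the script does with BaryElmConstants, pays off because each of the 510 mesh elements is tested against every raster point.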


MappingScript

#!/bin/bash
#
#PBS -q submit
#PBS -M gareth83@ufl.edu
#PBS -m abe
#PBS -l nodes=1:ppn=1
#PBS -o outjob
#PBS -e outjob
#PBS -j oe
#PBS -r n
#PBS -u gareth83
#PBS -l walltime=20:00:00
#
cd /scratch/ufhpc/gareth83/DataPreProcessing/


APPENDIX E
MANAGEMENT TIME SERIES PLOTS FOR LEVEL 4

Figure E-1. Regional mean density trend for Level 4 complexity applied to various management scenarios, with low initial sawgrass density. The aggregation of these plots can be found in Figure 4-6.

Figure E-2. Regional mean density trend for Level 4 complexity applied to various management scenarios, with medium initial sawgrass density. The aggregation of these plots can be found in Figure 4-6.

Figure E-3. Regional mean density trend for Level 4 complexity applied to various management scenarios, with high initial sawgrass density. The aggregation of these plots can be found in Figure 4-6.

Figure E-4. Complete table of management scenarios for Level 4 complexity, with final regional mean cattail densities above 400 g/m2.

Figure E-5. Complete table of management scenarios for Level 4 complexity. The red highlighted scenarios are those with final regional mean cattail densities below 200 g/m2 or level trends below 400 g/m2.

APPENDIX F
MANAGEMENT TIME SERIES PLOTS FOR LEVEL 5

Figure F-1. Regional mean density trend for Level 5 complexity applied to various management scenarios, with low initial sawgrass density.

Figure F-2. Regional mean density trend for Level 5 complexity applied to various management scenarios, with medium initial sawgrass density.

Figure F-3. Regional mean density trend for Level 5 complexity applied to various management scenarios, with high initial sawgrass density.

Figure F-4. Complete table of management scenarios for Level 5 complexity, with final regional mean cattail densities above 400 g/m2.

Figure F-5. Complete table of management scenarios for Level 5 complexity. The red highlighted scenarios are those with final regional mean cattail densities below 200 g/m2 or level trends below 400 g/m2.

APPENDIX G
GUSA INPUT FILES

SimlabInput.fac

Default
Truncations: 0.001 0.999
6
Distributions:
Triangular CATGF 1 1e-009 1e-007 1e-006
Triangular SAWGF 1 1e-009 1e-007 1e-006
Uniform Depth 1 1 0 10 1
Uniform Phosphorus 1 1 0 1000 1
Uniform Sawgrass 1 1 0 1958 1
Discrete Cattail SIS maps 1 to 250 1 250
1 2 0.004
2 3 0.004
3 4 0.004
4 5 0.004
5 6 0.004
6 7 0.004


7 8 0.004
8 9 0.004
9 10 0.004
10 11 0.004
11 12 0.004
12 13 0.004
13 14 0.004
14 15 0.004
15 16 0.004
16 17 0.004
17 18 0.004
18 19 0.004
19 20 0.004
20 21 0.004
21 22 0.004
22 23 0.004
23 24 0.004
24 25 0.004
25 26 0.004
26 27 0.004
27 28 0.004
28 29 0.004
29 30 0.004
30 31 0.004
31 32 0.004
32 33 0.004
33 34 0.004
34 35 0.004
35 36 0.004
36 37 0.004
37 38 0.004
38 39 0.004
39 40 0.004
40 41 0.004
41 42 0.004
42 43 0.004
43 44 0.004
44 45 0.004
45 46 0.004
46 47 0.004
47 48 0.004
48 49 0.004
49 50 0.004
50 51 0.004
51 52 0.004
52 53 0.004


53 54 0.004
54 55 0.004
55 56 0.004
56 57 0.004
57 58 0.004
58 59 0.004
59 60 0.004
60 61 0.004
61 62 0.004
62 63 0.004
63 64 0.004
64 65 0.004
65 66 0.004
66 67 0.004
67 68 0.004
68 69 0.004
69 70 0.004
70 71 0.004
71 72 0.004
72 73 0.004
73 74 0.004
74 75 0.004
75 76 0.004
76 77 0.004
77 78 0.004
78 79 0.004
79 80 0.004
80 81 0.004
81 82 0.004
82 83 0.004
83 84 0.004
84 85 0.004
85 86 0.004
86 87 0.004
87 88 0.004
88 89 0.004
89 90 0.004
90 91 0.004
91 92 0.004
92 93 0.004
93 94 0.004
94 95 0.004
95 96 0.004
96 97 0.004
97 98 0.004
98 99 0.004


99 100 0.004
100 101 0.004
101 102 0.004
102 103 0.004
103 104 0.004
104 105 0.004
105 106 0.004
106 107 0.004
107 108 0.004
108 109 0.004
109 110 0.004
110 111 0.004
111 112 0.004
112 113 0.004
113 114 0.004
114 115 0.004
115 116 0.004
116 117 0.004
117 118 0.004
118 119 0.004
119 120 0.004
120 121 0.004
121 122 0.004
122 123 0.004
123 124 0.004
124 125 0.004
125 126 0.004
126 127 0.004
127 128 0.004
128 129 0.004
129 130 0.004
130 131 0.004
131 132 0.004
132 133 0.004
133 134 0.004
134 135 0.004
135 136 0.004
136 137 0.004
137 138 0.004
138 139 0.004
139 140 0.004
140 141 0.004
141 142 0.004
142 143 0.004
143 144 0.004
144 145 0.004


145 146 0.004
146 147 0.004
147 148 0.004
148 149 0.004
149 150 0.004
150 151 0.004
151 152 0.004
152 153 0.004
153 154 0.004
154 155 0.004
155 156 0.004
156 157 0.004
157 158 0.004
158 159 0.004
159 160 0.004
160 161 0.004
161 162 0.004
162 163 0.004
163 164 0.004
164 165 0.004
165 166 0.004
166 167 0.004
167 168 0.004
168 169 0.004
169 170 0.004
170 171 0.004
171 172 0.004
172 173 0.004
173 174 0.004
174 175 0.004
175 176 0.004
176 177 0.004
177 178 0.004
178 179 0.004
179 180 0.004
180 181 0.004
181 182 0.004
182 183 0.004
183 184 0.004
184 185 0.004
185 186 0.004
186 187 0.004
187 188 0.004
188 189 0.004
189 190 0.004
190 191 0.004


191 192 0.004
192 193 0.004
193 194 0.004
194 195 0.004
195 196 0.004
196 197 0.004
197 198 0.004
198 199 0.004
199 200 0.004
200 201 0.004
201 202 0.004
202 203 0.004
203 204 0.004
204 205 0.004
205 206 0.004
206 207 0.004
207 208 0.004
208 209 0.004
209 210 0.004
210 211 0.004
211 212 0.004
212 213 0.004
213 214 0.004
214 215 0.004
215 216 0.004
216 217 0.004
217 218 0.004
218 219 0.004
219 220 0.004
220 221 0.004
221 222 0.004
222 223 0.004
223 224 0.004
224 225 0.004
225 226 0.004
226 227 0.004
227 228 0.004
228 229 0.004
229 230 0.004
230 231 0.004
231 232 0.004
232 233 0.004
233 234 0.004
234 235 0.004
235 236 0.004
236 237 0.004


237 238 0.004
238 239 0.004
239 240 0.004
240 241 0.004
241 242 0.004
242 243 0.004
243 244 0.004
244 245 0.004
245 246 0.004
246 247 0.004
247 248 0.004
248 249 0.004
249 250 0.004
250 251 0.004
Correlation information
0 1 0
0 1 0
0 1 0
0 1 0
0 1 0
0 1 0
Correlation Matrix:
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
Stein Information:
0 0 0 2


SimlabOut.sam (portion)

0
14336 6
0
3.295151008e-007 3.295151008e-007 5 500 979 125
1.788270584e-007 5.258955811e-007 2.5 750 489.5 188
5.258955811e-007 1.788270584e-007 7.5 250 1468.5 63
1.130318495e-007 4.193430445e-007 8.75 875 1223.75 32
4.193430445e-007 1.130318495e-007 3.75 375 244.75 157
2.503750938e-007 2.503750938e-007 6.25 125 1713.25 219
6.647575504e-007 6.647575504e-007 1.25 625 734.25 94
7.962132344e-008 7.629477906e-007 6.875 312.5 367.125 16
3.728188061e-007 2.888433717e-007 1.875 812.5 1346.125 141
2.137867656e-007 1.452961039e-007 9.375 562.5 856.625 204


CreateXMLs.py

#################################################################
###run through the (.sam) file and generate xml files###
###Based on given parameters and row number###
#################################################################
###Import relevant libraries###
import os

###read (.sam) file###
cmd = 'find . -name "*.sam" -print'
os.popen('rm -rf ./run*.xml')
os.popen('rm -rf ./wq*.xml')
rows=0
for file in os.popen(cmd).readlines():
    fullname=file[:-1]
    openfile = open(fullname,'r')
    samfile=openfile.readlines()
    openfile.close()
    rows = int(samfile[1].split()[0])
    #print rows
    for rowiter in range(0,rows,1):
        #print samfile[rowiter+4]
        vars = samfile[rowiter+4].split()
        #print vars
        CATGF = vars[0]
        #CATDF = vars[1]
        SAWGF = vars[1]
        #SAWDF = vars[3]
        Depth = vars[2]
        Phos = vars[3]
        Saw = vars[4]
        Cat = vars[5]
        ###creating new run and wq files###
        NewNameWQ = 'wq'+str(rowiter+1)+'.xml'
        NewNameRun = 'run'+str(rowiter+1)+'.xml'
        os.popen('cp ./wqBASE ./'+NewNameWQ)
        os.popen('cp ./runBASE ./'+NewNameRun)
        ###Edit new files, changing vars###
        ###SED is a shell command to find and replace strings###
        os.popen('sed -i \'s|WQNUMBER|'+str(rowiter+1)+'|g\' '+NewNameRun)
        catmap = str(Cat)+'.dat'
        os.popen('sed -i \'s|CATMAP|'+catmap+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|SAWGRASS|'+str(Saw)+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|CATGROW|'+str(CATGF)+'|g\' '+NewNameWQ)
        #os.popen('sed -i \'s|CATDIE|'+str(CATDF)+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|SAWGROW|'+str(SAWGF)+'|g\' '+NewNameWQ)
        #os.popen('sed -i \'s|SAWDIE|'+str(SAWDF)+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|SETDEPTH|'+str(Depth)+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|PHOSPHORUS|'+str(Phos)+'|g\' '+NewNameWQ)
        os.popen('sed -i \'s|ROWNUMBER|'+str(rowiter+1)+'|g\' '+NewNameWQ)
        #print vars

#################################################################
###Make Batches and create Jobs###
###Get file names###
###split into batches of given size###
filenames=[]
BATCH=[]
batchsize=30
TempFileList=[]
tempfilecount=0
cmd = 'find ./ -name "run*.xml" -print'
for file in os.popen(cmd).readlines():
    fullname=file[:-1]
    #print file
    namenumber = fullname[5:-4]
    #print namenumber
    filenames.append(fullname)
    TempFileList.append(fullname)
    if len(TempFileList)>=batchsize:
        skip = batchsize*tempfilecount
        BATCH.append(filenames[skip:])
        tempfilecount=tempfilecount+1
        del TempFileList[:]
finalskip=batchsize*tempfilecount
BATCH.append(filenames[finalskip:])
del TempFileList[:]
del filenames[:]
#print 'BATCH: '+str(BATCH)

#################################################################
###create batch jobs###
#print len(BATCH)
ScriptName = 'HSEBatch'
cmdClearSh = 'rm -rf ./*.sh'
cmdClearOutJob = 'rm -rf ./outjob/*'
cmdClearOutput = 'rm -rf ./output/*'
os.popen(cmdClearSh)
os.popen(cmdClearOutJob)
os.popen(cmdClearOutput)
for batchiter in range(0,len(BATCH),1):
    newName = ScriptName+str(batchiter+1)+'.sh'
    cmdCopy = 'cp -rf ./'+ScriptName+' ./'+newName
    os.popen(cmdCopy)
    #print newName
    appendfile = open(newName,'a')
    for fileiter in BATCH[batchiter]:
        jobstring='python pythonHSE.py '+fileiter+'\n'
        appendfile.write(jobstring)
    appendfile.close()
    submitstring='qsub '+newName
    os.popen(submitstring)
    os.popen('qstat -u gareth83')
    print 'Finished Submitting '+str(batchiter+1)+' Jobs'
#################################################################


wqBASE

backward_euler -> runge_kutta
./output/input_store_info.xml
components_in_common
water_column_p 10.0
gw_p 10.0
longitudinal_dispersivity 10.0
transverse_dispersivity 10.0
molecular_diffusion 0.00001
surface_porosity 1.0
subsurface_longitudinal_dispersivity 10.0
subsurface_transverse_dispersivity 10.0
subsurface_molecular_diffusion 0.00001
subsurface_porosity 1.0
k_st 0.0
k_rs 0.0
all all


surface_water
settled_p PHOSPHORUS
ecology
stab_cat_L1 ./catin/CATMAP
stab_cat_L2 ./catin/CATMAP
stab_cat_L3 ./catin/CATMAP
stab_cat_L4 ./catin/CATMAP
stab_cat_L5 ./catin/CATMAP
stab_cat_L1pre ./catin/CATMAP
stab_cat_L2pre ./catin/CATMAP
stab_cat_L3pre ./catin/CATMAP
stab_cat_L4pre ./catin/CATMAP
stab_cat_L5pre ./catin/CATMAP
stab_saw_L1 SAWGRASS
stab_saw_L1pre SAWGRASS
stab_saw_L1a SAWGRASS
stab_saw_L1apre SAWGRASS
cat_depth_HSI 1
cat_p_HSI 1
cat_saw_HSI 1
cat_saw_HSIa 1
saw_cat_HSIa 1
cat_com_HSI_L3 1
cat_com_HSI_L4 1
cat_com_HSI_L5 1
cat_inthsi_depth_Hi 1
cat_inthsi_depth_Lo 1



daycount 0.00
yearcount 1.00
cat_grow_factor CATGROW
cat_max_dens 1240.00
cat_init_dens ./catin/CATMAP
saw_grow_factor SAWGROW
saw_max_dens 1958.00
saw_init_dens SAWGRASS
cat_min_depth 0.16
cat_peak_depth 2.30
cat_max_depth 3.77
cat_depth_risingDenom 3.66
cat_depth_fallingDenom 3.6
cat_min_p 200
cat_max_p 1800
cat_p_diff 1034
cat_p_denom 144
cat_saw_max 1
cat_saw_grad 0.84
saw_cat_max 1
saw_cat_grad 0.84
depth_A SETDEPTH

daycount 1/86400
yearcount if(floor((daycount/365))>=yearcount, yearcount+1, yearcount)
cat_inthsi_depth_Hi if(depth_A>cat_max_depth, 0.01, 1-((depth_A-cat_peak_depth)/cat_depth_fallingDenom))
cat_inthsi_depth_Lo if(depth_A>cat_min_depth, 1-((cat_peak_depth-depth_A)/cat_depth_risingDenom), 0.01)
cat_depth_HSI if(depth_A>cat_peak_depth, cat_inthsi_depth_Hi, cat_inthsi_depth_Lo)
cat_p_HSI (1+exp(-(settled_p-cat_p_diff)/cat_p_denom))^-1


cat_saw_HSI cat_saw_max-(cat_saw_grad*(stab_saw_L1/saw_max_dens))
cat_saw_HSIa cat_saw_max-(cat_saw_grad*(stab_saw_L1a/saw_max_dens))
saw_cat_HSIa saw_cat_max-(saw_cat_grad*(stab_cat_L5/cat_max_dens))
cat_com_HSI_L3 (cat_depth_HSI+cat_p_HSI)/2
cat_com_HSI_L4 (cat_depth_HSI+cat_p_HSI+cat_saw_HSI)/3
cat_com_HSI_L5 (cat_depth_HSI+cat_p_HSI+cat_saw_HSIa)/3
stab_cat_L1 cat_grow_factor*stab_cat_L1*(1-(stab_cat_L1/stab_cat_L1pre))
stab_cat_L1pre cat_max_dens*1
stab_cat_L2 cat_grow_factor*stab_cat_L2*(1-(stab_cat_L2/stab_cat_L2pre))
stab_cat_L2pre if(cat_depth_HSI>0.001, (cat_max_dens*cat_depth_HSI), 0.001)
stab_cat_L3 cat_grow_factor*stab_cat_L3*(1-(stab_cat_L3/stab_cat_L3pre))
stab_cat_L3pre if(cat_com_HSI_L3>0.001, (cat_max_dens*cat_com_HSI_L3), 0.001)
stab_cat_L4 cat_grow_factor*stab_cat_L4*(1-(stab_cat_L4/stab_cat_L4pre))
stab_cat_L4pre if(cat_com_HSI_L4>0.001, (cat_max_dens*cat_com_HSI_L4), 0.001)
stab_cat_L5 cat_grow_factor*stab_cat_L5*(1-(stab_cat_L5/stab_cat_L5pre))
stab_cat_L5pre if(cat_com_HSI_L5>0.001, (cat_max_dens*cat_com_HSI_L5), 0.001)
stab_saw_L1 cat_grow_factor*stab_saw_L1*(1-(stab_saw_L1/stab_saw_L1pre))
stab_saw_L1pre saw_max_dens*1
stab_saw_L1a cat_grow_factor*stab_saw_L1a*(1-(stab_saw_L1a/stab_saw_L1apre))
stab_saw_L1apre if(saw_cat_HSIa>0.001, saw_max_dens*saw_cat_HSIa, 0.001)
stab_cat_L1 stab_cat_L2 stab_cat_L3 stab_cat_L4 stab_cat_L5 stab_saw_L1 depth_A depth
./output/ROWNUMBER.xml
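The wqBASE equations implement logistic growth toward a carrying capacity scaled by habitat suitability indices (HSIs): the depth HSI is a piecewise linear ramp peaking at cat_peak_depth, the phosphorus HSI is a logistic curve in settled_p, and each stab_cat_* density grows by r*N*(1 - N/K) with K = cat_max_dens times the combined HSI. Below is a sketch of one Level 3 update using the parameter values from the listing; the function names are illustrative, and the single explicit step shown here simplifies TARSE's actual time integration.

```python
import math

# parameter values taken from the wqBASE listing
CAT_MIN_DEPTH, CAT_PEAK_DEPTH, CAT_MAX_DEPTH = 0.16, 2.30, 3.77
RISING_DENOM, FALLING_DENOM = 3.66, 3.6
P_DIFF, P_DENOM = 1034.0, 144.0
CAT_MAX_DENS = 1240.0

def depth_hsi(depth):
    """Piecewise linear depth suitability, mirroring cat_depth_HSI."""
    if depth > CAT_PEAK_DEPTH:
        if depth > CAT_MAX_DEPTH:
            return 0.01
        return 1 - (depth - CAT_PEAK_DEPTH) / FALLING_DENOM
    if depth > CAT_MIN_DEPTH:
        return 1 - (CAT_PEAK_DEPTH - depth) / RISING_DENOM
    return 0.01

def p_hsi(settled_p):
    """Logistic phosphorus suitability, mirroring cat_p_HSI."""
    return (1 + math.exp(-(settled_p - P_DIFF) / P_DENOM)) ** -1

def logistic_step(density, growth_factor, hsi):
    """One explicit logistic update toward carrying capacity K = max_dens*HSI."""
    k = max(CAT_MAX_DENS * hsi, 0.001)   # floor matches the 0.001 guard in wqBASE
    return density + growth_factor * density * (1 - density / k)

hsi = (depth_hsi(1.0) + p_hsi(600.0)) / 2    # Level 3 combined HSI
new_density = logistic_step(400.0, 0.05, hsi)
```

At a depth of 1 m and 600 mg/kg soil phosphorus the combined HSI is well below 1, so the carrying capacity drops below cat_max_dens and growth slows as the density approaches it.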


runBASE

->



HSEBatch

#!/bin/bash
#
#PBS -q submit
#PBS -M gareth83@ufl.edu
#PBS -m abe
#PBS -l nodes=1:ppn=1
#PBS -o outjob
#PBS -e outjob
#PBS -j oe
#PBS -r n
#PBS -u gareth83
#PBS -l walltime=30:00:00
#
cd /scratch/crn/gareth83/RUN/GSA_A_2011_03_31_TESTING/


pythonHSE.py

###import necessary libraries###
import os, sys, math

#################################################################
###Function for calculating mean###
def RMeanFunction(strlist):
    sum=0
    meanF=0
    count=1
    for iter in strlist:
        ###DONT include border cells###
        if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510".find(','+str(count)+',')== -1):
            sum=sum+float(iter)
        count=count+1
    meanF=sum/385
    return meanF

def MeanFunction(strlist):
    sum=0
    meanF=0
    count=0
    for iter in strlist:
        sum=sum+float(iter)
        count=count+1
    meanF=sum/count
    return meanF

###Function for calculating variance###
def RSTDEVFunction(strlist, meanF):
    var=0
    sum=0
    square=0
    count=1
    #for iter in strlist:
    #    if (",1,2,3,4,5,7,8,9,10,11,16,17,18,19,27,28,29,30,40,41,42,43,56,57,58,59,76,77,78,79,98,99,100,101,122,123,124,125,126,152,153,154,155,183,184,185,186,217,218,219,220,256,257,258,259,260,261,262,298,299,300,301,302,303,304,305,335,336,337,338,339,340,341,370,371,372,373,400,401,402,403,404,405,429,430,431,432,433,455,456,457,458,459,465,466,467,468,469,470,480,481,482,483,484,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,505,506,507,508,509,510".find(','+str(count)+',')== -1):
    #        if iter=='inf' or iter=='-inf' or iter=='nan' or float(iter)<0.001:
    #            number=0
    #        else:
    #            number=float(iter)
    #        square = math.pow((number-meanF),2)
    #        #square = math.pow((float(iter)-meanF),2)
    #        sum=sum+square
    #    count=count+1
    #var=math.sqrt(sum/385)
    var = 999
    return var

def STDEVFunction(strlist, meanF):
    var=0
    sum=0
    square=0
    count=0
    #for iter in strlist:
    #    square = math.pow((float(iter)-meanF),2)
    #    sum=sum+square
    #    count=count+1
    #var=math.sqrt(sum/count)
    var=999
    return var

#################################################################
###Run hse based on file input as argument at runtime###
filename = sys.argv[1]
filenumber = filename[5:-4]
print filename, filenumber
xmlfilename = str(filenumber)+'.xml'
xmlfilebuffer=[]
firsttime_begin=0
firsttime_end=0
lasttime_begin=0
lasttime_end=0
InitialValues=""
FinalValues=""
if(os.popen('../hse '+filename)):
    ###do post processing here. cut file size down###
    xmlfile = open('./output/'+xmlfilename,'r')
    xmlfilebuffer = xmlfile.read()
    xmlfile.close()
    firsttime_begin = xmlfilebuffer.find('time_step ')
    firsttime_end = xmlfilebuffer.find('time_step>')
    InitialValues = xmlfilebuffer[firsttime_begin:firsttime_end]
    lasttime_begin = xmlfilebuffer.rfind('time_step ')
    lasttime_end = xmlfilebuffer.rfind('time_step>')
    FinalValues = xmlfilebuffer[lasttime_begin:lasttime_end]
    ###remove large (.xml) file. have begin and end values stored###
    del xmlfilebuffer
    os.popen('rm -rf ./output/'+xmlfilename)
    #print InitialValues
    #print FinalValues
    #################################################################
    ###Initial Values###
    ###Cat_L1###
    temp_marker_start = InitialValues.find('stab_cat_L1')
    temp_marker_start = InitialValues.find('>', temp_marker_start)
    temp_marker_end = InitialValues.find('<', temp_marker_start)
    InitialCat_L1 = InitialValues[(temp_marker_start+1):temp_marker_end]
    InitCatL1_List = InitialCat_L1.split(', ')
    #print InitialCat_L1
    ###Cat_L2###
    temp_marker_start = InitialValues.find('stab_cat_L2')
    temp_marker_start = InitialValues.find('>', temp_marker_start)
    temp_marker_end = InitialValues.find('<', temp_marker_start)
    InitialCat_L2 = InitialValues[(temp_marker_start+1):temp_marker_end]
    InitCatL2_List = InitialCat_L2.split(', ')
    #print InitialCat_L2
    ###Cat_L3###
    temp_marker_start = InitialValues.find('stab_cat_L3')
    temp_marker_start = InitialValues.find('>', temp_marker_start)
    temp_marker_end = InitialValues.find('<', temp_marker_start)
    InitialCat_L3 = InitialValues[(temp_marker_start+1):temp_marker_end]
    InitCatL3_List = InitialCat_L3.split(', ')
    #print InitialCat_L3
    ###Cat_L4###
    temp_marker_start = InitialValues.find('stab_cat_L4')
    temp_marker_start = InitialValues.find('>', temp_marker_start)
    temp_marker_end = InitialValues.find('<', temp_marker_start)
    InitialCat_L4 = InitialValues[(temp_marker_start+1):temp_marker_end]
    InitCatL4_List = InitialCat_L4.split(', ')
    #print InitCatL4_List
    ###Cat_L5###
    temp_marker_start = InitialValues.find('stab_cat_L5')
    temp_marker_start = InitialValues.find('>', temp_marker_start)
    temp_marker_end = InitialValues.find('<', temp_marker_start)
    InitialCat_L5 = InitialValues[(temp_marker_start+1):temp_marker_end]
    InitCatL5_List = InitialCat_L5.split(', ')
    #print InitCatL5_List
    ###Final Values###
    ###Cat_L1###
    temp_marker_start = FinalValues.find('stab_cat_L1')
    temp_marker_start = FinalValues.find('>', temp_marker_start)
    temp_marker_end = FinalValues.find('<', temp_marker_start)
    FinalCat_L1 = FinalValues[(temp_marker_start+1):temp_marker_end]
    FinCatL1_List = FinalCat_L1.split(', ')
    #print FinalCat_L1
    ###Cat_L2###
    temp_marker_start = FinalValues.find('stab_cat_L2')
    temp_marker_start = FinalValues.find('>', temp_marker_start)
    temp_marker_end = FinalValues.find('<', temp_marker_start)
    FinalCat_L2 = FinalValues[(temp_marker_start+1):temp_marker_end]
    FinCatL2_List = FinalCat_L2.split(', ')
    #print FinalCat_L2
    ###Cat_L3###
    temp_marker_start = FinalValues.find('stab_cat_L3')
    temp_marker_start = FinalValues.find('>', temp_marker_start)
    temp_marker_end = FinalValues.find('<', temp_marker_start)
    FinalCat_L3 = FinalValues[(temp_marker_start+1):temp_marker_end]
    FinCatL3_List = FinalCat_L3.split(', ')
    #print FinalCat_L3
    ###Cat_L4###
    temp_marker_start = FinalValues.find('stab_cat_L4')
    temp_marker_start = FinalValues.find('>', temp_marker_start)
    temp_marker_end = FinalValues.find('<', temp_marker_start)
    FinalCat_L4 = FinalValues[(temp_marker_start+1):temp_marker_end]
    FinCatL4_List = FinalCat_L4.split(', ')
    #print FinCatL4_List
    ###Cat_L5###
    temp_marker_start = FinalValues.find('stab_cat_L5')
    temp_marker_start = FinalValues.find('>', temp_marker_start)
    temp_marker_end = FinalValues.find('<', temp_marker_start)
    FinalCat_L5 = FinalValues[(temp_marker_start+1):temp_marker_end]
    FinCatL5_List = FinalCat_L5.split(', ')
    #print FinCatL5_List
    #################################################################
    ###calculate statistics###
    STAT_ARR=[]
    ###LEVEL1###
    temp_arr = []
    InitCatL1_mean = RMeanFunction(InitCatL1_List)
    FinCatL1_mean = RMeanFunction(FinCatL1_List)
    CatL1_DiffMean = FinCatL1_mean - InitCatL1_mean
    temp_arr.append(CatL1_DiffMean)
    FinCatL1_stdev = RSTDEVFunction(FinCatL1_List, FinCatL1_mean)
    temp_arr.append(FinCatL1_stdev)
    ###Local Stats###
    InitL1_NE_mean = MeanFunction(InitCatL1_List[174:179])
    FinL1_NE_mean = MeanFunction(FinCatL1_List[174:179])
    L1_NE_DiffMean = FinL1_NE_mean - InitL1_NE_mean
    temp_arr.append(L1_NE_DiffMean)
    L1_NE_stdev = STDEVFunction(FinCatL1_List[174:179],FinL1_NE_mean)
    temp_arr.append(L1_NE_stdev)
    InitL1_CENT_mean = MeanFunction(InitCatL1_List[279:282])
    FinL1_CENT_mean = MeanFunction(FinCatL1_List[279:282])
    L1_CENT_DiffMean = FinL1_CENT_mean - InitL1_CENT_mean
    temp_arr.append(L1_CENT_DiffMean)
    L1_CENT_stdev = STDEVFunction(FinCatL1_List[279:282],FinL1_CENT_mean)
    temp_arr.append(L1_CENT_stdev)
    InitL1_SW_mean = MeanFunction(InitCatL1_List[375:379])
    FinL1_SW_mean = MeanFunction(FinCatL1_List[375:379])
    L1_SW_DiffMean = FinL1_SW_mean - InitL1_SW_mean
    temp_arr.append(L1_SW_DiffMean)
    L1_SW_stdev = STDEVFunction(FinCatL1_List[375:379],FinL1_SW_mean)
    temp_arr.append(L1_SW_stdev)
    STAT_ARR.append(temp_arr)
    del temp_arr
    ###LEVEL2###
    temp_arr = []
    InitCatL2_mean = RMeanFunction(InitCatL2_List)
    FinCatL2_mean = RMeanFunction(FinCatL2_List)
    CatL2_DiffMean = FinCatL2_mean - InitCatL2_mean
    temp_arr.append(CatL2_DiffMean)
    FinCatL2_stdev = RSTDEVFunction(FinCatL2_List, FinCatL2_mean)
    temp_arr.append(FinCatL2_stdev)
    ###Local Stats###
    InitL2_NE_mean = MeanFunction(InitCatL2_List[174:179])
    FinL2_NE_mean = MeanFunction(FinCatL2_List[174:179])
    L2_NE_DiffMean = FinL2_NE_mean - InitL2_NE_mean
    temp_arr.append(L2_NE_DiffMean)
    L2_NE_stdev = STDEVFunction(FinCatL2_List[174:179],FinL2_NE_mean)
    temp_arr.append(L2_NE_stdev)
    InitL2_CENT_mean = MeanFunction(InitCatL2_List[279:282])
    FinL2_CENT_mean = MeanFunction(FinCatL2_List[279:282])
    L2_CENT_DiffMean = FinL2_CENT_mean - InitL2_CENT_mean
    temp_arr.append(L2_CENT_DiffMean)
    L2_CENT_stdev = STDEVFunction(FinCatL2_List[279:282],FinL2_CENT_mean)
    temp_arr.append(L2_CENT_stdev)
    InitL2_SW_mean = MeanFunction(InitCatL2_List[375:379])
    FinL2_SW_mean = MeanFunction(FinCatL2_List[375:379])
    L2_SW_DiffMean = FinL2_SW_mean - InitL2_SW_mean
    temp_arr.append(L2_SW_DiffMean)
    L2_SW_stdev = STDEVFunction(FinCatL2_List[375:379],FinL2_SW_mean)
    temp_arr.append(L2_SW_stdev)
    STAT_ARR.append(temp_arr)
    del temp_arr
    ###LEVEL3###
    temp_arr = []
    #print len(InitCatL3_List)
    ###Regional Stats###
    InitCatL3_mean = RMeanFunction(InitCatL3_List)
    FinCatL3_mean = RMeanFunction(FinCatL3_List)
    CatL3_DiffMean = FinCatL3_mean - InitCatL3_mean
    temp_arr.append(CatL3_DiffMean)
    FinCatL3_stdev = RSTDEVFunction(FinCatL3_List, FinCatL3_mean)
    temp_arr.append(FinCatL3_stdev)
    ###Local Stats###
    InitL3_NE_mean = MeanFunction(InitCatL3_List[174:179])
    FinL3_NE_mean = MeanFunction(FinCatL3_List[174:179])
    L3_NE_DiffMean = FinL3_NE_mean - InitL3_NE_mean
    temp_arr.append(L3_NE_DiffMean)
    L3_NE_stdev = STDEVFunction(FinCatL3_List[174:179],FinL3_NE_mean)
    temp_arr.append(L3_NE_stdev)
    InitL3_CENT_mean = MeanFunction(InitCatL3_List[279:282])
    FinL3_CENT_mean = MeanFunction(FinCatL3_List[279:282])
    L3_CENT_DiffMean = FinL3_CENT_mean - InitL3_CENT_mean
    temp_arr.append(L3_CENT_DiffMean)
    L3_CENT_stdev = STDEVFunction(FinCatL3_List[279:282],FinL3_CENT_mean)
    temp_arr.append(L3_CENT_stdev)
    InitL3_SW_mean = MeanFunction(InitCatL3_List[375:379])
    FinL3_SW_mean = MeanFunction(FinCatL3_List[375:379])
    L3_SW_DiffMean = FinL3_SW_mean - InitL3_SW_mean
    temp_arr.append(L3_SW_DiffMean)
    L3_SW_stdev = STDEVFunction(FinCatL3_List[375:379],FinL3_SW_mean)
    temp_arr.append(L3_SW_stdev)
    STAT_ARR.append(temp_arr)
    del temp_arr
    ###LEVEL4###
    temp_arr = []
    InitCatL4_mean = RMeanFunction(InitCatL4_List)
    FinCatL4_mean = RMeanFunction(FinCatL4_List)
    CatL4_DiffMean = FinCatL4_mean - InitCatL4_mean
    temp_arr.append(CatL4_DiffMean)
    FinCatL4_stdev = RSTDEVFunction(FinCatL4_List, FinCatL4_mean)
    temp_arr.append(FinCatL4_stdev)
    ###Local Stats###
    InitL4_NE_mean = MeanFunction(InitCatL4_List[174:179])
    FinL4_NE_mean = MeanFunction(FinCatL4_List[174:179])
    L4_NE_DiffMean = FinL4_NE_mean - InitL4_NE_mean
    temp_arr.append(L4_NE_DiffMean)
    L4_NE_stdev = STDEVFunction(FinCatL4_List[174:179],FinL4_NE_mean)
    temp_arr.append(L4_NE_stdev)
    InitL4_CENT_mean = MeanFunction(InitCatL4_List[279:282])
    FinL4_CENT_mean = MeanFunction(FinCatL4_List[279:282])
    L4_CENT_DiffMean = FinL4_CENT_mean - InitL4_CENT_mean
    temp_arr.append(L4_CENT_DiffMean)
    L4_CENT_stdev = STDEVFunction(FinCatL4_List[279:282],FinL4_CENT_mean)
    temp_arr.append(L4_CENT_stdev)
    InitL4_SW_mean = MeanFunction(InitCatL4_List[375:379])
    FinL4_SW_mean = MeanFunction(FinCatL4_List[375:379])
    L4_SW_DiffMean = FinL4_SW_mean - InitL4_SW_mean
    temp_arr.append(L4_SW_DiffMean)
    L4_SW_stdev = STDEVFunction(FinCatL4_List[375:379],FinL4_SW_mean)
    temp_arr.append(L4_SW_stdev)
    STAT_ARR.append(temp_arr)
    del temp_arr
    ###LEVEL5###
    temp_arr = []
    InitCatL5_mean = RMeanFunction(InitCatL5_List)
    FinCatL5_mean = RMeanFunction(FinCatL5_List)
    CatL5_DiffMean = FinCatL5_mean - InitCatL5_mean
    temp_arr.append(CatL5_DiffMean)
    FinCatL5_stdev = RSTDEVFunction(FinCatL5_List, FinCatL5_mean)
    temp_arr.append(FinCatL5_stdev)
    ###Local Stats###
    InitL5_NE_mean = MeanFunction(InitCatL5_List[174:179])
    FinL5_NE_mean = MeanFunction(FinCatL5_List[174:179])
    L5_NE_DiffMean = FinL5_NE_mean - InitL5_NE_mean
    temp_arr.append(L5_NE_DiffMean)
    L5_NE_stdev = STDEVFunction(FinCatL5_List[174:179],FinL5_NE_mean)
    temp_arr.append(L5_NE_stdev)
    InitL5_CENT_mean = MeanFunction(InitCatL5_List[279:282])
    FinL5_CENT_mean = MeanFunction(FinCatL5_List[279:282])
    L5_CENT_DiffMean = FinL5_CENT_mean - InitL5_CENT_mean
    temp_arr.append(L5_CENT_DiffMean)
    L5_CENT_stdev = STDEVFunction(FinCatL5_List[279:282],FinL5_CENT_mean)
    temp_arr.append(L5_CENT_stdev)
    InitL5_SW_mean = MeanFunction(InitCatL5_List[375:379])
    FinL5_SW_mean = MeanFunction(FinCatL5_List[375:379])
    L5_SW_DiffMean = FinL5_SW_mean - InitL5_SW_mean
    temp_arr.append(L5_SW_DiffMean)
    L5_SW_stdev = STDEVFunction(FinCatL5_List[375:379],FinL5_SW_mean)
    temp_arr.append(L5_SW_stdev)
    STAT_ARR.append(temp_arr)
    del temp_arr
    #################################################################
    ###Write stats to file###
    statfile = open('./output/'+filenumber+'.dat','w')
    linestr=""
    for iter in range(0,len(STAT_ARR),1):
        tempstr=""
        for stat in STAT_ARR[iter]:
            tempstr = tempstr+''.join(str(stat))+'\t'
        linestr = linestr+''.join(tempstr)
    statfile.write(linestr)
    statfile.close()
else:
    print 'failed'
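RMeanFunction in the listing above computes the regional mean over the 385 interior mesh elements by skipping every element ID in a hard-coded comma-delimited border string. The same exclusion reads more directly with a set; the sketch below is illustrative only (the border set is abbreviated to the first ten IDs rather than the full list used by the script).

```python
# Abbreviated border-element list for illustration; the script hard-codes all of them.
BORDER = {1, 2, 3, 4, 5, 7, 8, 9, 10, 11}

def regional_mean(values, border=BORDER):
    """Mean of values whose 1-based element ID is not in the border set."""
    interior = [v for i, v in enumerate(values, start=1) if i not in border]
    return sum(interior) / len(interior)

# with this border set, only element IDs 6 and 12 are interior here
densities = [0.0] * 5 + [40.0] + [0.0] * 5 + [80.0]
mean_density = regional_mean(densities)   # averages 40.0 and 80.0
```

Membership tests against a set are O(1), whereas the string .find() in the listing rescans the whole ID list for every element; the set form also avoids false matches that a substring search could admit without the careful leading/trailing commas.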


260 LIST OF REFERENCES Arnold, K. & Gosling, J., 1998. The Java Programming Language 2nd ed. Prentice Hall. Beven, K., 2001. How Far can we go in Distributed Hydrological Modelling? Hydrology and Earth Sciences 5(1) pp.1 12. Brown, M.T., Cohen, M.J., Bardi, E. & Ingwersen, W.W., 2006. Species Diversity in the Florida Everglades, USA: A Systems Approach to Calculating Biodiversity. Aquatic Science 68, pp.254 77. Cary, J.R.; Shasharina, S.G.; Cummings, J.C.; Reynders J.V.W.; Hinker, P.J. 1998. Comparison of C++ and Fortran 90 for Oject Oriented Scientific Programming. Computer Physics Communications pp.20 36. Cebrian, J. & Duarte, C.M., 1994. The Dependence of Herbivory on Growth Rate in Natural Plant Communities. Functional Ecology 8(4), pp.518 25. Cliff, A.D. & Ord, K., 1970. Spatial Autocorrelation: A Review of Existing and New Measures with Applications. Economic Geography pp.269 92. Costanza, R. & Voinov, A., 2001. Modeling Ecological and Economic Systems wit h STELLA: Part III. Ecological Modeling pp.1 7. Cressie, N., 1990. The Origins of Kriging. Mathematical Geology pp.239 52. Crosetto, M., Stefano, T. & Saltelli, A., 2000. Sensitivity and Uncertainty Analysis in Spatial Modelling Based on GIS. Agriculture Ecosystems and Environment 81, pp.71 79. Cukier, R.I., Fortuin, C.M. & Shuler, K.E., 1973. Study of the Sensitivity of Coupled Reaction Systems to Uncertainties in Rate Coefficients. Part 1. Journal of Chemical Physics 59(8), pp.3873 78. DeBusk, W.F., Reddy, K.R., Koch, M.S. & Wang, Y., 1994. Spatial Distribution of Soil Nutrients in a Nothern Everglades Marsh: Water Conservation Area 2A. Soil Society of America 58, pp.543 52. Deutsch, C.V. & Journel, A.G., 1992. GSLIB: Geostatistical Software Library and User's Guide New York, NY: Oxford University Press. http://www.gslib.com/gslib_help/sisim.html. Duke Sylvester, S., 2005. 
Initial Performance Measures and Information Related to the ATLSS Vegetation Succession Model [Online] Available at http://atlss.org/VSMod/ [Accessed 31 July 2010].

PAGE 261

261 Fitz, C.H., Kiker, G.A. & Kim, J.B., 2010. Integrated Ecological Modeling and Decision Analysis within the Everglades Landscape. Environmental Science and Technology p.in press. Fitz, C.H. & Trimble, B., 2006a. Documentation of the Everglades Landscape Model: ELM v2.5 West Palm Beach, Florida: South Florida Water Management District. Fitz, C.H. & Trimble, B., 2006b. Everglades Landscape Model (ELM) [Online] Available at http://my.sfwmd.gov/portal/page/portal/xweb%20 %20release%202/elm [Accessed 31 July 2010]. Glennon, R., 2002. Water Follies Groundwater Pumping and the Fate of America's Fresh Waters Washington, DC: Island Press. Goodwin, R. Andrew; Nestler, John M.; Anderson, James J.; Weber, Larry J.; Loucks, Daniel P. 2006. Forecasting 3 D Fish MOvement Behaviour Using a Eulerian Lagrangian Agent Method (ELAM). Ecological Modeling 19 2, pp.197 223. Goovaerts, P., 2001. Geostatistical Modelling of Uncertainty in Soil Science. Geoderma pp.3 26. Goovaerts, P., 1996. Stochastic Simulation of Categorical Variables Using a Classification Algorithm and Simulated Annealing. Mathematical Geolo gy pp.909 21. Grace, J.B., 1989. Effects of Water Depth on Typha latifolia and Typha domingensis. American Journal of Botany pp.762 68. Gross, L.J., 1996. ATLSS home page [Online] Available at http://atlss.org/ [Acc essed 31 July 2010]. Grunwald, S., 2010. Phosphorus Data for WCA2A Personal Communication. University of Florida. Grunwald, S., Ozborne, T.Z. & Reddy, K.R., 2008. Temporal Trajectories of Phosphorus and Pedo Patterns Mapped in Water Conservation Area 2, E verglades, Florida, USA. Geoderma 146, pp.1 13. Grunwald, S., Reddy, K.R., Newman, S. & DeBusk, W.F., 2004. Spatial Variability, Distribution and Uncertainty Assessment of Soil Phosphorus in a South Florida Wetland. Environmetrics 15, pp.811 25. Guardo, Mariano; Fink, Larry; Fontaine, Thomas D.; Newman, Susan; Chimney, Michael; Bearzotti, Ronald; Goforth, Gary ., 1995. 
Large Scale Constructed Wetlands for Nutrient Removal from Stormwater Runoff: An Everglades Restoration Project. Environmental Management 19(6), pp.879 89.

PAGE 262

Gunderson, L.H., Holling, C.S. & Peterson, G.D., 2001. Surprises and Sustainability: Cycles of Renewal in the Everglades. In Panarchy: Understanding Transformations in Human and Natural Systems. Washington: Island Press. pp.315-32.
Hanna, S.R., 1988. Air quality model evaluation and uncertainty. International Journal of Air Pollution Control and Hazardous Waste Management 38(4), pp.406-12.
Fisher, B.E.A., Ireland, M.P., Boyland, D.T. & Critten, S.P., 2002. Why use one model? An approach for encompassing model uncertainty and improving best practice. Environmental Modeling and Assessment 7(4), pp.291-99.
Harold, E.R., 1998. XML: Extensible Markup Language. 1st ed. Foster City: IDG Books Worldwide, Inc.
Hsu, S.-B., Hwang, T.-W. & Kuang, Y., 2000. Global Analysis of the Michaelis-Menten Type Ratio-Dependent Predator-Prey System. Journal of Mathematical Biology 42(6), pp.489-506.
James, A.I. & Jawitz, J.W., 2007. Modeling Two-Dimensional Reactive Transport Using a Godunov-Mixed Finite Element Method. Journal of Hydrology 338, pp.28-41.
Jawitz, J.W., Muñoz-Carpena, R., Muller, S., Grace, K.A. & James, A.I., 2008. Development, Testing, and Sensitivity and Uncertainty Analyses of a Transport and Reaction Simulation Engine (TaRSE) for Spatially Distributed Modeling of Phosphorus in South Florida Peat Marsh Wetlands. Scientific Investigations Report 2008-5029. Reston, Virginia: United States Geological Survey.
Jensen, J.R., Rutchey, K., Koch, M.S. & Narumalani, S., 1995. Inland Wetland Change Detection in the Everglades Water Conservation Area 2A Using a Time Series of Remotely Sensed Data. Photogrammetric Engineering and Remote Sensing 61(2), pp.199-209.
Keen, R.E. & Spain, J.D., 1992. Computer Simulation in Biology. New York: Wiley-Liss, Inc.
Kiker, G.A. & Linkov, I., 2006. The QND Model/Game System: Integrating Questions and Decisions for Multiple Stressors. Ecological Risk Assessment and Multiple Stressors, pp.203-25.
Kiker, G.A., Rivers-Moore, N.A., Kiker, M.K. & Linkov, I., 2006. QND: A Scenario-Based Gaming System for Modeling Environmental Processes and Management Decisions. Environmental Security and Environmental Management, pp.151-85.
Kiker, G.A., 1998. Development and Comparison of Savanna Ecosystem Models to Explore the Concept of Carrying Capacity. Ph.D. Dissertation. Ithaca, New York: Cornell University.


Krige, D.G., 1951. A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand. J. Chem. Metal. Min. Soc. South Africa, pp.119-39.
Lagerwall, G.L., Kiker, G.A., Convertino, M. & Muñoz-Carpena, R., 2011. RSM/TARSE:ECO: A Spatially Distributed, Deterministic, Free-Form Approach to Modeling Vegetation Dynamics (Typha domingensis) in an Everglades Wetland, WCA2A. Unpublished; in final preparation, 04/01/2011. (Relates to Chapter 2.)
Lagerwall, G., Kiker, G.A., Zajac, Z. & Muñoz-Carpena, R., 2011a. Global Uncertainty and Sensitivity Analysis of a Spatially Distributed Ecological Model. Unpublished; in final preparation, 04/01/2011. (Relates to Chapter 3.)
Layzer, J.A., 2006. Ecosystem-Based Solutions: Restoring the Florida Everglades. In The Environmental Case. 2nd ed. Washington: CQ Press. pp.404-35.
Lilburne, L. & Tarantola, S., 2009. Sensitivity Analysis of Spatial Models. International Journal of Geographical Information Science, pp.151-68.
Lindenschmidt, K.E., 2006. The Effect of Complexity on Parameter Sensitivity and Model Uncertainty in River Water Quality Modeling. Ecological Modelling, pp.72-86.
Liu, J., Chen, H., Ewing, R. & Qin, G., 2007. An Efficient Algorithm for Characteristic Tracking on Two-Dimensional Triangular Meshes. Computing 80, pp.121-36.
Ludascher, B., Altintas, I., Berkley, C., Higgins, D., Jaeger, E., Jones, M., Lee, E.A., Tao, J. & Zhao, Y., 2006. Scientific Workflow Management and the Kepler System. Concurrency and Computation: Practice and Experience 18, pp.1039-65.
Marani, M., Zillio, T., Belluco, E., Silvestri, S. & Maritan, A., 2006. Non-Neutral Vegetation Dynamics. PLoS ONE, p.e78. Available at: www.plosone.org.
Martin, T.E., 1980. Diversity and Abundance of Spring Migratory Birds Using Habitat Islands on the Great Plains. Cooper Ornithological Society, pp.430-39.
McCuen, R.H., Knight, Z. & Cutter, A.G., 2006. Evaluation of the Nash-Sutcliffe Efficiency Index. Hydrologic Engineering, pp.597-602.
Miao, S., 2004. Rhizome Growth and Nutrient Resorption: Mechanisms Underlying the Replacement of Two Clonal Species in Florida Everglades. Aquatic Botany, pp.55-66.


Miao, S., Newman, S. & Sklar, F.H., 2000. Effects of Habitat Nutrients and Seed Sources on Growth and Expansion of Typha domingensis. Aquatic Botany, pp.297-311. Available at: http://www.sciencedirect.com.
Miao, S.L. & Sklar, F.H., 1998. Biomass and Nutrient Allocation of Sawgrass and Cattail Along a Nutrient Gradient in the Florida Everglades. Wetlands Ecology and Management, p.245.
Michalski, F. & Peres, C.A., 2007. Disturbance-Mediated Mammal Persistence and Abundance-Area Relationships in Amazonian Forest Fragments. Conservation Biology, pp.1626-40.
Muller, S., Muñoz-Carpena, R. & Kiker, G., 2011. Model Relevance: Frameworks for Exploring the Complexity-Sensitivity-Uncertainty Trilemma. In Linkov, I. & Bridges, T.S., eds. Climate: Global Change and Local Adaptation. Dordrecht/Boston/London: Springer, pp.35-67. Published in cooperation with the NATO Scientific Affairs Division.
Muller, S., 2010. Adaptive Spatially Distributed Water Quality Modeling: An Application to Mechanistically Simulate Phosphorus Conditions in the Variable-Density Surface Waters of Coastal Everglades Wetlands. Ph.D. Dissertation. Gainesville, Florida: University of Florida.
Muñoz-Carpena, R., Parsons, J.E. & Gilliam, J.W., 1999. Modeling hydrology and sediment transport in vegetative filter strips. Journal of Hydrology 214, pp.111-29.
Newman, S., Schutte, J., Grace, J., Rutchey, K., Fontaine, T., Reddy, K. & Pietrucha, M., 1998. Factors Influencing Cattail Abundance in the Northern Everglades. Aquatic Botany, pp.265-80.
Obeysekera, J. & Rutchey, K., 1997. Selection of Scale for Everglades Landscape Models. Landscape Ecology 12(1), pp.7-18.
Odum, H.T., Odum, E.C. & Brown, M.T., 2000. Wetlands Management. In Environment and Society in Florida. Boca Raton: CRC Press. p.197.
Ormsby, T., Napolean, E., Burke, R., Groessl, C. & Feaster, L., 2001. Getting to Know ArcGIS Desktop. Redlands: ESRI.
Ott, R.L. & Longnecker, M.T., 2004. A First Course in Statistical Methods. Belmont, California: Curt Hinrichs. ISBN: 0-534-40806-0.


Ovilla, O.P., 2010. Modeling Runoff Pollutant Dynamics Through Vegetative Filter Strips: A Flexible Numerical Approach. Ph.D. Dissertation. Gainesville, Florida: University of Florida.
Paradis, E., 2010. Moran's Autocorrelation Coefficient in Comparative Methods. [Online] Available at: http://cran.r-project.org/web/packages/ape/vignettes/MoranI.pdf [Accessed 7 August 2010].
Perez, L., 2006. Everglades National Park: Development in the Everglades (U.S. National Park Service). [Online] Available at: http://www.nps.gov/ever/historyculture/developeverglades.htm [Accessed 03 August 2010].
Perez, L., 2006a. Everglades National Park: Development in the Everglades (U.S. National Park Service). [Online] Available at: http://www.nps.gov/ever/historyculture/developeverglades.htm [Accessed 03 August 2010].
Prickett, T.A., Naymik, T.G. & Lonnquist, C.G., 1981. A "Random Walk" Solute Transport Model for Selected Groundwater Quality Evaluations. Bulletin 65. Champaign: State of Illinois. Available at: http://www.isws.illinois.edu/pubdoc/B/ISWSB-65.pdf.
Reed, L.J. & Berkson, J., 1929. The Application of the Logistic Function to Experimental Data. Physics and Chemistry, pp.760-79.
Rivero, R.G., Grunwald, S. & Bruland, G.L., 2007. Incorporation of Spectral Data into Multivariate Geostatistical Models to Map Soil Phosphorus Variability in a Florida Wetland. Geoderma 140, pp.428-43.
Rivero, R.G., Grunwald, S., Osborne, T.Z., Reddy, K.R. & Newman, S., 2007. Characterization of the Spatial Distribution of Soil Properties in Water Conservation Area 2A, Everglades, Florida. Soil Science 172, pp.149-66.
Roloff, G.J. & Kernohan, B.J., 1999. Evaluating Reliability of Habitat Suitability Index Models. Wildlife Society Bulletin 27(4), pp.973-85.
Rossi, R.E., Borth, P.W. & Tollefson, J.J., 1993. Stochastic Simulation for Characterizing Ecological Spatial Patterns and Appraising Risk. Ecological Applications, pp.719-35.
Rutchey, K., Schall, T. & Sklar, F., 2008. Development of Vegetation Maps for Assessing Everglades Restoration Progress. Wetlands 28(2), pp.806-16.
Saltelli, A., Chan, K. & Scott, E.M., 2004. Sensitivity Analysis. Chichester, England: John Wiley & Sons Ltd.


SFWMD, 2009. DBHYDRO. [Online] Available at: http://my.sfwmd.gov/dbhydroplsql/show_dbkey_info.main_menu [Accessed 04 August 2010].
SFWMD, 2008a. RSM Water Quality User Manual. User Manual. West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 2008b. RSM Water Quality User Manual (DRAFT). User Manual (draft). West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 2008c. RSMWQE Theory Manual (DRAFT). Theory Manual (draft). West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 2008d. WCA2A HSE Setup. Overview Document. West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 2005. Documentation of the South Florida Water Management Model Version 5.5. Documentation. West Palm Beach: South Florida Water Management District.
SFWMD, 2005a. Regional Simulation Model (RSM) Hydrologic Simulation Engine (HSE) User's Manual. User Manual. West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 2005b. Regional Simulation Model (RSM) Theory Manual. Theory Manual. West Palm Beach, Florida: South Florida Water Management District.
SFWMD, 1999. Land Cover Land Use 1999. [Online] Available at: http://my.sfwmd.gov/gisapps/sfwmdxwebdc/dataview.asp?query=unq_id=1593 [Accessed 11 November 2009].
SFWMD, 1995. Land Cover Land Use 1995. [Online] Available at: http://my.sfwmd.gov/gisapps/sfwmdxwebdc/dataview.asp?query=unq_id=297 [Accessed 11 November 2009].
Shipley, B. & Keddy, P.A., 1988. The Relationship Between Relative Growth Rate and Sensitivity to Nutrient Stress in Twenty-Eight Species of Emergent Macrophytes. Journal of Ecology 76(4), pp.1101-10.
Shipley, B. & Peters, R.H., 1990. A Test of the Tillman Model of Plant Strategies: Relative Growth Rate and Biomass Partitioning. The American Naturalist 136(2), pp.139-53.
Snowling, S.D. & Kramer, J.R., 2001. Evaluating modelling uncertainty for model selection. Ecological Modelling 138(1-3), pp.17-30.
Sobol, I.M., 2001. Global Sensitivity Indices for Nonlinear Mathematical Models and Their Monte Carlo Estimates. Mathematics and Computers in Simulation 55, pp.271-80.


Stroustrup, B., 2000. The C++ Programming Language (Special Edition). Westford, Massachusetts: Addison-Wesley. ISBN: 0-201-70073-5.
Tanner, C.C., 1996. Plants for Constructed Wetland Treatment Systems: A Comparison of the Growth and Nutrient Uptake of Eight Emergent Species. Ecological Engineering, pp.59-83.
Tarboton, K.C., Irizarry-Ortiz, M.M., Loucks, D.P., Davis, S.M. & Obeysekera, J.T., 2004. Habitat Suitability Indices for Evaluating Water Management Alternatives. West Palm Beach, Florida: South Florida Water Management District.
UFL, 2010. University of Florida HPC Center. [Online] Available at: http://www.hpc.ufl.edu/ [Accessed 30 September 2010].
Urban, N.H., Davis, S.M. & Aumen, N.G., 1993. Fluctuations in Sawgrass and Cattail Densities in Everglades Water Conservation Area 2A Under Varying Nutrient, Hydrologic, and Fire Regimes. Aquatic Botany 46, pp.203-23.
USACE, S.F.R.O., 2010a. CERP: The Plan in Depth, Part 1. [Online] Available at: http://www.evergladesplan.org/about/rest_plan_pt_01.aspx [Accessed 03 August 2010].
USACE, S.F.R.O., 2010b. CERP: The Plan in Depth, Part 2. [Online] Available at: http://www.evergladesplan.org/about/rest_plan_pt_02.aspx [Accessed 03 August 2010].
van der Valk, A.G. & Rosburg, T.R., 1997. Seed Bank Composition Along a Phosphorus Gradient in the Northern Florida Everglades. Wetlands 17(2), pp.228-36.
van Griensven, A., Meixner, T., Grunwald, S., Bishop, T., Diluzio, M. & Srinivasan, R., 2006. A Global Sensitivity Analysis Tool for the Parameters of Multi-Variable Catchment Models. Journal of Hydrology, pp.10-23.
Walker, W.W. & Kadlec, R.H., 1996. A Model for Simulating Phosphorus Concentrations in Waters and Soils Downstream of Everglades Stormwater Treatment Areas. Draft. Homestead, Florida: US Department of the Interior, Everglades National Park. Available at: http://publicfiles.dep.state.fl.us/DEAR/GoldAdministrativeRecord/Item%2027/018752.PDF.
Walters, C.J. & Martell, S.J.D., 2004. Fisheries Ecology and Management. Princeton: Princeton University Press.
Wang, N., 2009. Personal Communication. West Palm Beach, USA: South Florida Water Management District. Received 2003 vegetation map and .dss hydrology input files.


Wang, N., 2010. WCA2A 2003 Vegetation Data. Personal Communication. West Palm Beach: South Florida Water Management District.
Wang, J.D., Swain, E.D., Wolfert, M.A., Langevin, C.D., James, D.E. & Telis, P.A., 2007. Application of FTLOADDS to Simulate Flow, Salinity, and Surface-Water Stage in the Southern Everglades, Florida. Scientific Investigations Report 2007-5010. United States Geological Survey.
Wetzel, P.R., 2003. Nutrient and Fire Disturbance and Model Evaluation Documentation for the Across Trophic Level System Simulation (ATLSS). Johnson City, Tennessee: East Tennessee State University.
Wetzel, P.R., 2001. Plant Community Parameter Estimates and Documentation for the Across Trophic Level System Simulation (ATLSS). Johnson City, Tennessee: East Tennessee State University.
Willard, D.A., 2010. SOFIA FS-146-96. [Online] Available at: http://sofia.usgs.gov/publications/fs/146-96/ [Accessed 03 August 2010].
Wu, Y., Sklar, F.H. & Rutchey, K., 1997. Analysis and Simulation of Fragmentation Patterns in the Everglades. Ecological Applications 7(1), pp.268-76.
Zajac, Z.B., 2010. Global Sensitivity and Uncertainty Analysis of Spatially Distributed Watershed Models. Ph.D. Dissertation. Gainesville, Florida: University of Florida.


BIOGRAPHICAL SKETCH
Gareth Lynton Lagerwall was born and raised in the small town of Eshowe, KwaZulu-Natal, South Africa. He attended school there, where he was an active student member. Partially surrounded by forest, Eshowe is also where he developed a love for the natural world around him. In 2005 he graduated with a B.Sc. in agricultural engineering from the University of Natal. From there he went on to pursue his Ph.D. in agricultural and biological engineering at the University of Florida. En route, he received an M.Sc. from the same department in April 2010, after which he received his Ph.D. in August of the following year.