Quantifying the Relationships Among Habitat, Behavior and Performance of Gag, Mycteroperca microlepis


Material Information

Title:
Quantifying the Relationships Among Habitat, Behavior and Performance of Gag, Mycteroperca microlepis
Physical Description:
1 online resource (577 p.)
Language:
english
Creator:
Biesinger,Zy
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Fisheries and Aquatic Sciences, Forest Resources and Conservation
Committee Chair:
Lindberg, William J
Committee Co-Chair:
Bolker, Benjamin M
Committee Members:
Frazer, Tom K
Murie, Debra J
Osenberg, Craig W

Subjects

Subjects / Keywords:
distribution -- gag -- habitat -- home -- microlepis -- model -- mycteroperca -- predation -- quality -- range -- risk -- space -- use
Forest Resources and Conservation -- Dissertations, Academic -- UF
Genre:
Fisheries and Aquatic Sciences thesis, Ph.D.
bibliography (marcgt)
theses (marcgt)
government publication (state, provincial, territorial, dependent) (marcgt)
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract:
An animal's choice of position in the landscape often represents a trade-off between the conflicting needs for food and shelter. The distributions of predators, conspecifics, and resources across the landscape define costs and benefits, where different habitats and environmental conditions affect predation risks and foraging opportunities. Theoretical studies suggest that individuals and populations can respond to predation risk, landscape structure and environmental conditions through different space-use strategies, while empirical studies have demonstrated that animals do indeed vary their choice of location to balance predation risk and foraging success. Our understanding of how landscape and environmental conditions affect space use and fitness of animals foraging around a central shelter is often compromised by a poor understanding of space-use patterns. This dissertation examined the effect of landscape structure and environmental conditions on space-use patterns and fitness of large, mobile, reef fish using gag (Mycteroperca microlepis) as a model. I developed a model of space-use rules to predict distributions of individuals sharing a common shelter. One model prediction was that individuals trade off time spent near the shelter where foraging competition is high with time spent farther away where predation risk is high. Applied to gag, the model predicted that in presumably safer hard-bottom landscapes, individuals should use more of the surrounding landscape and experience higher fitness than individuals in sand-bottom landscapes. To test this prediction I recorded two- and three-dimensional gag space use and performance (growth and condition) around experimental reefs in sand- and hard-bottom landscapes, using biological sampling and new acoustic telemetry positioning technology. Tests of the telemetry system showed that it produced robust transmitter positions over appropriate spatial and temporal ranges in the experimental area. Using the telemetry array to obtain a basic description of the patterns and extent of gag space use, I found that core use areas ranged from 240 to 891 m2 and that individuals moved farthest from the reef during the daytime. Space-use correlations with other environmental conditions were less conclusive. Finally, in an experiment comparing gag space use and performance, I acoustically tagged eight individuals on each of three reefs in sand-bottom and three in hard-bottom landscapes. Gag living in hard-bottom had core areas 7.6 times larger than gag in sand-bottom landscapes (412 versus 54 m2, respectively). I failed to detect a difference in gag performance. These results suggest that gag alter their space-use patterns to trade off risks and foraging opportunities. Further studies might improve our understanding of how differences in space use of different landscapes affect fitness.
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility:
by Zy Biesinger.
Thesis:
Thesis (Ph.D.)--University of Florida, 2011.
Local:
Adviser: Lindberg, William J.
Local:
Co-adviser: Bolker, Benjamin M.
Electronic Access:
RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2012-08-31

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2011
System ID:
UFE0043281:00001




Full Text

PAGE 1

1 QUANTIFYING THE RELATIONSHIPS AMONG HABITAT, BEHAVIOR AND PERFORMANCE OF GAG, MYCTEROPERCA MICROLEPIS By ZY BIESINGER A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2011

PAGE 2

2 2011 Zy Biesinger

PAGE 3

3 To my wife, Jackie, and my wonderful children

PAGE 4

4 ACKNOWLEDGMENTS First and foremost, I thank my wife, Jackie, for her excitement and support; she has patiently encouraged my work and provided a wonderful example of endurance. I also thank my family for their patience during our time at the University of Florida. I am extremely grateful to William J. Lindberg and Benjamin M. Bolker and my supervisory committee, Thomas K. Frazer, Debra J. Murie, and Craig W. Osenberg. Bill Lindberg taught me to think critically about science, as well as how to write more clearly and effectively. He also provided the opportunity and resources to pursue an experiment well beyond the opportunities I expected. Ben Bolker taught me how to construct a mathematical model and then effectively describe it in writing. From him I also learned how to program in R. From Tom Frazer I learned about marine ecology and how field work supports theoretical advancement. Deb Murie taught me about fish biology and the importance of quickly identifying the heart of a scientific question and the research elements that address it. Doug Marcinek not only provided substantial technical and logistical support over many field days, he also taught me to handle a boat and to dive with purpose, safety, and pleasure. I am grateful to several people who offered the use of their specialized research tools. Arnoldo Valle-Levinson donated the use of an Acoustic Doppler Current Profiler, which added the important dimension of aquatic environmental conditions to this study; Robert Swett donated the use of a Trimble GPS. Thomas Grothues and Joseph Dobarro, from Rutgers University, gathered much of the sonar imagery using their autonomous underwater vehicle. Denise Petty and Ruth Francis-Floyd helped develop my fish handling and tagging procedures and taught me how to tag and suture. I am grateful to many volunteers who generously donated their time: Gustav Pauley,

PAGE 5

5 Tori Bacheler, Emilee Pierce, Ryan Kroutil, Kate Lazar, Carly Knoell, Alecia Adamson, Garin Davidson, Geoff Smith, Cheryl Thacker, Michelle Meadows, Kevin Segall, and Earl Smith. Procedures to capture and tag Mycteroperca microlepis were approved by the IFAS Animal Use Committee at the University of Florida (Approval Number: 001-08FAS). I gratefully acknowledge research funding and support from the D.M. Smith Marine Fellowship, the E.T. York Presidential Fellowship, the Disney Worldwide Conservation Fund, the Florida Chapter of the American Fisheries Society, the American Academy of Underwater Sciences, the Aylesworth Foundation, the University l nick Endowment Fund, and the National Marine Fisheries Service MARFIN Program, in addition to base funds from Bill Lindberg and Ben Bolker.

PAGE 6

6 TABLE OF CONTENTS

ACKNOWLEDGMENTS 4
LIST OF TABLES 9
LIST OF FIGURES 10
LIST OF ABBREVIATIONS 14
ABSTRACT 15

CHAPTER
1 INTRODUCTION 17
2 PREDICTING LOCAL POPULATION DISTRIBUTIONS AROUND A CENTRAL SHELTER BASED ON A PREDATION RISK-GROWTH TRADE-OFF 21
  Background 21
  Methods 25
    Predation Mortality Risk 25
    Food Acquisition and Growth 27
    Habitat Quality 27
    Local Population Distribution 28
  Results 29
    Effects of Risk Dilution and Foraging Competition 29
      b_α = b_g = 0 29
      b_α = b_g > 0 30
      b_α > b_g 30
      b_g > b_α 30
    Parameter Effects on Population Distribution 31
      Strength of foraging competition 32
      Reaction distance 32
      Growing populations in two landscapes 32
  Discussion 34
3 TESTING AN ACOUSTIC TELEMETRY POSITIONING SYSTEM 47
  Background 47
  Methods 50
    Study System 50
    Telemetry System 50
      Transmitters 51
      Hydrophones 51
      Post-processing software: ALPS 53

PAGE 7

7
    Hydrophone Deployments 54
    Telemetry Data Processing 54
      Detection fraction 55
      Position solution fraction 55
      Position solution accuracy 56
      Spatial variation in position solution fractions 57
      Errors in sound speed estimates 57
      Errors in hydrophone position estimates 58
  Results 58
    Detection Fraction 59
    Position Solution Fraction 60
    Position Solution Accuracy 61
    Spatial Variation in Position Solution Fractions 62
    Errors in Sound Speed Estimates 62
    Errors in Hydrophone Position Estimates 63
  Discussion 63
4 GAG SPACE USE RELATIVE TO ENVIRONMENTAL CONDITIONS 77
  Background 77
  Methods 79
    Study System and Organism 79
    Habitat Preference 82
    Aquatic Environmental Conditions 83
    Fish Tagging 84
    Hydrophone Array 85
    Data Post-Processing 86
    2007 Deployment 87
    2008 Deployment 87
  Results 88
    Habitat Preference 88
    Aquatic Environmental Conditions 88
    Telemetry Results 89
      Extent of space use 90
      Vertical position in the water column 94
      Gag travel speed 96
  Discussion 97
5 COMPARING GAG SPACE USE IN TWO LANDSCAPES 120
  Background 120
  Methods 123
    Study System and Organism 123
    Experimental Design 125
    Experimental Reef System 126
    Landscape and Aquatic Conditions 126
    Growth and Condition 127

PAGE 8

8
  Results 129
    Extent of Space Use In Two Landscapes 130
    Space Use and Environmental Conditions 132
    Space Use and Habitat Preference 133
    Growth and Condition 134
  Discussion 135
6 CONCLUSIONS 156

APPENDIX
A DERIVATIONS 164
B MODEL CODE AND EXAMPLES FOR CHAPTER 2 167
C TELEMETRY ARRAY ASSESSMENT DEPLOYMENTS 174
D ADDITIONAL CHAPTER 3 FIGURES 176
E ADDITIONAL CHAPTER 4 FIGURES 178
F ADDITIONAL CHAPTER 5 FIGURES 203
G R CODE FOR ALL CALCULATIONS 214

LIST OF REFERENCES 569
BIOGRAPHICAL SKETCH 577

PAGE 9

9 LIST OF TABLES

2-1  Parameter values and elasticities 40
3-1  Array deployment details 68
4-1  Summary of gag measurements and behavior 102
4-2  Generalized additive model fits for distance-from-reef models 103
4-3  Generalized additive model fits for altitude models 104
4-4  Generalized additive model fits for travel speed models 105
5-1  Mean measurements and behavior of all gag on a reef 141
5-2  Measurements and behavior of individual tagged gag 142
5-3  Measurements of gag collected for growth and condition analysis 143

PAGE 10

10 LIST OF FIGURES

2-1  Geometry of the risk function 41
2-2  Predation mortality risk 42
2-3  Population distribution and range predictions 43
2-4  Predicted population ranges and densities in two landscapes 44
2-5  Predicted population range, density, and realized habitat quality 45
2-6  Population range for given combinations of reaction distance and abundance 46
3-1  Overall detection fractions by individual hydrophones at various distances 69
3-2  Temporal variation in hourly detection and position solution fractions 70
3-3  Condition number versus position solution fraction and accuracy 71
3-4  Mean position solution fractions from array deployments at various spacings 72
3-5  Assessing position solution accuracy 73
3-6  Spatial variation in position solution fractions within the array 74
3-7  Sound speed changes versus position solution fraction and accuracy 75
3-8  Hydrophone position errors versus position solution fraction and accuracy 76
4-1  Habitat composition and use 106
4-2  Aquatic and lunar conditions 107
4-3  Circular distributions of current direction for each deployment 108
4-4  Telemetered positions and hourly position fractions for Fish ID 2 109
4-5  Gag behavior and telemetry array performance distributions 110
4-6  Home range stabilization curves 111
4-7  Distance from the reef versus time of day 112
4-8  Time series of the distance from the reef for Fish ID 2 113
4-9  Nighttime distance from the reef versus lunar index 114

PAGE 11

11
4-10  Distance from the reef versus water temperature 115
4-11  Altitude above the seafloor versus time of day 116
4-12  Time series of the altitude above the seafloor for Fish ID 3 117
4-13  Nighttime altitude above the seafloor versus lunar index 118
4-14  Time series of gag travel speed for Fish ID 2 119
5-1  Map of the experimental reefs 144
5-2  Size distributions of observed and tagged gag at experimental reefs 145
5-3  Telemetered two-dimensional positions for four individuals 146
5-4  Time series of the distance from the reef for Fish ID 17 147
5-5  Core area kernel density estimates 148
5-6  Home range stabilization curves 149
5-7  Distributions of calculated distances from the reef 150
5-8  Distributions of calculated travel speeds of gag 151
5-9  Distance from the reef versus time of day 152
5-10  Habitat composition and use in Deployment A, HB-1 153
5-11  Size and fractional ages of gag collected for growth and condition analyses 154
5-12  Length versus age of gag collected for growth and condition analyses 155
D-1  Temporal variation in hourly detection and hourly position solution fractions 176
D-2  Temporal variation along the Northing axis of position solutions 177
E-1  Two-dimensional positions and hourly position fractions for Fish ID 1 178
E-2  Two-dimensional positions and hourly position fractions for Fish ID 3 179
E-3  Two-dimensional positions and hourly position fractions for Fish ID 4 180
E-4  Two-dimensional positions and hourly position fractions for Fish ID 5 181
E-5  Time series of the distance from the reef for Fish ID 1 182
E-6  Time series of the distance from the reef for Fish ID 3 183

PAGE 12

12
E-7  Time series of the distance from the reef for Fish ID 4 184
E-8  Time series of the distance from the reef for Fish ID 5 185
E-9  Distance from the reef versus current direction 186
E-10  Distance from the reef versus current speed 187
E-11  Altitude above the seafloor versus distance from the reef 188
E-12  Time series of the altitude above the seafloor for Fish ID 4 189
E-13  Time series of the altitude above the seafloor for Fish ID 5 190
E-14  Altitude above the seafloor versus water temperature 191
E-15  Altitude above the seafloor versus current direction 192
E-16  Altitude above the seafloor versus current speed 193
E-17  Gag travel speed versus time of day 194
E-18  Time series of travel speed for Fish ID 1 195
E-19  Time series of travel speed for Fish ID 3 196
E-20  Time series of travel speed for Fish ID 4 197
E-21  Time series of travel speed for Fish ID 5 198
E-22  Nighttime gag travel speed versus lunar index 199
E-23  Gag travel speed versus water temperature 200
E-24  Gag travel speed versus current direction 201
E-25  Gag travel speed versus current speed 202
F-1  Time series of the distance from the reef for Fish ID 23 203
F-2  Time series of the distance from the reef for Fish ID 21 204
F-3  Time series of the distance from the reef for Fish ID 1 205
F-4  Aquatic and lunar conditions 206
F-5  Distance from the reef versus lunar index 207
F-6  Distance from the reef versus water temperature 208

PAGE 13

13
F-7  Gag travel speed versus time of day 209
F-8  Gag travel speed versus lunar index 210
F-9  Gag travel speed versus water temperature 211
F-10  Habitat composition and use in Deployment D, HB-2 212
F-11  Habitat composition and use in Deployment F, HB-3 213

PAGE 14

14 LIST OF ABBREVIATIONS

ADCP  Acoustic Doppler Current Profiler
AIC  Akaike Information Criterion
ALPS  Asynchronous Logger Positioning System
CN  Condition number
GAM  Generalized additive model
GPS  Global positioning system
KDE  Kernel density estimate
MCP  Minimum convex polygon
PSR  Partial symbol reconstruction
SFMA  Steinhatchee Fisheries Management Area

PAGE 15

15 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

QUANTIFYING THE RELATIONSHIPS AMONG HABITAT, BEHAVIOR AND PERFORMANCE OF GAG, MYCTEROPERCA MICROLEPIS

By Zy Biesinger

August 2011

Chair: William J. Lindberg
Cochair: Benjamin M. Bolker
Major: Fisheries and Aquatic Sciences

An animal's choice of position in the landscape often represents a trade-off between the conflicting needs for food and shelter. The distributions of predators, conspecifics, and resources across the landscape define costs and benefits, where different habitats and environmental conditions affect predation risks and foraging opportunities. Theoretical studies suggest that individuals and populations can respond to predation risk, landscape structure and environmental conditions through different space-use strategies, while empirical studies have demonstrated that animals do indeed vary their choice of location to balance predation risk and foraging success. Our understanding of how landscape and environmental conditions affect space use and fitness of animals foraging around a central shelter is often compromised by a poor understanding of space-use patterns. This dissertation examined the effect of landscape structure and environmental conditions on space-use patterns and fitness of large, mobile, reef fish using gag (Mycteroperca microlepis) as a model. I developed a model of space-use rules to predict distributions of individuals sharing a common shelter. One model prediction was

PAGE 16

16 that individuals trade off time spent near the shelter where foraging competition is high with time spent farther away where predation risk is high. Applied to gag, the model predicted that in presumably safer hard-bottom landscapes, individuals should use more of the surrounding landscape and experience higher fitness than individuals in sand-bottom landscapes. To test this prediction I recorded two- and three-dimensional gag space use and performance (growth and condition) around experimental reefs in sand- and hard-bottom landscapes, using biological sampling and new acoustic telemetry positioning technology. Tests of the telemetry system showed that it produced robust transmitter positions over appropriate spatial and temporal ranges in the experimental area. Using the telemetry array to obtain a basic description of the patterns and extent of gag space use, I found that core use areas ranged from 240 to 891 m2 and that individuals moved farthest from the reef during the daytime. Space-use correlations with other environmental conditions were less conclusive. Finally, in an experiment comparing gag space use and performance, I acoustically tagged eight individuals on each of three reefs in sand-bottom and three in hard-bottom landscapes. Gag living in hard-bottom had core areas 7.6 times larger than gag in sand-bottom landscapes (412 versus 54 m2, respectively). I failed to detect a difference in gag performance. These results suggest that gag alter their space-use patterns to trade off risks and foraging opportunities. Further studies might improve our understanding of how differences in space use of different landscapes affect fitness.

PAGE 17

17 CHAPTER 1 INTRODUCTION Our understanding of how landscape and environmental conditions affect animal behavior and fitness is often compromised by a poor understanding of space-use patterns. An animal's choice of position in the landscape often represents a trade-off between the conflicting needs for shelter and food (Stearns, 1992; Hebblewhite and Merrill, 2009). The distributions of predators and resources across the landscape define the costs and benefits associated with each location, where different habitats present different sheltering and foraging opportunities (Claireaux and Lefrançois, 2007; Lima and Dill, 1990). In response to temporal changes in predator presence, resource distributions, and environmental conditions, animals adjust their space use by evaluating their environment and moving to the location best suited to their needs (e.g., Ferrari et al., 2009). Several theories explore different components of the risk/resource trade-off and their impact on space-use decisions. Optimal foraging theory predicts an in-patch residency time to maximize energy intake rates, while ignoring predation risk (Perry and Pianka, 1997). Ideal free distribution theory incorporates competition to predict movement among patches in a way that maximizes and equalizes fitness for all individuals (Giske et al., 1997). The basin model extends ideal free theory to predict population distributions in continuous landscapes. Central place foraging theory balances foraging success with predation risk for animals transporting resources to a central shelter (Bakker et al., 2005). Foraging arena theory argues that spatially and temporally restricted arenas establish variation in risk and foraging opportunities in aquatic environments (Walters and Martell, 2004). Many empirical studies have tested

PAGE 18

18 such theories and added to our understanding of the role of landscape and environmental conditions in space-use patterns. For example, shelter and habitat affected risk and survival (Arthur et al., 2005; Lewis and Eby, 2002). Animals varied their choice of location to balance predation risk and foraging success (Gliwics et al., 2006; Cowlishaw, 1997), while risk modified and interfered with foraging behavior and success (Macleod and Gosler, 2006; Cooper, 2000). Over time, animals' space-use strategies must balance these competing needs, and one strategy is to establish a home range around a central shelter and then forage across the surrounding landscape (e.g., Lindberg et al., 2006; Sale, 1971; Hovel and Lowe, 2007; Johns and Armitage, 1979). In this strategy, the position relative to shelter, the surrounding habitats, and changing environmental conditions define the optimal space-use pattern and resultant fitness. The movement of large, mobile, reef fish provides an opportunity to explore how animal space-use patterns and fitness respond to differences in landscape composition and varying environmental conditions. For fish with home ranges centered on a common shelter, movement away from shelter affects the balance of predation risk and foraging competition. But it is poorly understood how the composition of the surrounding seafloor and changing environmental conditions affect this balance. Using gag (Mycteroperca microlepis) as a model, I examined how space use was affected by the landscape and other environmental conditions. Before the end of their first year, juvenile gag leave nearshore nursery grounds and spend several years moving across the shallow continental shelf before joining spawning aggregations at the shelf edge (Koenig and

PAGE 19

19 Coleman, 1998; Collins et al., 1998). During the years on the shallow shelf, juveniles establish home ranges centered on rare, small physical structures in an otherwise flat landscape of sand and hard-bottom habitats, which potentially present different risks and foraging opportunities (Parker, 1983; Hood and Schlieder, 1992). These rare physical structures typically attract many individuals, so that foraging competition would be greatest near the shelter, causing them to forage out across the landscape and retreat to shelter when disturbed. Using existing experimental artificial reefs, categorical habitat maps, aquatic condition data, fine-grained space-use data, and gag performance data, I asked how landscape composition and environmental conditions affected the space use and fitness of gag established on reefs surrounded by sand- or hard-bottom landscapes. To explore how the landscape and environmental conditions affected gag space use and performance, I asked three main questions: (1) How does the landscape composition around a reef affect gag space use? (2) How do diel, lunar, and water-condition patterns affect gag space use? (3) How does the landscape composition around a reef affect gag fitness, using growth and condition as proxies? In Chapter 2 I developed a theoretical model predicting space-use distributions to balance predation risk and foraging competition around a common shelter. I modeled the scenario in which individuals, centering their movement on a common shelter, move outward across the surrounding landscape to minimize foraging competition, only retreating to the shelter upon a predator encounter. Individuals trade off time spent near the shelter where competition is high with time spent farther away where risk is high. The model predicts different space-use patterns in different landscapes. To test these predictions, I quantified gag

PAGE 20

space-use patterns in different landscapes and under varying environmental conditions. Acoustic telemetry is beginning to overcome some of the limitations in making observations of large, mobile organisms at appropriate locations, depths, and spatial and temporal scales in aquatic environments. New active acoustic positioning systems can provide fine-grained fish movement behavior, but are not often tested to establish the quality and reliability of their performance. In Chapter 3 I assessed the capabilities and performance of an acoustic positioning system using stationary transmitters at known locations during fifteen field deployments. My first effort to quantify the space-use patterns of gag in response to a range of temporally and spatially varying conditions and establish the appropriate spatial and temporal scales for future experiments is described in Chapter 4. After verifying the telemetry system's capabilities and describing the general patterns and range of gag space use, I conducted an experiment comparing space-use patterns and fitness proxies of individuals centered on reefs placed in either sand- or hard-bottom landscapes, described in Chapter 5. A final discussion of how these efforts advance our understanding of how landscape and environmental conditions affect animal space use and fitness is given in Chapter 6.

PAGE 21

21 CHAPTER 2 PREDICTING LOCAL POPULATION DISTRIBUTIONS AROUND A CENTRAL SHELTER BASED ON A PREDATION RISK-GROWTH TRADE-OFF Background Most animals must acquire resources while avoiding predators (Stearns, 1992; Lima and Dill, 1990). Because risks and resources vary spatially, this balance is often achieved through choice of location, which is complicated by interactions such as intraspecific competition (Walters and Martell, 2004). Many theoretical frameworks address the habitat-selection trade-off between maximizing foraging success and minimizing risk (Rosenzweig, 1981). A common scenario in animal ecology is that a group of (not necessarily related) individuals will share a common shelter from predators, making forays out from the shelter to forage but retreating back to the shelter to avoid predation risk (e.g., fish: Lindberg et al., 2006; Sale, 1971; lobster: Hovel and Lowe, 2007; Karnofsky et al., 1989; marmots: Johns and Armitage, 1979; squirrels: Schooley et al., 1996). The space-use pattern of any individual, which I will represent throughout this paper as its overall average distance from the shared shelter, represents a trade-off between predation risk and foraging success. If it forages close to the shelter, it must compete for resources with a large number of conspecifics that are doing the same thing; if it forages far away, it incurs a greater risk of predation because it cannot easily retreat to the shelter. Despite a long tradition of theoretical exploration of the relationships among foraging success, conspecific competition, predation risk, and population distribution in a heterogeneous environment, the scenario I describe here has not (to my knowledge) been explored theoretically.

PAGE 22

22 Optimal foraging theory (Pyke, 1984; Perry and Pianka, 1997) describes an individual's use of resource patches to maximize energy intake rates, but typically ignores predation risk and intraspecific competition. Ideal free distribution theory (Fretwell and Lucas, 1970; Kacelnik et al., 1992; Giske et al., 1997) extends this framework to include competition. This theory predicts the distribution of conspecific competitors moving freely among discrete habitat patches of different quality, resulting in equal, maximized fitness for all individuals. The typical metric of patch quality is the rate of resource acquisition, and again predation risk is ignored; by aggregating individuals' behavior, ideal free theory makes predictions about the population distribution. The basin model in turn extends ideal free theory to a continuous landscape where habitat quality varies spatially, with a central, highest-quality habitat surrounded by lower-quality habitats. The basin model predicts that a growing population will expand outward from high- to low-quality areas via a demographic response as the growth rate becomes positive in previously unused habitat at the margins of the range. Central place foraging theory (Giraldeau et al., 1994; Bakker et al., 2005) provides a framework that addresses the trade-off between maximizing foraging success and minimizing predation risk. This theory uses the transport cost and risk of moving resources from discrete patches to a central shelter to predict spatial and temporal foraging budgets that maximize the rate of energy acquisition. Predation risk is often assumed as the motivation for transporting resources back to shelter, but is not explicitly included in the theory. Central place foraging generally fails to account for

PAGE 23

23 competitive interactions (Acosta et al., 1995). Further, central place foraging theory only takes into account the cost of transporting resources to a central shelter. No existing theoretical framework handles the density- and habitat-dependent trade-off between predation risk and foraging success at the scale of individual movement decisions. Empirical work to understand this trade-off addresses some of the elements incorporated in the theoretical frameworks. Resource selection functions estimate empirical correlations between animal habitat use and several habitat characteristics (Boyce et al., 2002). These functions are most useful in systems where large, comprehensive habitat descriptor datasets have been collected through remote sensing, and the target animals are managed species ranging over large areas (e.g., Hebblewhite et al., 2005). Resource selection functions incorporate foraging competition and predation risk only as qualities of the fixed landscape, ignoring possible intraspecific and predator-prey movement interactions (White et al., 1996). They do, however, explicitly address spatial resource and risk distributions. At a smaller scale, Scrimgeour and Culp (1994) measured larval mayfly habitat selection in artificial streams in response to predation risk and food abundance. Their findings suggest that mayflies do trade off foraging success for predation risk: their preferences for high-risk/high-food and low-risk/low-food areas are approximately equal and intermediate to low-risk/high-food and high-risk/low-food areas. In a study of predator behavior, Quinn and Cresswell (2004) show that, during stressful winters, ground-feeding coastal shorebirds increase foraging success (at the cost of increased

PAGE 24

24 predation risk) by feeding in more dispersed flocks as compared to summer flocking behavior. I model a scenario that has not been theoretically explored. Individuals, centering their movement on a common shelter, move outward across the landscape to minimize foraging competition, only retreating to shelter when a predator is encountered (Lindberg et al., 2006). The cost of being away from shelter is increased predation risk, not the energetic expense of resource transportation as in central place foraging. Risk dilution, a decrease in risk due to increased prey numbers, could also decrease as density decreases away from shelter (Bednekoff and Lima, 1998). In short, foraging competition drives prey away from shelter while predation risk drives them toward shelter. In this situation the population response is behavioral, not demographic as in the basin model, and happens at the scale of individual movement decisions over short times and local areas, not generations and full population ranges. At short temporal and spatial scales, I use "population range" to mean the maximum distance from shelter to which individuals of a local population travel while foraging. The combination of all prey movement decisions produces the local population distribution. Also, I refer to "foraging success" rather than available food, highlighting the fact that the focal prey is itself a predator of a lower trophic level. To predict the optimal population distribution that results when all individuals freely and independently balance predation risk and foraging success, I describe mortality risk as increasing with distance from shelter and possibly decreasing with density due to risk dilution. Foraging success, which I equate to realized growth, is a decreasing function

PAGE 25

25 of density that is otherwise independent of location. I combine these two processes into a single metric of habitat quality and explore predicted population distributions that arise in different landscapes from prey distributing themselves so as to maximize their realized habitat quality. I explore model behavior to reveal the implications of foraging competition, risk dilution, abundance changes, and landscape characteristics on the local population distribution. Methods Predation Mortality Risk I assume that predation mortality risk to an individual prey is an increasing function of distance from a shelter, r (Eggleston et al., 1992; Hemmi, 2005). I construct a simplified mechanistic model for the probability of capture, α, of the prey by a newly detected predator. Prey at distance r from the shelter will react to a predator at some distance D (Figure 2-1; Cooper, 2000; Broom and Ruxton, 2005). At the moment the prey reacts, lines connecting the shelter, the prey, and the predator define an angle θ(r). Both the prey and the predator begin moving directly toward the shelter at their respective speeds, S_prey and S_predator (Table 2-1). If the prey arrives first, it escapes; otherwise it is captured. Although this chase could more realistically be modeled as a pursuit-evasion game (Bopardikar et al., 2007; Karnad and Isler, 2008), this formulation simplifies the geometry without qualitatively changing model predictions. The two (symmetrical) critical points on the reaction circle occur when the prey is at a distance r and the predator is at a distance r_predator = r(S_predator/S_prey) from the shelter; these points represent predator locations on the circle where the predator and the prey will arrive at the shelter simultaneously. The critical points define a critical angle, θ_c(r).
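As an illustration of this race geometry, the sketch below computes the critical angle numerically in R (the language used for the dissertation's own code in Appendices B and G). The closed form follows from the law of cosines, as stated in the next paragraph and derived in Appendix A; the parameter values here are illustrative only, not those of Table 2-1.

# Critical angle theta_c(r): the predator bearing, measured at the prey
# between the directions to the shelter and to the predator, at which prey
# and predator reach the shelter at the same time. The law-of-cosines
# argument is clamped to [-1, 1] so that theta_c = 0 inside the minimum
# risk threshold r' and theta_c = pi beyond the maximum risk threshold r''.
theta_c <- function(r, D, S_prey, S_predator) {
  x <- ((1 - (S_predator / S_prey)^2) * r^2 + D^2) / (2 * D * r)
  acos(pmin(1, pmax(-1, x)))
}

# Risk thresholds from the text: inside r' the prey always wins the race;
# beyond r'' it always loses.
r_min_threshold <- function(D, S_prey, S_predator) D * S_prey / (S_predator + S_prey)
r_max_threshold <- function(D, S_prey, S_predator) D * S_prey / (S_predator - S_prey)

# Example: with D = 10 and a predator twice as fast as the prey,
# r' = 3.33 and r'' = 10; theta_c rises from 0 to pi between them.
r <- seq(0.5, 12, by = 0.5)
round(theta_c(r, D = 10, S_prey = 1, S_predator = 2), 2)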

PAGE 26

26 When a predator appears at a point on the circle farther from the shelter than the critical points, that is, when θ(r) exceeds θ_c(r), the prey escapes. If a predator is equally likely to come from any direction, the probability that the prey will be captured (i.e., that it will lose the race to the shelter), α(r), equals 2θ_c(r)/2π. I incorporate the probability of a predator encounter and the chance that a prey will escape even after being captured by limiting the probability of capture to α_max < 1. Similarly, to account for the chance the prey is captured even within the safety of the shelter, I include a minimum probability of capture, α_min > 0. Thus, I calculate the overall density-independent probability of capture as Equation 2-1. By the law of cosines (Appendix A), θ_c(r) = arccos[((1 - S_predator^2/S_prey^2) r^2 + D^2) / (2Dr)]. When the prey is close enough to the shelter, that is, within the minimum risk threshold r' (i.e., when r ≤ r' = D S_prey/(S_predator + S_prey)), it always experiences the minimum risk, α(r) = α_min (Figure 2-2). When it is beyond the maximum risk threshold r'' (i.e., when r ≥ r'' = D S_prey/(S_predator - S_prey)), it experiences the maximum risk, α(r) = α_max (Appendix A). Next, to incorporate effects of risk dilution (Sandin and Pacala, 2005), I write the predation mortality risk, μ, as Equation 2-2 (Figure 2-2), where n(r) is the density of all prey, not including the focal animal, at distance r from the shelter and b_α is the strength of density-dependent risk dilution (Johnson, 2006). The strength of risk dilution ranges from b_α = 0 (density independent) to b_α = 1 (perfectly compensating risk dilution). This formulation is similar to, though not identical to,

PAGE 27

27 assigning the predator a Holling type II functional response: b_α = 0 corresponds to a negligible handling time, while b_α = 1 is the limit of a large handling time. For a given value of b_α, Figure 2-2 shows the effect of density on mortality risk across the landscape. Food Acquisition and Growth Growth rate depends on the maximum (competition-free) growth rate, g_max, and on its reduction by competition. I assume a standard hyperbolic form for the effects of competition on growth rate (Equation 2-3). This form is consistent, for example, with conspecifics dividing a flow of resources, or simultaneously depleting the stock of a newly discovered patch of resources. Similar to the expression for risk dilution, b_g = 0 corresponds to no conspecific competition (e.g., if the population is top-down controlled so that food resources are overabundant) while b_g = 1 corresponds to perfectly compensating competition (e.g., food resources are limiting and individuals divide them equally). In the current study, I assume that the resource supply, g_max, is homogeneous in space; all variation in growth rate is driven by variation in conspecific density. Habitat Quality To represent the trade-off between predation risk and foraging success, I use the ratio of mortality risk to growth as an inverse metric of local habitat quality (low μ/g corresponds to high quality; Werner and Gilliam, 1984) (Equation 2-4).
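A minimal R sketch of these ingredients, with two explicit assumptions of mine: the density-independent capture probability α(r) is taken to scale linearly with θ_c(r)/π between α_min and α_max (the exact form is the dissertation's Equation 2-1), and risk dilution and competition are written as the hyperbolic forms μ = α/(1 + b_α n) and g = g_max/(1 + b_g n), which match the endpoints described in the text (b = 0 density independent, b = 1 perfectly compensating) but only stand in for Equations 2-2 and 2-3.

# Density-independent capture probability (assumed rescaling of theta_c/pi
# between alpha_min and alpha_max); theta_c() is the helper sketched above.
alpha_r <- function(r, D, S_prey, S_predator, alpha_min, alpha_max) {
  alpha_min + (alpha_max - alpha_min) * theta_c(r, D, S_prey, S_predator) / pi
}

# Assumed hyperbolic risk dilution (analogue of Equation 2-2):
# b_alpha = 0 gives density-independent risk; b_alpha = 1 divides the risk
# among all prey present.
mu <- function(alpha, n, b_alpha) alpha / (1 + b_alpha * n)

# Assumed hyperbolic competition (analogue of Equation 2-3):
# b_g = 0 gives competition-free growth; b_g = 1 divides resources equally.
growth <- function(n, g_max, b_g) g_max / (1 + b_g * n)

# Inverse habitat quality, mu/g (analogue of Equation 2-4):
# low values correspond to high quality.
quality_inv <- function(alpha, n, g_max, b_alpha, b_g) {
  mu(alpha, n, b_alpha) / growth(n, g_max, b_g)
}

# Example: when b_g > b_alpha, adding conspecifics raises mu/g (lowers
# realized quality), which is what pushes prey away from the shelter.
quality_inv(alpha = 0.3, n = c(0, 2, 5), g_max = 1, b_alpha = 0.1, b_g = 0.8)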

PAGE 28

28 When only the focal prey individual is present, μ(r, 0)/g(0) = α(r)/g_max. This defines the inverse of the intrinsic quality of the landscape, i.e., the quality experienced in the absence of risk dilution or foraging competition. Because mortality risk increases with distance from the shelter, the spatial pattern of intrinsic quality (shown in Figures 2-2 and 2-4) forms a basin, with the lowest point (lowest μ/g and highest quality) in the center of the landscape surrounded by higher locations (higher μ/g and lower quality). As the density of prey increases, the realized quality experienced by the focal individual decreases through resource competition, but increases through risk dilution, so that the net effect depends on the relative strengths of b_α and b_g. Local Population Distribution Dividing the area around a shelter into annuli of thickness dr, n(r) becomes the prey density in each ring. Given the assumptions that resource supply is spatially homogeneous and predation risk increases with distance from shelter, prey habitat use will be radially symmetric around the shelter. If each ring is treated as a discrete habitat patch which prey might choose to occupy, then by ideal free distribution theory, prey will distribute themselves among the rings so as to maximize their realized quality. For a given abundance, the habitat quality experienced by prey at all locations will equal the inverse of a constant value, C = μ/g, which is calculated from Equation 2-4. Conversely, for a given C, I calculate the density distribution of the population by solving Equation 2-4 for n(r) (Equation 2-5).
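Under the assumed hyperbolic forms above, setting C = μ/g and solving for n(r) gives a closed form analogous to Equation 2-5. The algebra below is mine and rests on those assumptions; the dissertation's exact expression is derived in Appendix A.

# Density at which the realized inverse quality equals C, assuming
#   C = [alpha/(1 + b_alpha*n)] * [(1 + b_g*n)/g_max].
# Rearranging gives n = (alpha - C*g_max) / (C*g_max*b_alpha - alpha*b_g).
# Densities are clamped at zero beyond the occupied range, i.e., wherever
# alpha(r) exceeds C*g_max.
n_of_r <- function(alpha, C, g_max, b_alpha, b_g) {
  pmax(0, (alpha - C * g_max) / (C * g_max * b_alpha - alpha * b_g))
}

# Example (b_g > b_alpha): density falls from the shelter outward and hits
# zero where alpha(r) = C * g_max, i.e., at the population range r*.
n_of_r(alpha = c(0.05, 0.10, 0.20, 0.30, 0.40),
       C = 0.3, g_max = 1, b_alpha = 0.1, b_g = 0.8)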

PAGE 29

29 Because of the shape of α(r), densities are always highest near the shelter. If r* is the maximum range of the local population, i.e., the point beyond which predicted densities drop to zero, then n(r*) = 0 and r* can be found by inverting α(r*) = Cg_max, using the normalized value (Cg_max - α_min)/(α_max - α_min) (Appendix A). To calculate the total number of prey in the local population, n_T, I integrate over the circle from the shelter out to r* (Equation 2-6). Equation 2-5 describes the population distribution, n(r), that results in a specific realized quality, C. I use numerical root-finding (R Development Core Team, 2010) methods to calculate the C and n(r) that make Equation 2-6 equal some target local population size, n_T = n_target. Appendix B provides R code for model equations and examples. Results Effects of Risk Dilution and Foraging Competition The relative strengths of the two density-dependent responses, b_α and b_g, determine whether the benefit of reduced risk at higher density will overcome the cost to growth. I consider four cases: b_α = b_g = 0. This is the case where both risk and growth are density independent. Because the intrinsic growth rate is spatially uniform and unaffected by population density, there is no motivation to travel away from the shelter in order to lower competition costs. All individuals should remain at the shelter (r = 0) where the predation risk is minimal.
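The numerical recipe described above, integrating the density over annuli and root-finding the C whose abundance matches a target, can be sketched in a few lines of R. The authoritative version is the dissertation's Appendix B; the sketch below reuses the assumed helpers alpha_r() and n_of_r() from the previous sketches, so its functional forms and parameter values are illustrative only.

# Total local population size for a given C: integrate density over annuli
# of area 2*pi*r*dr out to the maximum risk threshold r'', beyond which the
# predicted density is zero (an analogue of Equation 2-6).
total_abundance <- function(C, D, S_prey, S_predator,
                            alpha_min, alpha_max, g_max, b_alpha, b_g) {
  r_upper <- D * S_prey / (S_predator - S_prey)   # r''
  integrand <- function(r) {
    2 * pi * r * n_of_r(alpha_r(r, D, S_prey, S_predator, alpha_min, alpha_max),
                        C, g_max, b_alpha, b_g)
  }
  integrate(integrand, lower = 1e-6, upper = r_upper)$value
}

# Root-find the realized inverse quality C whose abundance matches a target
# n_T, then recover the population range r* where alpha(r*) = C * g_max.
find_distribution <- function(n_target, D, S_prey, S_predator,
                              alpha_min, alpha_max, g_max, b_alpha, b_g) {
  C_hat <- uniroot(function(C) {
    total_abundance(C, D, S_prey, S_predator,
                    alpha_min, alpha_max, g_max, b_alpha, b_g) - n_target
  }, lower = alpha_min / g_max * 1.001, upper = alpha_max / g_max * 0.999)$root
  r_star <- uniroot(function(r) {
    alpha_r(r, D, S_prey, S_predator, alpha_min, alpha_max) - C_hat * g_max
  }, lower = 1e-3, upper = D * S_prey / (S_predator - S_prey))$root
  list(C = C_hat, r_star = r_star)
}

# Example with illustrative parameters (not those of Table 2-1):
find_distribution(n_target = 50, D = 10, S_prey = 1, S_predator = 2,
                  alpha_min = 0.01, alpha_max = 0.9,
                  g_max = 1, b_alpha = 0, b_g = 0.5)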

PAGE 30

30 b_α = b_g > 0. When risk and growth both decrease at the same rate with increasing density, all prey should, again, remain at the shelter. The benefit of increased growth at lower densities is exactly matched by increased risk (realized habitat quality remains unchanged), so that habitat-dependent increases in risk drive all individuals to the shelter. b_α > b_g. If, for some increase in density, risk decreases (a benefit) faster than growth rate decreases (a cost), then, whether growth is density independent (b_α > b_g = 0) or dependent (b_α > b_g > 0), the situation is more complex. Adding more prey to the shelter decreases risk more than it decreases growth, thus increasing realized habitat quality. Increased growth associated with lower densities away from the shelter is always outweighed by increased risk. The highest realized habitat quality is (again) always at the shelter. b_g > b_α. The model makes the most interesting predictions (indeed, the only non-trivial ones) in the case where risk decreases more slowly than growth rate for a given increase in density, whether risk is density independent (b_g > b_α = 0) or dependent (b_g > b_α > 0). I consider this case in the remainder of the paper. When the local population is small, competition for food at the shelter is low, expected growth is high, and risk is minimized near the shelter: prey should stay near the shelter. As abundance increases, density at the shelter increases and growth rate decreases faster than risk, causing decreased realized habitat quality. As realized quality at the shelter

PAGE 31

31 decreases to equal the lower, intrinsic quality of previously unoccupied (riskier) habitats at the margins of the population range, prey will choose to move out and occupy those habitats, thus expanding the local population range. As abundance continues to increase, realized quality decreases further, causing prey to move farther still from the shelter until realized quality equals the intrinsic quality beyond r'' and the population range extends to r''. Beyond this maximum abundance, any new individuals will become non-site-attached. Parameter Effects on Population Distribution Having defined the model, I now explore the effects of behavioral and landscape characteristics and of population size on the population distribution. Two examples relate the strength of density dependence and reaction distance to local population distribution. A third example compares distribution predictions in two different landscapes as abundance changes. Similar examples could explore effects of other parameters (e.g., decreasing predator speed, S_predator, has an effect similar to increasing the reaction distance, D). Beyond these examples, Table 2-1 shows the elasticities of population range, r*, to changes in model parameters. Prey and predator speeds have the largest and opposite elasticities because they are relative speeds, so it is the ratio of the two that affects the model. The next largest elasticities relate to conspecific, rather than predator/prey, interactions. And because g_max is simply a scaling parameter, it is expected to have a very small elasticity. In this paper I focus largely on the reaction distance because it shows some non-intuitive patterns, even though they are not the strongest.
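Elasticities like those reported in Table 2-1 can be approximated by a centered finite difference. The sketch below uses the find_distribution() helper from the earlier sketch, so it inherits all of that sketch's assumptions, and the finite-difference approach is mine rather than necessarily the method used for Table 2-1.

# Elasticity of the population range r* to one parameter: the proportional
# change in r* per proportional change in the parameter, approximated with
# a small centered perturbation.
range_elasticity <- function(pars, name, eps = 0.01) {
  r_star_for <- function(p) do.call(find_distribution, p)$r_star
  up <- dn <- pars
  up[[name]] <- pars[[name]] * (1 + eps)
  dn[[name]] <- pars[[name]] * (1 - eps)
  (r_star_for(up) - r_star_for(dn)) / (2 * eps * r_star_for(pars))
}

# Example: prey and predator speeds give opposite-signed elasticities, as
# the text notes, because only their ratio enters the geometry.
pars <- list(n_target = 50, D = 10, S_prey = 1, S_predator = 2,
             alpha_min = 0.01, alpha_max = 0.9,
             g_max = 1, b_alpha = 0, b_g = 0.5)
sapply(c("S_prey", "S_predator", "D", "b_g"),
       function(p) range_elasticity(pars, p))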

PAGE 32

32 Strength of foraging competition. I first consider the effect of the strength of foraging competition, b_g, on the population distribution, n(r). Figure 2-3a shows that if increases in population density cause only small decreases in growth rate (i.e., b_g is small), prey will choose to spend most of their time within the minimum risk threshold, r'. As the strength of competition increases (i.e., b_g increases), food competition will drive prey to expand their ranges to spend more time at risk beyond r'. Reaction distance. Next I explore the relationship between the reaction distance, D, and population range, r*. Figure 2-3b shows that at small (D < 9) and large (D > 18) reaction distances, r* increases with increasing D; at intermediate values of D, r* instead decreases with increasing D. Different detection distances change the shape of the habitat quality basin (compare Figure 2-4a and b). As the abundance in a particular landscape increases, the experienced quality decreases. When the experienced quality decreases to equal intrinsic quality beyond r'' (i.e., Figure 2-4a, n_T = 24), the landscape has reached its maximum abundance; equivalently, a given abundance sets a lower limit to the quality of landscape (e.g., a lower limit of D = 5 in Figure 2-4a) capable of sustaining that abundance. In Figure 2-3b the dot on the curve marks the shortest reaction distance (D = 7) defining the smallest landscape quality basin capable of sustaining an abundance of n_T = 50. Growing populations in two landscapes. I can also compare population distributions in contrasting landscapes as abundance changes (Figure 2-4). Consider two landscapes that differ in the prey's reaction distance (D = 5 for the low-visibility landscape and D = 10 for the

PAGE 33

high visibility landscape). The two landscapes have the same intrinsic risk extremes (i.e., equal values of μ_max and μ_min), but different risk thresholds (i.e., compare the minimum and maximum risk thresholds in Figures 2-4a and b). Risk increases more quickly with distance from the shelter in the low visibility than in the high visibility landscape; intrinsic habitat quality decreases accordingly. How do I expect realized quality and population distribution to change as abundance, n_T, increases? Figure 2-4 shows the realized quality, μ/g, and population distribution, n(r), as functions of abundance for both landscapes. In the low visibility landscape (Figures 2-4a and c), when abundance is low (n_T = 5) the central, highest quality habitats are occupied, density is low (Figure 2-4c), and realized quality is high (Figure 2-4a); the population distribution is restricted to very near the shelter (Figure 2-4c). As abundance increases (to n_T = 24) the population range r* expands from 3 to 8 and density increases from 0.37 to 1, resulting in a strong decrease in realized quality (Figure 2-4a). In contrast, in the high visibility landscape (Figures 2-4b and d), at low abundance, again the central habitat is occupied, density is low, and realized quality is high; the population distribution is again restricted to very near the shelter. As abundance increases in this landscape, the population range expands from 4 to 7 and density increases from 0.1 to 0.43, resulting in a weak decrease in realized quality. These relationships are summarized in Figure 2-5. As abundance increases, range expansion is faster but more limited in riskier landscapes (Figure 2-5a). Again, with increasing abundance, increases in density (Figure 2-5b) and the resulting decreases in realized quality (Figure 2-5c) are stronger in riskier landscapes. As before, the dots represent the maximum abundance of each landscape. In Figure 2-4b, the population
abundance, n_T = 50, and maximum abundance, n_T = 95 (not shown), highlight points of correlation with other figures.

The surface in Figure 2-6 shows the relationship between reaction distance, population abundance, and population range. The solid, dashed, and dot-dashed lines show in three dimensions the curves of Figures 2-3b and 2-5a. The maximum abundance/minimum reaction distance dots lie along the margin of the surface, showing the maximum abundance and resultant population range for any given landscape (e.g., any given prey reaction distance). The surface margin is projected onto the bottom plane of the figure, highlighting the relationship between reaction distance and maximum abundance.

Discussion

In a system where a local population of prey uses space around a common shelter, my model predicts that the relative strengths of risk dilution, b_μ, and foraging competition, b_g, control the trade-off between these two forces and set the population distribution (Abrahams and Pratt, 2000). In the trivial scenarios, when risk dilution is stronger than, or equal to, foraging competition (b_μ ≥ b_g), all prey should remain at the shelter, where they experience the highest habitat quality. When foraging competition is stronger than risk dilution (b_g > b_μ), prey should spread out across the space around the shelter to balance risk and competition. With a constant population abundance, changes in b_μ or b_g affect the optimal balance of risk and competition and alter the population distribution, either contracting or expanding the population range with increasing b_μ or b_g (Figure 2-3a), respectively. Alternatively, as abundance increases, the population range expands, density increases, and realized habitat quality decreases (Figure 2-4). When abundance increases to the capacity of the landscape (i.e., when the maximum abundance makes the experienced quality equal intrinsic quality beyond
r''; Figure 2-4a, n_T = 24), there is no benefit for new individuals to associate with the shelter.

To explore the implications of landscape structure for population distribution, one may consider any parameter that changes the shape (e.g., depth, width, steepness) of the intrinsic habitat quality curve (e.g., Figures 2-4a and b). For example, decreased visibility, due to increased foliage density in a wooded landscape or turbidity in an aquatic system, may decrease the distance at which a prey reacts to a predator, D in the model. Figure 2-3b shows an unexpected relationship between the reaction distance, D, and population range, r*. The general positive relationship is the natural expectation that as landscape quality increases (i.e., as D increases), r* expands. The decreasing portion of this curve (9 < D < 18) is less intuitive. The relative effect of competition decreases as the local population distribution becomes more even; as D increases, r' moves outward, so the density within r' decreases. During this shift, prey that once were experiencing lower density (and thus lower competition) beyond r' move within r' to experience a higher density (but lower risk) equal to all other prey. This simultaneous increase in r', decrease in density within r', and increased evenness in density results in the decreasing phase of r* in Figure 2-3b. As D and r' continue to increase, eventually (D > 18) all prey are within r' and r* simply grows with r'.

A given landscape (in my example, a given D) is capable of sustaining a certain maximum prey abundance before realized quality decreases to equal intrinsic quality beyond the maximum risk threshold, r'' (Figure 2-4a, n_T = 24). Additional prey will not benefit from associating with the saturated shelter and should not become site attached. Conversely, decreasing landscape quality (e.g., decreasing D) decreases the maximum
abundance a landscape can sustain. The dot in Figure 2-3b indicates the lowest visibility (smallest D) landscape capable of sustaining a maximum abundance of n_T = 50.

Next, I compare predicted population distributions in two landscapes with changing abundances (Figure 2-4). At low abundances, when foraging competition is low and there is little incentive to move beyond r', a population in a high visibility landscape will have a larger population range (r* = 4 in Figure 2-4b) than a population in a low visibility landscape (r* = 3 in Figure 2-4a; also Figure 2-5a). One might expect a growing population to expand across a high visibility landscape (reducing foraging competition for a small increase in risk; Figures 2-4b and d) faster than across a low visibility landscape (suffering higher competition to avoid a large increase in risk; Figures 2-4a and c). Figure 2-5a predicts a more complex scenario. Although a high visibility landscape can sustain a higher maximum abundance, r* expands more slowly with increasing abundance. This is because the area within the high visibility r' is greater than the area within the low visibility r'. Thus, an equivalent abundance increase results in a slower outward expansion (Figure 2-5a), a smaller density increase (Figure 2-5b), and a slower decrease in realized quality (Figure 2-5c) in the high visibility landscape.

It is useful to consider the relationships between reaction distance, D, abundance, n_T, and population range, r*, together (Figure 2-6). At low abundances, prey remain almost exclusively within r' even with small reaction distances, resulting in a nearly linear relationship between D and r*. There is a minimal region of range contraction (decreasing r*) as visibility increases. Only at higher abundances does the effect of increasing evenness in prey density result in range contraction. Only at higher
abundances do prey spend more time at risk beyond r', so that improved visibility (increased D and thus r') results in stronger range contraction as individuals move within r'. The front left margin of the surface shows the maximum abundance, and resultant population range, that a given landscape (a given D) can sustain, and conversely the lowest quality landscape (the smallest D) able to sustain a given abundance. The projection of the surface margin to the figure floor shows the relationship between reaction distance and maximum abundance.

I have used a simplified mechanistic predator-prey chase model for μ(r, n) and assumed that intrinsic foraging success and growth are location independent for g(r, n). My approach should be capable of treating more realistic (and even empirically derived) μ(r, n) and g(r, n) curves. While quantitative predictions using different curves may change, I expect the qualitative predictions will remain consistent as long as risk increases away from shelter (and possibly decreases by dilution) and growth rate is inversely related to density. This expectation is supported by the exercise of replacing Equation 2-1 with other, monotonically decreasing functions. Variations may be explored, for example, when habitat quality decreases non-monotonically with distance from a shelter (Lewis and Eby, 2002).

Model predictions address the shape of population distributions, not the actual magnitude of realized quality. As an example, if one landscape ranges from low to medium intrinsic habitat quality, and another ranges from medium to high quality, as long as the shapes of the intrinsic quality curves are the same, the model gives the same predictions about population distribution. In this sense, the model addresses local population distributions within a landscape (i.e., home range), not a choice among
landscapes. Also, in the population distribution I do not distinguish the space use patterns of individual prey. A particular population distribution might result either because (1) all individuals use space equivalently with no social structure, or (2) dominant individuals preferentially use high quality habitats, leaving subordinates to use low quality habitats. Such issues relate to variations in ideal free distribution theory (e.g., ideal despotic distributions; Oro, 2008). As a result, one can interpret the population distribution in two ways. First, it may represent the distribution of stationary individuals, where low density indicates few individuals occupying an area. Second, and more appropriate to my conceptual model, it may represent the temporal distribution of one or more mobile individuals. In this sense the population distribution is proportional to the probability of the presence of an individual at a given location.

Two approaches exist for testing models such as the one presented here. First, one could empirically parameterize model components, for example, density as a function of distance from shelter, predation risk and foraging success as functions of density, or risk as a function of distance from shelter. A second approach involves taking a qualitative prediction specific to the model and testing it empirically. For example, to test model predictions relating to changes in abundance in high- versus low-quality landscapes, consider gag (Mycteroperca microlepis). Gag center their movement around isolated reefs, forage across the surrounding landscape, and retreat to the reef when threatened. Recording each individual's habitat use patterns, one could estimate the distribution of all gag on a reef. Experiments contrasting local population distributions at low and high abundances, established on reefs surrounded
by different bottom types, where reaction distances differ, could test model predictions shown in Figure 2-5.


Table 2-1. Parameter values and elasticities.

Parameter                                  Value    Elasticity^a
Reaction distance (D)                      10
Maximum prey speed (S_prey)                1        0.87
Maximum predator speed (S_predator)        1.6
Maximum capture probability (μ_max)        0.05
Minimum capture probability (μ_min)        0.005    0.14
Maximum growth rate (g_max)                0.01     < 0.01
Strength of risk dilution (b_μ)            0.1
Strength of foraging competition (b_g)     10       0.72
Total number of prey (n_T)                 50       0.69

^a Elasticity is calculated as (Δr*/r*)/(Δx/x), where Δx = 0.01 is the change in parameter x. Decreasing values by 0.01 gives similar results, except that μ_min and g_max must remain positive.
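The elasticities in Table 2-1 can be computed with a simple finite difference. The R sketch below assumes a hypothetical function population_range(pars) that returns r* for a named list of model parameters (for example, a wrapper around the allocation sketch earlier in this chapter); it is not the code used to produce the table.

## Numerical elasticity of the population range r* with respect to one
## parameter, following the Table 2-1 footnote (absolute change dx = 0.01).
elasticity <- function(pars, name, population_range, dx = 0.01) {
  r0 <- population_range(pars)
  pars2 <- pars
  pars2[[name]] <- pars[[name]] + dx          # perturb one parameter
  r1 <- population_range(pars2)
  ((r1 - r0) / r0) / (dx / pars[[name]])      # (delta r* / r*) / (delta x / x)
}

Applied over all parameter names, e.g. sapply(names(pars), function(nm) elasticity(pars, nm, population_range)), this yields a vector comparable to the elasticity column of the table.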


Figure 2-1. Geometry of the risk function. An individual prey located at distance r from shelter will react to predators at distance D. When a predator is detected somewhere on the reaction circle, the prey and predator move toward the shelter at speeds S_prey and S_predator, respectively. If the predator is detected at either of the two critical points on the circle, the prey and predator arrive at the shelter at the same time (i.e., r/S_prey = r_predator/S_predator). If the predator appears beyond those points the prey escapes; otherwise the prey is captured.
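The chase geometry in this caption can be made concrete with a few lines of R. The sketch below is an illustration of the race to the shelter, not the dissertation's risk function: it returns the fraction of the reaction circle over which the predator reaches the shelter first, assuming the predator is equally likely to appear anywhere on the circle. Scaling that fraction between μ_min and μ_max and applying density-dependent dilution are parts of the full model not reproduced here.

## Fraction of the reaction circle over which a prey at distance r from the
## shelter is captured, given reaction distance D and relative speeds.
capture_fraction <- function(r, D = 10, S_prey = 1, S_predator = 1.6,
                             n_angles = 3600) {
  theta <- seq(0, 2 * pi, length.out = n_angles)   # predator's angle on the circle
  # predator's distance to the shelter (prey at (r, 0), shelter at the origin)
  r_pred <- sqrt(r^2 + D^2 + 2 * r * D * cos(theta))
  # capture if the prey's travel time is at least the predator's travel time
  captured <- (r / S_prey) >= (r_pred / S_predator)
  mean(captured)
}

sapply(c(2, 5, 10, 20), capture_fraction)   # capture fraction rises with distance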


Figure 2-2. The predation mortality risk for prey at distance r from shelter, for three constant prey densities (e.g., n(r) = 5 at all r). Parameters μ_min and μ_max represent the minimum and maximum intrinsic risks; realized risk is lowered by dilution. Individuals experience minimum risk within r' and maximum risk beyond r''. Parameter values are S_predator = 1.6 and S_prey = 1 (predator speed expressed relative to prey speed), D = 10, μ_min = 0.005, μ_max = 0.05, and b_μ = 0.1.


Figure 2-3. (a) Population distributions, n(r), for two strengths of foraging competition. As foraging competition increases from b_g = 10 to 20, the population range increases from r* = 13 to 17. The minimum and maximum risk thresholds, r' and r'' respectively, are unaffected by changes in b_g; r'' = 17 and so is not labeled. (b) Population range for increasing reaction distances, D. The dot represents the lowest quality landscape (i.e., the smallest landscape basin, D = 7) capable of sustaining the local population. In (b), b_g = 10; in both panels, n_T = 50 and g_max = 0.01. Other parameters are as in Figure 2-2.


Figure 2-4. Predicted population ranges (a and b) and densities (c and d) in two landscapes. Left (a and c) and right (b and d) panels depict low visibility (D = 5) and high visibility (D = 10) landscapes, respectively. In (a) and (b), intrinsic habitat quality (thick black line) is shown increasing downward. The realized habitat quality at low (n_T = 5) and high (n_T = 24) abundances is shown by the solid and dashed horizontal lines, respectively. The realized quality at n_T = 50 is shown by the dash-dot line to highlight points of correlation among figures. Points where realized quality lines intersect intrinsic quality curves indicate the extent of the population range, r*, labeled on the x-axes. The alternate y-axis emphasizes the fact that μ/g is the inverse of habitat quality. Panels (c) and (d) show the low and high abundance population distributions, n(r), for the landscapes shown in (a) and (b), respectively. Other parameter values are as in Figures 2-2 and 2-3b.


Figure 2-5. Predicted population range (a), density at the shelter (b), and realized habitat quality (c) in two landscapes with increasing abundances. As local population abundance increases, the model predicts (a) a weaker range expansion in low visibility landscapes (D = 5, solid lines) than in high visibility landscapes (D = 10, dashed lines), (b) a stronger increase in density in low visibility landscapes than in high visibility landscapes, and (c) a stronger decrease in realized quality in low visibility landscapes than in high visibility landscapes. In all panels, the dots represent the maximum abundance a given landscape can sustain. Other parameter values are as in Figures 2-2 and 2-3b.


Figure 2-6. Predicted population range, r*, for a given combination of reaction distance, D, and local population abundance, n_T. The curve in Figure 2-3b appears as the dash-dot line at n_T = 50, with a dot indicating the minimum D defining the lowest visibility landscape capable of sustaining 50 individuals. Curves from Figure 2-5a are shown as a solid line (D = 5) and a dashed line (D = 10) on the surface, with dots at the ends of the lines indicating maximum sustainable abundances. The thin solid line (and its projection onto the figure floor) indicates the margin of the surface along the line of minimum D and maximum n_T values. Other parameter values are as in Figures 2-2 and 2-3b.


CHAPTER 3
TESTING AN ACOUSTIC TELEMETRY POSITIONING SYSTEM

Background

The difficulty of making direct or indirect observations at appropriate locations, depths, and spatial and temporal scales in aquatic environments often restricts the scope of ecological studies. Challenges of scale or resolution have largely shaped the way I monitor, model, and manage populations of ecologically and economically important fishes, especially in marine systems. Acoustic telemetry (reviewed in Heupel et al., 2006) is beginning to overcome some of these limitations. Active acoustic hydrophones detect signals from transmitters attached to animals in studies to define coarse-grained habitat use, home range size, passage routes, or presence/absence. In this chapter I focus on acoustic positioning technologies, which use simultaneous detections by multiple hydrophones to record fine-grained animal movements in two dimensions. I evaluate the capabilities and performance of a positioning system of autonomous submersible hydrophones under a range of deployment and acoustic conditions in the northeastern Gulf of Mexico, in 13 m of water and over ranges up to 250 m.

Acoustic positioning technologies typically use cable- or radio-linked arrays of multiple hydrophones to calculate two- or three-dimensional positions of animals, describing fine-grained habitat use or home range size based on detections of a single transmitter signal at multiple hydrophones. Cabled arrays typically require a connection to land, limiting the distance from shore the array can be deployed. Radio-linked arrays allow the array to be deployed farther from shore, but surface radio units leave the array vulnerable to tampering (potentially limiting the duration of unattended deployments) and to some degree limit the working depth of the array. Positioning systems use small
differences in the detection times of a given transmission at multiple hydrophones (Lagardère et al., 1990; Klimley et al., 2001), meaning that hydrophone clocks must be precisely synchronized, typically via constant contact. More recently, positioning systems using autonomous (i.e., neither cable- nor radio-linked) hydrophones have expanded the range and potential locations of behavioral studies (Niezgoda et al., 2002; Andrews et al., 2011). In place of constant communication, autonomous positioning systems use a beacon transmitter at a known location and post-processing to compensate for initial clock differences and clock drift during the deployment, allowing the array to be deployed across larger areas and in deeper waters without surface or shore connections, thus expanding the range of target organisms and ecological questions. Though the difficulty of making accurate independent distance measurements between hydrophones can limit the working depth of autonomous positioning systems, the range of the array is essentially limited only by the power of the transmitter. In one autonomous positioning system (Lotek Wireless WHS 3050 MAP; Cote et al., 1997; Niezgoda et al., 2002), a complex digital signal replaced the common analog signal, increasing the number of simultaneously monitored transmitters and decreasing the detrimental effects of environmental noise, acoustic echoes, and transmitter interference.

Despite the increasing use of acoustic technology, and in particular positioning systems, few reports validate or test system performance (Heupel et al., 2006; Clements et al.). Assessing system performance may include several steps. One is to determine the probability of detection by a single hydrophone of a transmitter at different distances. This can be done without
deploying an entire array, yet permits the calculation of a theoretical probability of detection of a single transmission by three or more hydrophones. This calculation, in turn, informs the trade-off between increasing array size and the frequency of successfully producing position solutions. With this estimate of the operational range of the hydrophones and transmitters, if time and resources allow, another step is to deploy the full array at different spacings, that is, with the hydrophones in the desired geometry but spaced at various distances (it is important, however, to recognize that array performance will vary through time as the acoustic environment varies). From a full array deployment the fraction of transmissions resulting in two-dimensional position solutions can be calculated. A further step is an assessment of the accuracy and spatial variation in the probability of telemetered positions, e.g., is a tagged individual in the center of the array more likely to produce good results than an individual near the outer margin? Likely sources of error in positioning systems include errors in estimates of sound speed and positions of the acoustic equipment.

I assessed the capabilities of an autonomous positioning system using Lotek Wireless WHS 3050 MAP submersible dataloggers and post-processing software. Using data from stationary transmitters at known locations, collected during fifteen field deployments at six locations, I calculated (1) the fraction of transmissions detected by single hydrophones at different distances, (2) the fraction of transmissions resulting in calculated position solutions from arrays of different spacings, (3) the accuracy of position solutions, and (4) the spatial variation in position solution fractions within a
hydrophone array. The detection and position solution fractions can be used as estimates of the detection and position solution probabilities expected in future deployments. Finally, I explored the impacts of two likely sources of error: changes in water temperature and error in hydrophone position estimates.

Methods

Study System

This study was conducted 30 km off the Florida coast in the Gulf of Mexico in 13 m of water. The seafloor was characterized by a mix of low relief (Parker, 1983) hard bottom and sand bottom. Hard bottom was characterized by emergent limestone, often covered with a veneer of sand and shell rubble, that typically sustained low algal, sponge, and soft coral growth less than 0.5 m tall. Soft bottom was characterized by deeper, bare sand. Part of the Steinhatchee Fisheries Management Area, the experimental artificial reef system consisted of clusters of four immediately adjacent, hollow cement hemispheres about 1 m tall, with holes allowing fish access to the interior. Each reef cluster had a 4 m2 footprint. Each telemetry array deployment centered on a single reef, with no other reefs within the array. All locations had relatively little turbulence from wave action or boat traffic, and no other structures, e.g., docks or hardened shorelines.

Telemetry System

The telemetry system, from Lotek Wireless, Inc., consisted of three main components: transmitters, hydrophones, and post-processing software. Output from the Lotek Wireless software required additional processing to produce high quality results.


Transmitters

I used uniquely coded Lotek Wireless acoustic transmitters operating at 76 kHz, of three basic types: (1) sentinels with temperature and pressure sensors (MA-TP16-50, 16 mm diameter, 81 mm length, 32 g in air, 2 s burst interval, 5 min on/25 min off), (2) beacons with temperature and pressure sensors (MA-TP16-50, 16 mm diameter, 81 mm length, 32 g in air, 20 s burst interval, continuously on) and without sensors (MA-16-50, 16 mm diameter, 79 mm length, 31 g in air, 20 s burst interval, continuously on), and (3) tags with temperature and pressure sensors (MA-TP16-25, 16 mm diameter, 56 mm length, 23 g in air, 13 g in salt water, 2 s burst interval, continuously on) and without sensors (MA-16-25, 16 mm diameter, 54 mm length, 23 g in air, 13 g in salt water, 2 s burst interval, continuously on). Each transmission consisted of three codes forming one symbol. Each symbol carried ID information and, for sensor transmitters, alternated measurements between temperature (6 to 34 °C, in 50 intervals) and pressure (0 to 50 psi, in 50 intervals). I do not report sensor data here because it is not involved in the calculation of position solutions or array performance.

Hydrophones

I used fixed arrays of five autonomous submersible dataloggers (Lotek Wireless WHS 3050 MAP, 76 kHz, 127 mm diameter, 720 mm length, 12 kg weight in air with user-replaceable alkaline battery pack and mounting bracket), which I call hydrophones, each consisting of an actual omnidirectional hydrophone, a receiver, a datalogger, and a battery pack. Each hydrophone unit was mounted with the actual hydrophone about 2 m above the seafloor, using either posts driven into the underlying rock (for deployments lasting longer than a day) or temporary, weighted posts with a surface buoy giving added vertical stability and independent GPS position estimates. All array
deployments used the same basic geometry: a central hydrophone 10 m northeast of the reef and four hydrophones set a given distance away in each of the cardinal directions. Depending on the deployment, the outer hydrophones were 50 to 150 m from the reef. In each deployment, four or five hydrophones were deployed with individual beacons suspended 0.5 m above the hydrophone, attached via monofilament line and a float. Beacons allowed for compensation (during post-processing) of initial hydrophone clock differences and clock drift. When used, the sentinel was suspended 0.5 m above the reef via monofilament line and a float.

Prior to deployment, hydrophones must be set to record receptions in symbol or code mode. In symbol mode, each hydrophone performed some onboard processing, saving only completely detected symbols. In code mode, less onboard processing took place, saving completely detected codes (but potentially partial symbols), making it possible to sometimes reconstruct incomplete symbols in a process called Partial Symbol Reconstruction (PSR).

To estimate hydrophone and reef positions on the seafloor, I attached a handheld GPS unit (Garmin GPS 76, reading with about 2 m accuracy and recording at 5 s intervals) to a post rising 1 m above a buoy. Divers held the buoy line tautly to the seafloor so that the buoy submerged but the post and GPS remained above the waves. Held in place for 5 min, the recorded GPS positions generally ranged within 3 m, and all 100 recorded positions were averaged to give a single position estimate. With accurate hydrophone position estimates the array should be capable of calculating positions with about 1 m accuracy within the array (Niezgoda et al., 2002). In principle, if a transmitter were to move outside the array, positional accuracy would drop drastically, so that it
would be possible to determine presence/absence, temperature, and depth, but not to calculate an accurate 2-D position.

Post-processing software: ALPS

At the end of an array deployment, post-processing software, the Asynchronous Logger Positioning System (ALPS), combined detection records from all five hydrophones, hydrophone GPS position estimates, and sound speed estimates to convert each adequately detected transmission into a position solution, calculated during post-processing using detections from three or more hydrophones. For each transmission there are separate probabilities that it will be detected by each of the five hydrophones. If at least three hydrophones detected a given transmission, it was possible to calculate a 2-D position solution using the hyperbolic method of trilateration, from differences in signal arrival time and known distances between hydrophones. Detections by more hydrophones should improve accuracy. For each transmission that included a pressure reading, the transmitter depth was calculated, giving a 3-D position solution.

The use of multiple beacons during array deployments is primarily a redundancy to ensure clock drift compensation during post-processing; to process data through ALPS only one beacon is required. For each deployment, the best beacon was identified as the one with the most consistent detections by all hydrophones, then used for clock compensation when running ALPS for each transmitter. When running ALPS to calculate position solutions for the best beacon itself, the second best beacon was used for clock compensation.

ALPS produced a time series of position solutions, including sensor data, if available, and several metrics (including the condition number, CN) describing the
quality of the position solution. ALPS could be run without performing PSR, in which case only complete symbols (i.e., all three codes detected) produced position solutions. When data had been collected in code mode, in addition to using complete symbols, ALPS could perform PSR to reconstruct the transmitter ID from symbols for which only two codes were detected; sensor data is lost in that case. Data collected in symbol mode did not support PSR and is essentially equivalent to data collected in code mode and processed through ALPS without PSR. This point is important when comparing deployments that collected data in different modes. Unless otherwise noted, my results describe ALPS output using PSR. ALPS output required further processing to produce high quality data.

Hydrophone Deployments

To test array performance I conducted a simple detection trial between one transmitter and one hydrophone, and several full array deployments. The full array deployments included nine fish tagging studies, three trials to test the array at different spacings, and two trials to test spatial variation in performance within the array. These deployments are summarized in Table 3-1 and described in Appendix C. Here I report only the results of stationary transmitters.

Telemetry Data Processing

Hydrophone detection records and ALPS output were further processed using R (R Core Development Team, 2010). Using detection records from each hydrophone, hourly and overall detection fractions were calculated. ALPS position solutions for each transmitter were filtered to remove low quality solutions. I examined a range of filtering cut-off values and here report the extremes, from no filtering to stringent filtering. Hourly and overall position solution fractions were calculated.
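This processing can be illustrated with a short R sketch. The data frames and column names below (a detections table with tag detection times, and an alps table with a cn column) are hypothetical stand-ins for the actual hydrophone and ALPS output files, and the 2 s burst interval applies to the continuously transmitting tags.

## Sketch of the hourly detection fraction and CN filtering described above,
## under assumed input formats (not the actual ALPS file layout).

burst_interval <- 2  # s between transmissions for continuously-on tags

# Hourly detection fraction for one tag at one hydrophone:
# detections observed / transmissions expected in that hour.
hourly_detection_fraction <- function(det_times) {
  hr <- format(det_times, "%Y-%m-%d %H")          # det_times: POSIXct detections
  tapply(hr, hr, length) / (3600 / burst_interval)
}

# Overall position solution fraction before and after stringent CN filtering.
position_solution_fraction <- function(alps, n_transmissions, cn_cutoff = 1.5) {
  c(unfiltered = nrow(alps) / n_transmissions,
    filtered   = sum(alps$cn < cn_cutoff) / n_transmissions)
}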


Detection fraction

The probability of a transmission producing a position solution is directly related to the (non-independent) probabilities of detection by at least three hydrophones, which in turn depend on the distance between the transmitter and each hydrophone. To characterize these probabilities, pairing each transmitter with each hydrophone, I calculated how the detection fraction (# detections / # transmissions) varied with distance between the transmitter and the hydrophone over the entire deployment, using data from all deployments. To explore how this detection fraction varied through time, as the acoustic environment changed, I calculated the detection fraction for each deployment hour.

Position solution fraction

The probability of a transmission producing a position solution is best estimated from a full array deployment. Using data from Deployments A and C-O (Table 3-1), I calculated the position solution fraction (# position solutions / # transmissions) for each transmitter to examine how it was affected by array spacing. Dealing with stationary transmitters with independent GPS position estimates, I could more effectively explore the utility of ALPS quality metrics. To describe the number of position solutions at each quality level, for each transmitter I examined the number of position solutions below different CN values. Also, I examined the relationship between the locations of position solutions relative to GPS position estimates and their associated CN values. I explored a range of filtering cut-offs and here report the extremes, from no filtering to stringent filtering (CN < 1.5). This represents a relatively simple filtering technique; more sophisticated techniques (e.g., filtering on two or more ALPS quality metrics, or Kalman filtering; Grewal and Andrews, 2008) would likely improve
filtering results. Because the primary goal of the larger study was to estimate fish space use, I had the luxury of retaining only the highest quality position solutions. Except where noted, results represent filtered data.

Results from Deployment A, collected in symbol mode, were not augmented by PSR; all others were. To compare position solution fractions collected in both modes, using Deployments I-O, I calculated the percent increase in position solution fractions between non-PSR and PSR-augmented results, then calculated the mean percent increase. For Deployment A, I then increased each position solution fraction by the mean percent increase of Deployments I-O.

When considering position solutions it was important to remember that the array is expected to perform well within the array and poorly at and beyond the margins. The position solution fractions of transmitters at marginal hydrophones represented the poorest expected array performance, while transmitters within the array were representative of how the array should perform with tagged fish in the interior. Therefore I distinguish results from marginal and central transmitters. Finally, I examined how the hourly position solution fraction varied through time.

Position solution accuracy

Assessing position solution accuracy was complicated by the difficulty of making accurate, independent estimates of hydrophone and transmitter positions on the seafloor. I used two approaches. First, using data from Deployments A and C-O (only beacons from Deployments G and H), I compared mean position solutions of each transmitter with independent estimates made using GPS buoys. This approach cannot separate the errors of each positioning method. Second, for each transmitter, successive position solutions were slightly different because of vagaries in the acoustic
environment. Over time a cloud of position solutions was produced; tighter clouds suggested higher accuracy of individual position solutions. The mean position solution, i.e., the mean of all position solutions for a transmitter, was the centroid of the cloud of all solutions. To examine the consistency of position solutions through time, I calculated the range of the central 90% of ALPS position solution estimates along the Northing axis.

Spatial variation in position solution fractions

The deployment geometry, e.g., a square with a central hydrophone versus a ring of five outer hydrophones or another geometry, affects array performance because it affects the calculation of position solutions. Similarly, the position of a transmitter relative to each hydrophone may affect the probability that a transmission produces a position solution. To map the spatial variation in position solution fractions within the array, I calculated position solution fractions for transmitters at 63 different locations, using data from Deployments G and H. Because roaming transmitters were at different places at different times, differences in detection or position solution fractions cannot be attributed purely to spatial variation; nevertheless, a strong systematic bias should be discernible. For roaming tags I calculated the position solution fractions during the brief time at each location. For stationary beacons and the sentinel, position solutions were calculated from transmissions between 1 June 2009, 13:57:27 EDT and 3 June, 13:00:00 EDT, and were thus less susceptible to temporal variation.

Errors in sound speed estimates

When processing data through ALPS, only one sound speed can be specified, even though changes in water temperature or salinity, for example, over the course of a deployment will change the actual sound speed. To estimate the strength of the
relationship between changes in sound speed and the probability and accuracy of position solutions, I used results from Deployment K, which experienced relatively constant temperatures. I calculated the nominal speed of sound in water (Wilson, 1960; salinity = 34 ppt, depth = 13 m) at the average water temperature (30 °C; range = 29.9 to 31.6 °C, measured continuously by a nearby Acoustic Doppler Current Profiler, ADCP; Teledyne RD Instruments Workhorse Sentinel, 600 kHz). I also calculated the sound speeds corresponding to temperature errors of ±5 and ±10 °C. I processed Deployment K data through ALPS once for each of the five sound speeds (1521, 1533, 1545, 1554, 1562 m/s) and calculated the position solution fractions for the beacon transmitters. I also examined the magnitude of displacement of the mean position solution for each beacon.

Errors in hydrophone position estimates

To examine the effect of error in the GPS position estimates of hydrophones input to ALPS, using data from Deployment K, I ran ALPS once using the best GPS position estimate of the central hydrophone, then five more times, artificially displacing the central hydrophone 2, 4, 6, 8, and 10 m to the east. For each artificial displacement I calculated the overall position solution fraction and examined the magnitude of displacement of the mean position solution for each beacon.

Results

Detection and position solution fractions varied substantially over the examined distances and through time. Position solutions of central transmitters were generally accurate to within 2 m, and there was no evidence of systematic variation in performance within the array except for the anticipated decrease near array margins. Errors in sound speed and hydrophone position estimates generally decreased the
fraction and accuracy of position solutions, but within the array performance remained strong.

Detection Fraction

I calculated the distance between all possible transmitter/hydrophone pairs. For each pair I calculated the overall detection fraction (Figure 3-1) and the hourly detection fractions (e.g., Figure 3-2a). Given the amount of temporal variability (exemplified in Figure 3-2a), I identified each detection fraction as being based on more or less than six hours of data. Shorter deployments covered a smaller range of temporal variability, so their mean detection fractions were more affected by temporal variability, resulting in greater variability among mean detection fractions. In general, as the distance between transmitter and hydrophone increased, the detection fraction decreased, though there was variation at all distances, especially for shorter duration deployments. Hourly detection fractions for a given transmitter typically ranged from 0 to 1, and closer proximity usually increased detections (e.g., central versus marginal hydrophones in Figure 3-2a). Comparing the hourly detection fractions of all transmitters during individual deployments showed that at some times hourly detections were uniformly high or low, while at other times there was little consistency among transmitters. These temporal relationships were complicated by the spatial relationships of the hydrophones. For example, in the first 30 hours of the deployment (Figure 3-2a) there was a drop in detections of the central beacon by only the central, north, and east hydrophones. This contrasts with hours 210 to 280, when only the east and south hydrophones showed a drop in detections. Such patterns presumably reflect temporal changes in the spatial structure of the acoustic environment.


Position Solution Fraction

As expected, because of the geometry between a transmitter and the array, signals from the center of the array produced many more position solutions than signals from the margins, even when detections by all hydrophones were high; compare the position solution fractions of the central and marginal transmitters shown in Figures 3-2 and D-1. When filtering ALPS output, the fraction of position solutions below a given CN varied among transmitters, with marginal transmitters having fewer high quality position solutions (Figure D-2). To describe the number of position solutions at each quality level, for each transmitter I examined the number of position solutions below given CN values (Figure 3-3a). Stringent filtering (CN < 1.5) removed on average 66% of data for central transmitters and 87% for marginal transmitters (overall range: 2 to 100%). Examining the locations of position solutions relative to GPS position estimates and the corresponding CN values (e.g., Figure 3-3b), I found that above CN = 15 most position solutions were widely inaccurate. As CN cut-off values decreased, a higher percentage of the remaining position solutions were accurate to within about 2 m. This analysis introduces the interplay between the choice of filtering rules, the position solution fraction, the desired accuracy, and the probability that a given position solution meets that accuracy.

To define the range of quality of position solutions I considered filtered and unfiltered ALPS output separately (Figure 3-4). For both unfiltered and filtered data, wider array spacings showed lower position solution fractions, with shorter duration deployments showing greater variability. To compare array performance of Deployment A (collected in symbol mode without PSR) to all other deployments (collected in code mode with PSR), using Deployments I-O, I calculated the mean
percent increase from non-PSR to PSR position solution fractions (of filtered data) to be 527% (range: 0 to 5871%). Increasing Deployment A's fractions by this amount strengthened the pattern of more position solutions at smaller spacings. Shorter deployments (Figure 3-4) reflected a wider range of high quality position solution fractions. Hourly position solution fractions reflected the temporal variability of hourly detection fractions (e.g., Figure 3-2b), so that low detection fractions by two or three hydrophones often led to low position solution fractions, depending on which hydrophones were involved in the position solution.

Position Solution Accuracy

The first method for assessing position solution accuracy compared mean position solutions with GPS position estimates. For transmitters within the hydrophone array, the distance between the mean ALPS position solution and the GPS position estimate was consistently about 2 m (Figure 3-5a). Even for transmitters at array margins, where performance expectations were lower, many were within 7 m, though one was off by 81 m. The second assessment method examined position solution consistency over time. Over an entire deployment, consistency among position solutions for a single transmitter was high for central transmitters and for many marginal transmitters, showing central 90% ranges less than 3 m (Figure 3-5b). Figure D-2 shows examples of transmitters with and without consistent and continuous position solutions, and how well mean position solutions agree with GPS position estimates of central and marginal transmitters.


Spatial Variation in Position Solution Fractions

To map spatial variation in performance within the array I calculated the position solution fraction at 63 locations during Deployments G and H. After filtering, the position solution fractions ranged from 0 to 0.98 (Figure 3-6). Differences in position solution fractions did not show clear spatial patterns, except for decreased fractions near array margins. The two transmitters on the GPS buoy line sometimes agreed closely in both position solution fraction and mean location (e.g., Figure 3-6, point A) but differed in both measures at other times (e.g., Figure 3-6, point B), emphasizing the uncertainty in array performance introduced by the acoustic environment. An examination of the full time series of position solutions for transmitters at different locations revealed more about the reliability of the mean position solution. For transmitters at array margins, the cloud of position solutions was generally larger than for transmitters fully within the array. The mean position solution for the beacon at the south hydrophone (Figure 3-6, point C) falls between widely spaced lines of individual position solution locations. In contrast, all individual position solutions are nearly equal to the mean position solution for a central roaming tag (Figure 3-6, point D). The awkward array geometry at array margins, and especially at array corners, reduced both the accuracy and the position solution fraction. Interpretation of performance patterns from this test was confounded by the combination of spatial and temporal variation, as well as the short duration of the trial. Longer duration trials would reveal longer-term spatial variation in performance but obscure shorter-term temporal variation.

Errors in Sound Speed Estimates

To estimate the effect of changes in sound speed, I compared ALPS output for Deployment K using five sound speeds. The highest position solution fractions were
achieved at the nominal calculated sound speed of 1545 m/s (Figure 3-7a). For the central beacon the position solution fraction decreased 18% from 0.69 at the nominal sound speed. Only one marginal beacon showed increased solutions at higher sound speeds. As expected, marginal transmitters had lower position solution fractions. With sound speed changes corresponding to ±10 °C, mean position solution estimates remained consistent for the central and south beacons, with greater variation in position locations for the north and east beacons (Figure 3-7b). For the central beacon, the mean position solution changed by 0.9 m with a 10 °C error in water temperature.

Errors in Hydrophone Position Estimates

To describe the effect of errors in the initial hydrophone GPS position estimates input to ALPS, I compared ALPS output for Deployment K using the best position estimate for the central hydrophone and five artificial eastward displacements. The position solution fraction for all transmitters increased slightly (except for the east beacon) with a 2 m artificial eastward displacement, then dropped with further displacements (Figure 3-8), suggesting that the 2 m artificial displacement corrected an error in the GPS position estimate of the central or another hydrophone, effectively bringing the estimated array geometry into closer agreement with the true deployment geometry. With artificial displacements up to 10 m, the position solution fraction for the central beacon decreased 17% from 0.69 (Figure 3-8). The mean position solution for the central beacon moved 1.9 m with an artificial hydrophone displacement of 10 m.

Discussion

By assessing the capabilities of the Lotek Wireless active telemetry positioning system with data from stationary transmitters at known locations, I found that the fraction of transmissions resulting in detections and position solutions decreased with increasing
distance between transmitter and hydrophones (e.g., the maximum position solution fraction of stringently filtered data decreased from 1.0 to 0.5 as array spacing increased from 50 to 150 m), though there was substantial variation among transmitters and through time. Performance within the array was robust to changes or errors in sound speed and initial hydrophone position estimates, especially at central locations (e.g., an 18% decrease in the position solution fraction with sound speed errors, and a 1.9 m change in the mean position solution with a 10 m artificial displacement of the central hydrophone). As expected, array performance was substantially better at central locations. Though the accuracy of position solutions was difficult to assess, I found high quality mean position solutions within the array to be accurate to within about 2 m and consistent over time, usually deviating less than about 3 m from the mean.

The performance (i.e., the probability and accuracy of position solutions) was affected by several interacting factors at various steps in the process. The probability of a transmission's detection by each hydrophone depended on the distance to that hydrophone and the acoustic conditions along the travel path. Over the range I tested, 50 to 250 m, overall detection fractions of longer deployments varied from 0.2 to 0.9 (Figure 3-1). Transmissions from locations within the array were much more likely to produce quality position solutions (Figures 3-7a and 3-8a; compare Figures 3-2 and D-1). Array performance was relatively robust to sound speed and hydrophone position estimate errors, especially for transmissions within the array. Over a range of 20 °C temperature differences, position solution fractions for the central beacon decreased, at most, by 18% and the mean position solution location changed by 0.9 m (Figure 3-7). With a 10 m artificial displacement of the central hydrophone, the position solution
fraction for the central beacon decreased by 17% and the mean position solution location changed by 1.9 m (Figure 3-8). I tested the effect of errors in only the central hydrophone's position estimate; errors at other hydrophones may affect performance differently, and errors in multiple position estimates will combine in complicated ways depending on which hydrophones have errors and which are used in the calculation of any given position solution. Because the distances (and directions) between hydrophones, and not their absolute positions, are used to calculate position solutions, errors in one position estimate may or may not compensate for errors in another. A slight increase in the position solution fraction with a 2 m artificial displacement of the central hydrophone (Figure 3-8a) suggested a situation where the GPS estimate of either the central or another hydrophone position was wrong and the artificial movement brought the relative geometry closer to the true hydrophone arrangement. This emphasizes the compensating or magnifying nature of hydrophone positional errors and the importance of achieving the most accurate position coordinates possible. Even in the absence of speed and distance errors, changes in the acoustic environment would still affect the probability of obtaining position solutions. Rain, wave action, and biological activity all increase background noise, and increased abundance of acoustically active objects, like air bubbles from waves or the swim bladders of fish schooling around a transmitter, impedes transmissions. If I could eliminate speed and distance errors, error and variation in accuracy would reflect only changes in the acoustic environment. In practice, accuracy confounds all three sources of uncertainty.
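How sound speed and hydrophone position errors propagate into a position fix can be illustrated with a generic time-difference-of-arrival solver. The R sketch below is not the ALPS algorithm, only a least-squares solution of the same hyperbolic geometry, with made-up hydrophone coordinates; re-running it with a biased sound speed or a displaced hydrophone coordinate shifts the fix in the way discussed above.

## Generic 2-D hyperbolic (TDOA) positioning sketch; dt holds arrival-time
## differences relative to hydrophone 1, hydro holds easting/northing (m).
solve_tdoa <- function(hydro, dt, c_sound = 1545, start = c(0, 0)) {
  resid <- function(p) {
    rng <- sqrt((hydro[, 1] - p[1])^2 + (hydro[, 2] - p[2])^2)
    (rng[-1] - rng[1]) - c_sound * dt          # modelled minus observed range differences
  }
  optim(start, function(p) sum(resid(p)^2))$par
}

# Example: a true transmitter at (20, 15) m and four hydrophones, exact timings.
hydro   <- rbind(c(10, 10), c(0, 110), c(110, 0), c(0, -90))
true_xy <- c(20, 15)
ranges  <- sqrt(rowSums(t(t(hydro) - true_xy)^2))
dt      <- (ranges[-1] - ranges[1]) / 1545     # arrival-time differences (s)

solve_tdoa(hydro, dt)                            # recovers roughly (20, 15)
solve_tdoa(hydro, dt, c_sound = 1562)            # sound-speed error shifts the fix
solve_tdoa(hydro + cbind(c(10, 0, 0, 0), 0), dt) # 10 m error in hydrophone 1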


Though I have described long-term or hourly performance and accuracy, many telemetry studies focus on the ability to predict the probability of obtaining a position solution at a given moment, or to know the accuracy of any single position solution. The detection and position solution fractions I calculated can be used as estimates of detection and position solution probabilities in future, similar array deployments. Furthermore, depending on whether the goal is to describe overall space use or to analyze individual animal movement paths, the measure of array performance will differently weight the relative importance of the frequency and accuracy of position solutions, the deployment duration, and whether an average movement description is adequate. To assess array performance in this light, one must consider the trade-off between the frequency and accuracy of position solutions, and the probability that each solution meets that accuracy, which in turn depends on the chosen filtering rule. For example, choosing to remove position solutions with CN > 15 in Figure 3-3 will leave very many position solutions, but the probability of their being accurate to within 2 m is relatively low. More stringent filtering will decrease the probability that a transmission produces a good position solution, but increase the probability that retained solutions meet the desired accuracy. Each new deployment may experience a new acoustic environment, so it may be important to adjust filtering rules on a per-deployment basis.

I make one final important observation about the temporal variability in array performance. Even in these relatively controlled deployments with stationary transmitters and relatively little acoustical noise, I recorded substantial variability in the probability (i.e., the recorded fraction) of obtaining position solutions. This effectively causes a sampling issue, where successive hours (or minutes, etc.) show high and low
sampling frequency. Since most space use measures (e.g., kernel density estimates) consider only the spatial, and not the temporal, distribution of recorded animal positions, a temporal sampling bias can result in misleading conclusions, especially if temporal or spatial patterns exist in animal space use or in position solution probabilities. The use of stationary sentinel transmitters providing an index of the acoustic environment during animal studies might help incorporate temporal (but not spatial) variations in array performance in data analysis.

Overall, these results suggest that the acoustic positioning system I tested will work well in relatively simple acoustic environments and provide accurate data on animal space use patterns. It was capable of deployments covering areas larger and farther from shore than cabled systems (compare Klimley et al., 2001; Andrews et al., 2011), being essentially limited only by the power of the transmitter. Without the need to be connected to and exposed at the surface, the array could be deployed in deeper waters with less risk of tampering. These advantages suggest that autonomous positioning systems are well suited for home range or pathway studies in larger, deeper, farther offshore environments for longer durations. The results also highlight some of the challenges associated with acoustic telemetry. Unrecognized and unmeasured spatial and temporal variation in the acoustic environment can significantly impact study results. Researchers must be aware of these limitations and appropriately consider them when interpreting telemetry data.
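One way to act on the sentinel suggestion above is to weight each animal position by the inverse of the sentinel's position-solution fraction for the same hour before building a gridded space-use map, so that hours when the array performed poorly are not under-represented. The R sketch below shows one simple possibility with hypothetical inputs; it is not a method used in this dissertation.

## fish: data frame with easting, northing, and an hour label (e.g. "2009-08-03 14")
## sentinel_fraction: named vector of hourly sentinel position-solution fractions
space_use_grid <- function(fish, sentinel_fraction, cell = 5) {
  w  <- 1 / pmax(sentinel_fraction[fish$hour], 0.05)  # cap weights when the index is near 0
  gx <- floor(fish$easting  / cell)
  gy <- floor(fish$northing / cell)
  use <- tapply(w, list(gx, gy), sum)                 # weighted counts per grid cell
  use / sum(use, na.rm = TRUE)                        # normalized space-use surface
}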


Table 3-1. Array deployment details.

Deployment  Dates                      Activity                    Location^a  Collection Mode  Spacing (m)  Total Duration  Number of Transmitters
A           2007 Dec 7 - 2008 Jan 15   Fish study                  1 (hb)      symbol           50           38 days         5
B           2008 July 22               Detection trial             1 (hb)      symbol           200, 300     12 min          1
C           2008 Oct 9 - Dec 7         Fish study                  1 (hb)      code             125          59 days         5
D           2009 April 23              Spacing trial               1 (hb)      code             125          77 min          5
E           2009 May 7                 Spacing trial               1 (hb)      code             100          136 min         7
F           2009 May 7                 Spacing trial               1 (hb)      code             150          105 min         7
G           2009 June 1                Internal performance trial  2 (hb)      code             100          377 min         14
H           2009 June 3                Internal performance trial  2 (hb)      code             100          252 min         14
I           2009 June 1 - June 17      Fish study                  2 (hb)      code             100          16 days         5
J           2009 July 10 - July 27     Fish study                  3 (sb)      code             100          17 days         5
K           2009 Aug 3 - Aug 20        Fish study                  4 (sb)      code             100          17 days         5
L           2009 Aug 24 - Sep 8        Fish study                  1 (hb)      code             100          15 days         5
M           2009 Sep 14 - Oct 1        Fish study                  5 (sb)      code             100          17 days         5
N           2009 Oct 12 - Oct 27       Fish study                  6 (hb)      code             100          15 days         4
O           2009 Nov 16 - Nov 30       Fish study                  3 (sb)      code             100          14 days         4

^a Each location was classified as predominantly hard-bottom (hb) or sand-bottom (sb).


Figure 3-1. Overall detection fractions of transmitters by individual hydrophones, distinguishing deployments lasting more than and less than 6 hr.


Figure 3-2. Temporal variation in hourly detection and hourly position solution fractions. a) Hourly detection fractions of the beacon at the central hydrophone, by each hydrophone, during Deployment L. The positioning of the five graphs reflects the actual deployment geometry. b) Hourly position solution fractions of that same transmitter. During hours 0-50 (shaded region) the north, east, and center hydrophones show detection decreases; then during hours 210-280 (shaded region) the east and south hydrophones show detection decreases. Both periods show position solution decreases.


Figure 3-3. Relationship between condition number and position solution fraction and accuracy. a) Each curve shows the fraction of position solutions removed by filtering out points with progressively smaller condition numbers. Solid and dashed lines indicate central and marginal transmitters, respectively. The thick solid line indicates the central beacon shown in panel b). Filtered at CN = 1.5, the fraction of position solutions remaining ranged from 0 to 0.9 (highlighted in the inset). b) In this comparison of condition number and Northing position solution, each dot represents a single solution recorded for the central beacon of Deployment I. From panel a), 70% of the position solutions fall below CN = 1.5.


Figure 3-4. Mean position solution fractions from array deployments at various spacings. To represent the range of quality of position solutions, a) and b) show unfiltered and filtered (CN < 1.5) solutions, respectively. Array spacing indicates the distance from the center to the outer hydrophones. Closed and open circles represent central and marginal transmitters, respectively, in deployments longer than six hours. Position solution fractions from deployments shorter than six hours (the spacing and internal performance trials) are also shown. Closed and open triangles indicate position solution fractions of central and marginal transmitters, respectively, in Deployment A, adjusted to estimate the potential increase if PSR had been possible. At each array spacing, data points have been slightly offset (jittered) horizontally to distinguish points.

Figure 3-5. Assessing position solution accuracy. a) Distance between ALPS mean position solutions and independent GPS estimates for central and marginal transmitters of all deployments. One marginal transmitter at 81 m is not shown. b) Range of the central 90% of position solution estimates along the Northing axis.

Figure 3-6. Spatial variation in position solution fractions during the internal performance trial, Deployments G and H. Open circles show locations of mean position solutions for individual transmitters, and the size of the circle indicates the position solution fraction. Closed circles show target transmitter deployment locations where all position solutions were removed during filtering. Points A and B indicate two locations of the GPS buoy (which held two tags), showing the range of agreement in position solution fractions and accuracy of two tags deployed at the same location. All position solutions are shown for two transmitters, one central and one marginal, illustrating the spread of position solution locations over time and, for transmitter D, the position solution circle.

Figure 3-7. Effect of sound speed changes on position solution fraction and accuracy. a) Changes in the mean position solution fractions of Deployment K beacons, calculated by running ALPS using five different sound speeds corresponding to the nominal water temperature of 30 °C and deviations of 5 and 10 °C above and below it. b) Mean position solutions for beacons using the five sound speeds. Closed triangles show hydrophone positions from independent GPS estimates.

Figure 3-8. Effects of hydrophone position estimate errors on position solution fraction and accuracy. a) Changes in the mean position solution fractions of Deployment K beacons, calculated by running ALPS using the independent GPS estimates and five artificial displacements of the central hydrophone and beacon to the east. b) Mean position solutions for beacons using the six locations of the central hydrophone and beacon. Closed triangles show hydrophone positions from GPS estimates.

CHAPTER 4
GAG SPACE USE RELATIVE TO ENVIRONMENTAL CONDITIONS

Background

Animals use space to balance the often competing needs to acquire food, shelter, mates, and other resources (Stearns, 1992). Space use decisions are made in the context of the ecological environment, including temporal variation and landscape structure. For individuals, these decisions affect survival, growth, and reproduction; the cumulative decisions of all individuals determine population space use and demographics. Understanding how individual space use responds to changing conditions and landscape structure can improve the management and conservation of exploited species, especially as spatial management tools such as habitat enhancements and management areas become more common (Kramer and Chapman, 1999). This is particularly true in fisheries management where, historically, population models have ignored spatial patterns and treated fish growth, fecundity, and mortality as homogeneous across large spatial scales encompassing habitat variation (Schnute and Richards, 2001). The disconnect between large-scale fisheries models and fine-scale ecological patterns and processes may compromise our ability to effectively manage fish populations (Walters, 2003), partly because spatial variation in fishing effort and in fishing vulnerability can make traditional fisheries statistics misleading (Ye and Dennis, 2009). For large, mobile, reef fish with home ranges centered on physical structure, proximity to structure and use of the surrounding habitat can affect access to resources. Space use decisions might vary in response to temporally changing environmental conditions and landscape structure, but how these affect space use is poorly understood.

Before answering such questions, I must know the appropriate scales at which to frame them. Though identifying appropriate scales is rarely the primary research goal, it is a crucial and often difficult first step to well-designed experiments. To explore the effect of temporal and spatial variation in environmental and landscape conditions on fish space use within a home range, I used acoustic telemetry positioning technology, high-resolution water condition measurements, and sonar imagery of the seafloor. Using gag (Mycteroperca microlepis) as a model, I explored the interactions between space use decisions and environmental conditions for reef-centered species. Gag is an ecologically and economically important species in the northeastern Gulf of Mexico (SEDAR, 2006). Individuals establish home ranges centered on physical shelter (Hood and Schlieder, 1992) and forage across the surrounding landscape. Despite its importance, little is known about individual gag movement within a home range. Kiel (2004) calculated minimum convex polygons (MCPs; Worton, 1987) for gag to be typically less than 9400 m2. And Lindberg et al. (2006) found gag to move more freely among reefs spaced closer together, and estimated the average residency time on a reef to be about ten months. Over longer time scales, fisheries-dependent studies have shown gag to be relatively site-attached for years or to move hundreds of kilometers within weeks (McGovern et al., 2005). At finer scales, Kiel (2004) and Kellogg (unpublished data) found gag to be active at all hours and most closely associated with a reef during daytime and times of stronger currents. In this paper, with gag as a model species for large, mobile, reef fish, I ask how the extent and rate of space use of five individuals is affected by temporal and spatial characteristics of the environment.

I quantify individual space use in three ways: 1) extent of space use, using kernel density estimates (KDEs; Worton, 1989) and time series of distance from the reef, DFR; 2) vertical position in the water column, or altitude above the seafloor, ALT; and 3) gag travel speed, SPD_G. I explore correlations between these space use measures and potentially important environmental conditions: the time of day, TIME; phase of the moon, LUNAR; water temperature, TEMP; and water current speed, SPD_W, and direction, DIR_W. To explore the relationship with landscape structure, I calculate an index of habitat use relative to habitat availability and distance from shelter. Taken together, these quantitative descriptions provide the spatial and temporal context for further experiments and modeling.

Methods

Study System and Organism

This study was conducted in the northeastern Gulf of Mexico, 30 km off the Florida coast in 13 m of water, where the seafloor is a mix of low-relief hard-bottom and sand-bottom habitats infrequently interspersed with isolated rocky outcroppings or ledges (Parker, 1983). Hard-bottom habitat is characterized by emergent rock or rock with a sand veneer, commonly colonized by sessile invertebrates such as soft corals and sponges, or algae. Sand bottom is characterized by a deeper layer of sand or sand/shell mix without an established coral/sponge community. These two broad habitat categories likely offer different predation risks and foraging opportunities for gag. In this landscape of mixed habitat we placed experimental artificial reefs, as part of the Steinhatchee Fisheries Management Area, consisting of hollow cement hemispheres with holes allowing access to the interior. Each reef consisted of a cluster of four hemispheres, 1 m tall with a 4 m2 footprint, and was larger than any natural structure in the vicinity.

Gag are protogynous hermaphrodites that grow to a maximum size of about 1.2 m and 25 kg, with males typically being the largest individuals (Bullock and Smith, 1991). In the northeastern Gulf of Mexico, spawning aggregations form in late December to mid April at the shelf edge in 50 to 120 m of water (Hood and Schlieder, 1992; Coleman et al., 1996). Surface currents carry planktonic larvae inshore where, in late spring, if they encounter suitable shallow-water habitat, they settle and become demersal juveniles (Eklund, 1993). They grow rapidly (up to 18.6 cm standard length) until, during the fall of their first year, they move out of nursery grounds onto the shallow continental shelf, where they spend two to six years before maturing and joining the spawning stock at the shelf edge (Koenig and Coleman, 1998). During the juvenile years on the shelf, they establish home ranges centered on physical shelter (Kiel, 2004) and forage across the surrounding landscape. During the time they are established at a particular location, gag associate with physical structure, putatively for use as shelter during times of threat. Kellogg (unpublished data) observed an almost simultaneous aggregation to an artificial reef, with increased swimming speeds, directional changes, and loops through the reef in response to potential predator visits (e.g., dolphin, king mackerel, barracuda, and sharks). Gag often appear shoaled because of their abundance at physical structures, but their individual movements are thought to be largely independent (Bullock and Smith, 1991). At times gag meander about the landscape, presumably foraging and perhaps tracking the movement of midwater schooling prey fish (Bullock and Smith, 1991; Hobson, 1968).

From daytime video recordings, Kellogg (unpublished data) found strikes at prey to be most common during early morning and late afternoon, though they occurred during all daylight hours. Beyond being opportunistic foragers (Bullock and Smith, 1991), there is some evidence (Kellogg, unpublished data) that gag forage synergistically (Hixon and Carr, 1997) when visiting transient pelagic predators push schooling pelagic fish closer to gag near the seafloor. On a seasonal time scale, gag are believed to be more active during warmer seasons, when metabolic demands increase and pelagic prey are more abundant (Clarke and Johnston, 1999). Kiel (2004) calculated minimum convex polygons (MCPs; Worton, 1987) of 14 acoustically tagged gag from relocations spread over days or months, which ranged from 100 m2 to 270,000 m2, with most less than about 9400 m2, approximately equivalent to the area of a 55 m radius circle. Experimentally displaced gag returned to a home reef from as far as 3 km, with shorter displacements showing higher return rates. Lindberg et al. (2006) found gag to move more freely among reefs at closer spacings, using daily and monthly relocations of 81 telemetered individuals on artificial reefs arranged hexagonally and spaced 25, 75, and 225 m apart. Relocations made monthly or every other month showed the average residency time to be about ten months and dependent on reef spacing and size. Over longer time scales, recapture returns from the fishery showed 7 of 23 individuals captured on the experimental reefs, 14 in the northeastern Gulf of Mexico, and 2 in the western Gulf. Fisheries-dependent tagging studies show a large range of movement rates: one individual at large for 1562 days was recaptured less than 2 km from the tagging location (McGovern et al., 2005), while another was recaptured thousands of kilometers away within nine months of the last sighting at the release point (Lindberg et al., 2006).

Schirripa and Goodyear (1994) found that most individuals were recaptured within 4 km of their release site and had a maximum movement rate of 0.6 km/day. At finer spatial and temporal scales, from hourly relocations Kiel (2004) found gag to be active at all hours of the day and night, but most closely associated with the reef during the day, and making their longest movements during crepuscular periods. From daytime video recordings of gag space use within 1 m of an artificial reef, Kellogg (unpublished data) also found that gag were more closely associated with the reef during midday and during stronger currents. As current speed increased, gag moved closer to the reef and positioned themselves near the reef edge facing into the flow, possibly gaining a hydrodynamic advantage in swimming energetics.

Habitat Preference

To create a categorical habitat map of the seafloor, in cooperation with Rutgers University, I collected side-scan sonar imagery using the autonomous underwater vehicle REMUS in October 2008. A geographically referenced TIFF image of the seafloor surrounding the reef was produced using SonarWiz.MAP4 (Chesapeake Technology Inc.). From the image, I visually categorized areas as hard-bottom or sand-bottom habitat, then exported the categorical map to R (R Core Development Team, 2010) as a JPG image for fractional coverage and gag habitat use calculations. To characterize landscape composition at increasing distances from the reef, I divided it into concentric rings 1 m thick and within each ring calculated the fraction of hard-bottom cover. For each fish, and for all fish collectively, I calculated the fraction of all recorded positions over hard bottom within each ring. From telemetry results, discussed below, I saw that 99% of all recorded positions fell within 50 m of the reef, so I used this area to calculate an overall index of habitat preference for each individual as the fraction of recorded positions over hard bottom divided by the fraction of hard bottom available within 50 m.
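
A minimal R sketch of this index follows, assuming a data frame of minute-averaged positions (pos, with columns x, y, and a logical hard flag), the reef coordinates (reef_x, reef_y), and a 1 m resolution habitat matrix (hab, coded 1 = hard bottom, 0 = sand) whose origin sits at (0, 0); all of these names and layout assumptions are illustrative, not the exact routine used here.

  # Illustrative sketch of the habitat preference index within 50 m of the reef.
  dfr      <- sqrt((pos$x - reef_x)^2 + (pos$y - reef_y)^2)
  use_hard <- mean(pos$hard[dfr <= 50])            # fraction of positions over hard bottom

  cx <- col(hab) - 0.5                             # cell-centre coordinates, assuming
  cy <- row(hab) - 0.5                             #   1 m cells with the map origin at (0, 0)
  within50   <- sqrt((cx - reef_x)^2 + (cy - reef_y)^2) <= 50
  avail_hard <- mean(hab[within50] == 1)           # fraction of hard bottom available

  pref_index <- use_hard / avail_hard              # values > 1 would indicate hard-bottom preference

Values below 1, as reported in Table 4-1, indicate that positions fell over hard bottom less often than its availability alone would predict.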

Aquatic Environmental Conditions

To measure water speed, flow direction, temperature, and depth, I deployed an Acoustic Doppler Current Profiler (ADCP; Teledyne RD Instruments Workhorse Sentinel, 600 kHz) upward-looking from the seafloor. The ADCP made measurements every second and saved 10-min averages. The pressure sensor was precise to 0.0015 psi, equivalent to 1 mm of depth. It made water speed and direction measurements at depths every 0.5 m. Noise from the water surface made velocity measurements within about 1 m of the surface unreliable, and the actual water depth at any time fluctuated with the tide. Additionally, measurements were not made within 1 m of the ADCP itself. Therefore, I only examined water velocity measurements made at 20 depths between 1.27 m (the ADCP height plus minimum measurement distance) and 11.77 m from the seafloor. To condense these measurements into a more tractable form, I noted that typically water flowed uniformly throughout the water column and was largely tidally driven. To describe water flow near the seafloor, where gag spend most of their time, I calculated the average flow speed and direction of the five deepest ADCP measurement layers (1.27 to 3.77 m from the seafloor). Finally, during post-processing of the telemetry data, water conditions were linearly interpolated between ADCP 10-min averages.
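
The near-bottom averaging and interpolation just described could be sketched in R as below; the adcp and pos data frames, their column names (spd1 to spd5 for the five deepest bins, POSIXct time columns), and the interpolation target are illustrative assumptions rather than the exact code used.

  # Illustrative sketch: near-bottom current speed and its interpolation to fish times.
  adcp$spd_bottom <- rowMeans(adcp[, c("spd1", "spd2", "spd3", "spd4", "spd5")])
  pos$spd_w <- approx(x    = as.numeric(adcp$time),       # 10-min ADCP records
                      y    = adcp$spd_bottom,
                      xout = as.numeric(pos$time))$y      # linear interpolation to fish times
  # Flow direction should be averaged as a vector (mean east and north components,
  # then atan2), not as raw angles, to avoid wrap-around at 0/360 degrees.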

I deployed the ADCP 800 m from the experimental reef from 19 December 2007 to 15 January 2008, and again 350 m from the reef from 10 October to 16 December 2008. The ADCP was not deployed until ten days after fish were tagged in the 2007 deployment; analyses involving ADCP data do not include fish data during those ten days. As an index of lunar phase, each day was numbered sequentially from the last new moon, so that 1 and 30 represented new moons and 15 or 16 represented full moons. During both deployments sunrise was between 6:30 and 7:30 EST and sunset was between 17:30 and 18:10 EST. I categorized the time as day (8:00 to 17:00 EST) or night (19:00 to 6:00 EST). Because my effort was focused on lunar interactions, for the present purpose I do not consider crepuscular periods, though others have emphasized their importance.

Fish Tagging

I captured fish at the reef using wire traps, which were then raised to the surface over a period of at least 30 min prior to being brought on deck a few at a time. Once on board, the measurement and tagging procedure moved fish from a fresh seawater holding tank, to an immobilization tank, to a weighing and measuring station, to a tagging station, and finally to an overboard holding cage. In the immobilization tank I used CO2-saturated seawater to sedate the fish. Each individual was weighed using a handheld spring scale, W, and measured for fork length, L_F, and total length, L_T. All fish were internally tagged with a uniquely coded acoustic transmitter (Lotek Wireless temperature-pressure MA-TP16-25, 2 s burst interval, 76 kHz; pressure sensor precision was 1 psi, equivalent to 0.68 m of depth) inserted through a 4 cm incision midway between the pelvic and anal fins and just off the ventral midline (Mulcahy, 2003; Harms and Lewbart, 2000). A topical antiseptic was applied externally to the sutured area and the individual was placed in an overboard holding cage 2 m deep for observation before being released at the reef by divers. Procedures to capture and tag Mycteroperca microlepis were approved by the IFAS Animal Use Committee at the University of Florida (Approval Number: 001-08FAS).

In behavioral studies using acoustic transmitters it is important to know how the signal affects the behavior of the target species, its prey, or its predators. Egner and Mann (2005), Tolimieri et al. (2004), and Amoser and Ladich (2005) found that fish hear far below the transmitter frequency, 76 kHz, and Myrberg (2001) found the shark hearing range also to be well below the working frequency of the tags.

Hydrophone Array

To determine 2-D or 3-D positions of tagged individuals, I deployed a fixed array of five autonomous submersible dataloggers (Lotek Wireless WHS 3050 MAP), which I call hydrophones, each consisting of an actual omnidirectional hydrophone, a receiver, a datalogger, and a battery pack. The central hydrophone was 10 m northeast of the reef, while the remaining four were 50 m (2007 deployment) or 125 m (2008 deployment) from the reef in the cardinal directions. Beacon transmitters (Lotek Wireless temperature-pressure MA-TP16-50, 20 s burst interval), suspended 0.5 m above each hydrophone via monofilament line and a float, were used to compensate for initial hydrophone clock differences and subsequent drift. To estimate reef and hydrophone positions on the seafloor, I attached a handheld GPS device (Garmin GPS 76, reading with 2 m accuracy and recording at 5 s intervals) to a surface buoy tied tautly to the seafloor for at least five minutes. The mean of these 100 position records was the position estimate. With good position estimates the array is capable of calculating transmitter positions with about meter accuracy within the array (Chapter 3). If the transmitter moves outside the array, positional accuracy drops such that presence/absence, temperature, and depth can be calculated, but not an accurate position.

Data Post-Processing

At the end of each deployment, the hydrophones were recovered and post-processing software, the Asynchronous Logger Positioning System (ALPS, from Lotek Wireless Inc.), combined data from all five hydrophones to calculate 2-D position solutions. Each transmission with a pressure reading produced a full 3-D position estimate. Along with each time-stamped position, ALPS provided several quality control measures. I filtered the raw ALPS output (as described in Chapter 3), leaving only the most reliable position estimates at irregular intervals. Using only the filtered data, I calculated the 2-D distance from the reef for each recorded position. Using ADCP measurements of water depth and tag measurements of fish depth, I calculated the altitude above the seafloor when possible. Because of differences in precision between tag and ADCP pressure measurements, calculating altitude as the difference between water and fish depths added artificial variation to fish altitude. For example, when water depth changes are large enough to be measured by the ADCP but not by the fish tag, a fish at a constant distance from the seafloor would appear to be moving up or down within the water column. For each sequential pair of positions I calculated the travel speed and time interval. Because the position time series had irregular intervals, times (and potentially places) were unevenly represented, affecting common home range estimators. To minimize this bias, for each minute of the deployment I calculated the mean position, distance from the reef, and speed. Unless otherwise noted, reported results represent filtered, minute-averaged telemetry data.
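
The per-position summaries and minute averaging described above might be sketched roughly as follows; pos is a hypothetical data frame of filtered ALPS solutions (POSIXct time, x, y in metres, depth_fish in metres), and depth_water is the ADCP water depth interpolated to the same times. These names, and the use of aggregate(), are assumptions for illustration only.

  # Illustrative sketch: distance from reef, altitude, travel speed, and minute means.
  pos$dfr <- sqrt((pos$x - reef_x)^2 + (pos$y - reef_y)^2)   # 2-D distance from the reef (m)
  pos$alt <- depth_water - pos$depth_fish                    # altitude above the seafloor (m)

  dt   <- diff(as.numeric(pos$time))                         # seconds between successive fixes
  step <- sqrt(diff(pos$x)^2 + diff(pos$y)^2)                # metres between successive fixes
  pos$spd_g <- c(NA, step / dt)                              # travel speed (m/s)

  # Collapse the irregular series to one mean record per minute
  # to reduce temporal sampling bias in later summaries.
  pos$minute <- format(pos$time, "%Y-%m-%d %H:%M")
  pos_min <- aggregate(pos[, c("x", "y", "dfr", "alt", "spd_g")],
                       by = list(minute = pos$minute), FUN = mean, na.rm = TRUE)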

I categorized each position as being over sand or hard bottom. For each fish I calculated 50% and 95% kernel density estimates (KDEs). Though KDEs are based on the number of data points, not the length of time spent at a given location, with deployments much longer than the sampling frequency and no strong sampling bias, density estimates should be good descriptions of gag space use. The two measures of the extent of space use (DFR and KDE) are different but related, especially if space use is centered on the reef. KDEs summarize all recorded positions for an individual. Distance from the reef was calculated for each position and can be used to explore space use variation through time and over temporally changing conditions.

2007 Deployment

On 7 December 2007 I positioned fish traps at the reef and deployed the hydrophone array with the central hydrophone 10 m northeast of the reef, while the remaining four were 50 m from the reef in the cardinal directions. On 9 December 2007 divers gradually raised the trapped gag to the boat for measurement and tagging. I trapped and tagged five of the eleven total gag on the reef. The hydrophone array recorded tag detections until it was recovered on 15 January 2008. Three of the five tagged individuals gave consistent, continuous positions during the entire deployment.

2008 Deployment

On 9 October 2008 I deployed the hydrophone array at the same reef, with the four outer hydrophones spaced 125 m from the reef in the cardinal directions and the central hydrophone 10 m northeast of the reef as before. On 17 October 2008 divers carried fish traps to the reef and captured eight of 31 gag on the reef for tagging.
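
As a sketch of how 50% and 95% utilization areas can be obtained from a two-dimensional KDE, the grid-based approach below uses kde2d() from the MASS package on the minute-averaged positions. The kde_area() helper, the grid size, and the reuse of the hypothetical pos_min data frame from the previous sketch are illustrative choices, not the exact routine used in this study.

  library(MASS)
  # Area (m^2) of the highest-density region containing a given probability mass.
  kde_area <- function(x, y, prob, n = 200) {
    k <- kde2d(x, y, n = n)
    cell <- diff(k$x[1:2]) * diff(k$y[1:2])        # area of one grid cell
    d <- sort(as.vector(k$z), decreasing = TRUE)
    cum_mass <- cumsum(d * cell)                   # cumulative probability mass
    level <- d[which(cum_mass >= prob)[1]]         # density level enclosing 'prob'
    sum(k$z >= level) * cell
  }
  kde50 <- kde_area(pos_min$x, pos_min$y, prob = 0.50)
  kde95 <- kde_area(pos_min$x, pos_min$y, prob = 0.95)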

Because of technical difficulties with the hydrophone pre-deployment settings, only five fish were capable of giving position solutions. For those five individuals, only part of each transmission was useable, so that all sensor data were lost. Of those five fish, only two gave consistent, continuous positions. The hydrophone array was recovered on 16 December 2008, but the last data were recorded on 6 December 2008, when the hydrophone batteries died.

Results

Habitat Preference

From sonar imagery I constructed a categorical habitat map of the area around the reef (Figure 4-1a) and calculated the fractional hard-bottom cover in each concentric ring (Figure 4-1b). The reef was immediately surrounded by sand in all directions for about 25 m, with substantial hard-bottom regions to the east, so that the fractional hard-bottom cover increased farther from the reef. Within 50 m of the reef, 31% of the landscape was hard bottom. For each individual, the fraction of recorded positions over hard bottom in each ring varied substantially, but the habitat preference index calculated for all area within 50 m of the reef (Table 4-1) suggested that individuals preferred sand bottom.

Aquatic Environmental Conditions

During the 38-day 2007 deployment, water temperature was relatively stable, ranging from 17.8 to 14.3 °C; during the 51-day 2008 deployment (earlier in the season) temperature stepped down from 25.6 to 15.8 °C (Figure 4-2). During both deployments, current speed, ranging between 0.007 and 0.331 m/s with 98% < 0.15 m/s, was largely tidally driven. The direction of water flow during the 2007 deployment was largely perpendicular to the coast, along the NE-SW axis (Figures 4-2 and 4-3). During the 2008 deployment this onshore/offshore flow was augmented with more southerly and easterly flows.

The 2007 and 2008 deployments covered about one and two lunar cycles, respectively.

Telemetry Results

Of the ten tagged individuals able to give telemetry results, only five had consistent, continuous records. Of the five individuals not showing continuous records, one moved widely within the array for several days then remained stationary (perhaps dying or losing the tag) for the rest of the deployment; one individual was active for a day, moved to a location about 40 m from the reef, and remained stationary, with a couple of quick excursions through the array, during the next several weeks; and three individuals were active for a few hours or days, then appeared to have left the area, only to revisit on rare occasions. The main focus of my analyses concentrated on the five remaining individuals with consistent, continuous positions. The weights, total lengths, and fork lengths of the five individuals are listed in Table 4-1. Total length and fork length are linearly related (L_T = 3.1 + 1.0 L_F, r2 = 0.999, p << 0.001), and total length and weight show the relationship log(W) = -22 + 3.6 log(L_T) (r2 = 0.994, p = 0.0002). I used weight as the primary measure of fish size because of the strong agreement among all three measures. Figure 4-4a shows the Easting-Northing positions for one 2008 individual. All five fish showed similar patterns of reef-centered movement (Figures E-1 through E-4). The telemetry system produced between 4396 and 7300 filtered, unaveraged positions per day (of 43200 transmissions) on average for each fish (Table 4-1). The hourly fraction of tag transmissions resulting in filtered, unaveraged positions varied substantially (Figures 4-4b, E-1 through E-4) and in general was lower during 2008, presumably because of the larger array spacing.

The median time interval between recorded positions for each individual ranged from 2 to 5 s. The 50% and 95% KDEs are measures of core and maximum space use. The median distance from the reef and travel speed are listed in Table 4-1. These distributions are shown in Figure 4-5. Linear regression showed no significant relationships between median distance from the reef and fish weight (DFR = 15.2 + 0.32 W, r2 = 0.02, p = 0.8), median travel speed and fish weight (SPD_G = 0.185 - 0.0083 W, r2 = 0.53, p = 0.16), or KDEs and fish weight (50% KDE = 725 - 54 W, r2 = 0.15, p = 0.52; 95% KDE = 4279 - 274 W, r2 = 0.15, p = 0.51), but with only five individuals, all of intermediate size range, the power of these tests was low.

Extent of Space Use

For all five fish, activity concentrated at the reef (e.g., Figure 4-4a). The 50% and 95% KDEs ranged from 240 to 891 m2 and 1826 to 4860 m2, respectively (Table 4-1). Asymptotic home range stabilization curves (Popple and Hunte, 2005) suggested that robust KDEs were obtained within 10-25 d (Figure 4-6), with some showing decreases toward the end of the deployment, suggesting a contraction of space use, perhaps in response to cooling water temperatures or changes in the prey community. For KDE calculations the day of and the day after tagging were excluded to avoid potential tagging effects. To characterize the error of the 95% KDE, for each individual I randomly resampled, with replacement (i.e., bootstrapped), 38 or 51 days (2007 and 2008, respectively) of the telemetry data and calculated the 95% KDE from 1000 bootstrap replicates (Table 4-1).
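
The day-level bootstrap could be sketched by reusing the kde_area() helper from the earlier sketch; pos_min and its minute column remain the hypothetical minute-averaged data frame assumed there.

  # Illustrative sketch: bootstrap confidence interval for the 95% KDE
  # by resampling whole days of telemetry data with replacement.
  pos_min$date <- substr(pos_min$minute, 1, 10)
  days <- unique(pos_min$date)
  boot95 <- replicate(1000, {
    pick <- sample(days, length(days), replace = TRUE)
    xy <- do.call(rbind, lapply(pick, function(d) pos_min[pos_min$date == d, c("x", "y")]))
    kde_area(xy$x, xy$y, prob = 0.95)
  })
  quantile(boot95, c(0.025, 0.975))   # 2.5 and 97.5 percentiles, as reported in Table 4-1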

From visual inspection of the 2007 2-D position plots (Figures E-2 through E-4), it appeared that the smaller 50 m array spacing did not entirely encompass gag space use around the reef. The loss of recorded positions outside the hydrophone perimeter resulted in a potential underestimate of space use. To explore how array spacing affected space use estimates, I used the 2008 data (collected with a 125 m array spacing) and deleted all positions which would have been lost if the array had been spaced at 50 m. These simulated home range stabilization curves are shown in Figure 4-6. The 50% KDEs would have been underestimated by 3.8% and 1.5%; the 95% KDEs would have been underestimated by 12.5% and 4.6%. To explore this relationship further, using 2008 data, I calculated the final 50% and 95% KDEs for both 2008 fish as if the array had been spaced at 10, 20, ... 130 m (not shown). The 50% KDEs did not begin to substantially underestimate space use until the array spacing fell below 30 m; the 95% KDEs were not underestimated until below 50 m spacing.

Kernel density estimates gave a single measure of space use over the entire deployment duration; because the distance from the reef was recorded for each fish position, it was possible to explore changes in gag space use relative to temporal cycles and changing environmental conditions. I fit generalized additive models (GAMs) of all possible combinations of the main explanatory variables and an interaction term for the time of day and lunar phase. GAMs fit a curve to data, similar to linear regression but with the relationship specified in a much more flexible way (Hastie and Tibshirani, 1986; Wood, 2006). GAMs included cyclic fits for time of day, lunar phase, and current direction, and allowed either joint or separate fitting terms for individual fish. Calculating the Akaike Information Criterion (AIC; Bolker, 2008) for each model, better-fitting models gave smaller AIC values. Because the AIC is a relative measure of model performance, I subtracted the best (smallest) AIC from each model's AIC and report these differences as a measure of model performance. With this approach it is important to remember that despite the large number of points, the data only represent five individuals, and that this type of analysis examines correlations, not causal relationships.
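
One candidate model from such a set could be written with the mgcv package roughly as below; cyclic smooths (bs = "cc") handle time of day, lunar phase, and current direction, and by = fish gives each individual (a factor) its own smooth. The variable and object names are illustrative, and this is a sketch of the general approach rather than the exact model specification used in this chapter.

  library(mgcv)
  full <- gam(DFR ~ fish +
                s(TIME,  bs = "cc", by = fish) +           # time of day (cyclic)
                s(LUNAR, bs = "cc", by = fish) +           # lunar phase (cyclic)
                s(TEMP,  by = fish) +
                s(SPD_W, by = fish) +
                s(DIR_W, bs = "cc", by = fish) +           # current direction (cyclic)
                ti(TIME, LUNAR, bs = c("cc", "cc"), by = fish),
              data = dat)
  no_temp <- gam(DFR ~ fish +
                s(TIME,  bs = "cc", by = fish) +
                s(LUNAR, bs = "cc", by = fish) +
                s(SPD_W, by = fish) +
                s(DIR_W, bs = "cc", by = fish) +
                ti(TIME, LUNAR, bs = c("cc", "cc"), by = fish),
              data = dat)
  AIC(full, no_temp)   # differences from the smallest AIC rank the candidate set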

When describing gag distance from the reef, the best GAM was the full model, explaining 29% of the response variation (Table 4-2). To explore the relative importance of individual terms in the full model, I examined the increases in AIC with the removal of each term in turn. A large increase in AIC with the removal of a term indicated that model performance suffered greatly with the loss of that term, suggesting it was relatively important. Water temperature, time of day, and the lunar phase were the most important terms. To further explore the correlations between the distance from the reef and environmental conditions, I inspected the relationships and GAM fits for each variable individually.

Time of Day

On average, gag showed expanded space use during the day (Figure 4-7), though there was substantial variation about the GAM curve (compare to the range in Figure 4-5b). This pattern was seen to differing degrees in individual fish. To examine this pattern through time for each individual, each panel of Figure 4-8 shows the distance from the reef for each day for one individual, essentially revealing the full time series. The pattern of expanded daytime space use was apparent on some days (e.g., 2008-11-11) and absent on others (e.g., 2008-11-15). Figure 4-8 also reveals gaps in the data (e.g., 2008-11-03, 2008-11-08). The recorded positions just before and after the gaps were generally 30 m or more from the reef, suggesting they represent times when the individual left the array rather than movement into the cement reef, which blocks tag transmissions. Also, it appears that excursions past the hydrophone array were made by faster, directed movement, rather than by gradual, meandering departures.

The other four individuals show similar periods of expanded daytime space use and midday gaps (Figures E-5 through E-8).

Lunar Cycles

Because of lunar illumination, I expected the phase of the moon to have its strongest effect on behavior at night, and used only nighttime (from 19:00 to 6:00 EST) data to examine this relationship. Examination of the correlations between the nighttime distance from the reef and the lunar phase (Figure 4-9) suggested that on average for all fish, space use was greatest during waxing and waning gibbous moons (lunar indices of about 11 and 19, respectively), smaller during full moons (index = 15), and smaller still during crescent and new moons (indices < 7 and > 22). Using only daytime or all the data showed a dampened, but similar, pattern.

Water Temperature

When considering the correlation between distance from the reef and water temperature (Figure 4-10), it is useful to note that during both deployments water temperatures decreased with time, each deployment experienced different ranges, and experienced temperatures were not evenly distributed even within a deployment. As temperatures cooled to 24 °C on the day after tagging in 2008, the average space use for the two 2008 individuals contracted, only to expand again as temperatures cooled to 20 °C, and finally, contracted again near 18 °C. In the range of temperatures experienced by the 2007 individuals there was more variation among individuals.

Current Direction

There was little relationship between the distance from the reef and the direction of water flow (Figure E-9). The average space use for all fish contracted from 19 to 16 m as the direction of flow changed from 25° to 250°, the most common range, though individual patterns were inconsistent.

Current Speed

As water speeds increased from 0 to 0.025 m/s, the average distance from the reef increased from 16 to 19 m, then decreased to 15 m as water speeds increased to 0.20 m/s (Figure E-10). This pattern was seen to varying degrees in all individuals. Most water speeds above 0.20 m/s came during a single event in the 2007 deployment (Figure 4-2), making it difficult to draw conclusions about relationships above this speed.

Vertical Position in the Water Column

Only the three 2007 individuals had altitude data. The distribution of all recorded altitudes is shown in Figure 4-5d; individual distributions were similar. These gag spent most of their time within 2 m of the seafloor. As before, I fit GAMs to the altitude data (Table 4-3) and again, the most inclusive model was the best, explaining 58% of the variation in altitude. Dropping each term in turn suggested that the strongest correlations were with temperature, time of day, and phase of the moon. In general, gag moved highest into the water column at or near the reef (Figure E-11). As with the distance from the reef, to explore the relationships between gag altitude and environmental conditions I inspected GAM fits for each variable individually.

Time of Day

The GAM fits to all fish data showed weak relationships between altitude and the time of day (Figure 4-11). At all times of the day, the majority of altitudes were below 2 m. However, examining the complete altitude time series for each individual (Figures 4-12, E-12, and E-13) revealed that gag showed periods of vertical space use, either as movements up and down through the water column or as movement to and maintenance at some altitude over hours or days.

Though this vertical movement most often occurred at night, it also extended through daytime. And though vertical movement was concentrated at the reef, it was not exclusively observed there. It is also worth noting that the vertical movement of all three individuals was relatively synchronized, suggesting some trigger or causal event.

Lunar Cycles

In the relationship between nighttime altitude and the phase of the moon, the overall pattern was shown by all three individuals (Figure 4-13). Near times of the new and full moons (lunar indices = 30/1 and 15, respectively), individuals stayed within 2 m of the seafloor, but during the times the moon was in the first and third quarters (lunar indices = 7 and 23) gag moved through the water column up to 9 m from the seafloor. Using only daytime data or all data, the pattern of expanded water column use was apparent to a lesser degree.

Temperature

In the relationship between gag altitude and temperature, the average behavior of all individuals was similar to the average behavior of each individual (Figure E-14), with individuals moving higher into the water column at temperatures of 15.5 and 17 °C, though this pattern was likely because of correlated explanatory variables (e.g., time of day or lunar phase).

Current Direction

As expected from the GAM fits, there was little relationship between gag altitude and the direction of the current (Figure E-15). Considering the low precision of altitude calculations, there was not a discernible change in altitude as current direction changed, except an increase in altitude during the rare times when the direction of current flow was between 250° and 360° (Figure 4-3).

Current Speed

Given the precision of altitude calculations, there appeared to be no correlation between gag altitude and the speed of the current (Figure E-16).

Gag Travel Speed

I fit GAMs to gag travel speed and calculated AIC values. The best model included only time of day, lunar phase, and their interaction, and described 13% of the variation in travel speed (Table 4-4). In fact, the top five models were various combinations of the time of day and phase of the moon, and described little variation in travel speed. Though diel and lunar cycles correlated best with travel speed, there was only a weak relationship.

Time of Day

Though the 2008 fish, on average, traveled faster than 2007 fish, the average of all individuals and the average of each individual suggested that gag were active during all hours, most active during the daytime, and slowest just before dawn (Figure E-17). Examining the full time series for each individual showed great individual and daily variation (Figures 4-14, E-18 through E-21).

Lunar Cycles

The correlation between travel speed and the phase of the moon showed contradictory patterns between deployments using either nighttime (Figure E-22), daytime, or all data. The patterns during each deployment were relatively strong and similar among individuals. The 2007 individuals decreased their travel speed dramatically from the first quarter (lunar index = 7) to after the full moon (lunar index = 22). During those same lunar phases the 2008 individuals showed the opposite pattern. It is unclear whether these opposing patterns were due to a coincidental correlation between environmental conditions or because gag were responding to an interaction of conditions or seasonal differences.

Temperature

Recalling the cautions previously noted relative to the relationship between the distance from the reef and temperature, the early mean travel speed of the 2008 individuals first dropped to about 0.13 m/s, then increased to above 0.19 m/s at 22 °C (Figure E-23).

The 2007 individuals traveled more slowly on average (about 0.14 m/s) and did not show a consistent pattern.

Current Direction

The correlations between individual average travel speed and current direction showed different patterns in each deployment (Figure E-24). The 2007 individuals showed their slowest speeds when the current flowed eastward and southwestward. The 2008 individuals showed little change in average travel speed.

Current Speed

Considering the relationship between gag travel speed and current speed (Figure E-25), in the range of most common current speeds (0 to 0.20 m/s) there was agreement among individuals within each deployment but weak and opposite patterns between deployments. The 2007 individuals slightly decreased their travel speed (from 0.14 to 0.13 m/s) while the 2008 fish slightly increased their speed (from 0.17 to 0.18 m/s) over the same current speed range.

Discussion

To explore how the extent and rate of gag space use were affected by the temporally and spatially variable environment, I measured the extent of space use, the altitude above the seafloor, and travel speed. The five gag with consistent, continuous recorded positions centered their space use on the reef, spending half their time within 9 to 17 m of the reef and using an area of 240 to 891 m2. They spent 95% of their time within 24 to 39 m and used 1826 to 4860 m2. They roamed widely at all times but, on average, moved farther from the reef during the day and on nights with fuller moons. With most of their activity centered near the reef, gag made occasional excursions beyond their primary area, usually during daytime. The majority of time was spent within 2 m of the seafloor, and movement up to 9 m altitude usually, though not exclusively, occurred during the night. Additionally, movement up into the water column was highest at or near the reef.

Position in the landscape was most influenced by position relative to the reef. Water temperature, current flow, and landscape composition had weak or inconclusive relationships with space use. Gag travel speed was only weakly correlated with the environmental conditions I measured. With most of their time spent near the reef and occasional directed excursions beyond the hydrophone array, this suggests gag switch between intensive activity centered at the reef and extensive movement across the wider landscape. These estimates roughly agree with Kiel's (2004) MCPs, which for most of his individuals suggest they primarily use an area less than 9400 m2; the few larger MCPs might reflect a chance relocation of an individual during an excursion beyond the primary use area. The pattern of reef-centered movement with occasional wider excursions also agrees with the findings of Lindberg et al. (2006) that gag visited nearer neighboring reefs more often than distant reefs. Furthermore, periodic excursions away from a home reef could explain the behavior of the tagged individuals not used in this study. They might have been trapped and tagged while only visiting the focal reef, or the tagging process might have induced them to relocate from one reef to another. Either explanation hints at a degree of landscape connectedness at scales intermediate between the home range of this study and the population scale of many fisheries models. These types of observations highlight the importance of recognizing behavioral plasticity in the way I understand and describe fish behavior. Consistent with Kiel's (2004) findings that gag make their longest moves during crepuscular times, I recorded departures from and returns to the array during these same times. I also recorded excursions beyond the array at other times, more commonly during the day, although Kiel (2004) found gag most closely associated with the reef during daytime.

Given the small sample sizes and the differences among Kiel's (2004) hourly relocations over one day, Kellogg's (unpublished data) observations at three-min intervals over several days, and the position records every several seconds for a few weeks in the present study, my seemingly conflicting results might simply reflect large daily behavioral variation. This daily variation might be related to, for example, switching between intensive and extensive searching, or resting periods following large meals. Kellogg (unpublished data) observed that during times of high current speed any gag within 1 m of the reef moved closer to the reef in what he called wake-surfing behavior; I found current speed to have very little effect on gag behavior. Wake surfing represents movement within the scale of error of my telemetry system and was unlikely to be detectable. Contrary to the expectation of preference for hard bottom, in this study gag space use was not biased toward the east over the largest patches of hard bottom, and the habitat preference index suggests a preference for sand-bottom habitats, though little live bottom fell within the central area. Proximity to the reef appeared to be a more important factor in gag space use decisions. This study represents the first detailed record of gag vertical use of the water column, though Bullock and Smith (1991) describe gag moving throughout the water column. Though gag spent the majority of time near the seafloor, on some nights, and on a few days, gag moved up into the water column. The motivation for increased vertical movement on one night versus another is unknown; with just three individuals and deployments covering less than two months, it is difficult to relate vertical movement to the phase of the moon or water temperature.

Periods of high vertical movement roughly coincide with the first and third quarters of the moon, but whether gag, or perhaps their prey, respond to the phase of the moon, or whether this is simply coincidence, is impossible to determine in this study. Thought to be opportunistic (Bullock and Smith, 1991) and synergistic predators (Kellogg, unpublished data), gag generally move at slower, energetically efficient speeds, with occasional fast strikes at nearby prey lasting only a few seconds. The distribution of gag travel speeds I measured supports this model, with faster moves rarely observed. Indeed, bursts of higher attack speed are likely too brief to be reliably recorded even at the 2 s resolution of my fish tags. Keeping the small sample size in mind, this study examined patterns and correlations of space use, not factors motivating behavioral decisions. For gag, and similar mobile, reef fish, the factors of resource acquisition and predator avoidance likely present conflicting motivations in the choice of position in the landscape. As gag congregate at infrequent physical shelters and concentrate their movement over the surrounding landscape, individuals compete for prey. An individual can decrease its experienced density (and competition) by expanding its use of the surrounding areas. This expansion comes at the cost of spending more time farther from shelter at greater risk. The balance of these competing pressures shapes space use decisions (Biesinger et al., 2011). As environmental conditions, like the time of day, nighttime lunar illumination, water conditions, or prey and predator behaviors, change through time, the balance of foraging success and predation risk will change optimal space use patterns.

The composition and structure of the surrounding landscape might also have an important effect on space use. Sand- and hard-bottom habitats offer different prey communities, with their own energetic costs and benefits. Also, the two habitats represent different risk environments, as the physical and visual complexity of hard bottom likely offers better protection against detection. An experiment contrasting gag space use in these two landscapes would improve our understanding of the role of habitat in the balance between foraging success and predation risk (Chapter 5). The present study helps to establish the appropriate spatial and temporal scales for designing such an experiment to address gag space use decisions within a home range. This study has confirmed and refined some previous findings of gag space use and added details to our understanding of how changing environmental conditions affect space use. Recognizing and incorporating fine-scale spatial and temporal ecological patterns into large-scale management models and spatial tools can help conserve fisheries resources and improve management outcomes.

Table 4-1. Summary of gag measurements and behavior

Fish ID  Deployment  W (kg)  L_T (mm)  L_F (mm)  Mean Positions per Day a  Median DFR (m)  Median Speed (m/s)  50% KDE (m2)  50% Radius b (m)  95% KDE (95% CI) c (m2)  95% Radius b (m)  Habitat Preference Index d
1        2008        2.0     557       538       5141                      19.6            0.151               891           17                4860 (4122, 5648)        39                0.40
2        2008        2.8     624       600       4320                      16.8            0.154               566           13                3737 (3171, 4431)        34                0.23
3        2007        3.1     625       603       6483                      10.7            0.126               240           9                 1826 (1594, 1928)        24                0.21
4        2007        4.8     710       689       5661                      11.4            0.135               377           11                2340 (1928, 2503)        27                0.20
5        2007        6.4     772       744       7300                      20.9            0.125               540           13                3308 (2905, 3471)        32                0.58

a The mean number of filtered, unaveraged positions.
b The radius of a circle with an area approximately equal to the area of the KDE.
c 95% confidence intervals calculated from the 2.5 and 97.5 percentile values of 1000 bootstrapped replicates.
d The fraction of recorded positions over hard bottom divided by the fraction of hard bottom available within 50 m.
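
The 50% and 95% radii in Table 4-1 follow directly from footnote b and the corresponding KDE areas; for Fish ID 1, for example:

  r = \sqrt{A/\pi}, \qquad
  r_{50} = \sqrt{891\ \mathrm{m^2}/\pi} \approx 17\ \mathrm{m}, \qquad
  r_{95} = \sqrt{4860\ \mathrm{m^2}/\pi} \approx 39\ \mathrm{m}.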

Table 4-2. Generalized additive model fits for distance from reef (DFR) models

Distance from Reef Models a                                                        Rank  d.f.  Adj. r2   ΔAIC
full model = DFR b ~ LUNAR c + TIME + TEMP + SPD_W + DIR_W + (LUNAR x TIME) d         1   227     0.29      0
full model - SPD_W                                                                    2   186     0.29    831
full model - DIR_W                                                                    3   193     0.29    840
full model - SPD_W - DIR_W                                                            4   153     0.28   1686
full model - (LUNAR x TIME)                                                           5   196     0.28   1770
full model - LUNAR - (LUNAR x TIME)                                                   9   163     0.25   9024
full model - TIME - (LUNAR x TIME)                                                   13   163     0.21  16448
full model - TEMP                                                                    23   188     0.18  23982

a All models allow separate parameter fitting for each individual.
b The "~" notation indicates that the response variable is a function of the explanatory variables.
c LUNAR, TIME, TEMP, SPD_W, and DIR_W represent the phase of the moon, time of day, water temperature, water speed, and water flow direction, respectively.
d (LUNAR x TIME) indicates the interaction between the time of day and the lunar phase.
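
Interpreted as differences from the best model in the set (consistent with the zero value for the top-ranked model), the ΔAIC column in Tables 4-2 through 4-4 is simply

  \Delta\mathrm{AIC}_i = \mathrm{AIC}_i - \min_j \mathrm{AIC}_j ,

so, for example, removing TEMP from the full DFR model raises the AIC by 23982 units relative to the full model.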

Table 4-3. Generalized additive model fits for altitude (ALT) models

Altitude Models a                                                                  Rank  d.f.  Adj. r2   ΔAIC
full model = ALT b ~ LUNAR c + TIME + TEMP + SPD_W + DIR_W + (LUNAR x TIME) d         1   156     0.58      0
full model - DIR_W                                                                    2   132     0.57   1373
full model - SPD_W                                                                    3   130     0.57   1884
full model - SPD_W - DIR_W                                                            4   106     0.56   3604
full model - (LUNAR x TIME)                                                           5   126     0.54   6671
full model - TIME - (LUNAR x TIME)                                                    8   103     0.51   9724
full model - TEMP                                                                     9   129     0.51  10166
full model - LUNAR - (LUNAR x TIME)                                                  47   102     0.31  31966

a All models allow separate parameter fitting for each individual.
b The "~" notation indicates that the response variable is a function of the explanatory variables.
c LUNAR, TIME, TEMP, SPD_W, and DIR_W represent the phase of the moon, time of day, water temperature, water speed, and water flow direction, respectively.
d (LUNAR x TIME) indicates the interaction between the time of day and the lunar phase.

Table 4-4. Generalized additive model fits for travel speed (SPD_G) models

Travel Speed Models a                            Rank  d.f.  Adj. r2  ΔAIC
SPD_G b ~ LUNAR c + TIME + (LUNAR x TIME) d         1   110     0.13     0
SPD_G ~ LUNAR + TIME                                2    79     0.12  1138
SPD_G ~ LUNAR                                       3    42     0.06  3451
SPD_G ~ LUNAR + TIME + (LUNAR x TIME)*              4    47     0.09  3881
SPD_G ~ TIME                                        5    39     0.06  4758

a All models allow separate parameter fitting for each individual except the model marked by an asterisk (*).
b The "~" notation indicates that the response variable is a function of the explanatory variables.
c LUNAR, TIME, TEMP, SPD_W, and DIR_W represent the phase of the moon, time of day, water temperature, water speed, and water flow direction, respectively.
d (LUNAR x TIME) indicates the interaction between the time of day and the lunar phase.

Figure 4-1. Habitat composition and use. a) Categorical habitat map, with white and black representing sand- and hard-bottom areas, respectively. The reef location (141 m E, 136 m N) and the hydrophone positions at both the 50 and 125 m array spacings are indicated. b) The fraction of hard bottom in each 1 m thick concentric ring around the reef; remaining lines show the fraction of recorded positions over hard bottom within each ring for each individual and for all individuals combined.

Figure 4-2. Aquatic and lunar conditions. Water temperature, current speed, and current direction, measured by an ADCP, are shown for each deployment. The bottom row indicates the lunar phase, where 0 and 1 represent times of new and full moons, respectively.

Figure 4-3. Circular distributions of current direction for each deployment. The height of each bar indicates the proportion of flows in that direction.

Figure 4-4. Telemetered two-dimensional positions and hourly position fractions for Fish ID 2. a) Each point represents a single position datum and has a 5% density, so that a fully black dot represents at least twenty recorded positions at that location. b) The hourly fraction of transmissions resulting in filtered, unaveraged positions.

Figure 4-5. Gag behavior and telemetry array performance distributions. a) Truncated distribution of the time between filtered, unaveraged recorded positions for all individuals. The maximum interval was 11.83 hr. Vertical black lines in a), b), and c) indicate median values. b) Distribution of the distance from the reef of filtered, minute-averaged positions for all individuals. c) Distribution of gag travel speeds using filtered, minute-averaged positions for all individuals. d) Distribution of gag altitudes above the seafloor using filtered, minute-averaged positions for 2007 individuals. For all four panels, each individual showed similar distributions.

Figure 4-6. Home range stabilization curves. a) 50% KDEs (solid lines) using progressively more days of position data for each individual. Stabilization curves are recalculated for 2008 individuals using only positions within a simulated array spacing of 50 m. The final estimates for Fish IDs 1 and 2 decreased by 3.8% and 1.5%, respectively. b) 95% KDE stabilization curves. The final estimates for Fish IDs 1 and 2 decreased by 12.6% and 4.6%, respectively.

Figure 4-7. Distance from the reef versus time of day. Here and in similar figures, each colored point represents a single position datum and has a 5% density, so that a solid dot represents at least twenty positions recorded at that location. The shaded region around each GAM curve represents the 95% confidence interval. The plotted y-axis range highlights GAM fits and does not cover the full range of response variable values. In this figure the GAMs are cyclic fits of DFR ~ TIME.

Figure 4-8. Time series of the distance from the reef for Fish ID 2. In this and similar figures, each panel shows all filtered, minute-averaged positions on a single day. The density of each point is 5%, so that a fully black dot represents at least twenty recorded positions at that location.

Figure 4-9. Nighttime distance from the reef versus lunar index. GAMs are cyclic fits of DFR ~ LUNAR. See Figure 4-7 caption for more details.

Figure 4-10. Distance from the reef versus water temperature. Deployment 2007 individuals experienced temperatures between 14 and 18 °C and 2008 individuals experienced temperatures between 15 and 26 °C. GAMs are fits of DFR ~ TEMP. See Figure 4-7 caption for more details.

Figure 4-11. Altitude above the seafloor versus time of day. GAMs are cyclic fits of ALT ~ TIME. Sinusoidal patterns in recorded positions are artifacts of differences in instrument precision. See Figure 4-7 caption for more details.

Figure 4-12. Time series of the altitude above the seafloor for Fish ID 3. See Figure 4-8 caption for more details.

Figure 4-13. Nighttime altitude above the seafloor versus lunar index. GAMs are cyclic fits of ALT ~ LUNAR. See Figure 4-7 caption for more details.

Figure 4-14. Time series of gag travel speed for Fish ID 2. See Figure 4-8 caption for more details.

CHAPTER 5
COMPARING GAG SPACE USE IN TWO LANDSCAPES

Background

Our ability to link landscape structure, space use decisions, and fitness is often limited by our understanding of fine-scale space use patterns. The trade-off between the often conflicting needs for shelter and food can be manifest in an animal's choice of position in the landscape ( and Merrill, 2009). When different habitats present different risks and foraging opportunities, the landscape composition defines the costs and benefits of a given location (Claireaux and Lefrançois, 2007; Lima and Dill, 1990). In response to temporal changes in predator and resource distributions, and environmental conditions, mobile animals can manage their fitness through their choice of position in the landscape (Ferrari et al., 2009). The development of optimal foraging (Perry and Pianka, 1997), ideal free distribution (Giske et al., 1998), central place foraging (Bakker et al., 2005), and foraging arena theories (Walters and Martell, 2004) has advanced our understanding of the role of risk and resources in animal space use decisions. Experimental studies have revealed many strategies for optimizing the risk-resource trade-off in different landscapes in species from invertebrates (Scrimgeour and Culp, 1994), to birds (Quinn and Cresswell, 2004), to fish (Lindberg et al., 2006), to mammals (Hebblewhite et al.). An animal's optimal strategy is in part set by its landscape and reflected in its space use patterns. One strategy is to forage within a home range, retreating to a central shelter when disturbed (e.g., Hovel and Lowe, 2007; Johns and Armitage, 1979; Schooley et al., 1996). For an animal following this strategy, the distribution of habitats in the surrounding landscape, and changing environmental conditions, define the space use pattern most likely to maximize fitness.

The space use strategy of large, mobile, reef fish around a central shelter offers an opportunity to explore how space use patterns and fitness respond to differences in landscape and varying environmental conditions. Using gag (Mycteroperca microlepis) as a model, I studied the effect of landscape composition and environmental conditions on space use and fitness. I acoustically tagged gag to identify differences in the extent of and variation in space use patterns in sand- and hard-bottom landscapes. Through fish collections at the end of the experiment I compared gag growth and condition in both landscapes. Individual gag establish home ranges centered on small, infrequent shelters (Hood and Schlieder, 1992; Chapter 4) and forage across the surrounding mix of sand- and hard-bottom habitats, each habitat potentially offering different risks and foraging opportunities. Though little is known about the effects of different habitats on gag space use, foraging success, and predation risk, in general structural complexity is known to decrease predation risk in many species (Crowder and Cooper, 1982; Lima and Dill, 1990; Warfe and Barmuta, 2004). Structural and visual complexity likely make hard-bottom habitats safer for gag. When disturbed, gag retreat to the central shelter (pers. obs.). For pre-reproductive gag, reproductive success may involve significant trade-offs between risk management and foraging success. Despite its ecological and economic importance (SEDAR, 2006), little is known about individual gag movement within a home range. Kiel (2004) estimated home range sizes to be typically less than 9400 m2, and home ranges are maintained for an average of ten months (Lindberg et al., 2006).

of ten months (Lindberg et al., 2006). I estimated 95% kernel density estimate (KDE) areas to range from 1826 to 4860 m² (Chapter 4). Other studies show a wide range in residency times, with some individuals remaining at one location for years and others moving hundreds of kilometers within weeks (McGovern et al., 2005). While established within a home range, gag have been found to be active at all hours and most closely associated with shelter during daytime (Kiel, 2004; Kellogg, unpublished data), though in Chapter 4 I found five individuals to move more widely during the daytime.

When comparing space use in two landscapes, my model (Chapter 2) predicted that, experiencing equal abundances, individuals living in safer landscapes should move more widely from a central shelter and experience greater fitness. Fitness was defined as the ratio of predation mortality risk (μ) and growth (g), so that lower μ/g values represented better fitness from better landscapes. When risk is difficult to measure and can only be assumed, measures of growth can still be useful in comparing landscapes. For gag, if hard-bottom landscapes are safer, i.e., have lower μ values, then the relative values of growth in the two landscapes will magnify or diminish differences in μ/g. For example, if gag grow faster (or as fast) in safer hard-bottom landscapes than in riskier sand-bottom landscapes, then their experienced fitness must also be greater.
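That argument can be written compactly; the subscripts SB and HB (sand- and hard-bottom landscapes) are notation introduced here for illustration only:

\mu_{HB} \le \mu_{SB} \;\text{ and }\; g_{HB} \ge g_{SB} \;\Longrightarrow\; \frac{\mu_{HB}}{g_{HB}} \le \frac{\mu_{SB}}{g_{SB}},

with strict inequality, i.e., strictly better fitness in the hard-bottom landscape, whenever either condition holds strictly.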

In this study I did not measure predation risk or prey availability in sand- or hard-bottom landscapes; thus my interpretation of space-use patterns based on these driving mechanisms must be tentative. To test model predictions and explore how different landscapes and environmental conditions affect gag space use and fitness, I address three main questions:

1. Compared to gag in sand-bottom landscapes, do gag in hard-bottom landscapes use more area, quantified by kernel density estimates (KDEs; Worton, 1989), distance from the reef, and altitude above the seafloor?

2. How do water temperature, diel, and lunar cycles interact with landscape composition to affect space-use patterns?

3. Do gag in hard-bottom landscapes have higher fitness, as reflected in the proxies of recent growth and condition?

To answer these questions, I used acoustic telemetry to record gag positions through time and estimate home range sizes on experimental, artificial reefs placed in sand- and hard-bottom landscapes. At the end of the experiment I collected individuals for biological measurements and otolith extraction to estimate recent growth and condition.

Methods

Study System and Organism

Gag is a shallow-water grouper in the northeastern Gulf of Mexico (Brulé, 2003), where the seafloor slopes gradually to the continental shelf edge break. On the shallow shelf, the seafloor is a mix of sand-bottom and emergent-rock hard-bottom habitats with little vertical relief (Parker, 1983). Hard-bottom habitats often support low soft coral, sponge, or algae growth, providing camouflage and hiding potential but little physical protection; sand-bottom habitats support a relatively depauperate community and provide little protection at all. Small, infrequent patch reefs, boulders, or underwater sinkholes provide vertical structure about which gag establish home ranges (Bullock and Smith, 1991). During the fall of their first year, juvenile gag move out of near-shore nursery grounds onto the shallow shelf, where they spend two to six years before maturing and joining spawning aggregations at the shelf edge (Schirripa and Goodyear, 1994; Collins

et al., 1998). As they grow, gag appear to move from structure to structure into progressively deeper waters (Hood and Schleider, 1992; Brulé, 2003), though Bullock and Smith (1991) note this size/depth distribution might be caused by fishing depletion, not habitat selection. During the time established at each shelter, gag associate with physical structure, putatively for shelter during times of disturbance (Kellogg, unpublished data).

Gag exhibit a large range of movement rates over long time scales. One individual at large for almost 4.4 years was recaptured less than 2 km from the tagging location (McGovern et al., 2005), while another was recaptured thousands of kilometers away within nine months of the last sighting at the release point (Lindberg et al., 2006). Telemetry relocation studies (Kiel, 2004; Lindberg et al., 2006) found home range sizes to vary widely and estimated maximum widths to be approximately 350 m, with 50% core use areas about 38 m. During their time resident at a reef, gag occasionally make extensive movements beyond their core use areas (Kiel, 2004; Chapter 4). My 2007 and 2008 tagging of five individuals suggested that gag spend half their time within 9 to 17 m of the reef and 95% of their time within 27 to 39 m. They moved most widely and fastest during the day. Also, movement patterns appeared to switch between intensive space use and occasional extensive visits beyond the normal home range. Correlations between distance from the reef and travel speed were unclear or inconsistent, and water flow speed and direction had little correlation with any movement measures. There was some suggestion that the position of gag in the water column (altitude) was correlated with the phase of the moon, but with so few individuals covering few lunar cycles, this relationship needs to be further explored.

Experimental Design

To compare gag space use and performance in two landscapes I acoustically tagged eight gag on each of three experimental reefs in both sand- and hard-bottom landscapes. Because all tagged individuals left the first sand-bottom deployment soon after tagging, I conducted a seventh deployment, tagging eight new individuals on the same, failed sand-bottom reef. In this experimental design, each reef, not each fish, represented the experimental replicate.

For each of the seven deployments, before tagging, SCUBA divers counted the total number of gag at the reef and estimated their sizes in 10 cm categories (i.e., 10-20 cm, 20-30 cm). Following the procedures described in Chapter 4, on each reef I trapped, weighed (W, to the nearest 0.1 kg), and measured total and fork length (L_T and L_F, respectively, to the nearest 1 mm) for as many gag as possible. Of those, I acoustically tagged eight individuals on each reef. Home range stabilization curves from the 2007 and 2008 deployments (Chapter 4) suggest that during fall and winter, 10 to 25 days are required to fully capture gag space use, but because of weather and time constraints, only 10 to 14 days of telemetry data were collected at each of the six experimental reefs between 1 June and 30 November 2009 (Table 5-1). Tagging, array deployments, and data processing were largely performed as for the 2007 and 2008 deployments (Chapter 4). At each reef, seven individuals were tagged with 2-s, ID-only transmitters without temperature and pressure sensors. One other individual was tagged with a sensor tag, alternating between pressure and temperature data along with each ID transmission. The central hydrophone was 10 m northeast of the reef and the outer hydrophones were 100 m from the reef in the cardinal directions.

From the telemetry results, for each individual I calculated the 50% and 95% KDEs as measures of core and extended space use. For each recorded position and position pair I calculated the distance from the reef and travel speed, respectively, and for sensor tags I calculated gag altitude (the distance above the seafloor) when possible. Kernel density estimates gave a single measure of space use over the entire deployment duration; because the distance from the reef (and travel speed and altitude) was recorded for each fish position, I examined changes in gag space use relative to the time of day, lunar phase, and water temperature.
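As an illustration of these calculations, the sketch below computes the distance from the reef, travel speed, and the 50% and 95% KDE areas from a generic table of positions. The data frame layout and the use of the adehabitatHR package are my assumptions for illustration only; Appendix G contains the code actually used.

library(sp)            # assumed spatial classes
library(adehabitatHR)  # assumed kernel home-range package; not named in the text

## Hypothetical input: one fish's filtered positions with the reef at (0, 0);
## columns: time (POSIXct), x and y (m east and north of the reef)
pos <- data.frame(
  time = as.POSIXct("2009-06-03 00:00", tz = "UTC") + seq(0, 7200, by = 2),
  x    = cumsum(rnorm(3601, sd = 0.3)),
  y    = cumsum(rnorm(3601, sd = 0.3))
)

## Distance from the reef (m) for every recorded position
pos$dfr <- sqrt(pos$x^2 + pos$y^2)

## Travel speed (m/s) for every successive position pair
step      <- sqrt(diff(pos$x)^2 + diff(pos$y)^2)
dt        <- as.numeric(diff(pos$time), units = "secs")
pos$speed <- c(NA, step / dt)

## 50% and 95% kernel density estimates of space use (m^2)
spdf <- SpatialPointsDataFrame(pos[, c("x", "y")], data = data.frame(id = "fish"))
ud   <- kernelUD(spdf[, "id"], h = "href")
kernel.area(ud, percent = c(50, 95), unin = "m", unout = "m2")

median(pos$dfr)                   # median distance from the reef
median(pos$speed, na.rm = TRUE)   # median travel speed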

Experimental Reef System

As part of the Steinhatchee Fisheries Management Area (SFMA), during 2006, 36 experimental, artificial reef patches were deployed in the northeastern Gulf of Mexico in about 13 m of water, in landscapes previously designated as sand bottom, fragmented bottom, or hard bottom using coarse-scale sonar imagery. Each reef patch comprised either one or four hollow cement hemisphere units. The walls contained several holes allowing gag access to the interior. The 36 reefs, larger than any natural structure in the vicinity, were evenly divided between arbitrary inner and outer mosaics, with six 1-unit and six 4-unit patches in each of the three landscape categories (Figure 5-1). From initial diver assessments of gag abundance and landscape composition, three 4-unit reefs in predominantly sand-bottom and three in hard-bottom landscapes were chosen.

Landscape and Aquatic Conditions

To create categorical habitat maps of the seafloor around the hard-bottom reefs, I collected side-scan sonar imagery using a cabled towfish (Marine Sonic Technology, Ltd., 600 kHz) in 2007. In cooperation with Rutgers University, in 2008 I collected side-scan sonar imagery around the sand-bottom reefs using the autonomous underwater vehicle REMUS (also using a Marine Sonic 600 kHz sonar).

Geographically referenced TIFF images of the seafloor surrounding the reefs were produced using SonarWiz.MAP4 (Chesapeake Technology Inc.). From the images I visually categorized areas as hard-bottom or sand-bottom habitat, then exported the categorical map to R (R Development Core Team, 2010) for fractional coverage and gag habitat-use calculations. To characterize landscape composition at increasing distances from each reef, I divided the area into concentric rings 1 m wide and within each ring calculated the fraction of hard-bottom cover. For each fish, and for all fish on a reef, I calculated the fraction of all recorded positions over hard bottom within each ring. I calculated an index of overall habitat preference for each individual as the fraction of recorded positions over hard bottom divided by the fraction of hard bottom available within 100 m of the reef.

An Acoustic Doppler Current Profiler (ADCP; Teledyne RD Instruments Workhorse Sentinel, 600 kHz), deployed within several hundred meters of the experimental reefs, gathered data on water conditions as described in Chapter 4. Because I found little correlation between water flow and gag space use in earlier deployments, in this study I primarily use water temperature and depth from the ADCP. The ADCP was deployed from 1 June to 20 August 2009, 24 August to 1 October, 13 October to 27 October, and 18 November to 26 November.
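The ring and preference-index calculations described above can be sketched as follows; the object and column names are illustrative only, and Appendix G contains the code actually used.

## Hypothetical inputs:
##   hab: 1-m grid cells of the classified map around one reef, with columns
##        x, y (m from the reef) and hard (TRUE if classified as hard bottom)
##   pos: recorded fish positions, with columns x, y and hard (TRUE if the
##        position fell over hard bottom)
hab$dfr <- sqrt(hab$x^2 + hab$y^2)
pos$dfr <- sqrt(pos$x^2 + pos$y^2)

rings    <- 0:100   # 1-m-wide concentric rings out to 100 m
hab$ring <- cut(hab$dfr, breaks = rings, include.lowest = TRUE)
pos$ring <- cut(pos$dfr, breaks = rings, include.lowest = TRUE)

## Fraction of hard-bottom cover, and of positions over hard bottom, in each ring
cover_by_ring <- tapply(hab$hard, hab$ring, mean)
use_by_ring   <- tapply(pos$hard, pos$ring, mean)

## Overall habitat preference index: use of hard bottom relative to its
## availability within 100 m of the reef (> 1 suggests preference for hard bottom)
pref_index <- mean(pos$hard[pos$dfr <= 100]) / mean(hab$hard[hab$dfr <= 100])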

Growth and Condition

At the end of the 2009 experimental season, I collected gag by trapping and spearfishing from each experimental reef between October 2009 and March 2010. Fish were placed on ice for transport to the laboratory for processing. All individuals were uniquely numbered and measured for total length and fork length (L_T and L_F, respectively, to the nearest mm) and total body weight (W, to the nearest 0.1 g). Both sagittal otoliths were removed, rinsed in water to remove all surrounding membranes, and stored dry for later processing. For ageing, one otolith was mounted and cross-sectioned into 0.5 mm sections using a Buehler Isomet 1000 digital sectioning saw. Sections were permanently mounted in Histomount (National Diagnostics) and viewed using a stereomicroscope (20-45X) with transmitted light. Thin sections of otoliths for marginal increment analysis were measured using a digital image analysis system (IMAGE-1, Universal Imaging Corp.). Otoliths were measured along an axis on the proximal medial surface, where the opaque zones were most distinct. Fractional ages were assigned based on the number of opaque zones plus the fraction of the year transpired at the time of capture since 1 April, the end of the main spawning season (Hood and Schleider, 1992). Gag deposit only one annulus each year, with the opaque zone forming during July to August (Harris and Collins, 2000).
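For example, the fractional age assignment amounts to the small calculation below (the function name is mine):

## Fractional age = number of opaque zones + fraction of the year elapsed since
## the most recent 1 April (the end of the main spawning season)
fractional_age <- function(n_opaque, capture_date) {
  capture_date <- as.Date(capture_date)
  apr1 <- as.Date(paste0(format(capture_date, "%Y"), "-04-01"))
  apr1[capture_date < apr1] <- apr1[capture_date < apr1] - 365
  n_opaque + as.numeric(capture_date - apr1) / 365
}

fractional_age(2, "2009-11-18")   # about 2.6, as for several fish in Table 5-3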

Without the ability to follow individuals through their reproductive lives, I could not directly evaluate landscape effects on fitness. Nevertheless, I attempted to quantify the influence of landscape on the growth of gag living on the experimental reefs during the 2009 summer by comparing length as a function of fractional age. This assumed that collected individuals were resident on the experimental reefs (or similar reefs in the SFMA) during the entire study, an assumption supported by an estimated mean residency time of ten months (Lindberg et al., 2006). Because I weighed and measured individuals only once, I could not directly calculate recent growth. Instead, the common method of comparing lengths of individuals of similar ages is with the von Bertalanffy growth curve (Chen et al., 1992). To compensate for the fact that my individuals were captured over a six-month period, giving some individuals more time to grow, I used fractional ages instead of age classes. Also, all collected individuals were less than five years old and fell on the relatively linear, younger end of the gag growth curve (Harris and Collins, 2000), so I simply fit a linear model to compare growth between treatments (L_T = a + b * Age_F, where L_T = total length at capture and Age_F = fractional age). The relationship between total length and weight, which I call the condition of gag, was described by the relationship log10(W) = log10(c) + d * log10(L_T), where log10(c) and d are the intercept and slope, respectively, of the regression. This was transformed into the power function W = c * L_T^d. I tested for differences in the log10(weight) versus log10(length) relationships between treatments. Model fitting was performed with the lm command in R (R Development Core Team, 2010).
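A sketch of these fits is below, assuming a data frame like Table 5-3 with columns LT (total length, mm), W (weight, g), AgeF (fractional age), and treatment (sand or hard bottom); the column names are mine.

## fish <- data.frame(LT = ..., W = ..., AgeF = ..., treatment = factor(...))

## Growth: total length as a linear function of fractional age. Comparing the
## nested models tests for equal slopes and then equal elevations (the ANCOVA
## reported in the Results).
growth_full <- lm(LT ~ AgeF * treatment, data = fish)
growth_add  <- lm(LT ~ AgeF + treatment, data = fish)
growth_null <- lm(LT ~ AgeF,             data = fish)
anova(growth_add, growth_full)   # equal slopes?
anova(growth_null, growth_add)   # equal elevations?

## Condition: log-log length-weight relationship, equivalent to W = c * LT^d
cond_full <- lm(log10(W) ~ log10(LT) * treatment, data = fish)
cond_add  <- lm(log10(W) ~ log10(LT) + treatment, data = fish)
cond_null <- lm(log10(W) ~ log10(LT),             data = fish)
anova(cond_add, cond_full)       # equal slopes?
anova(cond_null, cond_add)       # equal elevations?

coef(growth_null)                # pooled a and b in LT = a + b * AgeF
10^coef(cond_null)[1]            # pooled c in W = c * LT^d
coef(cond_null)[2]               # pooled d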

Results

I tagged eight gag on each of four sand- and three hard-bottom reefs between 1 June and 15 December 2009 (Table 5-1). On the first sand-bottom reef all tagged individuals left the hydrophone array within four days of tagging. At the end of the season, I returned to the same reef and tagged eight new individuals. Except where noted, in all treatment comparisons I used data from only the six deployments giving consistent and continuous fish positions. The telemetry array was deployed for between 14 and 17 days at each reef.

To determine if the abundance and size distribution of (presumably) resident gag on reefs in each landscape were different, I compared the numbers and sizes of all gag observed on four sand- and three hard-bottom reefs. The numbers of gag on sand- and hard-bottom reefs (Table 5-1) were not significantly different (2-sided t-test: t = 0.4514, d.f. = 5, p = 0.67). Because trapped and non-trapped gag total lengths were estimated with different precision, I assigned tagged fish to the corresponding 10 cm category of the classification system used for non-tagged gag (Figure 5-2a). Using the lower boundary of each size class as a single measure of total length, I calculated the mean total length of all gag observed on each reef; these means were not significantly different between treatments (2-sided t-test: t = 0.6900, d.f. = 5, p = 0.52). The sizes of tagged individuals (Figure 5-2b) were also not significantly different between treatments (2-sided t-test: t = 1.8739, d.f. = 5, p = 0.12).

Extent of Space Use in Two Landscapes

For six deployments, between three and six gag remained on each reef and gave consistent, continuous telemetry results (Table 5-1). The telemetry system produced, on average for each fish, between 1080 and 11318 high-quality positions (filtered at CN < 1.5) per day, of 43200 transmissions (Table 5-2). Examining the Easting-Northing plots of all individuals showed a wide range of space-use patterns. Activity was centered near the reef, except in one or two fish in hard-bottom landscapes. Gag in hard-bottom landscapes were less tightly associated with the reef than in sand-bottom landscapes, though at times fish in sand-bottom landscapes spent time 50 and 60 m from the reef (Figure 5-3). An examination of each individual's movements revealed variation among individuals. In general, gag in hard-bottom landscapes moved more widely. Some individuals were relatively stationary during the day and active within the array during the night (Figure 5-4), though this pattern was not consistently exhibited. Some gag did not show this pattern at all (Figure F-1). Other individuals were relatively stationary most of the time, making infrequent moves between two or three locations

(Figure F-2). There were times when individuals appeared to depart the array completely, only to return hours or days later (e.g., Figure 5-4, day 2009-06-15, and Figure F-1, day 2009-08-27). These same patterns were seen in gag on sand-bottom reefs, though in a spatially compressed area (Figure F-3).

For gag in sand-bottom landscapes the 50% and 95% KDEs ranged from 20 to 206 m² and from 134 to 1934 m², respectively. In hard-bottom landscapes the 50% and 95% KDEs ranged from 141 to 982 m² and from 1325 to 6467 m², respectively (Table 5-2). Because each reef, not each fish, represented the experimental replicate, I calculated the mean 50% and 95% KDEs of all good fish on each reef (Table 5-1). Gag core (Figure 5-5) and extended areas were significantly larger in hard-bottom landscapes, by factors of 7.6 and 6.6, respectively (50% KDE: 1-sided t-test, t = 3.7655, d.f. = 2.085, p = 0.03, Welch correction for unequal variances; 95% KDE: 1-sided t-test, t = 3.6176, d.f. = 4, p = 0.01). From deployments in 2007 and 2008 (Chapter 4), I estimated that stable KDEs required 10 to 25 days of position data. Stabilization curves for gag used in this experiment showed that in sand-bottom landscapes, where space use was limited, 50% and 95% KDEs were stable within days of tagging (Figure 5-6). In hard-bottom landscapes, KDEs increased gradually over the entire deployment, suggesting that space use was underestimated.

For each individual I calculated the median distance from the reef, DFR (Table 5-2). Then, for each reef, I calculated the mean median DFR (Table 5-1), which was significantly larger in hard-bottom landscapes (1-sided t-test, t = 2.9156, d.f. = 2.011, p = 0.0498, Welch correction for unequal variances).

Combining the recorded positions of all individuals in each landscape type, the distribution of distances from the reef (Figure 5-7) showed the peak distance in sand-bottom landscapes at 4 m, with little movement beyond 10 m. In hard-bottom landscapes the peak distance from the reef was 12 m, with movement beyond 100 m. Individual distributions were roughly similar. In all cases space use in sand-bottom landscapes was reef centered, but for one or two gag in hard-bottom landscapes it was not (e.g., Figure 5-3d). I also calculated the reef mean median travel speeds and found no significant difference between treatments (2-sided t-test: t = 0.2078, d.f. = 4, p = 0.85). Distributions of travel speeds for all individuals showed little movement faster than 0.4 m/s in either landscape, but in sand-bottom landscapes there was relatively more movement at speeds near 0.15 m/s (Figure 5-8). Only one of the eight fish with sensor tags gave consistent, continuous telemetry results, and an inspection of its altitude time series did not show any patterns.

Space Use and Environmental Conditions

Kernel density estimates gave a single measure of space use over the entire deployment duration, but because the distance from the reef was recorded for each fish position, it was possible to explore changes in gag space use relative to time of day, phase of the moon, and water temperature (Figure F-4). I fit generalized additive models (GAMs; Hastie and Tibshirani, 1986; Wood, 2006) to data from both treatments in R (R Development Core Team, 2010), as described in Chapter 4. With this approach it is important to remember that, despite the large number of points, the data represent only 27 individuals on six reefs, and that this type of analysis examines correlations, not causal relationships. Nevertheless, visual inspections of GAM fits revealed treatment differences in correlations between space use and environmental conditions.
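A minimal sketch of one such fit is shown below, assuming the mgcv package and a positions data frame with columns dfr (distance from the reef, m) and tod (time of day, h); the package choice and column names are my assumptions. The cyclic spline matches the "cyclic fits of DFR ~ TIME" noted in the Figure 5-9 caption.

library(mgcv)   # assumed GAM implementation; the text does not name the package

## Hypothetical positions for one individual
pos <- data.frame(tod = runif(5000, 0, 24))
pos$dfr <- 12 + 4 * sin(2 * pi * pos$tod / 24) + rexp(5000, rate = 0.2)

## Cyclic cubic regression spline so the fit joins smoothly at midnight
fit <- gam(dfr ~ s(tod, bs = "cc", k = 10), data = pos,
           knots = list(tod = c(0, 24)))
summary(fit)
plot(fit, shade = TRUE)   # fitted diel pattern with a confidence band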

Though each reef was an experimental replicate, inspecting GAM fits to individuals' data revealed behavioral variation that contributed to differences in behavioral patterns between treatments. There was no clear correlation between the time of day and the distance from the reef (Figure 5-9), though some individuals spent more time farther from the reef during the nighttime, a pattern opposite that observed in the 2007 and 2008 deployments. The fact that individual deployments covered less than half of a lunar cycle and experienced relatively constant water temperatures made it difficult to assess correlations with the distance from the reef: there were no clear correlations (Figures F-5 and F-6). Combining all individuals in hard-bottom landscapes, gag travel speed was faster during evening and nighttime hours (about 0.18 and 0.13 m/s, respectively), though the pattern was weaker, and in fact opposite, for some individuals (Figure F-7). There were no clear relationships between gag travel speed and the phase of the moon (Figure F-8) or water temperature (Figure F-9). Appendix G shows R code for these and all other calculations.

Space Use and Habitat Preference

Within 100 m of the three sand-bottom reefs the entire landscape was classified as sand-bottom habitat. The fraction of hard-bottom cover within 100 m of the reefs in the three hard-bottom landscapes was 0.64, 0.29, and 0.53 (Table 5-1; Figures 5-10a, F-10a, and F-11a). In hard-bottom landscapes, where there was a choice between habitat types, the overall habitat preference indices for individuals ranged from 0.04 to 1.92 (Table 5-2), with reef means of 0.52, 0.55, and 0.91 (Table 5-1). Comparing fractional hard-bottom coverage and hard-bottom use in expanding concentric rings did not show a consistent preference for one habitat type (Figures 5-10b, F-10b, and F-11b).

Even though the overall landscape composition affected space use, the gag's choice of position seemed to be more related to proximity to the reef than to habitat type.

Growth and Condition

Between October 2009 and March 2010, 14 gag were sampled from sand-bottom reefs and 18 gag from hard-bottom reefs. Of these, three individuals from sand bottom and two from hard bottom had acoustic transmitters. Because of small and uneven sample sizes, growth and condition analyses pooled individuals into either sand- or hard-bottom treatments. Weights, lengths, and fractional ages are shown in Table 5-3. Total lengths of gag collected from sand-bottom reefs ranged from 343 to 697 mm, while those from hard-bottom reefs ranged from 348 to 742 mm (Figure 5-11a). Fractional ages ranged from 1.9 to 4.9 years for sand-bottom and 1.6 to 4.8 years for hard-bottom reefs (Figure 5-11b). Neither fish lengths (2-sided t-test: t = 0.2656, d.f. = 30, p = 0.79) nor ages (2-sided t-test: t = 0.2373, d.f. = 30, p = 0.81) differed significantly between treatments, though having all but two individuals in the sand-bottom landscapes between 2.5 and 3.5 years old makes the power of the regression low. The regressions for total length as a function of fractional age were L_T = 146.8 + 117.3 * Age_F for sand bottom and L_T = 176.4 + 106.1 * Age_F for hard bottom; analysis of covariance (ANCOVA) showed no significant difference between the slopes or the intercepts (ANOVA: slopes p = 0.76, elevation p = 0.79; Figure 5-12a). The pooled regression relating age to length was L_T = 170.9 + 108.5 * Age_F. The regression equations for the log-log length/weight relationship were log10(W) = -11.3 + 3.0 * log10(L_T) for soft bottom and log10(W) = -11.6 + 3.1 * log10(L_T) for hard bottom (Figure 5-12b); again, ANCOVA showed no significant difference between habitats (slopes p = 0.67, elevations p = 0.68).

The pooled regression relating length to weight was W = 3.5 x 10^-12 * L_T^3.03. Based on these allometric relationships, length as a function of age and weight as a function of length did not differ between treatments.

Discussion

These results support the model prediction (Chapter 2) that gag range farther from the shelter in hard-bottom landscapes, suggesting that space use might reflect an optimization of the trade-off between predation risk and foraging competition. By several measures, gag in hard-bottom landscapes used about seven times the area used by gag in sand-bottom landscapes (Tables 5-1 and 5-2; Figure 5-5), and though there was no clear evidence that the phase of the moon (Figure F-5) or water temperature (Figure F-6) affected space use, the time of day was sometimes correlated with space use for some individuals (e.g., Figures 5-4 and 5-9). There was no difference in gag size (Figure 5-2) or abundance at the time of tagging, nor in recent growth and condition of gag collected at the end of the experiment (Figures 5-11 and 5-12), suggesting either that space-use differences in response to the landscape do not affect fitness, or that my design and sampling were insufficient to detect true differences. The full implications of these results in the broader context of model predictions, gag ecology, and animal movement will be elaborated in Chapter 6. Here I interpret my results primarily in the context of gag space use.

As expected, differences in the median distance from the reef reflected the same general patterns as differences in home range size: median DFR was more than four times greater in hard- than sand-bottom landscapes (Figure 5-7). Interestingly, in either landscape, gag spent most time near the reef, not directly at the reef. Based on my prior knowledge of gag natural history, the most plausible driving factor behind the

between-landscape differences is that proximity to physical shelter (i.e., a reef) and visual shelter (i.e., the camouflage potential of hard-bottom habitats) lowered the risk of predation mortality. In this hypothesis, gag were reef associated because of the quick access to shelter, and gag in hard-bottom landscapes used more of the surrounding landscape because lower predation risk near hard-bottom habitats freed them to lower their experienced density, and thus foraging competition, by expanding their space use. In this way gag might have been managing their fitness by increasing their foraging success through expanded foraging areas in less risky hard-bottom landscapes. In contrast, gag in more risky sand-bottom landscapes might have sacrificed the foraging opportunities and lower competition farther from the reef in order to limit their predation risk. This interpretation assumes that gag prey were abundant in the area around the reef.

An alternative interpretation of these results is that increased space use in hard-bottom landscapes might be driven predominantly by prey availability rather than predation risk. If gag prey in sand-bottom landscapes were tightly associated with the reef and absent from surrounding areas, then there would have been no benefit to moving off the reef, even in the absence of predation risk. Conversely, if prey in hard-bottom landscapes move more widely across the landscape (e.g., because of increased camouflage protection from gag), then gag might increase their space use, not to decrease foraging competition, but to follow their prey. During warm summer months the primary prey of gag is small, schooling pelagic fish (Weaver, 1996). The space-use patterns and habitat preferences of these prey are not well known, but some evidence suggests that during the night, typically tight daytime schools break apart as individuals

distribute themselves more evenly within the water column above the landscape (Nagy, unpublished data). There is some suggestion that during early dawn, prey cue on physical structures to re-form schools, but throughout the day it is unclear how strongly they associate with structure or particular habitat types (Nagy, unpublished data). As gag meander across the landscape they are potentially tracking the movement of prey schools (Bullock and Smith, 1991; Hobson, 1968) and taking advantage of synergistic foraging opportunities (Hixon and Carr, 1997) when the presence of transient predators drives schooling prey closer to the seafloor. Whether space-use differences in sand- and hard-bottom landscapes were most influenced by predation risk or prey distributions, theory suggests gag balance both predation risk and foraging competition. My results support the idea that gag alter their space-use patterns within home ranges in response to predation risk and foraging competition.

As proxies for fitness I compared recent growth and condition of gag living in sand- and hard-bottom landscapes. At the start of the experiment (tagging dates spread over six months, Table 5-1) there was no significant difference between gag in the two landscapes: there was no detectable difference in the abundance or size of all gag observed on the reefs, nor in the tagged individuals. At the end of the experiment (collection dates were largely spread over five months, Table 5-3) there was still no significant difference in length or age between individuals collected from sand- and hard-bottom reefs (Figure 5-11). Assuming that collected individuals had been resident on experimental reefs during the entire experiment, to estimate recent growth of gag presumably living in the experimental landscapes I compared length as a function of age, and condition (Figure 5-12), and did not detect a significant difference between

landscapes. Because of the limits of my study design, it is difficult to draw any conclusions about the effect of landscape composition on gag performance and fitness. There may have been growth and condition differences between landscapes undetected by my analysis; the effects of habitat on fitness have been clearly seen in many systems (e.g., Claireaux and Lefrançois, 2007; Franklin et al., 2000; Munday, 2001). If there truly was no difference in growth or condition, this might suggest that landscape structure did not play an important role in growth, though risk might still have been affected. Other factors potentially influenced the landscape-growth relationship, for example if low prey densities in the region diminished the growth benefits typical of the habitat usually better for gag growth. Alternatively, because the Gulf of Mexico gag population was substantially below its ecologically historic level (SEDAR, 2006), density-dependent processes did not manifest the density-dependent pattern of differential growth in different habitats. A comparison of relative weights of my collected gag and the current population might help clarify this; future studies could improve our understanding of the effect of landscape on gag fitness.

I assumed that gag collected for performance measures remained resident on experimental reefs during the entire experiment. Residency estimates vary widely about the mean residency time of 10 months (Lindberg et al., 2006), but the fact that my end-of-season fish collections recaptured only six of 56 tagged individuals (Table 5-3), and the fact that 29 tagged individuals left the hydrophone array within days of tagging, suggested that not all collected individuals had been resident on the experimental reefs throughout the summer. However, if those individuals had been resident on other nearby SFMA reefs in similar landscapes and were only visiting or had recently moved to

the collection reef, or, similarly, had left the tagging reef because of the tagging experience, then tests of landscape comparisons might still be valid if landscape conditions around nearby reefs were equivalent. In this light, the separation of sand- and hard-bottom reefs into distantly spaced mosaics (Figure 5-1) could have strengthened differences in landscape effects, if landscape composition was relatively homogeneous around reefs of one treatment and relatively different between treatments.

I make a final observation regarding the behavioral plasticity among and within individual gag. Previous tagging studies (McGovern et al., 2005; Lindberg et al., 2006) reported one gag recaptured less than 2 km from the release site after more than four years, while another gag was recaptured thousands of kilometers away within nine months of the last sighting at the release point. From my studies, variation in space use between treatments (Table 5-1, Figure 5-5), between individuals within treatments (Tables 4-1 and 5-2, Figure 5-5), and over time for a single individual (e.g., Figures E-5 and 5-4) demonstrated substantial plasticity at the scale of tens or hundreds of meters. Also, periods of absence from the reef by some individuals might represent switching between intensive and extensive search behavior (Ferran et al., 1994) or between foraging within a well-known home range and exploration of lesser-known landscape. This variation at multiple scales highlights the importance of incorporating individual variation into our understanding and management of gag and similar fish populations. Such behavioral plasticity can account for differences in gag space-use estimates. Compare my 95% KDEs (less than 1000 m²), calculated from high spatial and temporal resolution data over roughly two weeks, with earlier home range estimates (up to 9400 m²; Kiel, 2004) calculated from coarse-resolution data over many months. Gag space-use behavior is

a complex and possibly hierarchical process; to measure or summarize it with a single metric can produce an incomplete understanding and hinder our ability to link research across scales.

For gag, and for similar mobile reef fish, the needs of resource acquisition and predator avoidance likely present conflicting motivations affecting the choice of position in the landscape. The strategy for balancing these needs can be affected by the landscape composition, resulting in the space-use patterns I observed. This is especially true when habitats present different risks and foraging opportunities.

Table 5-1. Mean measurements and behavior of all gag on a reef.

Deployment^a | Dates | HB Cover^b | Total Gag on Reef | No. Gag^c | Gag Size^d (mm) | Mean Median DFR^e (m) | Mean 50% KDE (m²) | Mean 95% KDE (m²) | Mean Median SPD_G (m/s) | Habitat Preference Index^f
A-HB-1 | 3 June-17 June | 0.64 | 53 | 3 | 522 ± 127 | 12.9 | 282 | 1768 | 0.155 | 0.52
B-SB-1 | 13 July-27 July | 0.00 | 49 | 0 | — | — | — | — | — | —
C-SB-2 | 4 Aug-20 Aug | 0.00 | 46 | 5 | 398 ± 23 | 4.1 | 40 | 237 | 0.162 | —
D-HB-2 | 25 Aug-8 Sep | 0.29 | 22 | 6 | 403 ± 18 | 40.2 | 359 | 2939 | 0.113 | 0.55
E-SB-3 | 16 Sep-1 Oct | 0.00 | 25 | 4 | 390 ± 43 | 4.5 | 41 | 256 | 0.122 | —
F-HB-3 | 13 Oct-27 Oct | 0.53 | 43 | 4 | 511 ± 77 | 30.8 | 595 | 4044 | 0.154 | 0.91
G-SB-4^g | 18 Nov-30 Nov | 0.00 | 15 | 5 | 471 ± 70 | 5.6 | 82 | 825 | 0.125 | —

a. Abbreviations SB and HB indicate sand-bottom and hard-bottom landscapes, respectively.
b. Fractional hard-bottom cover within 100 m of the reef.
c. Number of gag giving consistent, continuous telemetry results.
d. Mean and standard deviation of total length of tagged gag.
e. For each individual I calculated the median distance from the reef, DFR; for each reef I calculated the mean of these median values.
f. Mean habitat preference index of all individuals on a reef. The habitat preference index is the fraction of recorded positions over hard bottom divided by the fraction of hard bottom available within 100 m.
g. This deployment used the same reef as Deployment B-SB-1.

Table 5-2. Measurements and behavior of individual tagged gag.

Fish ID | Deployment | W (kg) | L_T (mm) | L_F (mm) | Number of Days | Mean Positions per Day^a | Median DFR (m) | Median Speed (m/s) | 50% KDE (m²) | 50% Radius^b (m) | 95% KDE (m²) | 95% Radius^b (m) | Habitat Preference Index^c
1 | C-SB-2 | 1.0 | 432 | 425 | 17 | 8554 | 3.5 | 0.155 | 33 | 3.2 | 218 | 8.3 | —
2 | C-SB-2 | 0.7 | 376 | 362 | 17 | 8726 | 4.5 | 0.164 | 37 | 3.4 | 253 | 9.0 | —
3 | C-SB-2 | 0.7 | 390 | 376 | 17 | 9979 | 4.5 | 0.168 | 46 | 3.8 | 329 | 10.2 | —
4 | C-SB-2 | 0.8 | 410 | 397 | 13 | 8381 | 3.7 | 0.161 | 33 | 3.2 | 172 | 7.4 | —
5 | C-SB-2 | 0.8 | 382 | 371 | 14 | 6566 | 4.1 | 0.163 | 51 | 4.0 | 213 | 8.2 | —
6 | E-SB-3 | 0.9 | 430 | 416 | 16 | 9590 | 6.0 | 0.161 | 81 | 5.1 | 507 | 12.7 | —
7 | E-SB-3 | 0.9 | 419 | 409 | 16 | 5875 | 4.5 | 0.138 | 34 | 3.3 | 244 | 8.8 | —
8 | E-SB-3 | 0.6 | 375 | 368 | 16 | 6307 | 3.4 | 0.118 | 22 | 2.6 | 138 | 6.6 | —
9 | E-SB-3 | 0.6 | 336 | 327 | 14 | 1814 | 4.2 | 0.072 | 28 | 3.0 | 134 | 6.5 | —
10 | G-SB-4 | 1.3 | 479 | 465 | 10 | 4795 | 6.1 | 0.145 | 39 | 3.5 | 566 | 13.4 | —
11 | G-SB-4 | 1.2 | 466 | 458 | 10 | 3542 | 6.5 | 0.128 | 95 | 5.5 | 534 | 13.0 | —
12 | G-SB-4 | 0.8 | 407 | 393 | 10 | 3499 | 4.0 | 0.105 | 49 | 3.9 | 901 | 16.9 | —
13 | G-SB-4 | 0.9 | 420 | 404 | 10 | 4147 | 8.0 | 0.124 | 206 | 8.1 | 1943 | 24.9 | —
14 | G-SB-4 | 2.6 | 583 | 568 | 11 | 5227 | 2.8 | 0.124 | 20 | 2.5 | 182 | 7.6 | —
15 | A-HB-1 | — | 589 | 563 | 15 | 3154 | 15.6 | 0.132 | 444 | 11.9 | 2317 | 27.2 | 0.36
16 | A-HB-1 | — | 602 | 583 | 15 | 2549 | 7.9 | 0.181 | 141 | 6.7 | 1395 | 21.1 | 0.38
17 | A-HB-1 | — | 376 | 363 | 15 | 9331 | 15.4 | 0.151 | 261 | 9.1 | 1592 | 22.5 | 0.82
18 | D-HB-2 | 1.0 | 434 | 420 | 13 | 1080 | 88.2 | 0.045 | 190 | 7.8 | 4388 | 37.4 | 0.54
19 | D-HB-2 | 0.8 | 410 | 398 | 15 | 11318 | 13.1 | 0.150 | 280 | 9.4 | 2151 | 26.2 | 0.09
20 | D-HB-2 | 0.8 | 390 | 380 | 15 | 8683 | 12.6 | 0.156 | 301 | 9.8 | 2048 | 25.5 | 0.23
21 | D-HB-2 | 0.7 | 403 | 388 | 15 | 2722 | 15.0 | 0.092 | 234 | 8.6 | 1988 | 25.2 | 0.45
22 | D-HB-2 | 0.7 | 385 | 373 | 15 | 2074 | 78.8 | 0.058 | 169 | 7.3 | 1325 | 20.5 | 1.92
23 | D-HB-2 | 0.7 | 393 | 380 | 15 | 9936 | 33.2 | 0.178 | 982 | 17.7 | 5736 | 42.7 | 0.04
24 | F-HB-3 | 1.3 | 483 | 469 | 14 | 6696 | 43.7 | 0.163 | 593 | 13.7 | 2898 | 30.4 | 0.77
25 | F-HB-3 | 3.0 | 625 | 604 | 14 | 4493 | 15.1 | 0.166 | 642 | 14.3 | 4092 | 36.1 | 1.07
26 | F-HB-3 | 1.2 | 481 | 468 | 14 | 7603 | 19.3 | 0.159 | 455 | 12.0 | 2720 | 29.4 | 1.00
27 | F-HB-3 | 1.1 | 455 | 441 | 14 | 2549 | 45.0 | 0.126 | 688 | 14.8 | 6467 | 45.4 | 0.81

a. The mean number of filtered, unaveraged positions.
b. The radius of a circle with an area approximately equal to the area of the KDE.
c. The fraction of recorded positions over hard bottom divided by the fraction of hard bottom available within 100 m.

Table 5-3. Measurements of gag collected for growth and condition analysis.

Fish ID^a | Collection Location^b | Collection Date | W (g) | L_T (mm) | L_F (mm) | Fractional Age^c | Tagged
28 | C-SB-2 | 8 Mar 2010 | 613.3 | 362 | 355 | 1.9 | No
29 | C-SB-2 | 8 Mar 2010 | 1350.6 | 485 | 473 | 2.9 | No
30 | C-SB-2 | 8 Mar 2010 | 2013.4 | 538 | 523 | 2.9 | Yes^d
31 | C-SB-2 | 8 Mar 2010 | 3981.6 | 697 | 680 | 4.9 | No
32 | E-SB-3 | 8 Mar 2010 | 506.4 | 343 | 334 | 2.9 | No
33 | E-SB-3 | 8 Mar 2010 | 569.6 | 364 | 355 | 2.9 | No
7 | E-SB-3 | 8 Mar 2010 | 1511.0 | 489 | 478 | 2.9 | Yes
34 | E-SB-3 | 8 Mar 2010 | 1942.5 | 553 | 535 | 2.9 | No
35 | G-SB-4 | 14 Dec 2009 | 692.2 | 386 | 373 | 2.7 | No
36 | G-SB-4 | 14 Dec 2009 | 814.5 | 404 | 390 | 2.7 | No
10 | G-SB-4 | 14 Dec 2009 | 1469.6 | 485 | 473 | 2.7 | Yes
37 | G-SB-4 | 14 Dec 2009 | 1522.3 | 493 | 480 | 2.7 | No
38 | G-SB-4 | 14 Dec 2009 | 2628.9 | 602 | 584 | 2.7 | No
39 | G-SB-4 | 13 Jul 2009 | 4413.7 | 695 | 672 | 3.3 | No
40 | A-HB-1 | 18 Nov 2009 | 540.2 | 348 | 339 | 1.6 | No
41 | A-HB-1 | 18 Nov 2009 | 548.0 | 356 | 344 | 2.6 | No
42 | A-HB-1 | 18 Nov 2009 | 590.8 | 366 | 355 | 1.6 | No
43 | A-HB-1 | 18 Nov 2009 | 643.9 | 367 | 358 | 1.6 | No
44 | A-HB-1 | 18 Nov 2009 | 809.9 | 377 | 364 | 1.6 | No
45 | A-HB-1 | 18 Nov 2009 | 740.4 | 381 | 371 | 1.6 | No
17 | A-HB-1 | 18 Nov 2009 | 1055.7 | 435 | 424 | 2.6 | Yes
46 | A-HB-1 | 18 Nov 2009 | 1027.8 | 446 | 432 | 3.6 | No
47 | A-HB-1 | 18 Nov 2009 | 1401.1 | 478 | 464 | 2.6 | No
48 | A-HB-1 | 18 Nov 2009 | 1626.6 | 506 | 490 | 2.6 | No
49 | A-HB-1 | 18 Nov 2009 | 6139.4 | 726 | 703 | 4.6 | No
50 | D-HB-2 | 8 Mar 2010 | — | 565 | 549 | 2.9 | No
51 | F-HB-3 | 13 Oct 2009 | 616.5 | 390 | 383 | 2.5 | No
52 | F-HB-3 | 27 Jan 2010 | 807.6 | 412 | 404 | 2.8 | No
26 | F-HB-3 | 8 Mar 2010 | 1289.7 | 487 | 474 | 2.9 | Yes
53 | F-HB-3 | 27 Jan 2010 | 3034.9 | 634 | 614 | 3.8 | No
25 | F-HB-3 | 27 Jan 2010 | 3133.7 | 640 | 625 | 4.8 | Yes
54 | F-HB-3 | 27 Jan 2010 | 4948.1 | 742 | 715 | 4.8 | No

a. Fish ID numbers continue from Table 5-2, except for recaptured tagged individuals, which retain their Table 5-2 IDs.
b. Deployment reef of collection location. Recaptured individuals were collected from their original tagging reefs.
c. Assigned based on the number of opaque zones plus the fraction of the year transpired at the time of capture since 1 April.
d. Of recaptured tagged individuals, only ID 30 did not give consistent, continuous telemetry results and is not included in Table 5-2; at the time of tagging, its weight, total length, and fork length were 1.2 kg, 449 mm, and 432 mm, respectively.

Figure 5-1. Experimental reefs and the Steinhatchee Fisheries Management Area. The sand-bottom reefs were located in the inner mosaic; the hard-bottom reefs were in the outer mosaic.

Figure 5-2. Size distributions of all (a) and tagged (b) gag observed at experimental reefs at the time of tagging.

Figure 5-3. Telemetered two-dimensional positions for four individuals. Panels a) and b) show individuals ID 1 and ID 12, in sand-bottom landscapes; c) and d) show individuals ID 17 and ID 23, in hard-bottom landscapes. Each point represents a single position datum and has a 5% density, so that a fully black dot represents at least twenty recorded positions at that location. In all cases the reef is at 0 m Easting, 0 m Northing.

Figure 5-4. Time series of the distance from the reef for Fish ID 17, also shown in Figure 5-3c. In this and similar figures, each panel shows all positions on a single day. The density of each point is 5%, so that a fully black dot represents at least twenty recorded positions at that location.

Figure 5-5. Core area kernel density estimates. Fifty percent KDEs of individual gag (black points; Table 5-2) on one reef are grouped into a single column, with the number of individuals indicated above or below the column. Deployment means (red lines; Table 5-1) represent experimental replicates and are used to calculate treatment means (blue lines) and standard deviations (black boxes).

Figure 5-6. Home range stabilization curves. I used progressively more days to calculate 50% (a) and 95% (c) KDEs in sand-bottom landscapes. Each curve represents a single individual. Curves for individuals in hard-bottom landscapes are shown in b) and d).

Figure 5-7. Distributions of calculated distances from the reef of all gag in sand- (a) and hard-bottom (b) landscapes. Vertical black lines indicate median values in sand- (4.3 m) and hard-bottom (13.4 m) landscapes. Distributions for individual gag were generally similar.

Figure 5-8. Distributions of calculated travel speeds of all gag in sand- (a) and hard-bottom (b) landscapes. Vertical black lines indicate median speeds in sand- (0.146 m/s) and hard-bottom (0.148 m/s) landscapes.

Figure 5-9. Distance from the reef versus time of day for individuals in hard- (a) and sand-bottom (b) landscapes. Here and in similar figures, each colored point represents a single position datum and has a 5% density, so that a solid dot represents at least twenty positions recorded at that location. Colored curves represent GAM fits to each individual's data, and the ribbon around each curve represents the 95% confidence interval. The plotted y-axis range highlights the GAM fits and does not cover the full range of response variable values. GAMs are cyclic fits of DFR ~ TIME.

Figure 5-10. Habitat composition and use in Deployment A-HB-1. a) Categorical habitat map, with white and black representing sand- and hard-bottom areas, respectively. The reef location is indicated by an X. b) The thick solid line shows the fraction of hard bottom in each 1 m thick concentric ring around the reef. Also shown are the fractions of recorded positions over hard bottom within each ring for each individual (thin solid lines) and for all individuals combined (thick dashed line).

Figure 5-11. Size (a) and fractional age (b) distributions of gag collected for growth and condition analyses.

Figure 5-12. a) Total length versus fractional age of gag collected for growth and condition analyses. b) Log weight versus log total length of gag collected for growth and condition analyses. In both panels, lines represent linear regression equations and ribbons represent 95% confidence intervals.

CHAPTER 6
CONCLUSIONS

An animal's space-use pattern can reflect its strategy for balancing the conflicting needs of food and shelter, because its choice of location in the landscape affects what predation risk and foraging opportunities it will experience. Together, the landscape structure and changing environmental conditions influence what space-use pattern will impart the greatest fitness. Using gag as a model for animals that forage around a central shelter, I asked how landscape structure affects space use and how this, in turn, affects fitness.

My model (Chapter 2) of how a local population alters its space-use pattern to balance predation risk and foraging competition made two key predictions: 1. that individuals move more widely across safer landscapes, and 2. that this increased space use increases fitness. In a safer landscape, the small increase in predation risk farther from the shelter is balanced by the large increase in growth due to higher foraging success, because of lower experienced density and foraging competition. Conversely, the same abundance of animals in a riskier landscape remains close to shelter, suffering a higher experienced density and lower foraging success in order to avoid higher risk away from shelter. In each landscape, the balance of risks and growth opportunities results in different space-use patterns that maximize fitness.

To test these predictions I designed an experimental comparison of space use and fitness of juvenile gag resident on artificial reefs in two landscapes (hard and soft bottom), combining the use of several new and existing technologies. Before the experiment I tested the capabilities, performance, and robustness of the telemetry technology and established that it was able to produce consistent position estimates of several individuals over hundreds of meters for several weeks (Chapter 3). The use of

this telemetry technology, in conjunction with artificial reefs, side-scan sonar imagery, and an Acoustic Doppler Current Profiler, made it possible to address behavioral and landscape questions about an ecologically and economically important fish species in its natural environment, at appropriate and previously unattainable scales, using behavioral and environmental measurements suited to fine-resolution space-use patterns in response to landscape and aquatic conditions.

Next, in preparation for the experiment and to explore the effects of environmental conditions, I deployed the telemetry system in a descriptive study of gag space use (Chapter 4). I found that, in general, gag moved more widely during the day and higher into the water column at night, and that the lunar phase, water conditions, and landscape conditions had unclear or no correlations with space use. There was both variability among individuals and plasticity within individuals' temporal space-use patterns. In addition to providing the first detailed description of gag space use, this study also established the appropriate spatial and temporal scales for studying space use within a home range.

With a better understanding of environmental correlates and telemetry capabilities, I conducted an experiment on gag space use and fitness in two landscapes. I found that gag used about seven times the area in hard-bottom versus sand-bottom landscapes and that they often moved most widely at night, though, again, there was variability and plasticity in movement patterns. I did not detect a difference in gag growth or condition between landscapes. My results supported the model prediction that gag would show expanded space use in hard-bottom (presumably safer) landscapes, suggesting that gag do trade off predation risk for foraging success. Gag centered their activity around

the reef but spent most of their time several meters away from the reef. In potentially riskier sand-bottom landscapes, where there was little camouflage opportunity, gag typically stayed within about 10 m of the shelter. In hard-bottom landscapes, where low sponge, coral, or algal growth offered more camouflage opportunities, potentially lowering predation risk, gag moved more widely across the landscape.

To consider these results in the larger context of gag ecology: if predation risk alone drove space-use decisions for reef-centered animals, gag should have spent all their time directly at the shelter of the reef; if only foraging success were important and prey were distributed across the landscape, gag should have moved to less crowded locations farther from the reef, even potentially abandoning it completely, depending on the space-use patterns of the prey. One interpretation of the fact that gag remained reef attached, but not always directly at the reef, is that gag chose locations to balance risk and foraging success in order to maximize fitness. The relatively fixed landscape structure largely set both the risks and the foraging opportunities, so that gag space use was contracted in riskier sand-bottom landscapes and expanded in safer hard-bottom landscapes. Within the context of the fixed landscape, temporal variation in, for example, diel prey or predator behavior changed the balance of risk and success, leading to temporal variation in gag space-use patterns.

This interpretation of my observed gag space-use patterns assumes that gag prey were distributed across the landscape at and away from the reef, so that in the absence of risk, expanded space use increased foraging success. If instead gag prey responded to landscape structure so that in sand-bottom landscapes they were only found near the reef, and in hard-bottom landscapes they were distributed more widely

(e.g., because of increased camouflage potential), then, even in the absence of predation risk, gag might exhibit the space-use patterns I observed simply because they followed their prey. Though the space-use patterns of gag prey (Weaver, 1996) are not well known, evidence suggests they are not strictly reef attached but move more widely across the landscape (Nagy, unpublished data). This suggests that gag space-use patterns truly reflect a balance of predation risk and foraging opportunities and do not simply match gag prey distributions. Even though I cannot conclusively identify the factors linking landscape structure and gag space use, it is clear that gag use substantially more area in hard-bottom landscapes. It is also likely that landscape and space-use differences affect gag fitness, though my results showed no difference.

The differences in space-use patterns between landscapes, as well as the temporal and individual variation in my results, have implications for understanding space-use ecology and the effective use of fisheries management tools. Though it is generally assumed that density-dependent growth and mortality in the earliest life stages regulate fish populations, there is evidence that density-dependent growth in later life stages also plays a role in population regulation (Lorenzen and Enberg, 2000). If this is true, then differential space use and fitness in different landscapes, of gag in particular and mobile reef fish in general, can have important consequences for population regulation, especially as spatial management tools are applied within the species' range.

Traditional fisheries models average demographic parameters over large spatial scales, masking the impact of landscape variation on fish space use and growth. Many researchers have stressed the importance of spatial heterogeneity and individual variation to population dynamics (Tyler and Rose, 1994; Humston et al., 2004; LePage and Cury, 1996) and pointed to the need for appropriately scaled data (Giske et al., 2001) in spatial models (e.g., Christensen and Walters, 2004). This is particularly true for spatial models where, for example, animal movement between landscape cells is modeled as a random walk. If true animal movement shows plasticity in response to other species, changing environmental conditions, or local landscapes, similar to my observations for gag, then simple random walk models will not likely capture important interactions. Additionally, an understanding of the mechanisms, not just the empirical patterns, of spatial population dynamics will improve our ability to predict population dynamics in the face of changing global environmental conditions, exploitation pressures, and habitat change.

The design and location choice of effective marine protected areas requires a clear understanding of landscape structure and how it affects space-use decisions and fitness (Mokievsky, 2009). Similarly, if the larger landscape affects behavior and fitness, then the placement of habitat enhancements (e.g., artificial reefs) or stock enhancements will influence their contribution to population dynamics.

I make one final observation regarding changes in space-use patterns and landscape connectedness beyond the scale of individual home ranges. My observations of excursions beyond the main space-use area (e.g., Figure 4-8, days 2008-11-03 and 2008-11-08, and Figure 5-4, day 2009-06-15) potentially represent switches between intensive and extensive searching behavior (Ferran et al., 1994) or between foraging and exploration behavior. Together with the range of gag travel distances

between tag and recapture (McGovern et al., 2005; Lindberg et al., 2006) and the duration of absences (Lindberg et al., 2006), this hints at the level of landscape connectedness. At least some individuals appear to move temporarily or permanently beyond home range areas easily and regularly. To connect localized individual space use (e.g., my work within home ranges) to population space use, understanding these intermediate-scale movements will be important. A more complete understanding of how landscape effects on individuals integrate to determine population space use and dynamics will require integration of space-use patterns across scales.

Existing theories of animal space use explore different components of the trade-off between predation risk and foraging success. Optimal foraging (Perry and Pianka, 1997) and ideal free distribution (Giske et al., 1998) theories predict space use to maximize fitness without incorporating predation risk. Central place foraging theory (Bakker et al., 2005) indirectly includes risk as the motivation for the cost of transporting resources back to a central shelter for consumption or use. The space-use patterns of some animals are poorly described by these theories because they forage across continuous landscapes, consuming resources where they are found and retreating to shelter when threatened (Sale, 1971; Karnofsky et al., 1989; Lindberg et al., 2006). My model predicted that for such animals the structure of the landscape affects the extent of space use and the resultant fitness, because animals choose their locations in the landscape to balance predation risk and foraging opportunities in order to maximize fitness where different habitats offer different risks and opportunities. For gag, expanded space use in presumably safer

hard-bottom landscapes supported the model's space-use prediction, suggesting that gag maximize their fitness by balancing risk and foraging success. Variation in gag space-use patterns at the scale of hours to days supports foraging arena theory (Walters and Martell, 2004), which proposes that temporally and spatially restricted areas establish variation in risk and foraging opportunities. The gag diel pattern of being closer to or farther from the reef at different times of day supports the theory that risk and foraging arenas change through time. The foraging arena concept and temporal patterns in gag space use suggest an additional dimension for theories like optimal foraging, ideal free distribution, and central place foraging, where costs and benefits are treated as constant through time. Such temporal patterns highlight the fact that theories and models are always simplifications of more complex processes. Animal distributions and driving environmental conditions are rarely temporally or spatially static, and animal movement theories and models which assume that they are risk producing incorrect or incomplete predictions (Tyler and Rose, 1994; Humston et al., 2004; LePage and Cury, 1996).

In addition to describing space use using summary metrics (e.g., median DFR, KDEs), the type of high temporal and spatial resolution movement data I collected for gag can be used to explore individual movement decisions relative to specific landscape structures. It is possible to compare movement decisions in different habitats or at habitat boundaries. One could compare, for example, gag pathway tortuosity (a measure of extensive and intensive search) at different distances from shelter, during day and night, or under different water flow and temperature conditions, as sketched below.
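A simple straightness index (net displacement divided by total path length, one common tortuosity measure) could be computed per movement bout and compared across conditions; the function and column names below are mine.

## Straightness of one movement bout: 1 = straight line, values near 0 = tortuous
straightness <- function(x, y) {
  net  <- sqrt((x[length(x)] - x[1])^2 + (y[length(y)] - y[1])^2)  # net displacement
  path <- sum(sqrt(diff(x)^2 + diff(y)^2))                          # total path length
  net / path
}

## Hypothetical use: split positions into hourly bouts, then compare day versus
## night, or near- versus far-from-reef bouts
## bouts <- split(pos, format(pos$time, "%Y-%m-%d %H"))
## sapply(bouts, function(b) straightness(b$x, b$y))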

This type of mechanistic modeling (Bell, 1991) may be better suited than empirical distribution models to predicting individual and population space use, and population dynamics, in the face of, for example, changing habitat availability or fishing pressure. A better understanding of the mechanisms of movement will also improve our ability to connect observations of individual movement with population distributions and dynamics.

APPENDIX A
DERIVATIONS

The law of cosines

The law of cosines states that for a general triangle with sides a, b, and c, where the angle α lies opposite side a:

a² = b² + c² − 2bc·cos(α)

Population range

The population range, r*, is defined as the point where n(r*) = 0. Setting the density in Equation 2-5 to zero and substituting Equation 2-1 yields a quadratic in r*. Because S_predator is always larger than S_prey, the positive solution by the quadratic formula gives r*.

PAGE 165

165 Minimum risk threshold the arccos of the critical angle, c in Equation 2 1 equals 1. Because S predator is always larger than S prey the positive solution by the quadratic formula is: Maximum risk threshold The ma inside the arccos of the critical angle, c in Equation 2 1 equals 1. Because S predator is always larger than S prey the positive solution by the quadratic for mula is:

PAGE 166

166
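The resulting expressions, written here in LaTeX notation, are a sketch reconstructed from the corresponding Appendix B functions (theta, calcRstar, rprime, rdprime) rather than the original typeset equations; S_pred, S_prey, D, C, g_max, rho_min, and rho_max are the Chapter 2 parameters used throughout.

\theta_c(r) = \arccos\!\left[\frac{r^2\bigl(1 - S_{pred}^2/S_{prey}^2\bigr) + D^2}{2\,D\,|r|}\right]

r' = \frac{D\,S_{prey}}{S_{pred} + S_{prey}},
\qquad
r'' = \frac{D\,S_{prey}}{S_{pred} - S_{prey}}

r^{*} = \frac{D\,S_{prey}^2\Bigl(\cos\phi - \sqrt{\cos^2\phi - 1 + S_{pred}^2/S_{prey}^2}\Bigr)}{S_{prey}^2 - S_{pred}^2},
\qquad
\phi = \pi\,\frac{C\,g_{max} - \rho_{min}}{\rho_{max} - \rho_{min}}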

PAGE 167

APPENDIX B
MODEL CODE AND EXAMPLES FOR CHAPTER 2

# This appendix contains code for the model in Chapter 2,
# based on a predation risk-growth trade-off.
# Authors: Zy Biesinger, Benjamin M. Bolker, William J. Lindberg
# Contact: zbiesing@ufl.edu
#
# This R code provides model equations and parameter values. Examples
# show how to explore model predictions. Basic variables
# and equations (theta, rho, mu, g, mug, n) are defined. Minimum
# and maximum values of C (cMin, cMax) are calculated in order to
# constrain numerical solutions. The function 'calcRstar' calculates the
# population range for a given C, which is used to set the integration
# bounds in 'findNtot'. The function 'findNtot' calculates the population
# abundance required for individuals to experience a given C; it is used in
# numerical root finding by 'findC'. The function 'findC' uses numerical root
# finding to calculate the population's realized quality, C, for a given
# abundance. Calculations for r prime and r double prime are shown.
#
# With these functions, one can specify a population abundance, use 'findC' to
# calculate the realized habitat quality experienced by the population, then
# use 'n' and 'calcRstar' to calculate the population distribution and range.

################################################################
# Parameter Values
sPrey = 1       # maximum travel speed of a prey
sPred = 1.6     # maximum travel speed of a predator
dd = 10         # D, distance at which a prey detects a predator
rhoMax = 0.05   # risk of capture, upon predator sighting, when far from shelter
rhoMin = 0.005  # risk of capture, upon predator sighting, when at shelter
gMax = 0.01     # maximum possible growth
bG = 10         # strength of density dependence in growth
bMu = 0.1       # strength of density dependence in mu
cc = 0.5        # C, ideal free distribution constant = mu/g
smallNumb = 0.00001  ## fudge factor for comparisons that shouldn't be zero

# Intermediate function to control how parameters are passed through
# nested functions
zcall <- function(fun, ...) {
  L <- list(...)
  ## pass only arguments of calling function, not all objects in
  ## environment of calling function
  objlist <- setdiff(names(formals(sys.function(-1))), c("...", names(L)))
  vals <- lapply(as.list(objlist), get, pos=sys.frame(-1))
  names(vals) <- objlist
  do.call(fun, c(vals, list(...)))
}

# Critical angle between the shelter, prey, and predator
theta = function(r, ddLocal=dd, sPredLocal=sPred, sPreyLocal=sPrey, ...){
  term = ( (r^2)*(1-(sPredLocal/sPreyLocal)^2) + ddLocal^2 )/( 2*ddLocal*abs(r) )
  term = ifelse(term > 1, 1, ifelse(term < -1, -1, term)) # restricted within [-1,1]
  acos(term)
}

# Density-independent probability of capture, upon predator sighting, at distance r
rho = function(r, rhoMaxLocal=rhoMax, rhoMinLocal=rhoMin, ...){
  (rhoMaxLocal - rhoMinLocal) * zcall(theta, r=r, ...)/pi + rhoMinLocal
}

# Density-dependent predation mortality risk
mu = function(r, n, bMuLocal=bMu, ...){
  zcall(rho, r=r, ...) / (1 + bMuLocal*n)
}

# Density-dependent growth
g = function(n, gMaxLocal=gMax, bGLocal=bG, ...){
  gMaxLocal / (1 + bGLocal*n)
}

# Ratio of mortality risk to growth, a metric of habitat quality
mug = function(r, n, ...){
  zcall(mu, r=r, n=n, ...) / zcall(g, n=n, ...)
}

# Population density distribution for a given value of C
n = function(r, ccLocal, gMaxLocal=gMax, bMuLocal=bMu, bGLocal=bG, ...){
  numerator = ccLocal * gMaxLocal * (1 - bMuLocal)
  numerator = numerator - (1 - bGLocal) * zcall(rho, r=r, ...)
  denominator = bGLocal * zcall(rho, r=r, ...) - ccLocal * gMaxLocal * bMuLocal
  fraction = (numerator / denominator) - 1
  return(fraction)
}

### Calculate the minimum and maximum C values; for constraining numerical solutions.
# The best habitat quality: C at the bottom of the intrinsic habitat quality basin
cMin = function(rhoMinLocal=rhoMin, gMaxLocal=gMax, ...){rhoMinLocal/gMaxLocal}

# The maximum value (lowest quality) of experienced habitat quality
cMax = function(rhoMinLocal=rhoMin, rhoMaxLocal=rhoMax, gMaxLocal=gMax,
  bGLocal=bG, bMuLocal=bMu, ...){
  # There are two constraints on cMax...
  # The lowest habitat quality: C at the rim of the intrinsic habitat quality basin
  cMax1 = rhoMaxLocal / gMaxLocal
  # The value of C that makes the denominator of n(r) equal zero
  cMax2 = (bGLocal * rhoMinLocal) / (bMuLocal * gMaxLocal)
  # Choose the smaller
  min(cMax1, cMax2) - smallNumb
}

# The local population range, r*
calcRstar = function(ccLocal, ddLocal=dd, sPredLocal=sPred, sPreyLocal=sPrey,
  rhoMinLocal=rhoMin, rhoMaxLocal=rhoMax, gMaxLocal=gMax, ...){
  err.msg = NULL
  # Calculate cMin and cMax
  ansMin = zcall(cMin, ...)
  ansMax = zcall(cMax, ...)
  # C must be between cMin and cMax
  if (ccLocal < ansMin){
    err.msg = paste("In calcRstar(): 'cc' is too small and set to cMin,", ansMin)
    ccLocal = ansMin
  } else if (ccLocal > ansMax){
    err.msg = paste("In calcRstar(): 'cc' is too big and set to cMax,", ansMax)
    ccLocal = ansMax
  }
  if(length(err.msg)>0) {print(err.msg)}
  # Calculate rStar
  term1 = pi * (ccLocal*gMaxLocal - rhoMinLocal) / (rhoMaxLocal - rhoMinLocal)
  term2 = cos(term1)
  term3 = sqrt(cos(term1)^2 - 1 + (sPredLocal^2/sPreyLocal^2))
  term4 = sPreyLocal^2 - sPredLocal^2
  result = ddLocal * sPreyLocal^2 * (term2 - term3) / term4
  return(result)
}

# Calculate the population size necessary for the population to experience a
# given habitat quality, C
findNtot = function(ccLocal, ...){
  # Calculate cMin and cMax
  ansMin = zcall(cMin, ...)
  ansMax = zcall(cMax, ...)
  # C must be between cMin and cMax
  err.msg = NULL
  if (ccLocal < ansMin){
    err.msg = paste("In findNtot(): 'cc' is too small and cc set to cMin.", ansMin)
    ccLocal = ansMin
  } else if (ccLocal > ansMax){
    err.msg = paste("In findNtot(): 'cc' is too big and cc set to cMax.", ansMax)
    ccLocal = ansMax
  }
  if(length(err.msg)>0) {print(err.msg)}
  # Find the population size by numerical integration...
  #
  # Make 'n()' of the appropriate form for 'integrate()'
  fun1 = function(r, ccLocal, ...){zcall(n, r=r, ccLocal=ccLocal, ...) * r}
  # Calculate r*, the upper limit of the integration
  term1 = zcall(calcRstar, ccLocal=ccLocal, ...)
  term2 = try(integrate(fun1, 0, term1, ccLocal=ccLocal, ...))
  # Check for a failure during the integration
  if (class(term2)=="try-error"){stop("In findNtot(): integrate() failed")}
  result = 2*pi*term2$value
  return(result)
}

# Calculate the experienced habitat quality, C, for a given population size
findC = function(nTarget, ...){
  # Calculate cMax and the associated population size when the habitat basin is full
  ansMax = zcall(cMax, ...)
  ansTargetMax = zcall(findNtot, ccLocal=ansMax, ...)
  # Ensure nTarget is between 0 and the population size when the basin is full
  err.msg = NULL
  if (nTarget <= 0){
    nTarget = smallNumb
    err.msg = paste("In findC(): 'nTarget' is <= 0 and set =", nTarget)
  } else if (nTarget > ansTargetMax){
    nTarget = ansTargetMax
    err.msg = paste("In findC(): 'nTarget' is too big and set =", nTarget)
  }
  if(length(err.msg)>0) {print(err.msg)}
  # Create a function for the difference between the given population size
  # and the population size for any given value of C
  fun1 = function(ccLocal, ...){zcall(findNtot, ccLocal=ccLocal, ...) - nTarget}
  # Calculate the maximum range of C
  range = c(zcall(cMin, ...), zcall(cMax, ...))
  result = try( uniroot(fun1, interval=range, ...) )
  # Check for failure during uniroot()
  if (class(result)=="try-error") {stop("In findC(): uniroot() failed")}
  return(result$root)
}

# r'
rprime = function(ddLocal=dd, sPreyLocal=sPrey, sPredLocal=sPred, ...){
  ddLocal * sPreyLocal / (sPredLocal + sPreyLocal)
}

# r''
rdprime = function(ddLocal=dd, sPreyLocal=sPrey, sPredLocal=sPred, ...){
  ddLocal * sPreyLocal / (sPredLocal - sPreyLocal)
}

################################################################
### Example 1. Find the realized quality, population distribution, and
### population range for a given population abundance.
# pick abundance
nTot = 30
# set plotting limits
xaxislimits = c(-18,18)
yaxislimits = c(0,1)
# find the realized quality
ccFound = findC(nTot)
# use C to calculate the population range
rStarFound = calcRstar(ccFound)
# look at the population distribution
par(mfrow=c(1,1))
curve(n(x, ccLocal=ccFound), xlim=xaxislimits, ylim=yaxislimits,
  xlab="Distance from shelter, r")
# add vertical lines showing rStar in red
abline(h=0, v=c(-rStarFound, rStarFound), col=c("black","red","red"))

### Example 2. Show how the population distribution and range change as
### abundance increases.
# pick several abundances
nTotVec = c(5, 20, 30, 50, 70, 95)
# create two empty vectors because findC can't accept vectors
ccVec = vector(length=length(nTotVec))
rStarVec = vector(length=length(nTotVec))
# plot colors
colVec = c("black", "red", "blue", "green", "yellow", "pink")
# calculate C and rStar for each abundance
for(i in 1:length(nTotVec)){
  ccVec[i] = findC(nTotVec[i])
  rStarVec[i] = calcRstar(ccVec[i])
}
# look at the distributions and ranges
par(mfrow = c(1,2))
# figure 1 showing population distributions
curve(n(x, ccLocal=ccVec[length(nTotVec)]),
  xlim=c(-max(rStarVec), max(rStarVec))*1.1,
  xlab="Distance from shelter, r", type='n')
for(i in 1:length(nTotVec)){
  curve(n(x, ccLocal=ccVec[i]), col=colVec[i], add=TRUE)
}
abline(h=0)
# figure 2 showing population ranges
plot(nTotVec, rStarVec, pch=19, col=colVec)

### Example 3. Show how the population distribution and range change as
### detection distance increases.
# pick abundance and several detection distances
nTot = 30
ddVec = c(2,4,6,8,10,12)
# create two empty vectors because findC can't accept vectors
ccVec = vector(length=length(ddVec))
rStarVec = vector(length=length(ddVec))
# plot colors
colVec = c("black", "red", "blue", "green", "yellow", "pink")
# calculate C and rStar for each detection distance
for(i in 1:length(ddVec)){
  ccVec[i] = findC(nTot, ddLocal=ddVec[i])
  rStarVec[i] = calcRstar(ccVec[i], ddLocal=ddVec[i])
}
# look at the distributions and ranges
par(mfrow = c(1,2))
# figure 1 showing population distributions
curve(n(x, ccLocal=ccVec[1]),
  xlim=c(-max(rStarVec), max(rStarVec))*1.1,
  ylim=c(0, n(0, ccLocal=ccVec[1], ddLocal=ddVec[1])),
  xlab="Distance from shelter, r", type='n')
for(i in 1:length(ddVec)){
  curve(n(x, ccLocal=ccVec[i], ddLocal=ddVec[i]), col=colVec[i], add=TRUE)
}
abline(h=0)
# figure 2 showing population ranges
plot(ddVec, rStarVec, pch=19, col=colVec)

### Example 4. Show an intrinsic habitat quality basin, pick a population
### abundance, then show the resultant experienced quality and population
### range.
# pick abundance and detection distance
nTot = 30
ddPicked = 10
# set plotting limits
xaxislimits = c(-40,40)
yaxislimits = c(0,1)
# find the realized quality
ccFound = findC(nTot, ddLocal=ddPicked)
# use C to calculate the population range
rStarFound = calcRstar(ccFound, ddLocal=ddPicked)
par(mfrow=c(2,1))
# figure 1 showing the population distribution
curve(n(x, ccFound, ddLocal=ddPicked), xlab="Distance from shelter, r",
  xlim=xaxislimits, ylim=yaxislimits)
abline(h=0)
# figure 2 showing the intrinsic habitat quality basin...
curve(mug(x, n=0, ddLocal=ddPicked), xlim=xaxislimits,
  xlab="Distance from shelter, r")
# ... and realized quality level in red
curve(mug(x, n=n(x, ccLocal=ccFound, ddLocal=ddPicked), ddLocal=ddPicked),
  xlim=c(-rStarFound, rStarFound), add=TRUE, col="red")

PAGE 174

APPENDIX C
TELEMETRY ARRAY ASSESSMENT DEPLOYMENTS

Deployment A: Fish Tagging Study

On 7 December 2007 divers deployed the hydrophone array, collecting in symbol mode, with the central hydrophone 10 m northeast of an experimental reef and the four outer hydrophones 50 m off in the cardinal directions. Each hydrophone and attached sensor beacon was mounted on a post driven into the seafloor. On 9 December 2007, I tagged five fish (in all studies my tagged individuals are Mycteroperca microlepis). The array was left to record transmissions until 15 January 2008.

Deployment B: Single Hydrophone Detection Trial

On 22 July 2008 I suspended a tag 2 m off the seafloor on a weighted buoy line, positioned adjacent to an experimental reef. Divers held a single hydrophone, collecting in code mode, 300 m and then 200 m away for 5 and 7 min, respectively.

Deployment C: Fish Tagging Study

On 9 October 2008 divers deployed the array, collecting in code mode, with the central hydrophone 10 m northeast of the reef and the four outer hydrophones 125 m off in the cardinal directions. Each hydrophone was mounted on a post attached to the seafloor and had an attached sensor beacon. On 17 October 2008, I tagged seven fish. The hydrophone batteries died near 7 December 2008 and the array was recovered 16 December 2008. Prior to deployment, the hydrophones had been set to collect data in code mode, but with an incorrect parameter setting. As a result, ALPS was not able to calculate any position solutions. I was able to salvage some data by identifying codes unique to three beacons. All sensor data were lost.

Deployment D: Array Spacing Trials, 125 m

On 23 April 2009 divers deployed the central hydrophone, collecting in code mode, 10 m northeast of a reef with a sensor beacon. Two sensor tags were suspended 0.5 and 1 m above the reef using a line and float. I deployed the remaining four hydrophones, each attached to a weighted post, from the surface. A rope tied to the top of the post and connected to a surface buoy was used to lower the hydrophone to the seafloor and add vertical stability. The buoy, with its tall, narrow shape, held a GPS unit clear of the water and minimized lateral movement with wave action. This anchor-buoy mounting method allowed me to deploy the outer hydrophones from the boat and get good position estimates of each temporary hydrophone deployment during the spacing trials. During this deployment only one outer hydrophone had an attached sensor beacon. Finally, I suspended a sensor tag 2 m off the seafloor using a weighted buoy line, positioned within, but near the margin of, the array. The full array remained deployed for 77 min.

Deployments E and F: Array Spacing Trials, 100 and 150 m

On 7 May 2009 I continued the array spacing trials as in Deployment D. I mounted the central hydrophone, collecting in code mode, near the reef with a sensor beacon and an ID-only beacon. The sentinel and one sensor tag were suspended above the reef. I deployed the remaining four hydrophones 150 m from the reef using the anchor-buoy method. One outer hydrophone had a sensor beacon and an ID-only beacon. I suspended a sensor tag 2 m off the seafloor near the margin of the array. The full array remained deployed at 150 m spacing for 136 min. The hydrophones were recovered to the boat and reset for a second deployment at 100 m spacing, which lasted 105 min. The outer sensor tag was moved to remain just within the array.

Deployments G and H: Internal Performance Trials

On 1 June 2009 divers deployed the array, collecting in code mode, with the central hydrophone 10 m northeast of the reef and the outer hydrophones 100 m off in the cardinal directions. Each hydrophone was mounted on a post attached to the seafloor. Two hydrophones had sensor beacons and two had ID-only beacons. I suspended the sentinel tag at the reef. Once the array was in place, using a weighted buoy line (without a GPS unit), I suspended one ID-only tag 1 m from the seafloor near the reef, where it remained during the rest of the trial. Next I deployed eight roaming weighted buoy lines carrying nine ID-only tags. The first roaming buoy line (with a surface GPS unit) had two tags at 1 and 2 m from the seafloor; the remaining seven roaming lines (without surface GPS units) had single tags 1 m from the seafloor and were placed at locations throughout the array. Over the next five hours, the roaming lines were moved five times, giving detection and position solution information for 54 transmitters at 48 different locations, plus four beacon, one sentinel, and one non-roaming transmitters at six more locations. The roaming tags spent between 33 and 76 min at each location and the entire performance trial lasted 296 min.

On 3 June 2009, with the hydrophone array (including four beacons and one sentinel) still in place from 1 June 2009, I again placed ten tags on nine weighted buoy lines at nine locations for about 250 min while I tagged fish.

Deployments I-O: Fish Studies

The remaining seven deployments were fish studies with the array deployed around six different experimental reefs. The central hydrophone was 10 m northeast of the reef and the outer hydrophones 100 m off in the cardinal directions. There were two sensor beacons and two ID-only beacons on four hydrophones. The sentinel was at each reef until it was lost on 22 September 2009. During these trials eight fish were tagged per trial and the array was left in place for 14 to 17 days.

PAGE 176

176 APPENDIX D ADDITIONAL CHAPTER 3 FIGURES

Figure D 1. Temporal variation in hourly detection and hourly position solution fractions. a) Hourly detection fractions of a transmitter at the north hydrophone, by each hydrophone, during Deployment G. Positioning of the five graphs reflects actual deployment geometry. b) Hourly position solution fractions of the same transmitter.
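The hourly detection fraction summarized in Figure D 1 could be computed along these lines. This is a minimal sketch in R, assuming a hypothetical vector of detection times (unix time, s) for one transmitter at one hydrophone and a nominal transmission interval; both names and the 20 s default interval are assumptions, not the values used in the dissertation.

hourlyDetectionFraction = function(detectionTimes, txIntervalSec = 20){
  # bin detections by hour and divide by the number of transmissions
  # expected per hour at the nominal transmission interval
  hourBin = floor(detectionTimes / 3600)
  counts = table(factor(hourBin, levels = seq(min(hourBin), max(hourBin))))
  as.numeric(counts) / (3600 / txIntervalSec)
}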

PAGE 177

177 Figure D 2. Temporal variation along the Northing axis of position solutions for the north (circles), central (squares), and south (triangles) hydrophones of Deployments A (a) and K (b). Points represent individual position solutions, dashed horizontal lines indicate the mean position solution, and solid lines represent GPS position estimates. Deployment A north hydrophone: mean position solution = 743 m Northing, GPS 742 m Northing; central hydrophone: mean position solution = 698 m N, GPS 699 m N; south hydrophone: mean position solution = 630 m N, GPS = 639 m N. Deployment K north hydrophone: mean position solution = 2169 m Northing, GPS 2165 m Northing; central hydrophone: mean position solution = 2068 m N, GPS 2070 m N; south hydrophone: mean position solution = 1955 m N, GPS = 1956 m N.

PAGE 178

178 APPENDIX E ADDITIONAL CHAPTER 4 FIGURES

Figure E 1. Telemetered two-dimensional positions and hourly position fractions for Fish ID 1. a) Each point represents a single position datum and has a 5% density, so that a fully black dot represents at least twenty recorded positions at that location. b) The hourly fraction of transmissions resulting in filtered, unaveraged positions.

PAGE 179

179 Figure E 2. Telemetered two-dimensional positions and hourly position fractions for Fish ID 3. a) The points represent all filtered, minute averaged positions around the reef at the center of the figure. Point colors range from blue to red, indicating lower to higher altitudes, respectively. See Figure E 1 caption for more details. b) The hourly fraction of transmissions resulting in filtered, unaveraged positions.

PAGE 180

180 Figure E 3. Telemetered two-dimensional positions and hourly position fractions for Fish ID 4. a) The points represent all filtered, minute averaged positions around the reef at the center of the figure. Point colors range from blue to red, indicating lower to higher altitudes, respectively. See Figure E 1 caption for more details. b) The hourly fraction of transmissions resulting in filtered, unaveraged positions.

PAGE 181

181 Figure E 4. Telemetered two-dimensional positions and hourly position fractions for Fish ID 5. a) The points represent all filtered, minute averaged positions around the reef at the center of the figure. Point colors range from blue to red, indicating lower to higher altitudes, respectively. See Figure E 1 caption for more details. b) The hourly fraction of transmissions resulting in filtered, unaveraged positions.

PAGE 182

182 Figure E 5. Time series of the distance from the reef for Fish ID 1. In this and similar figures, each panel shows all filtered, minute averaged positions on a single day. The density of each point is 5% so that a fully black dot represents at least twenty recorded positions at that location.

PAGE 183

183 Figure E 6. Time series of the distance from the reef for Fish ID 3. See Figure E 5 caption for more details.

PAGE 184

184 Figure E 7. Time series of the distance from the reef for Fish ID 4. See Figure E 5 caption for more details.

PAGE 185

185 Figure E 8. Time series of the distance from the reef for Fish ID 5. See Figure E 5 caption for more details.

PAGE 186

186 Figure E 9. Distance from the reef versus current direction. Here and in similar figures, each colored point represents a single position datum and has a 5% density, so that a solid dot represents at least twenty positions recorded at that location. Colored curves represent GAM fits, and the band around each curve represents the 95% confidence interval. The plotted y axis range highlights GAM fits and does not cover the full range of response variable values. In this figure the GAMs are cyclic fits of DFR ~ DIR W.
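GAM fits of the kind referenced throughout these captions could be produced along the following lines. This is a minimal sketch assuming the mgcv package and a hypothetical data frame fishpos with columns DFR (distance from reef, m) and DIRW (current direction, degrees); it is not the exact fitting code used for the figures.

library(mgcv)  # GAM fitting; cyclic cubic regression splines via bs = "cc"

# hypothetical stand-in for one fish's minute-averaged positions
fishpos = data.frame(DIRW = runif(500, 0, 360))
fishpos$DFR = 20 + 5*sin(fishpos$DIRW*pi/180) + rnorm(500, 0, 3)

# cyclic smooth so the fit matches at 0 and 360 degrees
fit = gam(DFR ~ s(DIRW, bs = "cc"), knots = list(DIRW = c(0, 360)), data = fishpos)
plot(fit, shade = TRUE)  # shaded band approximates the 95% confidence interval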

PAGE 187

187 Figure E 10. Distance from the reef versus current speed. GAMs are fits of DFR ~ SPD W See Figure E 9 caption for more details.

PAGE 188

188 Figure E 11. Altitude above the seafloor versus distance from the reef. GAMs are fits of ALT ~ DFR. See Figure E 9 caption for more details.

PAGE 189

189 Figure E 12. Time series of the altitude above the seafloor for Fish ID 4. See Figure E 5 caption for more details.

PAGE 190

190 Figure E 13. Time series of the altitude above the seafloor for Fish ID 5. See Figure E 5 caption for more details.

PAGE 191

191 Figure E 14. Altitude above the seafloor versus water temperature. GAMs are fits of ALT ~ TEMP See Figure E 9 caption for more details.

PAGE 192

192 Figure E 15. Altitude above the seafloor versus current direction. GAMs are cyclic fits of ALT ~ DIR W See Figure E 9 caption for more details.

PAGE 193

193 Figure E 16. Altitude above the seafloor versus current speed. GAMs are fits of ALT ~ SPD W See Figure E 9 caption for more details.

PAGE 194

194 Figure E 17. Gag travel speed versus time of day. GAMs are cyclic fits of SPD G ~ TIME. See Figure E 9 caption for more details.

PAGE 195

195 Figure E 18. Time series of travel speed for Fish ID 1. See Figure E 5 caption for more details.

PAGE 196

196 Figure E 19. Time series of travel speed for Fish ID 3. See Figure E 5 caption for more details.

PAGE 197

197 Figure E 20. Time series of travel speed for Fish ID 4. See Figure E 5 caption for more details.

PAGE 198

198 Figure E 21. Time series of travel speed for Fish ID 5. See Figure E 5 caption for more details.

PAGE 199

199 Figure E 22. Nighttime gag travel speed versus lunar index. GAMs are cyclic fits of SPD G ~ LUNAR. See Figure E 9 caption for more details.

PAGE 200

200 Figure E 23. Gag travel speed versus water temperature. GAMs are fits of SPD G ~ TEMP See Figure E 9 caption for more details.

PAGE 201

201 Figure E 24. Gag travel speed versus current direction. GAMs are cyclic fits of SPD G ~ DIR W See Figure E 9 caption for more details.

PAGE 202

202 Figure E 25. Gag travel speed versus current speed. GAMs are fits of SPD G ~ SPD W See Figure E 9 caption for more details.

PAGE 203

203 APPENDIX F ADDITIONAL CHAPTER 5 FIGURES

Figure F 1. Time series of the distance from the reef for Fish ID 23, also shown in Figure 5 3d. In this and similar figures, each panel shows all positions on a single day. The density of each point is 5% so that a fully black dot represents at least twenty recorded positions at that location.

PAGE 204

204 Figure F 2. Time series of the distance from the reef for Fish ID 21. See Figure F 1 caption for more details.

PAGE 205

205 Figure F 3. Time series of the distance from the reef for Fish ID 1, also shown in Figure 5 3a. See Figure F 1 caption for more details.

PAGE 206

206 Figure F 4. Aquatic and lunar conditions. Water temperature, current speed, and current direction, measured by an ADCP, are shown for each deployment. The bottom row indicates the lunar phase, where 0 and 1 represent times of new and full moons, respectively. Lunar indices of 1 and 30 correspond to new moons and an index of 15 corresponds to full moons. Because of equipment difficulties there is no water flow data for Deployment C HB 2.

PAGE 207

207 Figure F 5. Distance from the reef versus lunar index for individuals in hard (a) and sand bottom (b) landscapes. Here and in similar figures, each colored point represents a single position datum and has a 5% density, so that a solid dot represents at least twenty positions recorded at that location. Colored curves represent GAM fits to each individual's data, and the band around each curve represents the 95% confidence interval. GAMs are cyclic fits of DFR ~ LUNAR.

PAGE 208

208 Figure F 6. Distance from the reef versus water temperature for individuals in hard (a) and sand bottom (b) landscapes. GAMs are fits of DFR ~ TEMP. See Figure F 5 caption for more details.

PAGE 209

209 Figure F 7. Gag travel speed versus time of day for individuals in hard (a) and sand bottom (b) landscapes. The plotted y axis range highlights GAM fits and does not cover the full range of response variable values. GAMs are cyclic fits of SPD G ~ TIME. See Figure F 5 caption for more details.

PAGE 210

210 Figure F 8. Gag travel speed versus lunar index for individuals in hard (a) and sand bottom (b) landscapes. The plotted y axis range highlights GAM fits and does not cover the full range of response variable values. GAMs are cyclic fits of SPD G ~ LUNAR. See Figure F 5 caption for more details.

PAGE 211

211 Figure F 9. Gag travel speed versus water temperature for individuals in hard (a) and sand bottom (b) landscapes. The plotted y axis range highlights GAM fits and does not cover the full range of response variable values. GAMs are fits of SPD G ~ TEMP. See Figure F 5 caption for more details.

PAGE 212

212 Figure F 10. Habitat composition and use in Deployment D HB 2. a) Categorical habitat map with white and black representing sand and hard bottom, respectively. b) The solid line shows the fraction of hard bottom in each 1 m thick concentric ring around the reef. Also shown are the fractions of recorded positions over hard bottom within each ring for each individual (thin solid lines) and for all individuals combined (thick dashed line).

PAGE 213

213 Figure F 11. Habitat composition and use in Deployment F HB 3. a) Categorical habitat map with white and black representing sand and hard bottom, respectively. b) The solid line shows the fraction of hard bottom in each 1 m thick concentric ring around the reef. Also shown are the fractions of recorded positions over hard bottom within each ring for each individual (thin solid lines) and for all individuals combined (thick dashed line).
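The ring-wise habitat fractions plotted in Figures F 10 and F 11 could be computed along these lines. This is a minimal sketch assuming a hypothetical binary habitat matrix habmap (1 = hard bottom, 0 = sand) on a 1 m grid with the reef at grid cell (reefX, reefY); it is not the exact code used for the figures.

ringFractions = function(habmap, reefX, reefY, maxR = 100){
  # distance of every grid cell from the reef, binned into 1-m-thick rings
  xy = expand.grid(x = 1:nrow(habmap), y = 1:ncol(habmap))
  d  = sqrt((xy$x - reefX)^2 + (xy$y - reefY)^2)
  ring = ceiling(d)
  keep = ring >= 1 & ring <= maxR
  # mean of the 0/1 habitat values within each ring = fraction hard bottom
  tapply(habmap[cbind(xy$x, xy$y)][keep], ring[keep], mean)
}

# example with fake data: a 200 x 200 m map, reef at the center
# habmap = matrix(rbinom(200*200, 1, 0.4), nrow = 200)
# hardFrac = ringFractions(habmap, reefX = 100, reefY = 100)
# plot(as.numeric(names(hardFrac)), hardFrac, type = "l",
#      xlab = "Distance from reef (m)", ylab = "Fraction hard bottom")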

PAGE 214

APPENDIX G
R CODE FOR ALL CALCULATIONS

This appendix contains the R code for calculations and plotting. The beginning of each file is marked with its file name.

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# global variables.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

######################################################################
### This file sets common variable values which are used in multiple other
### files. This way I only have to change them in one place.
######################################################################
### Several of these have been moved to 'metadata.r'

### libraries I use
library("MASS")

dataDir = "C:/zy/The closets/data closet"
sdlDir = "/Telemetry"
adcpDir = "/ADCP Data 2010Sep Final"

### A switch to import all days or some subset
allDays = TRUE

### constants
secPhour = 60 * 60
secPday = 24 * secPhour
secPyear365 = 365 * secPday
secPyear366 = 366 * secPday

######################################################################
### UTM offsets...
### These are offsets meant to be used with all array deployment locations.
### They're chosen to be south and west of all reefs
### See 'SFMA Reefball scanning waypoints.xls'
eastingOffset = 236600
northingOffset = 3261700
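A small worked example of how these offsets are applied: raw UTM eastings and northings are shifted into local array coordinates by subtracting the offsets. The waypoint coordinates used here are hypothetical.

# e.g. for a hypothetical waypoint at UTM (245174.9 E, 3262391.2 N):
localEasting  = 245174.9 - eastingOffset    # = 8574.9
localNorthing = 3262391.2 - northingOffset  # = 691.2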

PAGE 215

215 ################# ##################################################### # Home range estimates depend on the number and size of grid cells you # choose. These are set by the limits and n you choose. # 'homeRange()' expects lims to come as c(minEasting, maxEasting, minNorth ing, maxNorthiong) # For 2007 and 2008...this is big enough to include both array deployment spacings hrlims = c(8440, 8720, 550, 850) # for 2009, because each deployment is in a different location, just pick # the range to put around the reef location... pick a range to match 2007/2008 #8720 8440=280; 850 550=300; Too bad I wasn't smart enough to make it square hrRange = c(140,150) # c(add/subtract from reefEN$easting, ditto reefEN$northing) ################################################################ ###### ### Random plotColors = c( "black", "red", "blue", "green", "yellow", "pink", "brown", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "sal mon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue" "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue" "violet", "grey", "salmon", "black", "red", "blue", "brown", "yellow", "pink", "green", "orange", "skyblue", "violet", "grey", "salmon" ) plotBuffer = 25 # this is for drawing plots 25m bigger than the array ######################################## ############################## ### 2007 Dec common starting in unix time # Experiment dates 2007 Dec 09 2008 Jan 15 # startTime's and stopTimes's for each fish are: # T60200 = (1197226947, 1200407671) # T60400 = (1197227586, 1200414927) # T60500 = (11 97225302, 1200412037) # T60700 = (1197228303, 1200397705) # T60900 = (1197224495, 1200414388) # set common start time to 2007 Dec 09 00:00:00 = 1197158400 GMT

PAGE 216

216 # Florida is GMT 5 hours (18000 sec) and Dec Jan are not affected by daylight savings startT ime2007 = 1197158400 # GMT janStart2007 = 1167609600 janStart2008 = 1199145600 ###################################################################### ### 2008 Oct common starting in unix time # Experiment dates 2008 Oct 17 2008 Dec 07 # set common st art time to 2008 Oct 17 00:00:00 = 1197158400 GMT # Florida is GMT 5 hours (18000 sec) and reverted to standard time at # 2am local time on Nov 2. So the local clock changed from 1:59am to 1:00am. # However, this did not affect either the SDL or ADCP cl ocks, as their clocks # were set to local satellite time at the start of the experiment and did not # change with the end of daylight savings. It does affect other times that # I later relate to the SDL and ADCP clocks. startTime2008 = 1224201600 # GMT # ##################################################################### ### To get multiple fish within a loop... ### here are lists of tag ID for each year tags2007 = c(60200, 60400, 60500, 60700, 60900) tags2008 = c(60100, 60300, 60600, 60800, 61100, 61200 61300) tagFolders2007 = paste("C:/zy/Telemetry/2007 Dec/T",tags2007,"B79500", sep="") tagFolders2008 = paste("C:/zy/Telemetry/2008 Oct/T",tags2008,"B79500", sep="") numFish2007 = length(tags2007) numFish2008 = length(tags2008) # this gives a list of the days (in unix time) involved in each year days2007 = 13856:13893 days2008 = 14169:14219 numDays2007 = 38 numDays2008 = 51 ###################################################################### ### Array Spacing Trials start and stop times for eac h spacing # 150m variables startTime_150m = 1241708820; # 1241708820 = 11:07:00 EDT 7 May 2009 stopTime_150m = 1241715240; # 1241715240 = 12:54:00 EDT 7 May 2009 eTime_150m = stopTime_150m startTime_150m # number of seconds emTime_150m = eTime_150m / 6 0 # number of minutes

PAGE 217

217 # 125m variables startTime_125m = 1240504980; # 1240504980 = 12:43:00 EDT 23 April 2009 stopTime_125m = 1240510800; # 1240510800 = 14:20:00 EDT 23 April 2009 eTime_125m = stopTime_125m startTime_125m # number of seconds emTime_125 m = eTime_125m / 60 # number of minutes # 100m variables startTime_100m = 1241719660; # 1241719660 = 14:07:40 EDT 7 May 2009 stopTime_100m = 1241726040; # 1241726040 = 15:54:00 EDT 7 May 2009 eTime_100m = stopTime_100m startTime_100m # number of seconds emTime_100m = eTime_100m / 60 # number of minutes ### Notes on when EST and EDT are in effect: # 2007: EDT started on 11 March and ended on 4 November # 2008: 9 March 2 November # 2009: 8 March 1 November # 2010: 14 March 7 November # The second Su nday in March and the first Sunday in November # EDT = GMT 4 hours # EST = GMT 5 hours # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # global metadata.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@ # This file is intended as the starting place where all the basic, # non derived and non calculated information about each array deployment # is entered. Other information is added as it is calculated # md stands for metadata, dn stands for telemetry deployment number # md1 hb2007 if41 2007Dec07 # md2 hb2008 if41 2008Oct09 # md3 hb1 if43 2009Jun01 # md4 sb1 of43 2009Jul10 # md5 sb2 oh41 2009Aug03 # md6 hb2 if41 2009Aug24 # md7 sb3 os43 2009Se p14 # md8 hb3 if42 2009Oct12 # md9 sb4 of43 2009Nov16 md1 = list( # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # hb2007 = list( # dn=1 2007 full deployment

PAGE 218

218 deployment = "hb2007", year = 2007, s ite = "if41", spacing = 50, homeDir = paste(dataDir, sdlDir, "/2007/2007Dec07 IF41",sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c(), beaconNames = c("b79100", "b79200", "b79300", "b79400", "b79500"), fishNames = c("f 60200", "f60400", "f60500", "f60700", "f60900"), goodFishNames = c(), # this needs to be filled in trueNames = c(), # this only applies to hb2008 otherNames = c("o60800"), allTagNames = NA, # automatically filled later SDLmode= "symbol", bestBeacon="b79200", secondBestBeacon = "b79100", # for positioning the bestBeacon timezone="EST", # EST = GMT 5 hours sdlDownloadDate = "17jan08", startDay = "2007/Dec/07", startTime = "21:36:00", # GMT taggingDay = "2007/Dec/09", # tagging time is always 00:00:00 GMT stopDay = "2008/Jan/15", stopTime = "15:00:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatic ally filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = "2008Mar11 GPS data.txt", gpsTimes = list( # Times corrected from EST to EDT by adding 1 hour # ... for more on this, see the ReadMe.docx file in the GPS data folder # for this deployment ID = c("reef", "41","42","43","44","45", "wholepath"), where = c("reef", "north", "east", "south", "w est", "center", "wholepath"), start = c("10:28:00", "10:39:22", "11:24:30", "11:38:16", "11:50:30", "10:49:30", "10:28:00"), stop = c("10:33:29", "10:44:13", "11:29:00", "11:43:44", "11:55:37", "10:53:40", "11:55:37") ), reefEN = data.frame( "ID" = "reef", "easting" = 8574.76, "northing" = 691.2323 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"),

PAGE 219

219 "easting" = c(8569, 8626, 8570, 8522, 8583), "northing" = c(742, 694, 6 39, 693, 699) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b791 00","b79200","b79300","b79400","b79500",""), "location" = c("41","42","43","44","45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automa tically filled later ) ), # end 2008 / if41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # hb2008 = list( # dn=2 2008 full deployment # Remember the SDLs were set to collect in code mode, but with a bad # echo filter parameter. So the results will be run through ALPS as if # the data had been collected in symbol mode. Individual codes will be # treated like symbols. deployment = "hb2008", year = 2008, site = "if41", spacing = 125, homeDir = paste(dataDir, sdlDir, "/2008/2008Oct09 IF41",sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c(), # B79200: 80, 81. B79400: 85, 86. B79500: 130. # you must choose a single beacon code... beaconNames = c("b80" "b81", "b85", "b86", "b130"), #beaconNames = c("b79200", "B79400", "B79500") # T60100: 66, 67. T60300: 70, 71. T61100: 46, 47. T61200: 48, 49. # T61300: 51, 52. fishNames = c("f66","f67","f70","f71","f46","f47","48","49","f51","f52"), # fish tag on REMUS T61500: 55, 56. #otherNames = c("f55","f56"), goodFishNames = c(), # this needs to be filled in otherNames = c("o61500"), # you can choose a true fish tag and both files for both codes will be included t rueNames = c("b79200", "b79400", "b79500","f60100", "f60300", "f61100", "f61200","f61300", "o61500"), allTagNames = NA, # automatically filled later SDLmode="symbol", # really in code mode but the echo filter was set wrong # so we analyze it as if in symbol mode

PAGE 220

220 bestBeacon="b130", secondBestBeacon = "b86", # for positioning the bestBeacon. 85/86 are equally good timezone="EDT", sdlDownloadDate = "19dec08", startDay = "2 008/Oct/09", startTime = "17:05:00", # GMT taggingDay = "2008/Oct/17", # tagging time is always 00:00:00 GMT stopDay = "2008/Dec/08", # was really "2008/Dec/16" but batteries died on 6th/7th stopTime = "00:00:00", # GMT startUtime = N A, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFile Name = c("2008Dec16 GPS data.txt","2008Dec17 GPS data.txt"), # this needs to be checked. My files on "2007/8 best estimates are a mess. gpsTimes = list( # Times corrected from EDT to EST by subtracting 1 hour # ... for more on this, see the Read Me.docx file in the GPS data folder # for this deployment. # Also, the reef was not done on the 16/17 Dec 2008. ID = c("reef", "41","42","43","44","45", "wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), # these times are actually on two days, but not overlapping so it works start = c("", "11:38:00", "12:01:15", "11:12:00", "11:44:00", "11:03:00", "11:03:00"), stop = c("", "11:43:00", "12:05:15", "11:17:00", "11:50:00", "1 1:07:00", "12:05:15") ), reefEN = data.frame( "ID" = "reef", "easting" = 8574.76, "northing" = 691.2323 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"), "easting" = c(8574.86, 8684.45, 8568.03, 8471 .48, 8582.56), #bob=c(245174.86, 245284.45, 245168.03, 245071.48, 245182.56) "northing" = c(817.0, 686.0, 580.2, 699.5, 699.2) #bob=c(3262517.0, 3262386.0, 3262280.2, 3262399.5, 3262399.2) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b79100","b79200","b79300","b79400","b79500",""), "location" = c("41 ","42","43","44","45","reef") ), plotLimits = data.frame(

PAGE 221

221 "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ),# end d2008 / if41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # hb1 = list( # dn=3 hb1 if43 hb1 if43 hb1 if43 hb1 if43 hb1 if43 hb1 if43 deployment = "hb1", year = 2009, site = "if43", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Jun01 IF43 and performance test", sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f11", "f12", "f13", "f14", "f15", "f16", "f17", "f61000"), goodFishNames = c("f13", "f14", "f16"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 # these other tags were used during the internal array trials otherNames = c("o43", "o44", "o45", "o46", "o47", "o48", "o50", "o51", "o52"), allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = "b2", # for positioning the bestBeacon timezone="GMT", sdlDownloadDate = "17jun09", startDay = "2009/Jun/01", startTime = "17:36:00", # GMT taggingDay = "2009/Jun/03", # tagging time is always 00:00:00 GMT stopDay = "2009/Jun/17", stopTime = "14:30:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/IF43_lines_aligned_HB_SB_bluebox.jpg", gpsFileName = "2009Jun01 GPS data.txt", gpsTi mes = list( ID = c("reef", "41","42","43","44","45", "wholepath"), where = c("reef", "north", "east", "west","south", "center", "wholepath"), start = c("10:57:43", "12:00:42", "12:42:38", "13:29:45", "13:07:46", "11:11:38", "10:5 0:00"), stop = c("11:08:48", "12:05:09", "12:47:55", "13:35:30", "13:13:00", "11:26:00", "13:40:00") ),

PAGE 222

222 reefEN = data.frame( "ID" = "reef", "easting" = 8878.01, "northing" = 428.3839 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"), "easting" = c(8879.925, 8963.893, 8794.494, 8874.637, 8886.520), "northing" = c(523.4948, 419.0967, 428.9580, 334.1587, 431.7117) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "", "b79500", "b1", "s79600"), "location" = c("41","42","43","44"," 45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ),# end hb1 / if43 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sb1 = list( # dn=4 sb1 of43 sb1 of43 sb1 of43 sb1 of43 sb1 of43 sb1 of43 deployment = "sb1", year = 2009, site = "of43", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Jul10 of4 3", sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f18", "f19", "f20", "f21", "f22", "f23", "f24", "f61500"), goodFishNames = c(), # NONE...from inspec ting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c(), allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b2", secondBestBeacon = "b1", # for positioning the bestBeac on timezone="GMT", sdlDownloadDate = "28jul09", startDay = "2009/Jul/10", startTime = "19:05:00", # GMT taggingDay = "2009/Jul/13", # tagging time is always 00:00:00 GMT

PAGE 223

223 stopDay = "2009/Jul/27", stopTime = "15: 00:00", # GMT taggingUtime = NA, # GMT automatically filled later startUtime = NA, # automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/ R summary files/OF43_lines_SBonly_bluebox.jpg", gpsFileName = c("2009Jul10 GPS data.txt","2009Jul27 GPS data.txt"), gpsTimes = list( ID = c("reef","41","42","43","44","45","wholepath"), where = c("reef", "north", "east", "south", "west" "center", "wholepath"), start = c( "12:33:00", # 2009Jul10 GPS data.txt "12:18:00", # 2009Jul27 GPS data.txt "11:57:10", # 2009Jul27 GPS data.txt "11:35:40", # 2009Jul27 GPS data.txt "11:13:20", # 2009Jul2 7 GPS data.txt "12:42:00", # 2009Jul27 GPS data.txt "11:10:00"), # 2009Jul27 GPS data.txt stop = c( "13:01:30", # 2009Jul10 GPS data.txt "12:23:00", # 2009Jul27 GPS data.txt "12:02:10", # 2009Jul27 GPS data.txt "11:37:18", # 2009Jul27 GPS data.txt "11:18:30", # 2009Jul27 GPS data.txt "12:47:00", # 2009Jul27 GPS data.txt "12:50:00") # 2009Jul27 GPS data.txt ), reefEN = data.frame( "ID" = "reef", "easting" = 1297 .066, "northing" = 1428.046 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"), "easting" = c(1307.215, 1390.835, 1296.048, 1216.619, 1305.865), "northing" = c(1532.886, 1425.098, 1333.012, 1446.895, 1433.767) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "b79500" "", "b1", "s79600"), "location" = c("41","42","43","44","45","reef") ), plotLimits = data.frame(

PAGE 224

224 "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ), # end sb1 / of43 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sb2 = list( # dn=5 sb2 oh41 sb2 oh41 sb2 oh41 sb2 oh41 sb2 oh41 sb2 oh41 deployment = "sb2", year = 2009, site = "oh41", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Aug03 oh41", sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f25", "f26", "f27", "f28", "f29 ", "f30", "f31", "f61600"), goodFishNames = c("f26","f28", "f29", "f30", "f31"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c("o61700"), # this one is the diver tag during cleaning ), all TagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = "b2", # for positioning the bestBeacon timezone="GMT", sdlDownloadDate = "20aug09", startDay = "2009/Aug/03", startTime = "17:20:00", # GMT taggingDay = "2009/Aug/04", # tagging time is always 00:00:00 GMT stopDay = "2009/Aug/20", stopTime = "12:00:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/OH41_OS43_lines_SBonly_bluebox.jpg" gpsFileName = "2009Aug03 GPS data.txt", gpsTimes = list ( ID = c("reef","41","42","43","44","45","wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), start = c("11:12:00", "11:59:15", "12:28:00", "12:52:00", "13:13:00", "11:02:00", "9:50:00"), stop = c("11:27:00", "12:03:00", "12:33:00", "12:57:00", "13:18:00", "11:09:00", "13:30:00") ), reefEN = data.frame( "ID" = "reef",

PAGE 225

225 "easting" = 434.8704, "northing" = 2059.854 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"), "easting" = c(436.2471, 526.2297, 432.4532, 348.1557, 443.2310), "northing" = c(2164.789, 2054.926, 1955.684, 2061.206, 2069.637) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/s entinel used in this # deployment and be in the correct order to match where they were # according to the "location" lst. "beaconID" = c("b2", "b79400", "b79500", "", "b1", "s79600"), "location" = c("41","42","43","44","45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ), # end sb2 / oh41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # hb2 = list( # dn=6 hb2 if41 hb2 if41 hb2 if41 hb2 if41 hb2 if41 hb2 if41 deployment = "hb2", year = 2009, site = "if41", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Aug24 if41", s ep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f32", "f33", "f34", "f35", "f36", "f37", "f38", "f61700"), goodFishNames = c("f33", "f34", "f35", "f36 ", "f37", "f38"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c(), allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = b2", # for positioning the bestBeacon timezone="GMT", sdlDownloadDate = "08Sep09", startDay = "2009/Aug/24", startTime = "18:00:00", # GMT taggingDay = "2009/Aug/25", # tagging time is always 00:00:00 GMT stopDay = "2009/Sep/08",

PAGE 226

226 stopTime = "14:00:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFile Name = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = "2009Aug24 GPS data.txt", gpsTimes = list( ID = c("reef","41","42","43","44","45","wholepath"), where = c("reef", "north", "east", "sout h", "west", "center", "wholepath"), start = c("11:25:00", "12:25:00", "12:51:00", "13:13:20", "13:43:00", "11:40:00", "11:25:00"), stop = c("11:35:00", "12:28:59", "12:56:00", "13:18:20", "13:48:00", "11:50:00", "13:48:00") # Note that the stop time for SDL41 N was changed from 12:30:00 to # 12:28:59 after a visual inspection ), # end gpsTimes reefEN = list( "ID" = "reef", "easting" = 8574.76, "northing" = 691.2323 ), sdlEN = l ist( "ID" = c("41","42","43","44","45"), "easting" = c(8573.610, 8663.719, 8587.707, 8491.073, 8583.340), "northing" = c(790.6531, 686.2500, 597.2437, 694.9796, 699.4701) ), beaconEN= data.frame( # the 'beaconID' should l ist the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "b79500", "", "b1", "s79600"), "location" = c("41","42","43" ,"44","45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ), # end hb2 / if41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sb3 = list( # dn=7 sb3 os43 sb3 os43 sb3 os43 sb3 os43 sb3 os43 sb3 os43 deployment = "sb3", year = 2009, site = "os43", spacing = 100,

PAGE 227

227 homeDir = paste(dataDir, sdlDir, "/2009/2 009Sep14 os43", sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f39", "f40", "f41", "f42", "f43", "f44", "f45", "f61800"), goodFishNames = c("f39", "f 40","f42", "f43"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c("f25", "f26", "f27", "f28", "f29", "f31"), # these are visiting fish tagNames allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = "b2", # for positioning the bestBeacon timezone="GMT", sdlDownloadDate = "01oct09", startDay = "2009/Sep/14", startTime = "19:00:00", # GMT tagg ingDay = "2009/Sep/16", # tagging time is always 00:00:00 GMT stopDay = "2009/Oct/01", stopTime = "14:00:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically fill ed later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/OH41_OS43_lines_SBonly_bluebox.jpg", gpsFileName = "2009Sep14 GPS data.txt", gpsTimes = list( ID = c("reef","41","42","43" ,"44","45","wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), start = c("11:55:00", "13:00:00", "13:38:00", "14:15:00", "14:51:00", "12:16:00", "11:50:00"), stop = c("12:07:00", "13:06:00", "13:43:00", "14:22:00", "14:57:00", "12:23:00", "15:00:00") ), # end gpsTimes reefEN = data.frame( "ID" = "reef", "easting" = 346.7775, "northing" = 2139.444 ), sdlEN = data.frame( "ID" = c("41","42","43","4 4","45"), "easting" = c(350.5960, 431.8326, 339.6018, 263.0775, 354.4020), "northing" = c(2239.207, 2141.473, 2056.579, 2144.924, 2146.314) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this

PAGE 228

228 # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "b79500", "", "b1", "s79600"), "location" = c("41","42","43","44","45","reef") ), plotLimi ts = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ), # end sb3 / os43 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # hb3 = list( # dn=8 hb3 if42 hb3 if42 hb3 if42 hb3 if42 hb3 if42 hb3 if42 deployment = "hb3", year = 2009, site = "if42", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Oct12 if42", sep=""), alpsDi r = "/ALPS 2011Feb14", sentinelNames = c(), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f46", "f47", "f48", "f50", "f51", "f52", "f61900", "f62000"), goodFishNames = c("f47", "f48", "f51", "f52"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c(), allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = "b2", # for positioning the bestBeacon timezone="GMT", sdlDownloadDate = "27Oct09", startDay = "2009/Oct/12", startTime = "17:30:00", # GMT taggingDay = "2009/Oct/13", # tagging time is always 00:00:00 GMT stopDay = "2009/Oct/27", stopTime = "14:45:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/I F41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = "2009Oct12 GPS data.txt", gpsTimes = list( ID = c("reef","41","42","43","44","45","wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), start = c("10:39:45","11:42:30","12:08:00","12:45:00","13:18:00",

PAGE 229

229 "11:02:00","10:39:45"), stop = c("10:53:00","11:47:30","12:13:00","12:54:30","13:25:30", "11:08:00","13:25:00") ), # end gpsTimes reefEN = data.frame( "I D" = "reef", "easting" = 8718.069, "northing" = 706.7241 ), sdlEN = list( "ID" = c("41","42","43","44","45"), "easting" = c(8721.842, 8798.138, 8725.15, 8632.181, 8720.673), "northing" = c(806.6219, 705.793, 605.4938 711.7141, 718.6453) ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "b79500", "", "b1", ""), "location" = c("41","42","43","44","45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automa tically filled later ) ), # end hb3 / if42 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sb4 = list( # dn=9 sb4 of43 sb4 of43 sb4 of43 sb4 of43 sb4 of43 sb4 of43 deployment = "sb4", year = 2009, site = "of43", spacing = 100, homeDir = paste(dataDir, sdlDir, "/2009/2009Nov16 of43", sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c(), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f53", "f54", "f5 5", "f56", "f57", "f58", "f59", "f62100"), goodFishNames = c("f54", "f56", "f57","f59", "f62100"), # from inspecting ALPS output trueNames = c(), # this only applies to hb2008 otherNames = c(), allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBestBeacon = "b2", # for positioning the bestBeacon timezone="GMT",

PAGE 230

230 sdlDownloadDate = "15Dec09", startDay = "2009/Nov/16", startTime = "18:30:00", # GMT taggingDay = "2009/Nov/18", # tagging time is always 00:00:00 GMT stopDay = "2009/Nov/28", # we picked up on 14 Dec but sdl batteries died between stopTime = "00:00:00", # GMT # 26 and 30 Nov. Last position solutions come ~27 Nov startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/OF 43_lines_SBonly_bluebox.jpg", gpsFileName = c("2009Jul10 GPS data.txt","2009Jul27 GPS data.txt"), gpsTimes = list( ID = c("reef","41","42","43","44","45","wholepath"), where = c("reef", "north", "east", "south", "west", "center", "whole path"), start = c( "12:33:00", # 2009Jul10 GPS data.txt "12:18:00", # 2009Jul27 GPS data.txt "11:57:10", # 2009Jul27 GPS data.txt "11:35:40", # 2009Jul27 GPS data.txt "11:13:20", # 2009Jul27 GPS data.txt "12:42:00", # 2009Jul27 GPS data.txt "11:10:00"), # 2009Jul27 GPS data.txt stop = c( "13:01:30", # 2009Jul10 GPS data.txt "12:23:00", # 2009Jul27 GPS data.txt "12:02:10", # 2009Jul27 GPS data.txt "11:37:18 ", # 2009Jul27 GPS data.txt "11:18:30", # 2009Jul27 GPS data.txt "12:47:00", # 2009Jul27 GPS data.txt "12:50:00") # 2009Jul27 GPS data.txt ), reefEN = data.frame( "ID" = "reef", "easting" = 1297.066, "north ing" = 1428.046 ), sdlEN = data.frame( "ID" = c("41","42","43","44","45"), "easting" = c(1307.215, 1390.835, 1296.048, 1216.619, 1305.865), "northing" = c(1532.886, 1425.098, 1333.012, 1446.895, 1433.767) ), beaconEN= d ata.frame( # the 'beaconID' should list the beacons/sentinel used in this


231 # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c("b2", "b79400", "b79500", "", "b1", ""), "location" = c("41","42","43","44","45","reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatically filled later ) ), # en d sb4 / of43 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sp125 = list( # dn=10, 2009 April 23 Spacing Trial on IF4.1 at 125m deployment = "sp125", year = 2009, site = "if41", spacing = 125, homeDir = paste(dataDir, sdlDir, "/2009/2009Apr23 spacing trials/125m spacing",sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c(), beaconNames = c("b79400", "b79500"), fishNames = c("f2","f61000","f61500"), goodFishNames = c(), # doe sn't apply otherNames = c(), trueNames = c(), # this only applies to hb2008 allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b79500", # i need to check this and the second best... secondBestBea con = "b79400", # for positioning the bestBeacon timezone="EDT", sdlDownloadDate = "23apr09", startDay = "2009/Apr/23", startTime = "16:43:00", # GMT taggingDay = NA, # if no fish were tagged, fill this in later with st artDay stopDay = "2009/Apr/23", stopTime = "18:20:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = c(), # I'll do all this by hand since it's funky gpsTimes = list( # I'll do all this by hand since it's funky ID = c("reef", "41","42","43","44","45", "wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"),


232 start = c("", "","", "", "", "", ""), stop = c("", "","", "", "", "", "") ), reefEN = data.frame( "ID" = "reef", "easting" = 8574.76, "northing" = 691.2323 ), sdlEN = data.frame( # get these from 'GPS position estimates.r' # NOTE: 41 is from the target position since there was no GPS data # NOTE: 45 is from later es timates of c45 "ID" = c("41","42","43","44","45"), "easting" = c(245175.58, 245278.50, 245179.60, 245069.20, 245183.3) eastingOffset, "northing" = c(3262516.20, 3262394.00, 3262282.30, 3262393.40, 3262399.00) nort hingOffset ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to match where they were # according to the "location" list. "beaconID" = c(" b79400", "", "", "", "b79500", ""), "location" = c("41", "42", "43", "44", "45", "reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later "northing" = c(0,1) # automatical ly filled later ) ),# end sp125 / if41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sp150 = list( # dn=11, 2009 May 7 Spacing Trial on IF4.1 at 150m deployment = "sp150", year = 2009, site = "if4 1", spacing = 150, homeDir = paste(dataDir, sdlDir, "/2009/2009Apr23 spacing trials/150m spacing",sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f6 1000","f61500"), goodFishNames = c(), # doesn't apply otherNames = c(), trueNames = c(), # this only applies to hb2008 allTagNames = NA, # automatically filled later


233 SDLmode="code", bestBeacon="b1", secondBestB eacon = "b2", # for positioning the bestBeacon timezone="EDT", sdlDownloadDate = "07May09", startDay = "2009/May/07", startTime = "15:07:00", # GMT taggingDay = NA, # if no fish were tagged, fill this in later with star tDay stopDay = "2009/May/07", stopTime = "16:54:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later h abmapFileName = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = c(), # I'll do all this by hand since it's funky gpsTimes = list( # I'll do all this by hand since it's funky ID = c("reef", "41"," 42","43","44","45", "wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), start = c("", "","", "", "", "", ""), stop = c("", "","", "", "", "", "") ), reefEN = data.frame( "ID" = "reef", "easting" = 8574.76, "northing" = 691.2323 ), sdlEN = data.frame( # get these from 'GPS position estimates.r' "ID" = c("41","42","43","44","45"), "easting" = c(245187.5, 245324.9, 245174.4, 245052.0, 245183.3) eastingOffse t, "northing" = c(3262556, 3262402, 3262253, 3262403, 3262399.00) northingOffset ), beaconEN= data.frame( # the 'beaconID' should list the beacons/sentinel used in this # deployment and be in the correct order to matc h where they were # according to the "location" list. # # I don't list them because two beacons were on one SDL: # N41 had B2, B79400. C45 had B1, B79500 # The sentinel s79600 and T61000 were at the reef "beaconID" = c( "", "", "", "", "", "s79600"), "location" = c("41", "42", "43", "44", "45", "reef") ), plotLimits = data.frame( "ID" = c("min", "max"), "easting" = c(0,1), # automatically filled later


234 "northing" = c(0,1) # automatically fi lled later ) ),# end sp150 / if41 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # sp100 = list( # dn=12, 2009 May 7 Spacing Trial on IF4.1 at 100m deployment = "sp100", year = 2009, site = "if41", spacing = 150, homeDir = paste(dataDir, sdlDir, "/2009/2009Apr23 spacing trials/100m spacing",sep=""), alpsDir = "/ALPS 2011Feb14", sentinelNames = c("s79600"), beaconNames = c("b1", "b2", "b79400", "b79500"), fishNames = c("f61000 ","f61500"), goodFishNames = c(), # doesn't apply otherNames = c(), trueNames = c(), # this only applies to hb2008 allTagNames = NA, # automatically filled later SDLmode="code", bestBeacon="b1", secondBes tBeacon = "b2", # for positioning the bestBeacon timezone="EDT", sdlDownloadDate = "07May09", startDay = "2009/May/07", startTime = "18:07:00", # GMT taggingDay = NA, # if no fish were tagged, fill this in later with st artDay stopDay = "2009/May/07", stopTime = "19:52:00", # GMT startUtime = NA, # automatically filled later taggingUtime = NA, # GMT automatically filled later stopUtime = NA, # automatically filled later habmapFileName = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg", gpsFileName = c(), # I'll do all this by hand since it's funky gpsTimes = list( # I'll do all this by hand since it's funky ID = c("reef", "41" ,"42","43","44","45", "wholepath"), where = c("reef", "north", "east", "south", "west", "center", "wholepath"), start = c("", "","", "", "", "", ""), stop = c("", "","", "", "", "", "") ), reefEN = data.frame( "ID" = "reef" "easting" = 8574.76, # 245174.8 eastingOffset "northing" = 691.2323 # 3262391 northingOffset ),


    sdlEN = data.frame( # get these from 'GPS position estimates.r'
      "ID" = c("41","42","43","44","45"),
      "easting" = c(245183.4, 245258.5, 245175.1, 245085.9, 245183.3) - eastingOffset,
      "northing" = c(3262499, 3262391, 3262289, 3262395, 3262399) - northingOffset
    ),
    beaconEN = data.frame(
      # the 'beaconID' should list the beacons/sentinel used in this
      # deployment and be in the correct order to match where they were
      # according to the "location" list.
      #
      # I don't list them because two beacons were on one SDL:
      # N41 had B2, B79400. C45 had B1, B79500
      # The sentinel s79600 and T61000 were at the reef
      "beaconID" = c("", "", "", "", "", "s79600"),
      "location" = c("41", "42", "43", "44", "45", "reef")
    ),
    plotLimits = data.frame(
      "ID" = c("min", "max"),
      "easting" = c(0,1),  # automatically filled later
      "northing" = c(0,1)  # automatically filled later
    )
  ) # end sp100 / if41
  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
) # end metadata list

######################################################################
######################################################################
######################################################################
# order summary
deploymentOrder = list(
  "deployment" = vector(mode="character", length = length(md1)),
  "site" = vector(mode="character", length = length(md1))
)
for (i in 1:length(md1)){
  deploymentOrder$deployment[i] = md1[[i]]$deployment
  deploymentOrder$site[i] = md1[[i]]$site
}

######################################################################
######################################################################
######################################################################
# some derived information
md2 = md1
for (i in 1:(length(md2)-0)){ # i cycles through different trials
  # first sentinel, next beacons, finally fish tags
  # so that $allTagNames[1] is always sentinel, for example


  md2[[i]]$allTagNames = c(md2[[i]]$sentinelNames, md2[[i]]$beaconNames,
    md2[[i]]$fishNames, md2[[i]]$trueNames, md2[[i]]$otherNames)

  # get the start and stop times
  md2[[i]]$startUtime = unclass(as.POSIXct(strptime(paste(md2[[i]]$startDay,
    md2[[i]]$startTime), "%Y/%B/%d %H:%M:%S", tz="GMT"),
    origin="1970-1-1", tz="GMT"))[1]
  if(is.na(md2[[i]]$taggingDay)){ # make taggingDay = startDay
    md2[[i]]$taggingDay = md2[[i]]$startDay}
  md2[[i]]$taggingUtime = unclass(as.POSIXct(strptime(paste(md2[[i]]$taggingDay,
    "00:00:00"), "%Y/%B/%d %H:%M:%S", tz="GMT"),
    origin="1970-1-1", tz="GMT"))[1]
  md2[[i]]$stopUtime = unclass(as.POSIXct(strptime(paste(md2[[i]]$stopDay,
    md2[[i]]$stopTime), "%Y/%B/%d %H:%M:%S", tz="GMT"),
    origin="1970-1-1", tz="GMT"))[1]

  # get limits for plotting
  md2[[i]]$plotLimits$easting = c(min(md2[[i]]$sdlEN$easting) - plotBuffer,
    max(md2[[i]]$sdlEN$easting) + plotBuffer)
  md2[[i]]$plotLimits$northing = c(min(md2[[i]]$sdlEN$northing) - plotBuffer,
    max(md2[[i]]$sdlEN$northing) + plotBuffer)

  # for those deployments when no tagging happened, set the taggingDay equal
  # to startDay
  if(is.na(md2[[i]]$taggingDay)){ md2[[i]]$taggingDay = md2[[i]]$startDay }
}

######################################################################
######################################################################
######################################################################
## GPS position estimates
## Read in GPS data and calculate reef and SDL best estimates of positions.
## Note that OF4.3 is funky and needs special attention. See 'GPS position estimates.r'
#
## These numbers are now typed directly into the metadata section above
md3 = md2
#for (i in c(1,3:5)){#length(md3)){ because of43 is funky don't do i=2, do it below
#  rootDir = getwd()
#  gpsDir = paste(md3[[i]]$homeDir,"/GPS data", sep="")
#  setwd(gpsDir)
#  # choose to plot or not
#  plotThem = TRUE
#  plotThem = FALSE
#
#  # import GPS data for (reef, north, east, south, west, center) and calculate means
#  temp1 = list()
#  for (j in 1:6){ # cycles through reef and 5 SDL locations, 1-6 in list
#    temp1[[j]] = importGPSdata(md3[[i]]$gpsFileName,


237 # md3[[i]]$gpsTimes$start[j], md3[[i]]$gpsTimes$stop[j], # filter=100, plotThem=plotThem) # } # # save means # md3[[i]]$reefEN = data.frame( # "ID" = md3[[i]]$gpsTimes$ID[[1]], # "easting"=temp1[[1]]$average$easting, # "northing"=temp1[[1]]$average$northing) # md3[[i]]$sdlEN = data.frame( # "ID" = md3[[i]]$gpsTimes$ID[2:6], # "easting" = c(temp1[[2]]$average$easting, temp1[[3]]$average$easting, # temp1[[4]]$average$easting, temp1[[5]]$average$easting, # temp1[[6]]$average$easting), # "northing" = c(temp1[[2]]$average$northing, temp1[[3]]$average$northing, # temp1[[4]]$average$northing, temp1[[5]]$average$northing, # temp1[[6]]$average$northing) # ) # setwd(rootDir) #} # ########## ############################################################ ## repeat the whole thing for the troublesome deployment of43 #for (i in 2:2){ # rootDir = getwd() # gpsDir = paste(md3[[i]]$homeDir,"/GPS data", sep="") # setwd(gpsDir) # # choose to plot or not # plotThem = TRUE # plotThem = FALSE # # # import GPS data for (reef, north, east, south, west, center) and calculate means # temp1 = list() # for (j in 1:6){ # cycles through reef and 5 SDL locations, 1 6 in list # temp1[[j]] = importGPSdat a(md3[[i]]$gpsFileName[2], # # [2] uses "2009Jul27 GPS data.txt" # # we want 2009Jul27 for everything but the reef, 2009Jul10 for the reef # md3[[i]]$gpsTimes$start[j], md3[[i]]$gpsTimes$stop[j], # filter=100, plotThem=plotThem) # } # # save means # md3[[i]]$reefEN = data.frame( # "ID" = md3[[i]]$gpsTimes$ID[[1]], # "easting"=temp1[[1]]$average$easting, # "northing"=temp1[[1]]$average$northing) # md3[[i]]$sdlEN = data.frame( # "ID" = md3[[i]]$gpsTimes$ID[2:6], # "ea sting" = c(temp1[[2]]$average$easting, temp1[[3]]$average$easting, # temp1[[4]]$average$easting, temp1[[5]]$average$easting,


238 # temp1[[6]]$average$easting), # "northing" = c(temp1[[2]]$average$northing, temp1[[3]]$average$northing, # te mp1[[4]]$average$northing, temp1[[5]]$average$northing, # temp1[[6]]$average$northing) # ) # setwd(rootDir) #} # ## now do the problem 'reef' all be it's lonesome #i=2 #rootDir = getwd() #gpsDir = paste(md3[[i]]$homeDir,"/GPS data", sep="") #setwd (gpsDir) # #temp1 = importGPSdata(md3[[2]]$gpsFileName[1], # uses "2009Jul10 GPS data.txt" # md3[[2]]$gpsTimes$start[1], md3[[2]]$gpsTimes$stop[1], filter=100, plotThem=plotThem) #md3[[2]]$reefEN = data.frame("ID" = md3[[i]]$gpsTimes$ID[1] "easting"=tem p1$average$easting, # "northing"=temp1$average$northing) ## end the troublesome array ###################################################################### # # ###################check these positions with my notes of best estimates # ### the first three look great. # # ###################################################################### #### now do the 'test' deployment # ## NOTE THE REGULAR CODE DOESN'T WORK BECAUSE THERE'S NO TEST GPS DATA... # #i=8 # ## SO ... FAKE IT... #md 3[[i]]$reefEN = data.frame( # "ID" = "reef", # "easting" = 0+eastingOffset, # "northing" = 0+northingOffset #) #md3[[i]]$sdlEN = data.frame( # "ID" = c("41", "42", "43", "44", "45"), # "easting" = c(0, 100,0,100,sqrt((10^2)/2)) + eastingOffset, # northing" = c(100,0, 100,0,sqrt((10^2)/2)) + northingOffset #)
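######################################################################
## A minimal, self-contained sketch of the station-averaging idea in the
## commented-out blocks above: GPS fixes logged while the boat held position
## over a station are averaged to estimate that station's easting/northing.
## This is an illustration only; it is not importGPSdata(), and the column
## names (utime, easting, northing) in 'gpsdf' are assumptions.
meanStationPosition = function(gpsdf, startUtime, stopUtime){
  # keep only the fixes recorded between the start and stop times
  keep = gpsdf[(gpsdf$utime >= startUtime) & (gpsdf$utime <= stopUtime), ]
  # the position estimate is the mean of the retained fixes
  data.frame("easting" = mean(keep$easting, na.rm=TRUE),
             "northing" = mean(keep$northing, na.rm=TRUE))
}
# example with a hypothetical data frame 'gps1' of GPS fixes:
# meanStationPosition(gps1, startUtime=1247229300, stopUtime=1247230200)
######################################################################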


239 ###################################################################### ###################################################################### ################################################## #################### # Okay then...all done with the telemetry deployments md=md3 ###################################################################### ##################################### ################################# ###################################################################### # Now for the ADCP deployments. They don't always coincide with SDL deployments. # # Use these start and stop times to trim bad data from the ADCP fil es. # # ADCP metadata > adcpmd adcpmd1 = list( # I've picked these start and stop dati from looking at the data files # and finding where the depth measurements are right. adcp200701 = list( deployment = "2007adcp01", startDati = "2007/Dec/ 19 16:00:00", #GMT stopDati = "2008/Jan/15 17:00:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # automatically filled later ), adcp200801 = list( deployment = "2008adcp01", startDati = "2008/Oct/10 17:00: 00", # GMT stopDati = "2008/Dec/16 14:30:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # automatically filled later ), adcp200901 = list( # this file has a funky depth record where depth = 0 deployment = "2009adcp01", startDati = "2009/Jun/01 21:00:00", # GMT stopDati = "2009/Aug/20 16:30:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # automatically filled later ), adcp200902 = list( # On Aug 8 we foun d ADCP on its side, from looking at the pitch/roll log, # it looks like the ADCP fell over right away. From this file I'll only # use the temperature and depth information. When merging ADCP and SDL # data, be careful interpolating. Don't water flow from the previous and # following ADCP files. deployment = "2009adcp02",


240 startDati = "2009/Aug/24 19:30:00", # GMT stopDati = "2009/Sep/08 15:00:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # automatically filled later ), adcp200903 = list( deployment = "2009adcp03", startDati = "2009/Sep/08 17:00:00", # GMT stopDati = "2010/Oct/01 16:00:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # aut omatically filled later ), adcp200904 = list( deployment = "2009adcp04", startDati = "2009/Oct/13 17:00:00", # GMT stopDati = "2009/Oct/26 14:30:00", # GMT startUtime = NA, # automatically filled later stopUtime = NA # automatica lly filled later ), adcp200905 = list( deployment = "2009adcp05", startDati = "2009/Nov/18 19:00:00", # GMT stopDati = "2009/Nov/26 16:09:00", # GMT # The actual recovery dati: 2009/Dec/14 18:30:00 # This is the dati when the bat tery died: 2009/Nov/26 16:09:00 GMT startUtime = NA, # automatically filled later stopUtime = NA # automatically filled later ) ) adcpmd2 = adcpmd1 # calculate the utimes for ADCP dati for (i in 1:length(adcpmd2)){ # get the adcp start and stop times adcpmd2[[i]]$startUtime = unclass(as.POSIXct(strptime(adcpmd2[[i]]$startDati, "%Y/%b/%d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT"))[1] adcpmd2[[i]]$stopUtime = unclass(as.POSIXct(strptime(adcpmd2[[i]]$stopDat i, "%Y/%b/%d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT"))[1] } adcpmd = adcpmd2 # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # global functions.r


241 # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@@@@ ### This file defines common functions which are used in other ### files. This way I only have to change them in one place. ###################################################################### ######## ### It requires variables defined in 'global variables.r' library(plyr) library(zoo) library(circular) ###################################################################### ###################################################################### ################################### ################################### # Index # Index # Index # Index # Index # Index # Index # Index # Index # Index # importALPSdata # filterALPSdata # subsample # kielFilter # positionStats1 # importADCPdata # mergeAlpsAdcpData # findHabType # mergeSonar Data # importGPSdata # importTideData # importSunData # importRawSDLdata # this one not yet written...this data in MySQL db # importToaData # toaStats1 # importBatteryData # importBiometricData # circles3d # clocks3d # anglefun # bearing.ta # meanAngle # homeRange # rotate # chop # kde2dplot # kde2dplot2 # mymovie # importFishData


242 # plot.imagematrix.zy ###################################################################### ###################################################################### ############ ########################################################## ### This file reads in fish movement data from the daily files and combines ### them into a single data frame. It filters out bad points with ### erroneous ( 1) depth values. It filters on cn. A lso, sets the ### 'zero' unix time. ###################################################################### ######## ### This requires variables defined in 'commonVariables.r' ###################################################################### ##### ### ### IMPORT DATA NOTE: this was renamed from 'importSdlData()' ###################################################################### ######## importALPSdata = function(deployment, tagName, beaconName="useBest", psr=TRUE, offset=TRUE, whichDays= NA, chopTimes=TRUE, altDir="n"){ # 'deployment' is the experiment designation, i.e. IF43 for 2009 experiments # 'tagName' is the desired fish tag, i.e. f18 # 'beaconName' is the desired beacon, i.e. b1 # 'psr' use the psr or no psr output file # 'offset' use the global offsets when reporting positions? # 'whichDays' is the list of day indices, 1:38 for 2007, 1:52 for 2008, not used for 2009 # 'chopTimes' when true remove all data before and after start and stopUtimes # # 'altDir' is adde d in case you want to use an alternate directory, for example # when testing ALPS with different sound speeds you want to use multiple # directories. It should be "n" or set to the desired directory. # This function no longer filters or su bsamples data, see function 'subsample()' # deployment="hb2007"; tagName="b79100"; beaconName="useBest"; psr=TRUE; offset=TRUE; whichDays=12; # deployment="hb1"; tagName="f11"; beaconName="b1"; psr=TRUE; offset=TRUE; whichDays=NA; # setti ngs that change for each deployment for (i in 1:length(md)){ # i loops through all deployments if (deployment == md[[i]]$deployment){ year = md[[i]]$year SDLmode = md[[i]]$SDLmode timezone = md[[i]]$timezone


243 if (beaconName == "useBest"){beaconName = md[[i]]$bestBeacon} # else make no changes and use 'beaconName' # ...but double check were not looking at the best beacon if(tagName == beaconName){beaconName = md[[i]]$secondBestBeacon} trueNames = md[[ i]]$trueNames alpsDir = paste(md[[i]]$homeDir,"/ALPS 2011Feb14",sep="") startUtime = md[[i]]$startUtime stopUtime = md[[i]]$stopUtime taggingDay = md[[i]]$taggingDay #reefEN = md[[i]]$reefEN sdlEN = md[[i]]$sdlEN } } print(paste("importALPSdata:",deployment," ",tagName)) # if altDir = F then use the alpsDir just set. Otherwise use the directory # specified by altDir if (altDir != "n"){alpsDir = altDir} # other stuff tagID = substr(tagNam e,2,100) tagType = substr(tagName,1,1) beaconID = substr(beaconName,2,100) oDir=getwd() setwd(alpsDir) ### names of columns to be read from the ALPS output files columnNames = c("utime", "easting", "northing", "depth", "cn", "rn", "dop" "hid", "hcount", "mystery1", "mystery2") # because in 2007/2008 the data for each tag are divided into multiple files... # ...there's a folder and file for each day if (year == 2007){ # this year is run in symbol mode # cr eate a pattern for all files for this beacon, tag combination namePattern = paste("T",tagID,beaconName,sep="") # gather all the file names for the tag fileNames = list.files(pattern = namePattern, recursive=TRUE, ignore.case=TRUE) ## # if you've asked for more days than are present, don't worry about it if (length(whichDays) > length(fileNames)) {whichDays = 1:length(fileNames)} ### pick one or more days from the middle because there's not enough memory ### Remember they might not be in chronological order... ### these file names include the subdirectory from the ~ \ ALPS directory if (whichDays=="all" || is.na(whichDays)){whichDays=1:length(fileNames)} fileNames = fileNames[whichDays] #1=09Dec2007, 24=01Jan2008 ;


244 } else if (year == 2008){ # this year ends up being run in symbol mode # and there are two "code" files for each tag # B79200: 80, 81 # B79400: 85, 86 # B79500: 130 # F60100: 66, 67 # F60300: 70, 71 # F61100: 46, 47 # F61200: 48, 49 # F61300: 51, 52 # o61500: 55, 56 the REMUS tag originalTagNames = c("b79200", "b79400", "b79500", "f60100", "f60300", "f61100", "f61200", "f61300", "o61500") tagCodes = list( c("80", "81"), c("8 5", "86"), c("130"), c("66", "67"), c("70", "71"), c("46", "47"), c("48", "49"), c("51", "52"), c("55", "56") ) # Sometimes I sent these tagNames as b79200 and sometimes as b80 and b81... # ...determine which and proceed accordingly if (nchar(tagName) < 6){ # if you send only one code number, then proceed with only that one currentCodes = substr(tagName,2,5) } else if (nchar(tagName==6)){ # if you send one symbol number, then proceed with both relevant codes # ...pick the right codes for the current tag for (i in 1:length(originalTagNames)){ if (tagName == originalTagNames[i]){ currentCodes = tagCodes[[i]] } } } else { # if the tagName doesn't fit the pattern print the error flag. print("Tag name is wrong. Search for 'Error 1' ") } # create a pattern for all files for this beacon, tag combination namePatterns = paste("T",currentCodes,beaconName,sep="") # gather all the file names for the tag


245 fileNames1 = list.files(pattern = namePatterns[1], recursive=TRUE, ignore.case=TRUE) fileNames = fileNames1 # if there's a second fileName, add it here if(length(namePatterns) == 2) { fileNames2 = list.files(pattern = namePatterns[2], recursive=TRUE, ignore.case=TRUE) fileNames = c(fileNames1, fileNames2) } # don't worry about picking only whichDays, there is so little data, just # get everything everytime } else if (year==2009){ # ...but in 2009 there's one file per tag # create the filename 'with psr' or 'without psr' if (psr){ fileNames = paste("T",tagID,"B",beaconID," psr.txt", sep="") } else { fileNames = p aste("T",tagID,"B",beaconID," no psr.txt", sep="") } } ### ...then read in data from that file d1 = lapply( as.list (fileNames), read.table, header=FALSE, col.names=columnNames) if (nrow(d1[[1]]) > 0){ # then do the normal thing s, if there are no data, # make a blank answer for ourput d2 = do.call("rbind", d1) # this doesn't ensure chronological order d3=d2 ### some years the SDLs were set to local (EST or EDT) time instead of GMT ### GMT = EDT+4hrs = EST+5hrs. 1hr = 60min 60sec = 3600sec. ### ### Read "Time zones in ALPS 2011Mar26.docx" for my final word on this. ### ### I am confident that the text version of the *.bin files with ### readable dat i show which ever timezone I synchronized the SDL clock to, ### local or GMT. I am also confident that ALPS output files showing ### position solutions (showing unix time) show time as GMT. ### ### So no utime correction required d4=d3 d5=d4 ### To offset or not to offset easting and northing numbers if (offset){ eOffsetTemp = eastingOffset; nOffsetTemp = northingOffset;


    } else {
      eOffsetTemp = 0; nOffsetTemp = 0;
    }
    d5$easting = d4$easting - eOffsetTemp
    d5$northing = d4$northing - nOffsetTemp

    ### Convert from psi to depth in meters. ALPS output is
    ### in psi. The max psi is 19, and the location is 40ft=13m.
    ### Find this worked out in 'convert pressure to depth.xlsx'.
    d5$depth = (34/50)*d4$depth
    # change the negative values to NA...this may be all
    d5$depth[d5$depth < 0] = NA
    d6=d5

    ### Ensure chronological order
    d7 = data.frame( cbind(
      "utime" = d6$utime,
      "easting" = d6$easting,
      "northing" = d6$northing,
      "depth" = d6$depth,
      "cn" = d6$cn,
      "rn" = d6$rn,
      "dop" = d6$dop,
      "hid" = d6$hid,
      "hcount" = d6$hcount,
      "mystery1" = d6$mystery1,
      "mystery2" = d6$mystery2) [order(d6$utime), ]
    )

    # Add datiG in the data.frame...for some reason the cbind step
    # undoes these conversions, so do this now
    d8 = cbind(
      utime = d7$utime,
      datiG = d7$utime,
      datiL = d7$utime,
      subset(d7, select=c("easting", "northing", "depth", "cn", "rn", "dop",
        "hid", "hcount", "mystery1", "mystery2"))
    )
    d8$datiG = as.POSIXlt(d8$datiG, origin="1970-1-1", tz="GMT")
    d8$datiL = as.POSIXlt(d8$datiL, origin="1970-1-1", tz="EST5EDT")

    ### I REPLACED THIS SECTION FROM THE LINES THAT FOLLOW IMMEDIATELY...THIS
    ### IS GOING TO BREAK SOME OTHER CODE SOMEPLACE
    # for fish and beacon tags, only keep positions after and before the start
    # and stop utimes. Sometimes "other" tags were used by divers during


247 # deployments or recovery and so lie outside the official start and stop # times. Don't filter them here, but use their ALPS results with caution # to be sure you only use points obtained while the array was in place. #i f (tagType == "o"){ # do nothing # d9=d8 #} else { # keep positions only after and before the start and stop utimes. # d9 = d8[((d8$utime>startUtime) & (d8$utimestartUtime) & (d8$utime0){print("Some dod valu es are negative and have been changed to 1")} d9$dod[(d9$dod < 1)] = 1 } else if ((year == 2008) | (year == 2009)){ # add a 'day of deployment' column, which counts number of days after tagging tagDay = as.POSIXlt(strptime(taggingDay, "%Y/%B/%d", tz="EST5EDT"), origin="1970 1 1")$yday d9$dod = d9$datiL$yday tagDay + 1 } else { print("Error in importALPSdata. 'year' is wrong") } # add a 'time of day' column a fractional value between 0 and 23.99772 # 3600 sec/hr. 60 sec/min. d9$tod = ((( (d9$datiL$hour*3600) + (d9$datiL$min)*60 + (d9$datiL$sec) )/86400)*24)
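  # tod is therefore the fractional hour of the local day: for 13:30:00 local,
  # ((13*3600 + 30*60 + 0)/86400)*24 = 13.5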


248 # add an 'hour of day' column THIS MIGHT BE COMPLETELY REPLACED BY TOD d9$hod = d9$datiL$hour # add 'lunarPhase' column. 1 = new moon through 16 or 17 = full moon. # See'lunar phases.r' and 'lunar phases.xlsx' for more. # There's a pretty way to do this, but this is much faster if (year == 2007){ temp1 = c(1:30,1:8) d9$lunarIndex = temp1[d9$dod] } else if (year == 2008) { temp1 = c(19:29,1:30,1:30,1:5) d9$lunarIndex = temp1[d9$dod] } else if (year == 2009){ # read in the lunarIndex for each day of 2009 fn1 = "C:/zy/Telemetry/R summary files/lunar phases 2009.csv" lunar2009 = read.table(file=fn1,header=T,sep=",", col.names=c("month","day","doy","lunarIndex"), colClasses=c("character",rep("numeric",3)) ) # now pick the day of each datum in d9 and determine the lunarIndex # luckil y the order of lunar2009 is the same as the order as yday # d9$lunarIndex = lunar2009$lunarIndex[d9$datiL$yday] } } else { # there were no data in d1, so make a blank data set d9 = data.frame( utime=NA, datiG=NA, datiL=NA, east ing=NA, northing=NA, depth=NA, cn=NA, rn=NA, dop=NA, hid=NA, hcount=NA, mystery1=NA, mystery2=NA, dod=NA, tod=NA, hod=NA, lunarIndex=NA) } ### put directory back to what it was setwd(oDir) # return the data (as a data.frame) and other stuff list("data"=d9, "tagName"=tagName, "beaconName"=beaconName, "psr"=psr, "deployment"=deployment) } #### end importData ######################################################## # bob = importALP Sdata(deployment="sb3", tagName="f39", beaconName="b1", psr=TRUE, offset=TRUE) # bob = importALPSdata(deployment="hb2007", tagName="f60200", beaconName="b79200", psr=TRUE, offset=TRUE, whichDays=c(1:3)) #################################################### ################## #########
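######################################################################
## A minimal stand-alone sketch of the unix-time conversion used throughout
## this appendix, e.g. for the startUtime/stopUtime values in the metadata
## section: a "%Y/%B/%d %H:%M:%S" day/time string in GMT becomes seconds
## since 1970-01-01. This helper is for illustration only and assumes an
## English locale for the month names.
dati2utime = function(day, time){ # e.g. day="2009/Oct/13", time="17:30:00"
  unclass(as.POSIXct(strptime(paste(day, time), "%Y/%B/%d %H:%M:%S", tz="GMT"),
    origin="1970-1-1", tz="GMT"))[1]
}
# dati2utime("2009/Oct/13", "17:30:00")
######################################################################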


249 # filter raw ALPS output using depth, cn, rn, dop, hid, or hcount filterALPSdata = function( ### this functions is for filtering raw ALPS output using ALPS output values df1, # the dataframe holding ALPS output from 'impor tALPSdata()' depthF=F, cnF=F, rnF=F, dopF=F, hidF=F, hcountF=F, # T/F filter using these speedF=F, # if filtering on this, specify a speed in m/s minuteMean=F # a switch to compute the mean easting, etc for each minute ){ # settings that change f or each deployment for (i in 1:length(md)){ # i loops through all deployments if (df1$deployment == md[[i]]$deployment){ reefEN = md[[i]]$reefEN } } print(paste("filterALPSdata:",df1$deployment," ",df1$tagName)) d1=df1$data # what to do if there are lines of data in d1 if (nrow(d1) > 0){ # then filter as normal # filter using depth, etc. if (depthF) {d1 = d1[d1$depth < depthF,]} if (cnF) {d1 = d1[d1$cn < cnF,]} if (rnF) {d1 = d1[d1$rn < rnF,]} if (d opF) {d1 = d1[d1$dop < dopF,]} if (hidF) {print("I don't yet know how to filter on 'hid'")} if (hcountF) {d1 = d1[d1$hcount >= hcountF,]} } else { # don't do anything, just make data.frame look right d3=data.frame( utime=d1$utime, datiG=d1$datiG, datiL=d1$datiL, dod=d1$dod, tod=d1$tod, lunarIndex=d1$lunarIndex, easting=d1$easting, northing=d1$northing, depth=d1$depth, dtr=d1$depth, # I don't know how to make the rest empty the real wa y btr=d1$depth, interval = d1$depth, speed = d1$depth, turnAngle = d1$depth, npos = d1$depth )


  }

  # now check again to see if there are any lines of data remaining
  if (nrow(d1) > 3){ # then continue as normal...you need four points for some calculations
    # calculate travel speed and turning angles
    # gather times and positions at the 'last', 'now' and 'next' times,
    # these vectors will be 2 shorter than the basic columns
    lastUtime = head(d1$utime, -2)
    lastEasting = head(d1$easting, -2)
    lastNorthing = head(d1$northing, -2)
    nowUtime = tail(head(d1$utime, -1), -1)
    nowEasting = tail(head(d1$easting, -1), -1)
    nowNorthing = tail(head(d1$northing, -1), -1)
    nextUtime = tail(d1$utime, -2)
    nextEasting = tail(d1$easting, -2)
    nextNorthing = tail(d1$northing, -2)

    # bearing.ta calculates the turning angle between the two steps:
    # (last to now), and (now to next). It also gets the distances between
    # the three points. bearing.ta takes 3 2-column data.frames...
    p1 = data.frame(lastEasting, lastNorthing)
    p2 = data.frame(nowEasting, nowNorthing)
    p3 = data.frame(nextEasting, nextNorthing)
    utimeDiff = c(nowUtime - lastUtime,
      nextUtime[length(nextUtime)] - nowUtime[length(nowUtime)],
      1) # 1 in last place to avoid division by zero
    # run bearing.ta and pick out answers
    temp1 = bearing.ta(p1,p2,p3, as.deg=TRUE, replaceNaN=TRUE)
    turnAngle = c(0,temp1$ta,0)
    speed = c(temp1$dist1, temp1$dist2[length(temp1$dist2)], 0) / utimeDiff

    # calculate the 2-D distance to reef (dtr) of each point
    dtr = sqrt((d1$easting - reefEN$easting)^2 + (d1$northing - reefEN$northing)^2)

    # calculate the 2-D bearing to the reef (btr) of each point
    # ...first find the easting/northing difference between fish and reef
    # ...do reef - fish so that a fish directly N of reef has a bearing to reef of 180deg
    etemp = reefEN$easting - d1$easting
    ntemp = reefEN$northing - d1$northing
    # ...anglefun() can't do two points in exactly the same place so...
    ntemp[etemp==0 & ntemp==0] = 7777777 # make this one directly north, only direction matters here
    # ...now calculate the bearing to fish from reef
    btr = anglefun(etemp, ntemp, bearing=TRUE, as.deg=TRUE)
    # change the class of btr to 'circular' so you can take the circular mean correctly
    btr = circular(btr, units="degrees", zero=pi/2, rotation="clock", modulo="2pi")


    # ... as a note, if you ever want to use these as regular numbers just do...
    # ... attributes(btr) = NULL

    # calculate the time interval between two successive points
    interval = utimeDiff

    # add dtr, speed and turnAngle to the data.frame
    d2 = cbind(
      subset(d1, select=c("utime", "datiG", "datiL", "dod", "tod", "hod",
        "lunarIndex", "easting", "northing", "depth")),
      dtr = dtr,
      btr = btr,
      interval = interval,
      speed = speed,
      turnAngle = turnAngle
    )

    # filter using gag travel speed
    if (is.numeric(speedF)){d2 = d2[speed < speedF,]}
    d3 = d2

    # do you want to use every recorded point or the mean of all points each minute
    if (minuteMean){
      d3$min = cut(d3$datiG, breaks="min")
      d3 = ddply(d3, "min", function(x){
        with(x, data.frame(
          # min=min, # this one gets put in automatically as a factor
          utime=unclass(as.POSIXct(min, origin="1970-1-1", tz="GMT")[1]),
          datiG=7777777, # I can't get this to be GMT instead of EST so leave it for later...as.POSIXct(min, tz="GMT")[1],
          datiL=7777777,
          dod = mean(dod),
          tod = mean(tod),
          hod = mean(hod),
          lunarIndex = mean(lunarIndex),
          easting=mean(easting),
          northing=mean(northing),
          depth=mean(depth),
          dtr=mean(dtr),
          btr=mean.circular(btr),
          interval=mean(interval),
          speed=mean(speed),
          turnAngle=meanAngle(turnAngle),
          npos=nrow(x) # how many positions went into this minute average
        ))} # end function/with/data.frame
      ) # end ddply()
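      # after ddply() each row of d3 is one minute's mean position and covariates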


      # now get rid of the $min column. The last row is/might be NA
      # because of calculating the min averages, so drop it
      d3 = subset(d3, select = -c(min))
      d3 = d3[-nrow(d3),]

      # fix the dati
      d3$datiG = as.POSIXlt(d3$utime, origin="1970-1-1", tz="GMT")
      d3$datiL = as.POSIXlt(d3$utime, origin="1970-1-1", tz="EST5EDT")

      # round some figures
      d3$dtr = round(d3$dtr,2)
      d3$btr = round(d3$btr,0)
      d3$interval = round(d3$interval,0)
      d3$speed = round(d3$speed,5)
      d3$turnAngle = round(d3$turnAngle,2)
    } else { # don't do anything, just make the data.frame look right
      attr(d3$data$btr,"circularp") = NULL
      d3$npos = NA
    } # end if minuteMean

  } else { # else there are no data rows, just make the data.frame look right
    d3=data.frame(
      utime=d1$utime, datiG=d1$datiG, datiL=d1$datiL, dod=d1$dod, tod=d1$tod,
      lunarIndex=d1$lunarIndex, easting=d1$easting, northing=d1$northing,
      depth=d1$depth,
      dtr=d1$depth, # I don't know how to make the rest empty the real way
      btr=d1$depth,
      interval = d1$depth,
      speed = d1$depth,
      turnAngle = d1$depth,
      npos = d1$depth
    )
  }
  d9=d3

  # return the data (as a data.frame) and other stuff
  list( "data"=d9, "tagName"=df1$tagName, "beaconName"=df1$beaconName,
    "psr"=df1$psr, "deployment"=df1$deployment)
} # end 'filterALPS'

# sam = filterALPSdata(bob, cnF=3)
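######################################################################
## A minimal, self-contained sketch of the speed and turning-angle
## calculation done inside filterALPSdata() above. bearing.ta() and
## anglefun() are defined later in this file; this stand-alone version uses
## atan2() directly and is an illustration only, not the function used in
## the analyses.
stepSpeedTurn = function(utime, easting, northing){
  dx = diff(easting); dy = diff(northing); dt = diff(utime)
  stepLen = sqrt(dx^2 + dy^2)   # step length between successive positions (m)
  speed = stepLen / dt          # travel speed over each step (m/s)
  heading = atan2(dy, dx)       # heading of each step (radians)
  ta = diff(heading)            # change in heading between successive steps
  ta = ((ta + pi) %% (2*pi)) - pi   # wrap into [-pi, pi)
  list("speed" = speed, "turnAngle" = ta*180/pi)
}
# example: three positions 60 s apart, forming a 90-degree left turn
# stepSpeedTurn(utime=c(0,60,120), easting=c(0,10,10), northing=c(0,0,10))
######################################################################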


253 ###################################################################### ########### \ # subsample raw ALPS output subsample = function(df1, subSample=1, ...){ # this has been changed and not checked for accuracy ############################################### # if 'subSample' = "kiel" then # do kielFilter(), if 'subSample' = an int eger then it becomes the # subsampling frequency. So subSample=1 is equivalent to taking all points ### apply the kielFilter or simply take a subset if (subSample == "kiel"){d9=kielFilter(df1, ...)} # else take some fraction of hits, but if 24 o r fewer hits per day on average, take all else if ( nrow(df1) <= (24*length(whichDays)) ) {d9=df1} else { ii = seq(1,nrow(df1), by=subSample) d9=df1[ii,] } } ######################### ############################################# ###################################################################### ###################################################################### ### Kiel Filter to filter data like Brian Kiel ################### ################################################### ######## kielFilter = function(df1, ...){ # repeat Kiel's "first hit of the day" sampling # His samples were one hit per day usually between 10am and 2pm # I want to pick out one hit each day betwee n 10:00 and 14:00 # If there are no points between those hours return NA ### set utime to "seconds since 'startTime'" depending on which year if(year==2007) {startTime=startTime2007} else if (year==2008) {startTime=startTime2008} else start Time = 0 seconds = (df1$utime startTime) %% 86400 # seconds since midnight, %%=module divide, gives only the remainder temp1 = cbind(seconds, df1) # %/%=integer divide, gives only the integer # 10:00=36000 sec since midnight; 14:00=50400 cutoffs=c(36000,50400) # take only rows between 10:00 and 14:00 temp2 = subset(temp1, temp1$seconds>cutoffs[1] & temp1$seconds

254 # split the entire dataframe into a list of smaller data frames, one for each day daylist = split(temp3, day) # a function to take one row from a data frame sample1 = function(x){x[sam ple(nrow(x), size=1),]} # apply 'sample1()' to each data frame in 'daylist' temp4 = do.call("rbind", lapply(daylist,sample1)) temp4[,3:6] # take only some columns } ###################################################################### ####### ############################################################### ###################################################################### # calculate position stats, percent of position solutions for a given tag # # This function takes the output of 'importAL PSdata()' and 'filterALPSdata()' # and calculates ther fraction of position solutions of a tag for a given # time interval, for example, every 60 min. # It produces a picture and returns a list of XX items. 1 5 are...6 8 are ... positionStats1 = functio n(positionData, mLines=TRUE){ # 'positionData' is the output of 'importALPSdata()' or 'filterALPSdata()' # which is a list of 4 items: 1 data, 2 tagName, 3 beaconName, 4 deployment # 'mLines' is a switch for plotting mean lines or not tagID = substr(positionData$tagName,2,100) tagType = substr(positionData$tagName,1,1) beaconID = substr(positionData$beaconName,2,100) # settings that change for each deployment for (i in 1:length(md)){ # i loops through all deployments if (position Data$deployment == md[[i]]$deployment){ startUtime = md[[i]]$startUtime stopUtime = md[[i]]$stopUtime } } # some 'time' book keeping totalSec = stopUtime startUtime thirtyMinBins = seq(from=startUtime, to=stopUtime, by=1800) # 30min 60sec sixtyMinBins = seq(from=startUtime, to=stopUtime, by=3600) bins = sixtyMinBins; mins=60; # if you choose a different bin size, fix the sentinel tpi in the lines below bins2 = as.POSIXct(bins, origin="1970 1 1", tz="GMT") if (tag Type == "f"){ tpm = 30 # transmissions per minute for a fish tag


255 tpi = tpm mins # transmission per bin } else if (tagType == "b"){ tpm = 3 # transmissions per minute for a beacon tpi = tpm mins # transmission per bin } else if (tagT ype == "s"){ tpm = 30 # transmissions per minute for the sentinel tpi = tpm 5 2 # transmissions ber bin of 60 minutes } else { print("Please pick a tag type") } # count position solutions per time intervals for 'tag' freqList = data.frame("bin"=bins2[1], "frequency"=0) for (j in 2:length(bins2)){ # j counts time bins temp1 = positionData$data[ ((positionData$data$utime>bins[j 1]) & (positionData$data$utime

256 importADCPdata = function(recalculate=FALSE){ # 'recalculate' is a switch to either import all ADCP data and re calculate # everything (=TRU E) or to simply read in the final useable # version saved as a text file (=FALSE) oDir = getwd() # to import or read in ADCP data if (!recalculate){ # 'getSummary' == TRUE # ask the user for the file to use #filename = getFile( directory = "C:/zy/Data/ADCP summary text files/") #setwd(filename$dir) cDir = setwd(paste(dataDir, adcpDir, sep="")) filename = list.files(pattern="adcpDataSummary") #filename = "adcpDataSummary.csv" adcpData = read.csv(filename, h eader=TRUE, colClasses=c("numeric", "character", "character", rep("numeric", 13)) ) print(paste("Read data from", filename)) # for some reason I can't read the dati in as POSIXct anymore, so convert now adcpData$datiG = as.POSIXlt(s trptime(adcpData$datiG, "%Y %m %d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT") adcpData$datiL = as.POSIXlt(strptime(adcpData$datiL, "%Y %m %d %H:%M:%S", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT") } else { # 'recalculate' == F ALSE ### change to desired ADCP directory cDir = paste(dataDir, "/ADCP Data 2010Sep Final/for R", sep="") setwd(cDir) # get names for all the files filenames = list.files(pattern=".csv") # a list to hold the results adcpData = list() # for each of the adcp files in the folder... for (i in 1:length(filenames)){ # settings that change for each ADCP deployment for (j in 1:length(adcpmd)){ # i loops through all deployments if (substr(filenames[[j]],1,10) == adcpmd[[i]]$deployment){ startUtime = adcpmd[[j]]$startUtime stopUtime = adcpmd[[j]]$stopUtime deployment = adcpmd[[j]]$deployment } } # open the connection rfile = file(filenames[i], open= "rt") # read in the column headers, units, and bin numbers...on 3 lines


257 headers = tolower(as.data.frame(read.csv(rfile, header=FALSE, skip=12, nrows=1, colClasses="character"))) units1 = read.csv(rfile, header=FALSE, nrows=1) binNum = read.csv(rfile, header=FALSE, nrows=1) # construct unique column names colNames=c() for (j in 1:7){colNames[j] = headers[j]} # column 8 name is duplicate and column is unnecessary. col 9 is empty colNames[8] = "junk1"; colNames[9] = "junk2"; for (j in 10:13){colNames[j] = headers[j]} for (j in 14:length(headers)){colNames[j] = paste(headers[j], binNum[j], sep="_")} colNames = unlist(colNames) # if this is the one dataset which coll ected data every second instead of # an average every ten minutes...then calculate a 10 min average and drop # the rest of the data if (filenames[i] == "2009adcp05.csv"){ ### read 600 lines, calculate and keep only the average, 10 min = 600 sec # for holding the 10 minute mean values meanDF = data.frame(matrix(data=NA, nrow=1, ncol=length(colNames))) means = vector(mode="numeric", length=124) # skip one blank line, and just to make the output times pretty, skip # the first data line readLines(rfile, n=2) go=TRUE; # a switch to indicate eof counter = 1 while(go){ # while the file still has lines to read in # read in 600 lines temp1 = read.csv(rfile, header=FALSE, nrows=600, col.names=colNames) # calculate the means of columns 10:133 (133 9 = 124) for (j in 1:124){means[j] = mean(temp1[j+9],na.rm=T)} # save the means with the dati of the last row meanDF[counter,] = c(temp1[ nrow(temp1), 1:9], means) # check to see if you got all the lines if(nrow(temp1)<600){go=FALSE} counter = counter+1 } # end while the file still has lines # save the results and give their names back df1 = meanDF names(df1) = colNames } else { # else the choosen ADCP data file already had data every 10min df1 = read.csv(rfile, header=FALSE, skip=1, col.names=colNames) }


258 print(paste("Rea d data from", filenames[i])) close(rfile) df2 = df1 # Columns are as follows # 1 ensemble number # 2 8 dati; GMT # 9 blank; 10 pitch; 11 roll; 12 temp; 13 depth; # 14 43 average echo amplitude, all bins # 44 73 velocity magnitude, all bins (mm/sec) # 74 103 current direction in 10ths of a degree, all bins # 104 133 percent good 4 # create dati from the separate columns temp1 = paste("200", df2$yr,"/", df2$mo,"/", df2$da, ", df2$hh, ":", df2$mm, ":00", sep="") # convert from chr to utime (GMT) datiVec1 = unclass(as.POSIXct(strptime(temp1, "%Y/%m/%d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT")) # remove the tz attribute from datiVec1 attributes(datiVec1) = NULL # convert to datiG datiVec2 = as.POSIXlt(datiVec1, origin="1970 1 1", tz="GMT") #convert to datiL datiVec3 = as.POSIXlt(datiVec1, origin="1970 1 1", tz="EST5EDT") # See 'Notes on processing ADCP data.docx' for notes on the bin decisions. # Ignore bins 21:30. # lower water layer is bins 1 4. # mid water layer is bins 5 11 # upper layer is 12 20 # ...that is, I want the row means over the columns containing data # for bins 1 4, 5 11, and 12 20 eaaL = rowMeans(df2[,14:17], na.rm=TRUE) eaaM = rowMeans(df2[,18:24], na.rm=TRUE) eaaU = rowMeans(df2[,25:33], na.rm=TRUE) magL = rowMeans(df2[,44:47 ], na.rm=TRUE) magM = rowMeans(df2[,48:54], na.rm=TRUE) magU = rowMeans(df2[,55:63], na.rm=TRUE) dirL = rowMeans(df2[,74:77], na.rm=TRUE) dirM = rowMeans(df2[,78:84], na.rm=TRUE) dirU = rowMeans(df2[,85:93], na.rm=TRUE)


259 # repla ce these dati columns and only keep wanted columns: df3 = cbind("utime"=datiVec1, "datiG"=datiVec2, "datiL"=datiVec3, df2[,c(10:13)], "eaaL" = eaaL, "eaaM" = eaaM, "eaaU"=eaaU, "magL"=magL, "magM"=magM, "magU"=magU, "dirL"=dirL, "dirM"=dirM, "dirU"=dirU ) # now remove the leading and trailing times when ADCP was out of water df4 = df3[(df3$utime > startUtime) & (df3$utime < stopUtime),] df5 = df4 # there are some anomalous times when water depth seems wrong (i.e. less # than 10m) ... 1. remove lines with depth < 10m and 2. find the depth # change from one time to the next. Delete any changes greater than 0.2m df5 = df5[df5$dep > 10 ,] nextTime = c(tail( df5$dep, 1), NA) depChange = abs(nextTime df5$dep) # now to get the index right, move the NA from the tail to the head depChange = c(FALSE, head(depChange, 1)) # now delete rows with depth changes which are too large, >0.2m df5 = df5[!(depChange > 0.2),] # now if the current file is from deployment "2009adcp02", then only # use temperature and depth info. The SDL apparently tipped over almost # immediately. if (deployment == "2009adcp02") { # delete all data but temperature and depth df5$eaaL = NA; df5$eaaM = NA; df5$eaaU = NA; df5$magL = NA; df5$magM = NA; df5$magU = NA; df5$dirL = NA; df5$dirM = NA; df5$dirU = NA; } df6=df5 adcpData[[i]] = df6 } # end i loop over each of the ADCP files # combine all ADCP data into one list adcpData1 = rbind(adcpData[[1]], adcpData[[2]], adcpData[[3]], adcpData[[4]], adcpData[[5]], adcpData[[6]],adcpData[[7]]) adcpData = adc pData1 } # end 'import' == FALSE ### return the oDir setwd(oDir)


260 ### return the data return(adcpData) } ### end importADCPdta() ###################################################################### ####################################### ############################### # adcpData = importADCPdata(recalculate=TRUE) # write.csv(adcpData, "C:/zy/Telemetry/R summary files/adcpDataSummary 2011Jun22.csv", row.names=FALSE) # adcpData = importADCPdata(recalculate=FALSE) ########################## ############################################ ###################################################################### ###################################################################### ### MERGE ALPS OUTPUT, ADCP DATA, AND TIDE DATA ##################### ################################################# # This function imports ALPS output and ADCP output and merges them together # The function name was changed from 'joinSdlAdcp()' mergeAlpsAdcpData = function(alpsData, ...){ # alpsData is a dataframe fro m importALPSdata and/or filterALPSdata # temp1 = importALPSdata(deployment="hb2007", tagName="f60200", beaconName="b79200", psr=FALSE, offset=TRUE, whichDays=c(11:14)) # temp1 = importALPSdata(deployment="hb2", tagName="f32", beaconName="b1", psr=FALSE offset=TRUE, whichDays=c(11:14)) # alpsData = filterALPSdata(temp1, cnF=3) # adcpData is a dataframe from importADCPdata # adcpData = importADCPdata() print(paste("mergeAlpsAdcpData:",alpsData$deployment, ", alpsData$tagName)) d1 = alp sData$data tagType = substr(alpsData$tagName,1,1) # what to do if there are/aren't lines of data in d1 if (nrow(d1) > 0){ # do the normal thing... d2 = importADCPdata(...) # convert magL, magM, magU units to m/s from mm/s d2$magL=d2$magL/1000; d2$magM=d2$magM/1000; d2$magU=d2$magU/1000; tideData = importTideData() deployment = alpsData$deployment # change things for each deployment for (i in 1:length(md)){ # i loops through all deployments if (deployment == md[[i]]$deployment){


261 startUtime = md[[i]]$startUtime stopUtime = md[[i]]$stopUtime } } # only keep ADCP data 30min before and after the SDL array start and stop utimes # in order to be sure I don' t interpolate ADCP date between, say, Jan 2008 # and Oct 2008. However, if this is an 'other' tag, its position times # might lie outside the normal start and stop utimes, so be more lax # in deleting ADCP data if (tagType == "o") { # don' t delete as much ADCP data d3 = d2 } else { # do the normal thing d3 = d2[((d2$utime>startUtime 30*60) & (d2$utime

262 # the result will be a shorter list than d4$utime. Then when I cbind # everything together later things won't match up at the correct time. # To fix this, 1. find out how many p ositions are known before the first ADCP # data, 2. find out how many positions are known after the last ADCP data. # 3. use na.approx to interpret ADCP data, resulting in a vector potentially # shorter than d4$utime, 4. to each of the interpol ated ADCP data columns, # add the right number of NAs at heads and tails. # count fish positions before the first ADCP data go = TRUE; headCount = 1; while (go){ if (is.na(d4$dep[headCount])){ headCount = headCount + 1 } else { firstADCPutime = d4$utime[headCount] headNAs = headCount 1 go = FALSE } } # end while loop # count fish positions after last ADCP data go = TRUE; tailCount = 0; while (go){ if ( is.na( d4$dep[nrow(d4) tailCount]) ){ tailCount = tailCount + 1 } else { #lastADCPutime = d4$utime[tailCount] # I believe this should be d4$utime[nrow(d4) tailCount] but this never gets used ??? tailNAs = tailCount go = FALSE } } # end while loop # for a check, count some things interpTimes = length(d4$utime) headNAs tailNAs # If there are at least two ADCP data lines then interpolate, otherwise ... # now interpolate ADCP data for all positions in interpTimes tempTem = c(rep(NA,headNAs),na.approx(object=d4$tem, x=d4$utime),rep(NA,tailNAs)) # AS OF 2011 JUNE 24 I'VE REALIZED THIS CHECK IS WRONG BUT LUCKILY IT WAS # STILL GIVING THE RIGHT ANSWER...


263 # a check : length(tempTem+headNAs+tailNAs) should = length(d4$utime) #if (length(tempTem+headNAs+tailNAs) != length(d4$utime)){ # print("Failed check during interpolation in 'mergeAlpsAdcpData()'")} # REPLACE IT WITH THE CORRECT CHECK... if (length (tempTem) != length(d4$utime)){ print("Failed check during interpolation in 'mergeAlpsAdcpData()'")} tempDep = c(rep(NA,headNAs),na.approx(object=d4$dep, x=d4$utime),rep(NA,tailNAs)) tempEaaL = c(rep(NA,headNAs),na.approx(object=d4$ea aL, x=d4$utime),rep(NA,tailNAs)) tempEaaM = c(rep(NA,headNAs),na.approx(object=d4$eaaM, x=d4$utime),rep(NA,tailNAs)) tempEaaU = c(rep(NA,headNAs),na.approx(object=d4$eaaU, x=d4$utime),rep(NA,tailNAs)) tempMagL = c(rep(NA,headNAs),na.approx(obje ct=d4$magL, x=d4$utime),rep(NA,tailNAs)) tempMagM = c(rep(NA,headNAs),na.approx(object=d4$magM, x=d4$utime),rep(NA,tailNAs)) tempMagU = c(rep(NA,headNAs),na.approx(object=d4$magU, x=d4$utime),rep(NA,tailNAs)) tempDirL = c(rep(NA,headNAs),na.app rox(object=d4$dirL, x=d4$utime),rep(NA,tailNAs)) tempDirM = c(rep(NA,headNAs),na.approx(object=d4$dirM, x=d4$utime),rep(NA,tailNAs)) tempDirU = c(rep(NA,headNAs),na.approx(object=d4$dirU, x=d4$utime),rep(NA,tailNAs)) d5 = cbind( subset (d4, select = c(utime, dod, tod, hod, lunarIndex, easting, northing, depth, dtr, btr, interval, speed, turnAngle, npos)), "temperature"=tempTem, "waterDepth"=tempDep, "eaaL"=tempEaaL, "eaaM"=tempEaaM, "eaaU"=tempEaaU, "magL"=t empMagL, "magM"=tempMagM, "magU"=tempMagU, "dirL"=tempDirL, "dirM"=tempDirM, "dirU"=tempDirU ) # check to see if ADCP data is my 7777777 holder and replace them with NA if(mean(d5$eaaL, na.rm=TRUE) == 7777777){ d5[,c("eaaL", "eaaM","eaaU","magL","magM","magU","dirL","dirM","dirU")] = NA } if(mean(d5$temperature, na.rm=TRUE) == 7777777){ d5[,c("temperature","waterDepth")] = NA } # now that interpolation of ADCP data is done, remove lines without fish


264 # position data, only ADCP data d6 = d5[!is.na(d4$easting),] # what is the fish altitude above the seafloor (waterDepth fish depth) d6$altitude = d6$waterDepth d6$depth # its ugly but I want this in a different order d6a = cbind(subset(d6, select=c(utime, dod, tod, hod, lunarIndex, easting, northing, depth, altitude, dtr, btr, interval, speed, turnAngle, npos, temperature, waterDepth, eaaL, eaaM, eaaU, magL, magM, magU, dirL, dirM, dirU))) # me rge tide data with ALPS/ADCP data...again don't bother with the datiG and datiL d7 = merge(d6a, subset(tideData, select=c(utime, tidalHeight)), all=TRUE) ## Tide Data comes every hour and I'm going to interpolate it to every 10 min ## Look to see how well this interpolation will be. It looks great! # ad = d7; # plot(ad$utime, ad$tidalHeight, pch=19, cex=0.5, xlim=c(ad$utime[1], ad$utime[50])) # ?how long am I looking at in this plot? (ad$utime[50] ad$uti me[1])/3600 = 49 hours # interpolate tidal heights at all times when there is ALPS data (every 10 min) d7$tidalHeight = na.approx(object=d7$tidalHeight, x=d7$utime) # now remove rows/utimes when there are no ALPS position data d8 = d7[!is. na(d7$easting),] # fill in missing datiG and datiL, you can't make it POSIXlt inside the # data.frame call so do it after d9 = data.frame( utime = d8$utime, datiG = d8$utime, datiL = d8$utime, subset(d8, select= uti me) ) d9$datiG = as.POSIXlt(d9$datiG, origin="1970 1 1", tz="GMT") d9$datiL = as.POSIXlt(d9$datiL, origin="1970 1 1", tz="EST5EDT") } else { # if there are no lines of data in d1, just add and delete the right # columns to/from $data d2 = data.frame(d1[1:length(d1)], "temperature"=numeric(0), "waterDepth"=numeric(0), "eaaL"=numeric(0), "eaaM"=numeric(0), "eaaU"=numeric(0), "magL"=numeric(0), "magM"=numeric(0), "magU"=numeric(0), "dirL"=numeric(0), dirM"=numeric(0), "dirU"=numeric(0),


265 "tidalHeight"=numeric(0) ) d9=d2 } # return answers list("data"=d9, "tagName"=alpsData$tagName, "beaconName"=alpsData$beaconName, "psr"=alpsData$psr, "deployment"=alpsData$deployment) } # end 'mergeAlpsAdcpData()' ###################################################################### ###################################################################### ################################################################# ##### # This function takes a single (easting, northing) location and a # black and white categorical habitat map as *.jpg. It determines whether # each (E,N) location is in the white/black part of the map. It was originally # worked out in 'chapter 3 part 2.r'. There are # more details there about getting easting/northing correctly referenced to # column/row. # # It is written to be used in conjunction with mergeSonarData() below. findHabType = function( e, n, # easting and northing location o f fish, easting and northing offset applied reference, # which point to use for relating easting/northing to row/column # IF41, IF42, IF43, OH41, OS43, OF43 # OLD ... "blue box", "if41", "if42", "other" for when I use only a piece of full hab m ap # erange = NA, # OLD DON'T USE# easting range to use if 'reference' is 'other'. i.e. =c(0,50) # nrange = NA, # OLD DON'T USE # northing range to use if 'reference' is 'other' habmap = NA, # categorical habitat map matrix, should be 3 dimensions, a full 3 D jpeg. crosshairs = FALSE, show = FALSE, # draw something on a plot pixels = FALSE, # include the pixel location in the output? dotCol="green", # what color do you want the dot to be dotShape = 19, dotSize = 1 ) { # a check if the po int is off the image map outOfBounds=FALSE # THIS REFERENCING SYSTEM IS OLD AND ONLY WORKS FOR IF41_IF42. # ## all 4 blue box corners give the same answer # if (reference == "blue box"){


266 # emin=244982 eastingOffset; emax=245506 eastingOffset; # easting of the left/western most column # nmin=3262241 northingOffset; nmax=3262556 northingOffset; # northing of the top/northern most row # } else if (reference == "if41"){ # emin=244994 eastingOffset; emax=245518 eastingOffset; # nmin=32622 35 northingOffset; nmax=3262550 northingOffset; # } else if (reference == "if42"){ # emin=244984 eastingOffset; emax=245508 eastingOffset; # nmin=3262237 northingOffset; nmax=3262552 northingOffset; # } else if(reference == "other"){ # emin=e range[1] eastingOffset; emax=erange[2] eastingOffset; # nmin=nrange[1] northingOffset; nmax=nrange[2] northingOffset; # } else print("Please specify a reference: blue box, if41, if42.") # pick the correct deployment for the given habitat map... # ...there are four maps, two of them have two reefs on them # # These are the choice of habitat maps, make sure you've choosen a reef actually # ... on this habmap # IF41_IF42_lines_aligned_HBandSB_bluebox.jpg # IF43_lines_aligned_HB_SB_bluebox.j pg # OH41_OS43_lines_SBonly_bluebox.jpg # OF43_lines_SBonly_bluebox.jpg # get the somar image to use if(is.na(habmap)){ #otherwise use the habmap passed to findHabType() library(rimage) if((reference == "if41") | (reference == "if42")){ rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg" } else if(reference == "if43"){ rfile = "C:/zy/Telemetry/R summary files/IF43_lines_aligned_HB_SB_bluebox.jpg" } else if((reference == "oh41") | (refe rence == "os43")){ rfile = "C:/zy/Telemetry/R summary files/OH41_OS43_lines_SBonly_bluebox.jpg" } else if(reference == "of43"){ rfile = "C:/zy/Telemetry/R summary files/OF43_lines_SBonly_bluebox.jpg" } else {print("You're not using a standard reference and habitat map.") } # now read in the correct habitat map. This should be a 3 D jpeg habmap = round(read.jpeg(rfile)) print("Reading in habitat map") } # what are the dimensions of habmap ne = dim(habmap)[2] # number of columns


267 nn = dim(habmap)[1] # number of rows # we need to calculate the easting and northing edges of the image, see # 'habitat map procedures.xlsx' abd 'defining IF41 IF42 imagery.xlsx' # for more on this. # ...but in short, get the reefEN and make sure it's not offset, count the # ...number of pixels from it to the edges, and divide by 10, add/subtract # ...that number of meters. # First pick the right md[[i]] for each reef so you can get the reefEN # gimpCol and gimpRow are the (column, row) location of the reef in GIMP if (reference == "if41") { dn = 1; gimpCol = 1805; gimpRow = 1590; } else if(reference == "if42") { dn = 8; gimpCol = 3345; gimpRow = 1455; } else if(refer ence == "if43") { dn = 3; gimpCol = 2515; gimpRow = 1387; } else if(reference == "oh41") { dn = 5; gimpCol = 3232; gimpRow = 2288; } else if(reference == "os43") { dn = 7; gimpCol = 2413; gimpRow = 1472; } else if(reference == "of43") { dn = 4; gimp Col = 1966; gimpRow = 1473; } else {print("You're not using a standard reference and habitat map.") } reefEasting = md[[dn]]$reefEN$easting + eastingOffset reefNorthing = md[[dn]]$reefEN$northing + northingOffset # now calculate UTM at image edge s # ...the 1 is because GIMP starts at 0 and R starts at 1 emin = (reefEasting (gimpCol 0)/10) eastingOffset emax = (reefEasting + (ne 1 gimpCol)/10) eastingOffset nmin = (reefNorthing (nn 1 gimpRow)/10) northingOffset nmax = (r eefNorthing + (gimpRow 0)/10) northingOffset # checks if ((e < emin) | (e > emax)){print("e is out of bounds"); outOfBounds=TRUE} if ((n < nmin) | (n > nmax)){print("n is out of bounds"); outOfBounds=TRUE} if(!outOfBounds){ # given the fish is at (e,n) what pixel location is it at eanswer = floor(( ((e emin)/(emax emin)) ne)) 1 nanswer = floor(( ((n nmin)/(nmax nmin)) nn)) 1 # the term '(e emin)/(emax emin)' produces a fraction of how far across the # easting direc tion the fish is. Multiply that by the number of columns, ne, # and you get which column is occupied. This needs careful checking to see if # I should use round/floor/ceiling. Note that GIMP row/column counts start # at 0, while R starts th em at 1. # # If the fish position is ever outside the range of the image, this code will # need more work. # determine the habitat type. Apparently for plotting, (0,0) is the bottom


268 # left of the figure, but for referencing the unde rlying array(jpg) (0,0) # is the top left, so when picking the habitat type, count from the top down # but for drawing dots, count from the bottom up. if(habmap[nn nanswer,eanswer,2] == 1){habType = "white" } else if (habmap[nn nanswer,e answer,2] == 0){habType="black" } else {habType="unknown"} if(show){ # this only works if is the map is already showing #if (habType=="white"){dotCol="blue"} else if (habType=="black"){dotCol="red" # } else {dotCol="yellow"} if(crosshairs){abline(h=nanswer, v=eanswer,col="green",lwd=1)} points(eanswer, nanswer, col=dotCol, pch=dotShape, cex=dotSize) } } else { # it is outOfBounds habType = NA outOfBounds = FALSE } # end if(outOfBou nds) # given what pixel location the fish is at, what habitat type is it occupying # 1 = white = sand, 0 = black = HB if (pixels){return(list("pixelEN"=c(eanswer, nanswer), "habType" = habType)) } else {return(habType)} } # end findHabType #library(rimage) #rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_2.jpg" #i1 = round(read.jpeg(rfile)) #par(mar=c(0.2,0.2,0.2,0.2)) #plot(i1) #box("plot", col="red") ### IF41 is at (245174.8 E, 3262391 N) #findHabType(e=245174.8 e astingOffset, n=3262391 northingOffset, # reference="if41", show=TRUE, crosshairs=FALSE) # #for (i in 1:100){print( # findHabType(e=tagfm[[1]]$data$easting[i], n=tagfm[[1]]$data$northing[i], # reference="if41", show=TRUE, crosshairs=TRUE))} #findH abType(e=245174.8 eastingOffset, n=3262391 northingOffset, # reference="if41", show=TRUE, crosshairs=FALSE) #
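# The following is a minimal, self-contained sketch (not from the original
# script) of the coordinate-to-pixel lookup that findHabType() performs above.
# All numbers are made up for illustration; the GIMP-versus-R index offset and
# the out-of-bounds checks in the real function are omitted.
# ne = 524; nn = 315                              # hypothetical image size (columns, rows)
# emin = 0; emax = ne/10                          # hypothetical easting extent, at 10 pixels per meter
# nmin = 0; nmax = nn/10                          # hypothetical northing extent
# e = 10.3; n = 20.1                              # hypothetical fish position
# eCol = floor(((e - emin)/(emax - emin)) * ne)   # fraction across the image times the column count
# nRow = floor(((n - nmin)/(nmax - nmin)) * nn)   # same idea for rows, counted from the bottom
# # habmap[nn - nRow, eCol, 2] then gives the habitat value: 1 = white = sand, 0 = black = hard bottom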


269 #library(rimage) #rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg" #rfile = "C:/zy/Telemetry/R summary fi les/IF43_lines_aligned_HB_SB_bluebox.jpg" #rfile = "C:/zy/Telemetry/R summary files/OH41_OS43_lines_SBonly_bluebox.jpg" #rfile = "C:/zy/Telemetry/R summary files/OF43_lines_SBonly_bluebox.jpg" # #i1 = round(read.jpeg(rfile)) #plot.imagematrix(i1, useRas ter=T) # ## if41 --> e=245174.8 eastingOffset, n=3262391 northingOffset, ## if42 --> e=245318.1 eastingOffset, n=3262407 northingOffset, ## if43 --> e=245478 eastingOffset, n=3262128 northingOffset, ## oh41 --> e=237034.9 eastingOffset, n=3263760 north ingOffset, ## os43 --> e=236946.8 eastingOffset, n=3263839 northingOffset, ## of43 --> e=237897.1 eastingOffset, n=3263128 northingOffset, # #findHabType( e=237897.1 eastingOffset, n=3263128 northingOffset, habmap=i1, # reference="of43", show=TRUE, cros shairs=TRUE) #findHabType(e=245350 eastingOffset, n=3262100 northingOffset, # reference="if43", show=TRUE, crosshairs=FALSE) ###################################################################### ####################################################### ############### ###################################################################### # This function takes a data.frame with easting, northing columns and # applies findHabType() to each one. There's probably a faster vectorized # way to do this. # # So far this has only been tested with one image... # rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_2.jpg" mergeSonarData = function( alpsData, # typically this is one of tagfm habmap, # categorical habitat map matrix, sho uld be 3 dimensions, a full 3 D jpeg. reference # which point to use for relating easting/northing to row/column # "blue box", "if41", "if42"... ) { d1 = alpsData$data d1$habType = NA for (i in 1:nrow(d1)){ d1$habType[i] = findHabType(e=d1$easting[i], n=d1$northing[i], habmap=habmap, reference=reference)


270 } # return answers list("data"=d1, "tagName"=alpsData$tagName, "beaconName"=alpsData$beaconName, "psr"=alpsData$psr, "deployment"=alpsData$deployment) } # end mergeSonarData ###################################################################### ###################################################################### ##################################################### ################# ### in this function, supply the filename, and the start and stop times marking ### the GPS locations to extract importGPSdata = function( fn, # filename of Garmin output file startTime, # start time in local time of GPS buoy at good position stopTime, # filter = 100, # for removing clear errors offset = TRUE, # use the global offset or not? plotThem = TRUE # show the plot or not ){ # fn="42 E GPS track.txt"; day=23; startTime="12:33:44"; stopTime="14:26:00"; filter=100 ; # 125m # fn="44 W GPS track.txt"; startTime="12:42:30"; stopTime="14:32:00"; filter=10; # 125m # fn="41 N GPS track.txt"; day=7; startTime="10:29:35"; stopTime="13:23:00"; # 150m # fn="43 S GPS track.txt"; day=7; startTime="11:07:00"; stopTime=" 13:56:00"; # 150m ### names of columns to be read in columnNames = c("type", "ident", "lat", "long", "y_proj", "x_proj", "new_seq", "display", "color", "altitude", "depth", "temp", "time", "model", "filename", "ltime") columnClasses = c('character','character','numeric','numeric','numeric', 'numeric','character','character','integer','numeric','integer','integer', 'character','character','character','character') # read in data temp1 = read.table(fn, header=TRUE, sep=",", col.names=columnNames, colClasses = columnClasses ) # extract lat/long and convert to UTM, UTM increases northward and eastward temp2 = data.frame(X = temp1$lon, Y = temp1$lat) attr(temp2, "zone") < 17 attr(temp2, "projection") < "LL


271 temp3 = convUL(temp2,km=FALSE) #X is easting in m, Y is northing in m # add date and time, to offset or not to offset? if (offset) { temp4 = data.frame( dati = strptime(temp1$ltime, "%Y/%m/%d %H:%M:%S"), northing = temp3$Y nort hingOffset, easting = temp3$X eastingOffset) } else { temp4 = data.frame( dati = strptime(temp1$ltime, "%Y/%m/%d %H:%M:%S"), northing = temp3$Y, easting = temp3$X) } # filter for only the desired points # for the 125m spacing t rial the date is 23 April 2009, so only take points from that day # for the 150m and 100m trials, the date is 8 May 2009, so... temp5 = temp4 # take only points during the choosen period, (startTime, stopTime) temp6 = chron(times=format(temp5$dati, format="%H:%M:%S")) temp7 = temp5[((temp6 > startTime) & (temp6 < stopTime)),] # filter out any obvious outliers, this might have to be done carefully each time temp8 = temp7[abs(temp7$easting mean(temp7$easting))

272 #bob=importGPSdata... fn="2009Oct12 GPS data.txt"; startTime = "10:39:45"; stopTime="10:53:00"; filter=100; offset=FALSE; plotThem=TRUE; ################################################### ################### ###################################################################### ###################################################################### ### IMPORT tide Data ###################################################################### ### ##### importTideData = function(){ ### record current directory and change to desired directory oDir = getwd() cDir = paste(dataDir, "/Environmental data 2011Apr Final/Tide data/", sep="") setwd(cDir) # get names for all the files filename s = list.files(pattern="tide") # read in all the data files df1 = list() for (i in 1:length(filenames)){ # for each of the files in the folder... rfile = file(filenames[i], open="rt") # open the connection df1[[i]] = read.table(rfile, skip=1 3, fill=TRUE, col.names=c("station", "date", "time", "predicted", "tidalHeight")) print(paste("Read data from", filenames[i])) close(rfile) } # combine all lists into one df df2 = rbind(df1[[1]], df1[[2]], df1[[3]]) # compare predicted with actual height #plot(tideData$predicted, tideData$height, pch=19, cex=0.5) # create dati from the separate columns, the times are already GMT temp1 = paste(df2$date, df2$time, sep=" ") # convert from chr to utime (GMT) datiVec 1 = unclass(as.POSIXct(strptime(temp1, "%Y%m%d %H:%M", tz="GMT"), origin="1970 1 1", tz="GMT")) # remove the tz attribute from datiVec1 attributes(datiVec1) = NULL # convert to datiG datiVec2 = as.POSIXlt(datiVec1, origin="1970 1 1", tz="GMT") #convert to datiL datiVec3 = as.POSIXlt(datiVec1, origin="1970 1 1", tz="EST5EDT") # replace missing 'height' with 'predicted'


273 df3 = df2 df3[is.na(df3$tidalHeight),]$tidalHeight = df3[is.na(df3$tidalHeight) ,]$predicted # replace these dati columns and only keep wanted columns: df4 = data.frame("utime"=datiVec1, "datiG"=datiVec2, "datiL"=datiVec3, "tidalHeight"=df3[,5]) # set the directory back setwd(oDir) return(df4) } # end importTi deData # tideData = importTideData() ###################################################################### ###################################################################### ###################################################################### ### I MPORT sun/moon rise/set Data ###################################################################### ######## importSunData = function(){ ### record current directory and change to desired directory oDir = getwd() cDir = paste(dataDir, "/DATA/Env ironmental data 2010Aug Final/sun moon rise set", sep="") setwd(cDir) # get names for all the sun files filenames = list.files(pattern="sun") # read in all the data files Times are all EST, no EDT df1 = list() for (i in 1:length(filenames )){ # for each of the files in the folder... rfile = file(filenames[i], open="rt") # open the connection # construct headers headers = c("day", "jan.r", "jan.s", "feb.r", "feb.s", "mar.r", "mar.s", "apr.r", "apr.s", "may.r", "may.s", "ju n.r", "jun.s", "jul.r", "jul.s", "aug.r", "aug.s", "sep.r", "sep.s", "oct.r", "oct.s", "nov.r", "nov.s", "dec.r", "dec.s") # read table df1[[i]] = read.fwf(rfile, skip=9, fill=TRUE, col.names=headers, n=31, widths=c(4, rep(c(5 ,6),12))) print(paste("Read data from", filenames[i])) close(rfile) # re arrange, there are 31 days in all months, some have NA


274 temp1 = df1[[i]][,1] # the day bymonth = list() for (j in 1:12){ bymonth[[j]] = cbind("mon th"=j, "day"=temp1, "r"=df1[[i]][,j*2], "s"=df1[[i]][,j*2+1]) } # combine into one list rs1 = bymonth[[1]] for (j in 2:12){rs1 = rbind(rs1, bymonth[[j]])} # remove non existant days rs2 = rs1[!is.na(rs1[,3]),] # generat e the date # aarg, make the months and days 2 digits for (j in 1:nrow(rs2)){ if(nchar(rs2[,1][j]) == 1){rs2[,1][j]=paste("0",rs2[,1][j], sep="")} if(nchar(rs2[,2][j]) == 1){rs2[,2][j]=paste("0",rs2[,2][j], sep="")} if(nchar(rs2[,3 ][j]) == 3){rs2[,3][j]=paste("0",rs2[,3][j], sep="")} if(nchar(rs2[,4][j]) == 3){rs2[,4][j]=paste("0",rs2[,4][j], sep="")} } # # create dati, the times are always EST, no EDT. aarg again year = substr(filenames[i],1,4) rise1 = paste (year, "/", rs2[,1], "/", rs2[,2], ", rs2[,3], sep="") set1 = paste(year, "/", rs2[,1], "/", rs2[,2], ", rs2[,4], sep="") # convert from chr to utime (GMT) datiVec1r = unclass(as.POSIXct(strptime(rise1, "%Y/%m/%d %H%M", tz="EST"), origin="1970 1 1", tz="EST")) datiVec1s = unclass(as.POSIXct(strptime(set1, "%Y/%m/%d %H%M", tz="EST"), origin="1970 1 1", tz="EST")) # remove the tz attribute from datiVec1 attributes(datiVec1r) = NULL attributes(datiVec1s) = NU LL # convert to datiG datiVec2r = as.POSIXlt(datiVec1r, origin="1970 1 1", tz="GMT") datiVec2s = as.POSIXlt(datiVec1s, origin="1970 1 1", tz="GMT") #convert to datiL datiVec3r = as.POSIXlt(datiVec1r, origin="1970 1 1", tz="EST5EDT") datiVec3s = as.POSIXlt(datiVec1s, origin="1970 1 1", tz="EST5EDT") # a check #datiVec3r[1]; as.POSIXct(datiVec3r[1]); unclass(as.POSIXct(datiVec3r[1])); #datiVec1r[1]; # make new df with desire d data #rs3 = data.frame("dateL"=format(datiVec3r,format="%Y/%m/%d"), # "rUtime"=datiVec1r, "rDatiG"=datiVec2r, "rDatiL"=datiVec3r,


275 # "sUtime"=datiVec1s, "sDatiG"=datiVec2s, "sDatiL"=datiVec3s) # I'd rather have it like this... r s3 = data.frame("dateL"=format(datiVec3r,format="%Y/%m/%d"), "rUtime"=datiVec1r, "rGtime"=format(datiVec2r, format="%H:%M"), # GMT hr:min "rLtime"=format(datiVec3r, format="%H:%M"), # EST5EDT hr:min "sUtime"=datiVec1s, "sGt ime"=format(datiVec2s, format="%H:%M"), # GMT hr:min "sLtime"=format(datiVec3s, format="%H:%M") # EST5EDT hr:min ) # save the result df1[[i]] = rs3 } # end i loop over all files # combine all lists into one df df 2 = rbind(df1[[1]], df1[[2]], df1[[3]]) return(df2) } # end importSunData # sunData = importSunData() ###################################################################### ###################################################################### ### ################################################################### # This function imports the text files produced by WHSReader when converting # individual SDL *.bin files to *.txt files # These are huge files. Read else where for a description of their format. importRawSDLdata = function(){} ###################################################################### ###################################################################### ###################################################################### # This function returns a list of 8 items. 1 5 are data.frames for each of # the SDLs containing the toa data for that SDL. # 6 is the tag number, 7 indicates the deployment name, 8 is a tag/beacon/sentinel # switch. importToaData = function(tagName, de ployment){ # 'tagName' is the tag or beacon number with a leading letter, b1, t13, s79600 # 'deployment' is the experiment designation, i.e. IF43 tagID = substr(tagName,2,100) tagType = substr(tagName,1,1) # settings that change for ea ch deployment


276 for (i in 1:length(md)){ # i loops through all deployments if (deployment == md[[i]]$deployment){ tempDir = md[[i]]$homeDir startUtime = md[[i]]$startUtime stopUtime = md[[i]]$stopUtime } } setwd(paste(tempDi r,"/ALPS 2011Feb14",sep="")) cDir = getwd() # create a pattern for all *.toa files for this tag namePattern = paste("TxId",tagID,".toa",sep="") # gather all the file names for the tag fileNames = list.files(pattern = namePattern, recursive=TRU E, ignore.case=TRUE) # read in data from each file... d1 = lapply( as.list (fileNames), read.table, header=FALSE) # ... and row bind them together d2 = do.call("rbind", d1) # this isn't necessarily in chronological order toaData = d2 # c lose all connections closeAllConnections() # divide into data.frames for each SDL d41 = data.frame("utime"=toaData[,1],"fraction"=toaData[,2],"power"=toaData[,3], "sType"=toaData[,4],"sValue"=toaData[,5]) d42 = data.frame("utime"=toaDat a[,6],"fraction"=toaData[,7],"power"=toaData[,8], "sType"=toaData[,9],"sValue"=toaData[,10]) d43 = data.frame("utime"=toaData[,11],"fraction"=toaData[,12],"power"=toaData[,13], "sType"=toaData[,14],"sValue"=toaData[,15]) d44 = data.frame( "utime"=toaData[,16],"fraction"=toaData[,17],"power"=toaData[,18], "sType"=toaData[,19],"sValue"=toaData[,20]) d45 = data.frame("utime"=toaData[,21],"fraction"=toaData[,22],"power"=toaData[,23], "sType"=toaData[,24],"sValue"=toaData[,25]) # get rid of empty rows d41 = d41[d41$utime>0,] d42 = d42[d42$utime>0,] d43 = d43[d43$utime>0,] d44 = d44[d44$utime>0,] d45 = d45[d45$utime>0,] # get rid of points before the time we finished deploying the array, startUtime d41 = d41[d41$utime>startUtime,] d42 = d42[d42$utime>startUtime,] d43 = d43[d43$utime>startUtime,]


277 d44 = d44[d44$utime>startUtime,] d45 = d45[d45$utime>startUtime,] # get rid of points after the time we began recoveri ng the array, stopUtime d41 = d41[d41$utime

278 stopUtime = md[[i]]$stopUtime } } # some 'time' book keeping totalSec = stopUtime start Utime thirtyMinBins = seq(from=startUtime, to=stopUtime, by=1800) # 30min 60sec sixtyMinBins = seq(from=startUtime, to=stopUtime, by=3600) bins = sixtyMinBins; mins=60; # if you choose a different bin size, fix the sentinel tpi in the lines below bins2 = as.POSIXct(bins, origin="1970 1 1",tz="GMT") if (tagType == "f"){ tpm = 30 # transmissions per minute for a fish tag tpi = tpm mins # transmission per bin } else if (tagType == "b"){ tpm = 3 # transmissions per minute for a beacon tpi = tpm mins # transmission per bin } else if (tagType == "s"){ tpm = 30 # transmissions per minute for the sentinel tpi = tpm 5 2 # transmissions ber bin of 60 minutes } else if (tagType == "c"){ # this is for using codes n ot symbols in hb2008 tpm = 30 # transmissions per minute for the sentinel tpi = tpm 5 2 # transmissions ber bin of 60 minutes } else { print("Please pick a tag type") } # count receptions per time intervals for 'tag' and each 'sdl freqList = list() for (i in 1:5){ # i counts SDLs temp1 = data.frame("bin"=bins2[1], "frequency"=0) # for storing frequencies in each bin for (j in 2:length(bins2)){ # j counts time bins temp2 = toaData[[i]][ # keep only rows within b in[j] ((toaData[[i]]$utime>bins[j 1]) & (toaData[[i]]$utime

279 freqListMean[i] = mean(c(freqList[[1]]$frequency[i], freqList[[2]]$frequency[i], freqList[[3]]$frequency[i], freqList[[4]]$frequency[i], f reqList[[5]]$frequency[i])) } # plot the results plotColors = c("brown", "red", "blue", "green", "yellow") plot(freqList[[1]]$bin, freqList[[1]]$frequency, type="b", cex=1, xlim=c(startUtime, stopUtime), ylim=c(0, 1.1), main=paste("Tag ", tagID, "Detection Frequencies", sep=" "), sub=paste("Bin size:", mins, "min", sep=" "), xlab="Time", ylab="Frequency", col=plotColors[1] ) ## to compare bin sizes # points(freqList[[1]]$bin, freqList[[1]]$frequency, type="l", col="red") points(freqList[[2]]$bin, freqList[[2]]$frequency, type="b", col=plotColors[2]) points(freqList[[3]]$bin, freqList[[3]]$frequency, type="b", col=plotColors[3]) points(freqList[[4]]$bin, freqList[[4]]$frequency, type="b", col=plotColors[4]) points( freqList[[5]]$bin, freqList[[5]]$frequency, type="b", col=plotColors[5]) lines(freqList[[1]]$bin, freqListMean, type="l", col="black", lwd=2) # show the means if (mLines){ abline(h=mean(freqList[[1]]$frequency), col=plotColors[1]) ablin e(h=mean(freqList[[2]]$frequency), col=plotColors[2]) abline(h=mean(freqList[[3]]$frequency), col=plotColors[3]) abline(h=mean(freqList[[4]]$frequency), col=plotColors[4]) abline(h=mean(freqList[[5]]$frequency), col=plotColors[5]) } # return the detection frequencies for 'tag' for each SDL list("sdl41"=freqList[[1]], "sdl42"=freqList[[2]], "sdl43"=freqList[[3]], "sdl44"=freqList[[4]], "sdl45"=freqList[[5]], "tagName"=toaData$tagName, "deployment"=toaData$deployment) } # end of toaStats1 # b1toaStats1 = toaStats1(b1toaData) # b2toaStats = toaStats1(b2toaData) # b79400toaStats = toaStats1(b79400toaData) # b79500toaStats = toaStats1(b79500toaData) # 79600toaStats = toaStats1(s79600toaData) # t11toaStats = toaStats1(t11toaData) # sam = toaStats1(bob)
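# A self-contained sketch (hypothetical numbers, not real data) of the per-bin
# detection frequency that toaStats1() reports for each SDL: count receptions
# in each 60-minute bin and divide by the expected number of transmissions.
# startUtime = 1230000000; stopUtime = startUtime + 2*86400   # hypothetical 2-day window
# utimes = sort(runif(20000, startUtime, stopUtime))          # hypothetical reception times
# bins = seq(from=startUtime, to=stopUtime, by=3600)          # 60-min bins, as in toaStats1()
# counts = as.numeric(table(cut(utimes, breaks=bins)))        # receptions per bin
# tpi = 30 * 60                                               # a fish tag transmits ~30 times per minute
# frequency = counts / tpi                                    # one value per bin, comparable to freqList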


280 ###################################################################### ###################################################################### ###################################################################### ### this code impo rts SDL battery data from all five SDLs at once # These files have 6 header lines, then 4 columns: # date, time, power battery, coin battery importBatteryData = function(deployment){ oDir = getwd() # original directory # fetch deployment specific i nformaiton for (i in 1:length(md)){ if(deployment == md[[i]]$deployment){ # set the directory cDir = paste(md[[i]]$homeDir,"/SDL data",sep="") # current directory year = md[[i]]$year # set the file name fileNameTemp = paste("_",md[[i]]$sdlDownloadDate,"_bat.txt",sep="") } } # end the 'for' and 'if' loop setwd(cDir) # names of columns in battery log files columnNames = c("date","time","powerBat","coinBat") # read in data from all five SDLs batLog1=list(); batLog2=list(); sdlNames = 41:45 for (i in 1:5){ # 5 SDLs fileName = paste("SN2650",sdlNames[i],fileNameTemp,sep="") batLog1[[i]] = read.table(fileName, skip=7, col.names=columnNames, colClasses = c('character', 'character', 'numeric', 'numeric') ) # convert to unix time temp1 = paste(batLog1[[i]]$date, batLog1[[i]]$time) batLog2[[i]] = list( dati = strptime(temp1, "%m/%d/%y %H:%M:%S", tz="GMT"), utime = u nclass(as.numeric((strptime(temp1, "%m/%d/%y %H:%M:%S", tz="GMT")))), powerBat = batLog1[[i]]$powerBat, coinBat = batLog1[[i]]$coinBat ) } # end 'for' loop over 5 SDLs # some plotting things minV = min(batLog2[[1]]$powerBat, ba tLog2[[2]]$powerBat, batLog2[[3]]$powerBat,


281 batLog2[[4]]$powerBat, batLog2[[5]]$powerBat) maxV = max(batLog2[[1]]$powerBat, batLog2[[2]]$powerBat, batLog2[[3]]$powerBat, batLog2[[4]]$powerBat, batLog2[[5]]$powerBat) minTime = min(batLog2[[1]]$u time, batLog2[[2]]$utime, batLog2[[3]]$utime, batLog2[[4]]$utime, batLog2[[5]]$utime) maxTime = max(batLog2[[1]]$utime, batLog2[[2]]$utime, batLog2[[3]]$utime, batLog2[[4]]$utime, batLog2[[5]]$utime) # draw the plot plot(x=batLog2[[1]]$dati, y=batLog2[[1]]$powerBat, type="l", col=plotColors[1], xlab="Date (GMT)", ylab="Voltage (V)", xlim=c(minTime, maxTime), ylim=c(minV 0.1, maxV+0.1), main=paste(year,deployment,"SDL Battery Voltages", sep=" ") ) for (i in 2:5){ po ints(x=batLog2[[i]]$dati, y=batLog2[[i]]$powerBat, type="l", col=plotColors[i]) } leg.txt = c("SDL41", "SDL42", "SDL43", "SDL44", "SDL45") legend("topright", leg.txt, text.col=plotColors) # reset the directory setwd(oDir) } # end importBatt eryData() # importBatteryData(deployment="oh41", year=2009) ###################################################################### ###################################################################### ##################################################### ################# ### Read in all the biometric data about the fish. This data was collected ### during tagging, during collections, from otoliths, and Deb's otolith ### results. importBiometricData = function(){ # locate the right file #f ileName = file.choose() fileName = "C:/zy/Telemetry/R summary files/all fish tagging data 2011June08.csv" # column names columnNames = c("year1", "month1", "day1", "reefID1", "HBSB", "replicate", "collectionMethod", "weight1", "girth1", "TL1" "FL1", "sizeRange", "tagged", "tagID", "knockoutStart", "taggingStart", "recoveryStart", "release", "leftColor", "rightColor", "notes", "mmNumber", "recoveredTagID", "year2", "month2", "day2", "reefID2", "depth", "gear", "TL2", "FL2",


282 "w eight2", "girth2", "lOto", "lOtoWeightRecorded", "lOtoWeightUseable", "lOtoLengthA", "lOtoLengthACalip", "lOtoLengthAUseable", "lOtoLengthB1", "lOtoLengthB2", "lOtoLengthBCalip", "lOtoLengthBUseable", "lOtoLengthC1", "lOtoLengthC 2", "lOtoLengthC3", "lOtoLengthC4", "lOtoLengthCCalip", "lOtoLengthCUseable", "rOto", "rOtoWeightRecorded", "rOtoWeightUseable", "rOtoLengthA", "rOtoLengthACalip", "rOtoLengthAUseable", "rOtoLengthB1", "rOtoLengthB2", "rOtoLengthBCal ip", "rOtoLengthBUseable", "rOtoLengthC1", "rOtoLengthC2", "rOtoLengthC3", "rOtoLengthC4", "rOtoLengthCCalip", "rOtoLengthCUseable", "comments", # from Deb's otolith work "mmNumberD", "monthD", "debAgeclassCorrected" # this is the one to use, the other "resolvedAgeclass" is wrong "debAnnuli", "debGrowth", "debAgeclass", "geoffAnnuli", "geoffGrowth", "geoffAgeClass", "difference", "resolvedAgeclass", "otoRadius", "ultimateAnnulus", "penultimateAnnulus", "growthIncrement", "notes2" ) # column units columnUnits = c("NA", "NA", "NA", "NA", "NA", "NA", "NA", "kg", "mm", "mm", "mm", "10cm", "Y/N", "tag", "time", "time", "time", "time", "NA", "NA", "NA", "NA", "NA", "NA", "NA", NA", "NA", "ft", "NA", "mm", "mm", "g", "mm", "YBPN", "mg", "mg", "um", "mm", "um", "um", "um", "mm", "um", "um", "um", "um", "um", "mm", "um", "YBPN", "mg", "mg", "um", "mm", "um", "um", "um", "mm", "u m", "um", "um", "um", "um", "mm", "um", "NA",


283 # from Deb's otolith work "NA", "NA", "NA", "NA", "NA", "NA", "NA", "NA", "NA", "NA", "NA", "um", "um", "um", "um", "NA" ) # column classes columnClasse s = c("factor", "factor", "factor", "factor", "factor", "factor", "factor", "numeric", "numeric", "numeric", "numeric", "factor", "factor", "factor", "character", "character", "character", "character", "factor", "factor", "character", "charac ter", "factor", "factor", "factor", "factor", "factor", "numeric", "factor", "numeric", "numeric", "numeric", "numeric", "factor", "character", "numeric", "character", "character", "numeric", "character", "character", "character", "numeric", "character", "character", "character", "character", "character", "numeric", "factor", "character", "numeric", "character", "character", "numeric", "character", "character", "character", "numeric", "characte r", "character", "character", "character", "character", "numeric", "character", # from Deb's otolith work "factor", "factor", "factor", "numeric", "factor", "factor", "numeric", "factor", "factor", "numeric", "factor", "numeric", "numeric", "numeric", "numeric", "character" ) # before reading in this fileName, you have to get rid of the extra # commas Excel puts in. You only have to do this once when you change # the *.csv file. I'll use the find a nd replace in Word. biometrics = read.table(fileName, sep=",", col.names=columnNames, colClasses=columnClasses) return(biometrics) }
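# Hypothetical usage sketch, following the pattern of the other import
# functions in this file (the *.csv path is hard-coded inside the function):
# biometrics = importBiometricData()
# str(biometrics)                               # confirm the column classes came through as intended
# table(biometrics$HBSB, biometrics$tagged)     # e.g., counts of tagged fish by landscape type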


28 4 ###################################################################### ########## ############################################################ ###################################################################### ### this code draws a 3d compass circles3d < function(x, y, z, r, ...){ # draw the circle ang = seq(0, 2*pi, length=512 ) xx = x + r cos(ang) yy = y + r sin(ang) zz = z points3d(xx, yy, zz, size=5, ...) # draw the compass notches segments3d(c(0.8*r*cos(0),r*cos(0))+x,c(0.8*r*sin(0),r*sin(0))+y,c(0,0)+z,size=5) segments3d(c(0.8*r*cos(pi/2),r*cos(pi/2) )+x,c(0.8*r*sin(pi/2),r*sin(pi/2))+y,c(0,0)+z,size= 5) segments3d(c(0.8*r*cos(pi),r*cos(pi))+x,c(0.8*r*sin(pi),r*sin(pi))+y,c(0,0)+z,size=5) segments3d(c(0.8*r*cos(3*pi/2),r*cos(3*pi/2))+x,c(0.8*r*sin(3*pi/2),r*sin(3*pi/2))+y,c(0,0) +z,size=5) # draw the labels t2 = 1.3 distancesX = c(r*cos(0),r*cos(pi/2),r*cos(pi),r*cos(3*pi/2))*t2+x distancesY = c(r*sin(0),r*sin(pi/2),r*sin(pi),r*sin(3*pi/2))*t2+y distancesZ = c(zz, zz, zz) labs = c("E", "N", "W", "S") text3d(distancesX ,distancesY,distancesZ,labs) } # end circles3d() ###################################################################### ###################################################################### ########################################### ########################### ### this code draws a 3d clock clocks3d < function(x, y, z, r, ...){ # draw the circle ang = seq(0, 2*pi, length=512) t1 = pi/12 # offset because the clock is slanted onto the xy plane hour = head(seq(0, 2*pi, length=25 ), 24)+t1 xx = x + r cos(ang) yy = y + r sin(ang) zz = z points3d(xx, yy, zz, size=2)#, ...)


285 #################################################################### instead of using hour 5, 11, 17, etc, get the offset right # draw the hour notches # segments3d(c(0.8*cos(hour[1]),cos(hour[1]))*r+x,c(0.8*sin(hour[1]),sin(hour[1]))*r+y,c(0, 0)+z,size=5, col="red") # segments3d(c(0.8*cos(hour[2]),cos(hour[2]))*r+x,c(0.8*sin(hour[2]),sin(hour[2]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*c os(hour[3]),cos(hour[3]))*r+x,c(0.8*sin(hour[3]),sin(hour[3]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*cos(hour[4]),cos(hour[4]))*r+x,c(0.8*sin(hour[4]),sin(hour[4]))*r+y,c(0, 0)+z,size=5) segments3d(c(0.8*cos(hour[5]),cos(hour[5]))*r+x,c(0.8*sin(h our[5]),sin(hour[5]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*cos(hour[6]),cos(hour[6]))*r+x,c(0.8*sin(hour[6]),sin(hour[6]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*cos(hour[7]),cos(hour[7]))*r+x,c(0.8*sin(hour[7]),sin(hour[7]))*r+y,c(0, 0)+z,size= 5) # segments3d(c(0.8*cos(hour[8]),cos(hour[8]))*r+x,c(0.8*sin(hour[8]),sin(hour[8]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*cos(hour[9]),cos(hour[9]))*r+x,c(0.8*sin(hour[9]),sin(hour[9]))*r+y,c(0, 0)+z,size=5) # segments3d(c(0.8*cos(hour[10]), cos(hour[10]))*r+x,c(0.8*sin(hour[10]),sin(hour[10]))*r+ y,c(0,0)+z,size=5) # segments3d(c(0.8*cos(hour[11]),cos(hour[11]))*r+x,c(0.8*sin(hour[11]),sin(hour[11]))*r+ y,c(0,0)+z,size=5) segments3d(c(0.8*cos(hour[11]),cos(hour[11]))*r+x,c(0.8*sin(hour [11]),sin(hour[11]))*r+ y,c(0,0)+z,size=5) segments3d(c(0.8*cos(hour[17]),cos(hour[17]))*r+x,c(0.8*sin(hour[17]),sin(hour[17]))*r+ y,c(0,0)+z,size=5) segments3d(c(0.8*cos(hour[23]),cos(hour[23]))*r+x,c(0.8*sin(hour[23]),sin(hour[23]))*r+ y,c(0,0)+z, size=5)


  # draw the labels
  t2 = 1.39
  hourTicks = c(hour[6], hour[12], hour[18], hour[24])
  distancesX = cos(hourTicks)*r*t2+x
  distancesY = sin(hourTicks)*r*t2+y
  distancesZ = rep(zz,12)
  labs = c("24:00","18:00","12:00","6:00")
  text3d(distancesX, distancesY, distancesZ, labs)
} # end clocks3d()

######################################################################
######################################################################
######################################################################
# These two functions calculate turning angle in the right way...
### I got this from here:
### http://quantitative-ecology.blogspot.com/2007/05/anglefun-function-xxyy-bearing-true-as.html
### and here:
### http://quantitative-ecology.blogspot.com/
anglefun <- function(xx, yy, bearing=TRUE, as.deg=FALSE){
  ## calculates the compass bearing of the line between two points
  ## xx and yy are the differences in x and y coordinates between two points
  ## Options:
  ## bearing = FALSE returns +/- pi instead of 0:2*pi
  ## as.deg = TRUE returns degrees instead of radians
  c = 1
  if (as.deg){c = 180/pi}
  b <- sign(xx)
  b[b==0] <- 1  # corrects for the fact that sign(0) == 0
  tempangle = b*(yy<0)*pi+atan(xx/yy)
  if(bearing){
    # return a compass bearing 0 to 2pi
    # if bearing==FALSE then a heading (+/- pi) is returned
    tempangle[tempangle<0] <- tempangle[tempangle<0]+2*pi
  }
  return(tempangle*c)
}

######################################################################
######################################################################
######################################################################
bearing.ta <- function(loc1, loc2, loc3, as.deg=TRUE, replaceNaN=FALSE){
  # loc1 = p1[[i]]; loc2 = p2[[i]]; loc3 = p3[[i]];
  ## calculates the bearing and length of the two lines
  ## formed by three points
  ## the turning angle from the first bearing to the
  ## second bearing is also calculated
  ## locations are assumed to be in (X,Y) format.


  ## Options:
  ## as.deg = TRUE returns degrees instead of radians
  if (length(loc1) != 2 | length(loc2) != 2 | length(loc3) != 2){
    print("Locations must consist of either three vectors, length == 2, or three two-column dataframes")
    return(NaN)
  }
  c = 1
  if (as.deg){c = 180/pi}
  locdiff1 <- loc2 - loc1
  locdiff2 <- loc3 - loc2
  bearing1 <- anglefun(locdiff1[1], locdiff1[2], bearing=FALSE)
  bearing2 <- anglefun(locdiff2[1], locdiff2[2], bearing=FALSE)
  if(is.data.frame(locdiff1)){
    dist1 <- sqrt(rowSums(locdiff1^2))
    dist2 <- sqrt(rowSums(locdiff2^2))
  } else {
    dist1 <- sqrt(sum(locdiff1^2))
    dist2 <- sqrt(sum(locdiff2^2))
  }
  ta = (bearing2 - bearing1)
  # bearing1 or bearing2 will be NaN if there is a point that doesn't move
  # from one time to the next. If this happens then ta will have NaN in it.
  # Do you want to replace the NaN in ta with zero?
  if(replaceNaN){ ta$nextEasting[ is.nan(ta$nextEasting) ] = 0 }
  ta[ta < -pi] = ta[ta < -pi] + 2*pi
  ta[ta > pi] = ta[ta > pi] - 2*pi
  return(list(bearing1=unlist(bearing1*c), bearing2=unlist(bearing2*c),
    ta=unlist(ta*c), dist1=unlist(dist1), dist2=unlist(dist2)))
}

######################################################################
######################################################################
######################################################################
# This calculates the mean angle from a vector of angles in degrees.
# Read Wikipedia for the equation
meanAngle = function(x){ # x is a vector of angles in degrees
  (180/pi)*atan2(sum(sin(x*pi/180))/length(x), sum(cos(x*pi/180))/length(x))
}

######################################################################
######################################################################
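# A quick hand-checked example of the turning-angle helpers above, commented
# out in the style of this file. A move due north followed by a move due east
# should give bearings of 0 and 90 degrees and a turning angle of +90 degrees;
# two headings straddling north should average to roughly 0, not 180.
# anglefun(0, 10, bearing=TRUE, as.deg=TRUE)    # due north: 0
# anglefun(10, 0, bearing=TRUE, as.deg=TRUE)    # due east: 90
# bearing.ta(c(0,0), c(0,10), c(10,10))         # bearing1 = 0, bearing2 = 90, ta = 90, dist1 = dist2 = 10
# meanAngle(c(350, 10))                         # approximately 0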


288 ####################################################### ############### ### I got the following code from Ben's 'HPDregionplot()', I modified it here ### I worked out this function in 'kde2d.r' ### This function takes fish positions and calculates a KDE then draws a ### picture and returns the home range s ize in units of meters^2. homeRange = function (easting, northing, tagName, lims, reefEN, sdlEN, n = 100, prob = 0.5, h = c(bandwidth.nrd(easting),bandwidth.nrd(northing)), pts=TRUE, drawplot=TRUE, ...) { # calculate the kde post1 = kde2d(eas ting, northing, n=n, h=h, lims=lims) # find the size of the boxes in our '2D histogram' # each box has an x and y position with a z (density) value dx = diff(post1$x[1:2]) dy = diff(post1$y[1:2]) # sort the z (density) values then count the num ber greater than 'prob' sz = sort(post1$z) c1 = cumsum(sz) dx dy levels = approx(c1, sz, xout = 1 prob)$y # this gives an error... # "In approx(c1, sz, xout = 1 prob) : collapsing to unique 'x' values" # I (and Ben I believe)thin k it's okay to ignore this error. # how many are above the 'levels' value corresponding to 'prob'... hrSize = sum(sz > levels) dx dy # when you 'sum' TRUEs = 1 and FALSEs=0 # draw the plot if (drawplot){ #par(mfrow=c(1,1)) plot(x=re efEN$easting, y=reefEN$northing, type="n", col="blue", xlab = "Easting (m)", xlim=c(lims[1],lims[2]), ylab = "Northing (m)", ylim=c(lims[3], lims[4]), main=paste(tagName, ": ", prob*100, "% HR = ", round(hrSize,1), "m^2", sep="") ) if(pts){points(easting,northing,pch=19,cex=0.1)} points(x=reefEN$easting, y=reefEN$northing,pch=19, col="red") points(x=sdlEN$easting, y=sdlEN$northing, pch=19, col="blue") contour(post1$x, post1$y, post1$z, level = levels, add=T, c ol="green", lwd=2, drawlabels = FALSE, ...) invisible(contourLines(post1$x, post1$y, post1$z, level = levels)) } return(hrSize) # because of the dx and dy above, this number is m^2 } ################################################################ ###### ###################################################################### ######################################################################


289 ### I got the following code from the R graph gallery, I modified it here kde2dplot < function(d, # a 2d density computed by kde2d() reefEN, sdlEN, # ncol=50, # the number of colors to use zlim=c(0,max(z)), # limits in z coordinates nlevels=20, # see option nlevels in contour theta=30, # see option theta in persp phi=30, # see option phi in persp ...) { z < d$z nrz < nrow(z) ncz < ncol(z) couleurs < tail(topo.colors(trunc(1.4 ncol)),ncol) fcol < couleurs[trunc(z/zlim[2]*(ncol 1))+1] dim(fcol) < c(nrz,ncz) fcol < fcol[ nrz, ncz] par(mfrow=c(1,2),mar=c(0.5,0.5,0.5,0.5)) persp(d, col=fcol, zlim=zlim, theta=theta, phi=ph i, xlab="Easting (m)", ylab="Northing (m)", zlab="Density")# main=tagName) ### why can't I use this? par(mar=c(2,2,2,2)) image(d,col=couleurs) contour(d,add=T,nlevels=nlevels) points(x=reefEN$easting, y=reefEN$northing,pch=1 7, col="red") points(x=sdlEN$easting, y=sdlEN$northing, pch=19, col="red") box() # return the plot area to normal par(mfrow=c(1,1),mar=c(5, 4, 4, 2) + 0.) } # end kde2dplot ################################################################### ### ###################################################################### ###################################################################### ### This is what I did... ### I worked out this function in 'kde2d.r' kde2dplot2 < function(easting, northing # the original data tagName, # tag being drawn d, # a 2d density computed by kde2d() prob = 0.5, # where to draw the contour line


290 ncol=6 0, # the number of colors to use zlim=c(0,max(z)), # limits in z coordinates nlevels=20, # see option nlevels in contour theta=30, # see option theta in persp phi=30, # see option phi in persp pts=TRUE, # plot the original points? ... ) { # easting=d9$data$easting; northing=d9$data$northing; d=kde; # Code from homeRange() # find t he size of the boxes in our '2D histogram' # each box has an x and y position with a z (density) value dx = diff(d$x[1:2]) dy = diff(d$y[1:2]) # sort the z (density) values then count the number greater than 'prob' sz = sort(d$z) c1 = cumsum (sz) dx dy levels = approx(c1, sz, xout = 1 prob)$y # get output from from kde z < d$z nrz < nrow(z) ncz < ncol(z) couleurs < tail(topo.colors(trunc(1.4 ncol)),ncol) fcol < couleurs[trunc(z/zlim[2]*(ncol 1))+1] dim( fcol) < c(nrz,ncz) fcol < fcol[ nrz, ncz] # draw the UD image(d,col=couleurs, xlab = "Easting", ylab="Northing", main=paste(tagName, ": ", prob*100, "% HR ", sep="") ) # draw the contour line contour(d,add=T,level=levels)#nlevels=nlevels) # add the points if desired if(pts){points(easting,northing,pch=19,cex=0.1)} box() } # end kde2dplot2 ###################################################################### #################### ################################################## ###################################################################### ## hacked version of "movie3d". "movie3d" sets up a movie, ## but all it can do is change the perspective (not plot different ## things for each frame)


291 ## to run this you need the 'ImageMagick' program, and you need ## your path set so R can find the 'convert' program mymovie < function (f, duration, dev = rgl.cur(), ..., fps = 10, movie = "movie", frames = movie, dir = temp dir(), convert = TRUE, clean = TRUE, verbose = TRUE, top = TRUE) { olddir < setwd(dir) on.exit(setwd(olddir)) for (i in 0:(duration fps)) { time < i/fps if (rgl.cur() != dev) rgl.set(dev) f(time,.. .) filename < sprintf("%s%03d.png", frames, i) if (verbose) { cat("Writing", filename, \ r") flush.console() } rgl.snapshot(filename = filename, fmt = "png", top = top) } cat(" \ n") if (.Pl atform$OS.type == "windows") system < shell if (is.logical(convert) && convert) { version < system("convert -version", intern = TRUE) if (!length(grep("ImageMagick", version))) stop("ImageMagick not found") movie.filename < paste(movie, ".gif", sep = "") if (verbose) cat("Will create: ", file.path(dir, movie.filename), \ n") wildcard < paste(frames, "*.png", sep = "") convert < paste("convert delay 1x", fps, ", w ildcard, ", movie.filename, sep = "") } if (is.character(convert)) { if (verbose) { cat("Executing: ", convert, \ n") flush.console() } system(convert) if (clean) { if (verbose) cat("Deleting frames. \ n") for (i in 0:(duration fps)) { filename < sprintf("%s%03d.png", frames, i)


292 unlink(filename) } } } return(file.path(dir, movie. filename)) } ### end mymovie ############################################################## ### end code from Ben to make a movie ###################################################################### ######## ############################################## ######################## ###################################################################### ###################################################################### ## This imports all data relating to fish capture, tagging, collection. ## Look in "all f ish tagging data.xlsx" for the origin of these numbers. #columns and units are: #1. year of contact during tagging #2. month of contact during tagging #3. day of contact during tagging #4. reef where fish was tagged #5. landscape designation of reef #6. e xperimental replicate #7. collection method during tagging #8. fish weight during tagging (kg) #9. fish girth during tagging (mm) #10. fish total length during tagging (mm) #11. fish fork length during tagging (mm) #12. size range in 10cm increments, eithe r from diver or from 10. total length (cm) #13. was this fish tagged #14. tag ID used during tagging #15. start time in knock out tank #16. tagging start time #17. start time in recovery #18. release time (end of dive releasing fish to reef) #19. color of left external PIT tag #20. color of right external PIT tag #21. notes of tagging operations #22. MM number assigned during collections #23. tag ID found in collected fish #24. year of contact during collection #25. month of contact during collection #26. day of contact during collection #27. reef where fish was collected #28. estimated depth of reef where fish was collected (ft), not reliable


293 #29. collection method during collection #30. fish total length after collection (mm) #31. fish fork length after c ollection (mm) #32. fish weight after collection (g) #33. fish girth after collection (mm) #34. status of left otolith (Yes, Broken, Partial, No) i.e. (present, broken but all present, piece missing, no otolith) #35. recorded weight of left otolith, has tw o measurements for broken otoliths (mg) #36. useable weight of total left otolith, i.e. from 35. 2+3 gives 5 (mg) #37. length of left otolith measurement A made digitally (um) #38. length of left otolith measurement A made with calipers (mm) #39. usable le ngth of left otolith measurement A. The number I pick from 37 38 (um) #40. length of left otolith measurement B1 made digitally (um) #41. length of left otolith measurement B2 made digitally (um) #42. length of left otolith measurement B made with calipers (mm) #43. usable length of left otolith measurement B. The number I pick from 40 42 (um) #44. length of left otolith measurement C1 made digitally (um) #45. length of left otolith measurement C2 made digitally (um) #46. length of left otolith measurement C3 made digitally (um) #47. length of left otolith measurement C4 made digitally (um) #48. length of left otolith measurement C made with calipers (mm) #49. useable length of left otolith measurement C. The number I pick from 44 48 (um) #50. status of righ t otolith (Yes, Broken, Partial, No) i.e. (present, broken but all present, piece missing, no otolith) #51. recorded weight of right otolith, has two measurements for broken otoliths (mg) #52. useable weight of total right otolith, i.e. from 35. 2+3 gives 5 (mg) #53. length of right otolith measurement A made digitally (um) #54. length of right otolith measurement A made with calipers (mm) #55. usable length of right otolith measurement A. The number I pick from 53 54 (um) #56. length of right otolith measu rement B1 made digitally (um) #57. length of right otolith measurement B2 made digitally (um) #58. length of right otolith measurement B made with calipers (mm) #59. usable length of right otolith measurement B. The number I pick from 56 58 (um) #60. lengt h of right otolith measurement C1 made digitally (um) #61. length of right otolith measurement C2 made digitally (um) #62. length of right otolith measurement C3 made digitally (um) #43. length of right otolith measurement C4 made digitally (um) #64. lengt h of right otolith measurement C made with calipers (mm) #65. useable length of right otolith measurement C. The number I pick from 60 64 (um) #66. notes on otoliths importFishData = function( fn="all fish tagging data.csv" # filename ){ setwd("C: /zy/USB working folder/Telemetry/Archive/fish data files")


294 columnNames = c( # tagging activities "year", "month", "day", "reefID", "ttmt", "rep", "gear", "fishWeight", "girth", "TL", "FL", "sizeRange", "tagged", "tagID", "timeA", timeB", "timeC", "timeD", "pitL", "pitR", "taggingNotes", # collection activities "MM", "tagID2", "year2", "month2", "day2", "reefID2", "depth", "gear2", "TL2", "FL2", "fishWeight2", "girth2", "otoLs", # status "otoLwr", "otoLw", # w eight "otoLa1", "otoLac", "otoLa", # measurement A "otoLb1", "otoLb2", "otoLbc", "otoLb", # measurement B "otoLc1", "otoLc2", "otoLc3", "otoLc4", "otoLcc", "otoLc", # measurement C "otoRs", # status "otoRwr", "otoRw", # weight "o toRa1", "otoRac", "otoRa", # measurement A "otoRb1", "otoRb2", "otoRbc", "otoRb", # measurement B "otoRc1", "oroRc2", "otoRc3", "otoRc4", "otoRcc", "otoRc", "collectionNotes") columnClasses = c( 'character','character','character',' factor','factor', #year ttmt 'factor','character','numeric','numeric','numeric','numeric', #rep FL 'integer', 'factor','integer','character', #sizeRange timeA 'character','character', 'character',#timeB timeD 'character','character','charac ter','integer', 'integer', #pitL tagID2 'character', 'character', 'character', 'factor', 'integer', #year2 depth 'character', 'numeric', 'numeric', 'numeric', 'numeric', #gear2 girth2 'factor', 'character', 'numeric', 'character', 'numeric', #o toLs otoLac 'numeric', 'character', 'character', 'numeric', 'numeric', #otoLa otoLb 'character', 'character', 'character', 'character', #otoLc1 otoLc4 'numeric', 'numeric', 'factor', 'character', 'numeric',# otoLcc otoRw 'character', 'numer ic', 'numeric', 'character', #otoRa1 otoRb1 'character', 'numeric', 'numeric', 'character', #otoRb2 otoRc1 'character', 'character', 'character', 'numeric', #otoRc2 otoRcc 'numeric', 'character' #otoRc collectionNotes ) # read in file d1 = read.table(fn, header=FALSE, sep=",", col.names=columnNames, colClasses = columnClasses ) # create dates from text columns and put them into d1 temp1 = strptime(paste(d1$year, d1$month, d1$day, sep="/"), format="%Y/%m/%d")


295 temp2 = strptime(paste(d1$year2, d1$month2, d1$day2, sep="/"), format="%Y/%m/%d") d2=d1 d2$year = temp1 d2$year2 = temp2 names(d2)[1] = "date" names(d2)[24] = "date2" # convert fishWeight from units of kg to g d2$fishWeight = d2$fishWeight*1000 # stuff I want to look at # NOTES: all tagID=tagID2; reefID=reefID2 d3 = d2[,c(1,4:6,8:12,22,24,30:33,36,39,43,49,52,55,59,65)] # look at weights d4 = d3[,c(1,5:8,10,12:15)] plot(d4$fishWeight, d4$fishWeight2) abline(0,1) d4[!is.n a(d4$fishWeight2),] } ###################################################################### ############ ### For plotting categorical habitat maps plot.imagematrix.zy = function (x, ...) { colvec < switch(attr(x, "type"), grey = grey(x), rgb = rg b(x[, 1], x[, 2], x[, 3])) if (is.null(colvec)) stop("image matrix is broken.") colors < unique(colvec) colmat < array(match(colvec, colors), dim = dim(x)[1:2]) image(x = 0:(dim(colmat)[2]), y = 0:(dim(colmat)[1]), z = t(colmat[nrow(colmat):1, ]), col = colors, bty="o", cex.lab=2, xlab = "Easting (m)", ylab = "", axes = FALSE, asp = 1, ...) } # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # make tagfm z0 2007 2008.r # @@@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


296 # In this file I import SDL and ADCP data for 2007 and 2008. # I filter the SDL data and merge # it with ADCP data. The code for this was originally worked out # in 'analysis single deployment.r' under the section # 'Working with ALPS position output' AND 'chapter 3 part 1.r' also. # # For the 2008 data, the multiple codes for individuals tags are combined # to gather all info about individuals tags. # # This fi le also has work to manually cut some data from fish position solutions # at times when there were PSs that I don't think represent true fish movement, # like the fish seems to have died because the tag doesn't move at all. # # Next, z0 is constructed. I t mostly rearranges tagfm and preps it for use # with GAM model fitting. ###################################################################### ######### # if everything below here is to your liking, simply read in the stored # tagfm, z0, and results. so urce("C:/zy/Telemetry/R Data Processing/global variables.r") source("C:/zy/Telemetry/R Data Processing/global functions.r") source("C:/zy/Telemetry/R Data Processing/global metadata.r") # These lines give you data for all fish, good and bad... ????zzzzz zzzz load("C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata") load("C:/zy/Telemetry/R summary files/tagf 2011Mar16.rdata") load("C:/zy/Telemetry/R summary files/tagfm 2011Mar16.rdata") load("C:/zy/Telemetry/R summary files/z0 2011May02.rdata") l oad("C:/zy/Telemetry/R summary files/results 2011Mar16.rdata") ## ...if you load the full dataset (i.e. including the bad fish)...then ## ...now take these and remove the bad fish... ## ...pick out only the 5 good fish from tagfm, z0, results ## ... order of tags in tagfm: ## ... f60200, f60400, f60500, f60700, f60900, f60100, f60300, f61100, f61200, f61300 ## ... good fish are: ## ... f60200, f60400, f60900, f60300, f61100 #library(gdata) #tag[[3]] = tag[[5]]; tag[[4]] = tag[[7]] ; tag[[5]] = tag[[8]]; tag = head(tag,5) #tagf[[3]] = tagf[[5]]; tagf[[4]] = tagf[[7]]; tagf[[5]] = tagf[[8]]; tagf = head(tagf,5) #tagfm[[3]] = tagfm[[5]]; tagfm[[4]] = tagfm[[7]]; tagfm[[5]] = tagfm[[8]]; tagfm = head(tagfm,5) # #z0 = z0[z0$tagName=="f60 200" | z0$tagName=="f60400" | z0$tagName=="f60900" |


297 # z0$tagName=="f60300" |z0$tagName=="f61100", ] #z0$tagName=drop.levels(z0$tagName) #results = results[results$numHits>10000 ,] # ### Now save the 'good' tagfm, z0, and results #save("tagfm", file="C:/ zy/Telemetry/R summary files/tagfm 2011Mar16.rdata") #save("z0", file="C:/zy/Telemetry/R summary files/z0 2011Mar16.rdata") #save("results", file="C:/zy/Telemetry/R summary files/results 2011Mar16.rdata") # If all you want to do is read in the good fish.. # These lines give you data only for the 5 good fish. load("C:/zy/Telemetry/R summary files/tagfm 2011Mar16.rdata") load("C:/zy/Telemetry/R summary files/z0 2011May02.rdata") load("C:/zy/Telemetry/R summary files/results 2011Mar16.rdata") ########## ############################################################ ########## ###################################################################### ########## ###################################################################### ########## ####################### ############################################### ########## # if you'd like to re do it or see my decisions then continue on below... # choose the tags to work with # read in and execute variables, functions, metadata c2007TagNames = md[[1]]$fishNames c2008 TagNames = c("f60100", "f60300", "f61100", "f61200", "f61300") num2007Tags = length(c2007TagNames) num2008Tags = length(c2008TagNames) cTagNames = c(c2007TagNames, c2008TagNames) cDeploymentNames = c(rep("hb2007",length(c2007TagNames)), rep("hb2008",le ngth(c2008TagNames))) # but I really only want the good fish f60200, f60400, f60900, f61100 cTagNames = c("f60200", "f60400", "f60900", "f60300", "f61100") cDeploymentNames = c(rep("hb2007",3),rep("hb2008",2)) # get the somar image to use library(rimage) rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg" i1 = round(read.jpeg(rfile)) #par(mar=c(0.2,0.2,0.2,0.2)); plot(i1); box("plot", col="red")
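# An alternative, untested sketch of the 'keep only the good fish' subsetting
# in the commented block above, selecting by the $tagName element that each
# tagfm entry carries instead of re-indexing by hand:
# goodFish = c("f60200", "f60400", "f60900", "f60300", "f61100")
# tagfm = Filter(function(x) x$tagName %in% goodFish, tagfm)
# z0 = z0[z0$tagName %in% goodFish, ]
# z0$tagName = droplevels(z0$tagName)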


298 ###################################################################### ###### #### ### MAKE tagfm ###################################################################### ########## ## import, filter, and merge ALPS data tag = list() # raw tag data tagf = list() # filtered tag data tagfm = list() # filtered tag data merged with ADCP/ tide data for (i in 1:length(cTagNames)){ print(cTagNames[i]) tag[[i]] = importALPSdata(deployment=cDeploymentNames[i],tagName=cTagNames[i]) tagf[[i]] = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8, minuteMean=TRUE) tagfm[[i]] = mergeAlpsAdcpDa ta(alpsData=tagf[[i]]) tagfm[[i]] = mergeSonarData(alpsData=tagfm[[i]],habmap=i1,reference="if41") } # Now that the data are in, look at each fish individually and maybe # manually chop some data because...maybe the fish looks like it died or, there # appear to be detections but no position solutions. # # T60200 ###################################################################### # cTag = 1 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(1,2)) plot(tagfm[[cTag]]$ data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$eastin g, cmd$reefEN$northing, pch=17, col="red", cex=1) plot(tag[[cTag]]$data$utime, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$dat a$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19,


299 col="black" ) abline(h=cmd$reefEN$northing, col=" red") # This fish appears to have been active the entire time. No action needed. # T60400 ###################################################################### cTag = 2 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow =c(1,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", ce x=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) plot(tag[[cTag]]$data$utime, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$u time[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline (h=cmd$reefEN$northing, col="red") # This fish appears to have been active the entire time. No action needed. # T60500 ###################################################################### ############################### this is a bad tag cTag=3 if(c DeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(1,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$ northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)


300 plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$ tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$ data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") ## it looks like the last real point comes before about 1197400000, ## I'll chop everything after that # #cutoffTime = 1197400000 #tag[[cTag ]]$data = tag[[cTag]]$data[tag[[cTag]]$data$utime < cutoffTime, ] #tagf[[cTag]]$data = tagf[[cTag]]$data[tagf[[cTag]]$data$utime < cutoffTime, ] #tagfm[[cTag]]$data = tagfm[[cTag]]$data[tagfm[[cTag]]$data$utime < cutoffTime, ] # T60700 ################### ################################################### # ############################### this is a bad tag cTag=4 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(2,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$da ta$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17 col="red", cex=1) # plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow"


301 ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") # plot(tag[[cTag]]$data $datiL, tag[[cTag]]$data$easting, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(8400,8800) ) points(tagf[[cTag]]$data$utime, tagf[[c Tag]]$data$easting, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$easting, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$easting, col="red") # ## It looks like it sat for a long time at the eastern SDL. Ther e are a couple ## times of big movement in the easting and northing. I want to see the EN plots ## for those times. ## CONCLUSION: during those times of big movement in easting and northing, the ## fish was not simply traversing the array, it was moving w ithin the entire ## array area. Look at time windows c(1199200000,1199400000) and c(1198220000,1198250000) # # ## This one looks funky but appears to be true behavior ## the fish sits still at the east SDL for long periods, but makes a couple ## excursion s where it visits the whole area within the array. Don't cut anything. # T60900 ###################################################################### cTag = 3 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(1,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) point s(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)


302 plot(tag[[cTag]]$data$utime, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[lengt h(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$ree fEN$northing, col="red") # This fish appears to have been active the entire time. No action needed. # T60100 #################################################################### ############################### this is a bad tag cTag=6 # if(cDeploymen tNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(2,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) # plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$uti me, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") # plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$easting, cex=0.7, pch=19,


303 main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$ut ime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(8400,8800) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$easting, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$easting, cex=0.2, pc h=19, col="black" ) abline(h=cmd$reefEN$easting, col="red") ## T60100 cTag=6... This one cuts off all by itself. No need for action # T60300 ###################################################################### # cTag=4 if(cDeploymentNames[cTag] == "h b2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(1,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlE N$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) plot(tag[[cTag]]$data$utime, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xli m=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$d ata$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") # This fish appears to have been active the entire time. No action needed.


304 # T61100 ###################################################################### # cTag=5 if( cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(1,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits $northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) plot(tag[[cTag]]$data$utime, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]] $tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]] $data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") # This fish appears to have been active the entire time. No action needed. # T61200 ###################################################### ################ # ############################### this is a bad tag cTag=9 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(2,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, mai n = tagfm[[cTag]]$tagName, xlim = cmd$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) # plot(tag[[cTa g]]$data$datiL, tag[[cTag]]$data$northing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green",


305 xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19, col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$northing, col="red") # plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$easting, c ex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(8400,8800) ) points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$easting, cex=0.4, pch=19 col="yellow" ) points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$easting, cex=0.2, pch=19, col="black" ) abline(h=cmd$reefEN$easting, col="red") # T61300 ###################################################################### # ##################### ########## this is a bad tag cTag=10 if(cDeploymentNames[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]} par(mfrow=c(2,2)) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, main = tagfm[[cTag]]$tagName, xlim = cm d$plotLimits$easting, ylim = cmd$plotLimits$northing, ) points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1) # plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$north ing, cex=0.7, pch=19, main = tagfm[[cTag]]$tagName, col="green", xlim=c(tagfm[[cTag]]$data$utime[1], tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]), ylim=c(650,750)


)
points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$northing, cex=0.4, pch=19,
  col="yellow"
)
points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$northing, cex=0.2, pch=19,
  col="black"
)
abline(h=cmd$reefEN$northing, col="red")
#
plot(tag[[cTag]]$data$datiL, tag[[cTag]]$data$easting, cex=0.7, pch=19,
  main = tagfm[[cTag]]$tagName, col="green",
  xlim=c(tagfm[[cTag]]$data$utime[1],
    tagfm[[cTag]]$data$utime[length(tagfm[[cTag]]$data$utime)]),
  ylim=c(8400,8700)
)
points(tagf[[cTag]]$data$utime, tagf[[cTag]]$data$easting, cex=0.4, pch=19,
  col="yellow"
)
points(tagfm[[cTag]]$data$utime, tagfm[[cTag]]$data$easting, cex=0.2, pch=19,
  col="black"
)
abline(h=cmd$reefEN$easting, col="red")

## T61300 cTag=10
#cutoffTime = 1224670000
#tag[[cTag]]$data = tag[[cTag]]$data[tag[[cTag]]$data$utime < cutoffTime, ]
#tagf[[cTag]]$data = tagf[[cTag]]$data[tagf[[cTag]]$data$utime < cutoffTime, ]
#tagfm[[cTag]]$data = tagfm[[cTag]]$data[tagfm[[cTag]]$data$utime < cutoffTime, ]

# IF YOU'RE HAPPY WITH THESE RESULTS, SAVE THEM NOW.
### save these results
save("tag", file="C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata")
save("tagf", file="C:/zy/Telemetry/R summary files/tagf 2011Mar16.rdata")
save("tagfm", file="C:/zy/Telemetry/R summary files/tagfm 2011Mar16.rdata")
# load("C:/zy/Telemetry/R summary files/tag 2010Nov11.rdata")
# load("C:/zy/Telemetry/R summary files/tagf 2010Nov11.rdata")
# load("C:/zy/Telemetry/R summary files/tagfm 2011Feb07.rdata")

################################################################################
### MAKE results
################################################################################
# z0 combines tagfm into a single long list with data for all 2007/2008 fish.
# It also adds a couple other columns for use in GAM fitting stuff.


library(circular)
library(rimage)
library(MASS)

# a dataframe to hold data about fish movement
results1 = data.frame(
  "tagName" = cTagNames,
  "deployment" = cDeploymentNames,
  "weight" = rep(NA, length(cTagNames)),
  "TL" = rep(NA, length(cTagNames)),
  "FL" = rep(NA, length(cTagNames)),
  "relWeight" = rep(NA, length(cTagNames)),
  "numDays" = rep(NA, length(cTagNames)),
  "numHits" = rep(NA, length(cTagNames)),
  "fracHitsPerDay" = rep(NA, length(cTagNames)),
  "medianInterval" = rep(NA, length(cTagNames)),
  "medianDtr" = rep(NA, length(cTagNames)),
  "medianSpeed" = rep(NA, length(cTagNames)),
  # "meanTurning" = rep(NA, length(cTagNames)),
  "kde50" = rep(NA, length(cTagNames)),
  "kde95" = rep(NA, length(cTagNames)),
  stringsAsFactors=FALSE
)

# get fish biometric data. This contains data recorded in the field on tagging
# day and any recaptures. It also contains the otolith data.
biometrics = importBiometricData()

# cycle through each tag/year, calculate things, create table, create plots
par(mfrow=c(3,3))
for (i in 1:length(cTagNames)){
  # fetch deployment-specific information
  for (j in 1:length(md)){
    if(results1$deployment[i] == md[[j]]$deployment){
      cmd = md[[j]]
    } # end if statement
  } # end for j loop
  # for some things I'll want to know things before calculating the minuteMean
  temptag = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8, minuteMean=F)
  # count number of days with tag receptions
  results1$numDays[i] = length(unique(temptag$data$datiL$yday))
  # count total number of position solutions without doing minuteMean


  results1$numHits[i] = nrow(temptag$data)
  # calculate the fraction (position solutions)/(transmissions) each day
  results1$fracHitsPerDay[i] = round(results1$numHits[i] /
    (results1$numDays[i] * 30*60*24), 3)
  # calculate the median interval between position solutions
  # (seconds per day) / (fracHitsPerDay * pings per day) = mean interval
  #results1$meanInterval[i]=round((60*60*24)/(results1$fracHitsPerDay[i]*30*60*24),0)
  thistime = head(temptag$data$utime, -1)
  nexttime = tail(temptag$data$utime, -1)
  ints = nexttime - thistime
  results1$medianInterval[i] = median(ints)
  # calculate the median distance to reef using minute-averaged data
  results1$medianDtr[i] = round(median(tagfm[[i]]$data$dtr), 1)
  # calculate the median travel speed using minute-averaged data
  results1$medianSpeed[i] = round(median(tagfm[[i]]$data$speed), 3)

  # # calculate the mean turning angle
  # # ... for every set of three consecutive points
  #
  # # some lists
  # p1 = list()     # all but the last two points
  # p2 = list()     # all but the first and last points
  # p3 = list()     # all but the first two points
  # turns = list()  # the list of all turns, this will be one shorter than p1
  # turns1 = list() # this is turns as the class 'circular'
  # difference=list(); uniques=list(); # these are for removing duplicates
  #
  # # check for duplicate positions next to each other, since 'bearing.ta()'
  # # ... doesn't allow zero-length moves
  #
  # # find every row that's the same as the one before it. To do this, look at the
  # # ... easting/northing columns, take the whole list but the first one (tail()),
  # # ... then take the whole list but the last one (head()) and subtract them.
  # # ... This gives the 'difference[[ ]]' list (which has east and north columns),
  # difference[[i]] = tail(tagfm[[i]]$data[,4:5], -1) - head(tagfm[[i]]$data[,4:5], -1)
  # # ... then any row that == 0 in both the east and north columns is dropped
  # uniques[[i]] =
  #   tagfm[[i]]$data[ !((difference[[i]][1] == 0) & (difference[[i]][2] == 0)), ]
  # # now take just the 'easting' and 'northing' columns
  # uniques[[i]] = uniques[[i]][,4:5]
  #
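# A minimal sketch (not part of the original script) of the same interval
# calculation done with diff(); 'utimes' is a hypothetical vector of position
# times in seconds, standing in for temptag$data$utime above.
utimes = c(0, 2, 4, 10, 12, 40)   # example position-solution times (s)
ints   = diff(utimes)             # seconds between successive positions
median(ints)                      # median interval, as stored in results1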


  # # ... take the entire list (of east/north pairs) but the last two
  # p1[[i]] = head(uniques[[i]], -2)
  # # ... take the entire list but the first and last
  # p2[[i]] = head( tail( uniques[[i]], -1), -1)
  # # ... take the entire list but the first two
  # p3[[i]] = tail(uniques[[i]], -2)
  #
  # # now calculate the bearing for each set of 3 pts
  # # ... bearing.ta can accept three 2-column data.frames instead of three length-2 vectors
  # turns[[i]] = bearing.ta(p1[[i]],p2[[i]],p3[[i]],as.deg=TRUE)$ta
  # attr(turns[[i]], "names") = NULL
  #
  # # to make a rose diagram of turning angle
  # turnRadians = turns[[i]] * pi / 180
  # rose.diag(turnRadians, bins=18, pts=F, prop=2,
  #   # there is a rose.diag in both packages 'circular' and 'circStats'
  #   main="Distribution of angles turned from forward travel")
  # text(0.8, 0.8, "Left Turn")
  # text(0.8, -0.8, "Right Turn")
  # ############## end turning angle
  #
  # # change the class to 'circular'
  # turns[[i]] = as.circular(turns[[i]], units="degrees")
  # results1$meanTurning[i] = mean(turns[[i]])
  #

  # get the biometric data for the fish with this tag
  oneTag = paste("f", biometrics$tagID, sep="")
  oneFish = biometrics[oneTag == results1$tagName[i] ,]
  results1$weight[i] = oneFish$weight1
  results1$TL[i] = oneFish$TL1
  results1$FL[i] = oneFish$FL1

  # calculate the relative weight. I got this equation from Doug.
  # a = 9.21744 x 10^-6; b = 3.04
  # (standard weight, g) = a * (length, mm)^b
  # relative weight = (actual weight / standard weight) * 100
  stdWt = 9.21744e-6 * results1$TL[i]^3.04
  results1$relWeight[i] = round(100 * results1$weight[i]*1000 / stdWt, 1)

  # calculate the home ranges
  # THE LIMITS YOU USE WHEN CALCULATING THE KDE AFFECT THE ANSWER, SO FOR ALL
  # FISH MAKE SURE TO USE THE SAME LIMITS ON EASTING AND NORTHING.
  # There's more in 'chapter 3 part 1.r' on looking at home ranges.
  #
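# A minimal sketch (not part of the original script) showing the relative-weight
# arithmetic above as a standalone function; the coefficients a and b are the
# ones quoted in the comment, and the weight is assumed to be in kg (hence *1000).
relativeWeight = function(weightKg, tlMm, a = 9.21744e-6, b = 3.04) {
  stdWtG = a * tlMm^b   # standard weight (g) for a gag of this total length
  round(100 * weightKg * 1000 / stdWtG, 1)
}
# e.g. a 500 mm TL gag weighing 1.5 kg:
# relativeWeight(1.5, 500)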


  # Also, I don't want to use days 1 or 2
  hrUtime = tagfm[[i]]$data$utime
  hrEasting = tagfm[[i]]$data$easting
  hrNorthing = tagfm[[i]]$data$northing
  # ... figure out the end of day 2...pick out the day and add 2
  tempday = cmd$taggingDay
  substr(tempday,10,12) = as.character(as.numeric(substr(cmd$taggingDay,10,12))+2)
  tempUtime = as.POSIXct(strptime(tempday, "%Y/%b/%d", tz="EST5EDT"), origin="1970-1-1")
  # drop all data before tempUtime
  hrEasting = hrEasting[hrUtime > tempUtime]
  hrNorthing = hrNorthing[hrUtime > tempUtime]

  cProb = 0.50
  results1$kde50[i] = round(
    homeRange(easting = hrEasting, northing = hrNorthing,
      tagName = cTagNames[i], lims = hrlims, reefEN=cmd$reefEN,
      sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE
    ), 0
  )
  cProb = 0.95
  results1$kde95[i] = round(
    homeRange(easting = hrEasting, northing = hrNorthing,
      tagName = cTagNames[i], lims = hrlims, reefEN=cmd$reefEN,
      sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE
    ), 0
  )

  # # another kind of plot
  # kde = kde2d(tagfm[[i]]$data$easting, tagfm[[i]]$data$northing, n=50,
  #   lims=c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing))
  #
  # kde2dplot2(tagfm[[i]]$data$easting, tagfm[[i]]$data$northing, d=kde,
  #   prob=cProb, pts=FALSE, tagName=cTagNames[i])
  # points(x=cmd$sdlEN$easting, y=cmd$sdlEN$northing, pch=19, col="blue")
  # points(x=cmd$reefEN$easting, y=cmd$reefEN$northing, pch=17, col="red")
  #
  # # plot EN
  # plot(tagfm[[i]]$data$easting, tagfm[[i]]$data$northing,
  #   pch=19, cex=0.1,
  #   xlim = md[[2]]$plotLimits$easting, ylim = md[[2]]$plotLimits$northing,
  #   main = cTagNames[i]
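# A minimal sketch (not the dissertation's homeRange() function, which is defined
# elsewhere) of one common way to get a probability-contour area from kde2d();
# 'east'/'north' are hypothetical position vectors and 'lims' the fixed limits
# that the comments above insist must be shared across fish.
library(MASS)
kdeArea = function(east, north, lims, prob = 0.50, n = 250) {
  k = kde2d(east, north, n = n, lims = lims)
  cellArea = diff(k$x[1:2]) * diff(k$y[1:2])    # m^2 per grid cell
  p = k$z / sum(k$z)                            # normalize to a probability surface
  ord = sort(p, decreasing = TRUE)
  cutoff = ord[which(cumsum(ord) >= prob)[1]]   # density level enclosing 'prob'
  sum(p >= cutoff) * cellArea                   # area of cells inside that contour
}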


  # )
  # points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
  # points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)
  #
  # # hexbin EN
  # plot(hexbin(tagfm[[i]]$data$easting, tagfm[[i]]$data$northing,
  #   xbnds = md[[2]]$plotLimits$easting, ybnds = md[[2]]$plotLimits$northing),
  #   main = cTagNames[i]
  # )
  # points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
  # points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)
  #
  # # time v. northing
  # plot(tagfm[[i]]$data$datiL, tagfm[[i]]$data$northing,
  #   pch=19, cex=0.1,
  #   ylim = md[[2]]$plotLimits$northing,
  #   main = cTagNames[i]
  # )
  # abline(h=cmd$sdlEN$northing, col="blue", cex=1)
  # abline(h=cmd$reefEN$northing, col="red", cex=1)
} # end for i loop over all tag names

results = results1
resultsByWeight = results[order(results$weight),]
results = resultsByWeight

# if you're happy, save it
save("results", file="C:/zy/Telemetry/R summary files/results 2011Mar16.rdata")

### Now that 'results' is full, add it to tagfm and rearrange into z0
# functions I'll use later
zoom <- function(...) coord_cartesian(...) # for easier plot limits in ggplot
ss <- function(...) drop.levels(subset(...), reorder=FALSE)

## rearrange data for a single fish
tmpf <- function(x, ssize=1.0) {
  n <- nrow(x$data)
  dat <- x$data
  # pick out only some columns
  dat <- subset(x$data, select=c(utime,datiG,datiL,dod,tod,hod,lunarIndex,
    easting,northing,depth,altitude,dtr,btr,
    interval,speed,npos,#turnAngle,


    temperature,waterDepth,
    eaaL,magL,dirL,
    #eaaM,eaaU,magM,magU,dirM,dirU,tidalHeight
    habType
  ))
  dat$yr = as.factor(substr(x$deployment,3,7)) #ifelse(dat$utime>1.21e9,2008,2007)
  # change class
  dat$datiG <- as.POSIXct(dat$datiG)
  dat$datiL <- as.POSIXct(dat$datiL)
  # should we be sampling randomly or regularly?
  # could do: x$data[seq(1,n,by=10),] for regular sampling
  if (ssize<1){dat <- dat[sort(sample(1:n, size=round(ssize*n), replace=FALSE)),]}
  # add a column with the fish ID
  data.frame(dat, tagName=x$tagName)
} # end combineFish()

## subsample the data lists in the FULL data set down to 10% (ssize=0.1) of original
## combine the data lists into a single data frame with a factor indicating
## which fish it's associated with
z0 = do.call(rbind, lapply(tagfm, tmpf, ssize=1))

# add some things
z0$tl = NA
z0$weight = NA
z0$relWeight = NA
for (i in 1:nrow(results)){
  z0$tl[z0$tagName == results$tagName[i]] = results$TL[i]
  z0$weight[z0$tagName == results$tagName[i]] = results$weight[i]
  z0$relWeight[z0$tagName == results$tagName[i]] = results$relWeight[i]
}

z0$day = NA
z0[(z0$tod<=6 | z0$tod>19), ]$day = "night"
z0[(z0$tod>6 & z0$tod<=8), ]$day = "dawn"
z0[(z0$tod>8 & z0$tod<=17), ]$day = "day"
z0[(z0$tod>17 & z0$tod<=19), ]$day = "dusk"
z0$day = as.factor(z0$day)

# IF YOU'RE HAPPY WITH THESE RESULTS, SAVE THEM NOW.
### save these results
save("z0", file="C:/zy/Telemetry/R summary files/z0 2011May02.rdata")
# load("C:/zy/Telemetry/R summary files/z0 2010Nov11.rdata")

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# make tagfm z9 2009.r
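# A minimal sketch (not part of the original script) of the same day-period
# binning done with cut() instead of four subset assignments; 'tod' is hour of
# day, matching z0$tod above.
tod = c(2, 7, 12, 18, 23)   # example hours of day
day = cut(tod, breaks = c(0, 6, 8, 17, 19, 24),
  labels = c("night", "dawn", "day", "dusk", "night2"),
  include.lowest = TRUE, right = TRUE)
day[day == "night2"] = "night"   # merge the late-evening bin back into "night"
day = factor(day, levels = c("night", "dawn", "day", "dusk"))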


# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# In this file I import SDL and ADCP data for 2009.
# I filter the SDL data and merge it with ADCP data. The code for this was
# originally worked out in 'make tagfm z0 2007 2008.r'.
#
# This file also has work to manually cut some data from fish position solutions
# at times when there were PSs that I don't think represent true fish movement,
# like the fish seems to have died because the tag doesn't move at all.
#
# Next, z9 is constructed. It mostly rearranges tagfm9 and preps it for use
# with GAM model fitting.

library(rimage)
library(ggplot2)
library(mgcv)

################################################################################
# if everything below here is to your liking, simply read in the stored
# tagfm9, z9, and results9.
source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")

################################################################################
# gather all the data
# md3 hb1 if43 2009Jun01
# md4 sb1 of43 2009Jul10
# md5 sb2 oh41 2009Aug03
# md6 hb2 if41 2009Aug24
# md7 sb3 os43 2009Sep14
# md8 hb3 if42 2009Oct12
# md9 sb4 of43 2009Nov16
numexpt = 3:9


# look at a list of all the tags
for(i in numexpt){
  print(md[[i]]$deployment)
  print(md[[i]]$fishNames)
}

### if you're happy with what I've done, simply read in the good tagfm9 data
### here, otherwise recreate it with the following lines
load("C:/zy/Telemetry/R summary files/depList 2011June25.rData")
load("C:/zy/Telemetry/R summary files/results9 2011Jun25.rdata")
load("C:/zy/Telemetry/R summary files/z9 2011Jun25.rdata")

## import, filter, and merge ALPS data
# my computer can't do all deployments at the same time...so save them as they finish
# for each deployment...
for(i in 1:length(numexpt)){
  # pick the right metadata
  cmd = md[[ numexpt[i] ]]
  cTagNames = cmd$fishNames
  # read in the correct habitat map
  rfile = cmd$habmapFileName
  i1 = round(read.jpeg(rfile))
  # create some empty lists
  tag9 = list()   # raw tag data
  tagf9 = list()  # filtered tag data
  tagfm9 = list() # filtered tag data merged with ADCP/tide data
  # for each fishName in the current cDep...import, filter, merge data
  for (j in 1:length(cTagNames)){
    print(paste("start",i,j))
    print(Sys.time())
    tag9[[j]] = importALPSdata(deployment=cmd$deployment, tagName=cTagNames[j])
    tagf9[[j]] = filterALPSdata(df1=tag9[[j]], cnF=1.5, speedF=0.8, minuteMean=TRUE)
    tagfm9[[j]] = mergeAlpsAdcpData(alpsData=tagf9[[j]])
    tagfm9[[j]] = mergeSonarData(alpsData=tagfm9[[j]], habmap=i1, reference=cmd$site)
    print(paste("stop",i,j))
    print(Sys.time())
  }
  # my computer can't do all deployments at the same time...so save them as they finish
  #dep[[i]] = list(tag9=tag9, tagf9=tagf9, tagfm9=tagfm9, cmd=numexpt[i])
  depData = list(tag9=tag9, tagf9=tagf9, tagfm9=tagfm9, dn=numexpt[i])


  save("depData", file=paste(
    "C:/zy/Telemetry/R summary files/Experiment tagfm9 and figs/dep_",
    cmd$deployment, " 2011June23.rdata", sep=""))
  rm(i1)
  rm(depData)
}

# to look at these saved depData...
load("C:/zy/Telemetry/R summary files/Experiment tagfm9 and figs/dep_hb1 2011June23.rdata")

# After this has been finished and files saved for all 7 deployments,
# look at individual deployments, drop the bad fish, clean up data, and
# save all the tagfm9 data into a single data.frame, something like z0.
# All these file names are...
depNamesList = c("hb1","sb1","sb2","hb2","sb3","hb3","sb4")
fn9 = paste("C:/zy/Telemetry/R summary files/Experiment tagfm9 and figs/dep_",
  depNamesList, " 2011June23.rdata", sep="")

# This is a summary...after looking at all deployments, these are the tags
# with consistent, continuous data.
# None of them look like they need to have data chopped off the end; one has a
# big gap.
# cDep = 1. deployment = hb1.
# f11:bad. f12:bad. f13:good. f14:good. f15:bad. f16:good. f17:bad. f61000:bad.
hb1 = c(F,F,T,T,F,T,F,F)
# cDep = 2. deployment = sb1.
# all tags: bad
sb1 = rep(F,8)
# cDep = 3. deployment = sb2.
# f25:bad. f26:good. f27:bad. f28:good. f29:good. f30:good. f31:good with gap. f61600:bad.
sb2 = c(F,T,F,T,T,T,T,F)
# cDep = 4. deployment = hb2.
# f32:bad. f33:good. f34:good. f35:good. f36:good. f37:good. f38:good. f61700:bad.
hb2 = c(F,T,T,T,T,T,T,F)
# cDep = 5. deployment = sb3.
# f39:good. f40:good. f41:bad. f42:good. f43:good. f44:bad. f45:bad. f61800:bad.


sb3 = c(T,T,F,T,T,F,F,F)
# cDep = 6. deployment = hb3.
# f46:bad. f47:good. f48:good. f50:bad. f51:good. f52:good. f61900:bad. f62000:bad.
# f52 might be considered bad, I'll choose good
hb3 = c(F,T,T,F,T,T,F,F)
# cDep = 7. deployment = sb4.
# f53:bad. f54:good. f55:bad. f56:good. f57:good. f58:bad. f59:good. f62100:good.
sb4 = c(F,T,F,T,T,F,T,T)

keepers = list(hb1, sb1, sb2, hb2, sb3, hb3, sb4)

# pick one deployment, load and rename it, get metadata for it
cDep = 6 # this should be 1-7 for the number of experimental deployments
load(fn9[cDep])
cmd = md[[ numexpt[cDep] ]]

# take a look
par(mfrow=c(2,4))
for(i in 1:length(depData$tag9)){
  plot(depData$tagfm9[[i]]$data$datiL, depData$tagfm9[[i]]$data$northing, pch=".",
    xlim=c(cmd$taggingUtime, cmd$stopUtime), ylim=c(cmd$plotLimits$northing),
    main=paste(depData$tag9[[i]]$deployment," ",depData$tag9[[i]]$tagName))
  abline(h=cmd$reefEN$northing)
}

# take another look
par(mfrow=c(2,4))
for(i in 1:length(depData$tag9)){
  plot(depData$tag9[[i]]$data$easting, depData$tag9[[i]]$data$northing, pch=".",
    xlim=c(cmd$plotLimits$easting), ylim=c(cmd$plotLimits$northing),
    main=paste(depData$tag9[[i]]$deployment," ",depData$tag9[[i]]$tagName))
  points(depData$tagfm9[[i]]$data$easting, depData$tagfm9[[i]]$data$northing,
    pch=".", col="blue")
  points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red")
  points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="red")
}

# and another look
par(mfrow=c(2,4))
for(i in 1:length(depData$tag9)){
  plot(depData$tag9[[i]]$data$cn, depData$tag9[[i]]$data$northing, pch=".",
    ylim=c(cmd$plotLimits$northing),
    main=paste(depData$tag9[[i]]$deployment," ",depData$tag9[[i]]$tagName))
}


# Now that I've looked at them all, just gather the good tagfm9 files into one
# structure
depList = list()
for(cDep in 1:length(numexpt)){
  load(fn9[cDep])
  cmd = md[[ numexpt[cDep] ]]
  # get only the tagfm9 data
  temp1 = depData$tagfm9
  # drop the bad fish
  for (i in length(temp1):1){ # count backwards
    if (!keepers[[cDep]][i]){temp1[[i]] = NULL}
  }
  # save only good tagfm9 into depList
  depList[[cDep]] = temp1
  rm(temp1)
}
# now make sure you drop the second deployment (sb1) because there were no good fish in it
depList[[2]] = NULL

################################################################################
# Now save depList as my working structure holding all tagfm9 data
save("depList", file="C:/zy/Telemetry/R summary files/depList 2011June25.rData")
load("C:/zy/Telemetry/R summary files/depList 2011June25.rData")

# depList is a list of 6 elements, one for each good deployment.
# Each top element is another list holding tagfm9 data for each good tag in that
# deployment.

################################################################################
### MAKE results
################################################################################
# z9 combines tagfm9 into a single long list with data for all 2009 fish.
# It also adds a couple other columns for use in GAM fitting stuff.
cTagNames = c()
cDeploymentNames = c()
for(i in 1:length(depList)){
  for(j in 1:length(depList[[i]])){
    cTagNames = c(cTagNames, depList[[i]][[j]]$tagName)
    cDeploymentNames = c(cDeploymentNames, depList[[i]][[j]]$deployment)
  }
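# A minimal sketch (not part of the original script) of the same "keep only the
# good tags" step using logical indexing on the list instead of NULL-ing elements
# in a backwards loop; 'tagfm9' and 'keep' are hypothetical stand-ins for
# depData$tagfm9 and keepers[[cDep]].
tagfm9 = list(a = 1, b = 2, c = 3)   # placeholder per-tag data
keep   = c(TRUE, FALSE, TRUE)        # one flag per tag, as in keepers[[cDep]]
goodTags = tagfm9[keep]              # keeps only the elements flagged TRUE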


}

# a dataframe to hold data about fish movement
results1 = data.frame(
  "tagName" = cTagNames,
  "deployment" = cDeploymentNames,
  "weight" = rep(NA, length(cTagNames)),
  "TL" = rep(NA, length(cTagNames)),
  "FL" = rep(NA, length(cTagNames)),
  "relWeight" = rep(NA, length(cTagNames)),
  "numDays" = rep(NA, length(cTagNames)),
  "numHits" = rep(NA, length(cTagNames)),
  "fracHitsPerDay" = rep(NA, length(cTagNames)),
  "medianInterval" = rep(NA, length(cTagNames)),
  "medianDtr" = rep(NA, length(cTagNames)),
  "medianSpeed" = rep(NA, length(cTagNames)),
  # "meanTurning" = rep(NA, length(cTagNames)),
  "kde50" = rep(NA, length(cTagNames)),
  "kde95" = rep(NA, length(cTagNames)),
  stringsAsFactors=FALSE
)

# get fish biometric data. This contains data recorded in the field on tagging
# day and any recaptures. It also contains the otolith data.
biometrics = importBiometricData()

# cycle through each tag/year, calculate things, create table, create plots
for(i in 1:length(depList)){
  for(j in 1:length(depList[[i]])){
    # figure out which row of results1 is for this tag
    cRow = which(depList[[i]][[j]]$tagName == results1$tagName)
    # fetch deployment-specific information
    for (k in 1:length(md)){
      if(results1$deployment[cRow] == md[[k]]$deployment){
        cmd = md[[k]]
      } # end if statement
    } # end for k loop
    cTagfm9 = depList[[i]][[j]] # this is good, filtered, minute-averaged tagfm9 data

    # for some things I'll want to know things before calculating the minuteMean
    # ... this is a pain for 2009 data...
    # read in the right deployment, extract the unfiltered data for the right tag
    aDir = "C:/zy/Telemetry/R summary files/Experiment tagfm9 and figs/"
    aFile = paste("dep_", depList[[i]][[1]]$deployment, " 2011June23.rData", sep="")


    fn = paste(aDir, aFile, sep="")
    load(fn)
    # get only tag9 data and drop the full dataset
    temp1 = depData$tag9
    rm(depData)
    for(k in 1:length(temp1)){
      if(depList[[i]][[j]]$tagName == temp1[[k]]$tagName){temp2 = temp1[[k]]}
    }
    temptag = filterALPSdata(df1=temp2, cnF=1.5, speedF=0.8, minuteMean=F)

    # count number of days with tag receptions
    results1$numDays[cRow] = length(unique(temptag$data$datiL$yday))
    # count total number of position solutions without doing minuteMean
    results1$numHits[cRow] = nrow(temptag$data)
    # calculate the fraction (position solutions)/(transmissions) each day
    results1$fracHitsPerDay[cRow] = round(results1$numHits[cRow] /
      (results1$numDays[cRow] * 30*60*24), 3)
    # calculate the median interval between position solutions
    # (seconds per day) / (fracHitsPerDay * pings per day) = mean interval
    results1$meanInterval[cRow] = round((60*60*24) /
      (results1$fracHitsPerDay[cRow]*30*60*24), 0)
    thistime = head(temptag$data$utime, -1)
    nexttime = tail(temptag$data$utime, -1)
    ints = nexttime - thistime
    results1$medianInterval[cRow] = median(ints)
    # calculate the median distance to reef with minuteMean
    results1$medianDtr[cRow] = round(median(cTagfm9$data$dtr), 1)
    # calculate the median travel speed with minuteMean
    results1$medianSpeed[cRow] = round(median(cTagfm9$data$speed), 3)

    # get the biometric data for the fish with this tag
    oneTag = paste("f", biometrics$tagID, sep="")
    oneFish = biometrics[oneTag == results1$tagName[cRow] ,]
    results1$weight[cRow] = oneFish$weight1
    results1$TL[cRow] = oneFish$TL1
    results1$FL[cRow] = oneFish$FL1

    # calculate the relative weight. I got this equation from Doug.
    # a = 9.21744 x 10^-6; b = 3.04


    # (standard weight, g) = a * (length, mm)^b
    # relative weight = (actual weight / standard weight) * 100
    stdWt = 9.21744e-6 * results1$TL[cRow]^3.04
    results1$relWeight[cRow] = round(100 * results1$weight[cRow]*1000 / stdWt, 1)

    # calculate the home ranges
    # THE LIMITS YOU USE WHEN CALCULATING THE KDE AFFECT THE ANSWER, SO FOR ALL
    # FISH MAKE SURE TO USE THE SAME LIMITS ON EASTING AND NORTHING.
    # There's more in 'chapter 3 part 1.r' on looking at home ranges.
    #
    # Also, don't use the first two days
    hrEasting = cTagfm9$data$easting[cTagfm9$data$dod > 2]
    hrNorthing = cTagfm9$data$northing[cTagfm9$data$dod > 2]

    cProb = 0.50
    results1$kde50[cRow] = round(
      homeRange(easting = hrEasting, northing = hrNorthing, n=250,
        tagName = results1$tagName[cRow],
        lims = c(cmd$reefEN$easting - hrRange[1], cmd$reefEN$easting + hrRange[1],
          cmd$reefEN$northing - hrRange[2], cmd$reefEN$northing + hrRange[2]),
        reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE
      ), 0
    )
    cProb = 0.95
    results1$kde95[cRow] = round(
      homeRange(easting = hrEasting, northing = hrNorthing, n=250,
        tagName = results1$tagName[cRow],
        lims = c(cmd$reefEN$easting - hrRange[1], cmd$reefEN$easting + hrRange[1],
          cmd$reefEN$northing - hrRange[2], cmd$reefEN$northing + hrRange[2]),
        reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=TRUE
      ), 0
    )
  } # end for j loop over all tag names in the current depList[[i]]
} # end for i loop over all deployments in depList

results9 = results1
resultsByWeight = results9[order(results9$weight),]


results9 = resultsByWeight

# if you're happy, save it
save("results9", file="C:/zy/Telemetry/R summary files/results9 2011Jun25.rdata")
load("C:/zy/Telemetry/R summary files/results9 2011Jun25.rdata")

plot(as.factor(results9$deployment), results9$kde95)

# get some summary numbers for the abstract
resultsHB = results9[grepl("hb",results9$deployment),]
resultsSB = results9[grepl("sb",results9$deployment),]
range(resultsHB$kde50)
range(resultsSB$kde50)

rhb1 = results9[results9$deployment == "hb1",]
rhb2 = results9[results9$deployment == "hb2",]
rhb3 = results9[results9$deployment == "hb3",]
rsb2 = results9[results9$deployment == "sb2",]
rsb3 = results9[results9$deployment == "sb3",]
rsb4 = results9[results9$deployment == "sb4",]

plot(as.factor(results9$deployment), results9$kde50)
points(rep(1,nrow(rhb1)), rhb1$kde50, pch=19)
points(rep(2,nrow(rhb2)), rhb2$kde50, pch=19)
points(rep(3,nrow(rhb3)), rhb3$kde50, pch=19)
points(rep(4,nrow(rsb2)), rsb2$kde50, pch=19)
points(rep(5,nrow(rsb3)), rsb3$kde50, pch=19)
points(rep(6,nrow(rsb4)), rsb4$kde50, pch=19)

meanhb1 = mean(rhb1$kde50)
meanhb2 = mean(rhb2$kde50)
meanhb3 = mean(rhb3$kde50)
meansb2 = mean(rsb2$kde50)
meansb3 = mean(rsb3$kde50)
meansb4 = mean(rsb4$kde50)
hbMeans = c(meanhb1,meanhb2,meanhb3)
sbMeans = c(meansb2,meansb3,meansb4)
mean(c(meanhb1,meanhb2,meanhb3))
mean(c(meansb2,meansb3,meansb4))

# t-test
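# A minimal sketch (not part of the original script) of the same per-deployment
# means computed with tapply() instead of six separate mean() calls; 'ex' is a
# small hypothetical data frame mimicking results9's 'deployment' and 'kde50' columns.
ex = data.frame(deployment = rep(c("hb1","hb2","hb3","sb2","sb3","sb4"), each = 2),
                kde50 = c(400, 420, 380, 450, 390, 410, 50, 60, 45, 70, 55, 65))
depMeans = tapply(ex$kde50, ex$deployment, mean)   # one mean per deployment
hbMeans = depMeans[grepl("hb", names(depMeans))]
sbMeans = depMeans[grepl("sb", names(depMeans))]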


t.test(hbMeans, y=sbMeans, alternative="g", var.equal=TRUE)

# z9 combines tagfm9 into a single long list with data for all 2009 fish.
# It also adds a couple other columns for use in GAM fitting stuff.

### Now that 'results9' is full, add it to tagfm9 and rearrange into z9
# functions I'll use later
ss <- function(...) drop.levels(subset(...), reorder=FALSE)

## rearrange data for a single fish
tmpf <- function(x, ssize=1.0) {
  n <- nrow(x$data)
  dat <- x$data
  # pick out only some columns
  dat <- subset(x$data, select=c(utime,datiG,datiL,dod,tod,hod,lunarIndex,
    easting,northing,depth,altitude,dtr,btr,interval,speed,npos,#turnAngle,
    temperature,waterDepth,
    eaaL,magL,dirL,
    #eaaM,eaaU,magM,magU,dirM,dirU,tidalHeight
    habType
  ))
  dat$deployment = as.factor(x$deployment) #ifelse(dat$utime>1.21e9,2008,2007)
  # change class
  dat$datiG <- as.POSIXct(dat$datiG)
  dat$datiL <- as.POSIXct(dat$datiL)
  # should we be sampling randomly or regularly?
  # could do: x$data[seq(1,n,by=10),] for regular sampling
  if (ssize<1){dat <- dat[sort(sample(1:n, size=round(ssize*n), replace=FALSE)),]}
  # add a column with the fish ID
  data.frame(dat, tagName=x$tagName)
} # end combineFish()

## combine the data lists into a single data frame with a factor indicating
## which fish it's associated with
## ...first gather all tagfm9 in each deployment
zdep1 = do.call(rbind, lapply(depList[[1]], tmpf, ssize=1))
zdep2 = do.call(rbind, lapply(depList[[2]], tmpf, ssize=1))
zdep3 = do.call(rbind, lapply(depList[[3]], tmpf, ssize=1))
zdep4 = do.call(rbind, lapply(depList[[4]], tmpf, ssize=1))
zdep5 = do.call(rbind, lapply(depList[[5]], tmpf, ssize=1))
zdep6 = do.call(rbind, lapply(depList[[6]], tmpf, ssize=1))
# ...now put these all together
z9 = rbind(zdep1, zdep2, zdep3, zdep4, zdep5, zdep6)

# add some things
z9$tl = NA


z9$weight = NA
z9$relWeight = NA
for (i in 1:nrow(results9)){
  z9$tl[z9$tagName == results9$tagName[i]] = results9$TL[i]
  z9$weight[z9$tagName == results9$tagName[i]] = results9$weight[i]
  z9$relWeight[z9$tagName == results9$tagName[i]] = results9$relWeight[i]
}

z9$day = NA
z9[(z9$tod<=6 | z9$tod>19), ]$day = "night"
z9[(z9$tod>6 & z9$tod<=8), ]$day = "dawn"
z9[(z9$tod>8 & z9$tod<=17), ]$day = "day"
z9[(z9$tod>17 & z9$tod<=19), ]$day = "dusk"
z9$day = as.factor(z9$day)
z9$treatment = as.factor(substr(z9$deployment,1,2))

# IF YOU'RE HAPPY WITH THESE RESULTS, SAVE THEM NOW.
### save these results
save("z9", file="C:/zy/Telemetry/R summary files/z9 2011Jun25.rdata")
load("C:/zy/Telemetry/R summary files/z9 2011Jun25.rdata")

# some diagnostic looks at z9
plot(z9$tl, z9$weight, pch=19)
plot(z9$datiL, pch=19)
plot(z9$datiL, z9$easting, pch=".", ylim=c(0,1000))

ggplot(z9, aes(x=easting, y=northing, colour=tagName)) +
  geom_point(alpha=0.05) +
  #geom_path(data=dC3,aes(group=f,col="red")) +
  theme_bw()

# look at daily EN plots of individual fish
ggplot(zdep6, aes(x=easting, y=northing, colour=tagName)) +
  geom_point(alpha=0.2) +
  #coord_cartesian(xlim=c(0,60), ylim=c(0,10)) +
  theme_bw() +
  facet_wrap(~deployment) +
  scale_x_continuous("Easting (m)") + #, breaks = seq(0,60,by=20)) +
  scale_y_continuous("Northing (m)") #,breaks=c(0,1,2,3,4,6,8,10))

ggplot(z9, aes(x=tod, y=dtr, group=tagName, colour=tagName, fill=tagName)) +
  geom_point(alpha=0.05) +
  geom_smooth(method="gam", formula=y~s(x,bs="cc"), lwd=1.3) + #,bs="cc"
  geom_smooth(aes(group=1), colour="black", lwd=1.3, method="gam",
    formula=y~s(x,bs="cc")) + #,bs="cc"


  #coord_cartesian(xlim=c(1,30), ylim=c(0,10)) +
  # range(z0$temperature,na.rm=T) range(z0$magL,na.rm=T)
  facet_wrap(~deployment) +
  theme_bw() #+
  #scale_x_continuous("Lunar Index",breaks = c(1,7,15,23,30)) +
  #scale_y_continuous("Altitude (m)", breaks=c(0,1,2,3,seq(4,10,by=2))) +
  #opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) +
  #opts(axis.title.x = theme_text(size=15), axis.title.y = theme_text(size=15, angle=90))

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# making gam models.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# This is a file for creating code for all gazillion gam models for chapter 3.
# If there are 5 model variables to be included, then all possible combinations
# are below. Look at this website for help...
# http://www.mathsisfun.com/combinatorics/combinations-permutations-calculator.html
# There should be 31 (=5+10+10+5+1) possible combinations
# ... I've removed waterDepth and eaaL
# If my possible choices are numbers 1-5 then these are the combinations:
# ones   1 2 3 4 5
# twos   12 13 23 14 24 34 15 25 35 45
# threes 123 124 134 234 125 135 145 235 245 345
# fours  1234 1235 1245 1345 2345
# fives  12345

load("C:/zy/Telemetry/R summary files/z0 2011Mar16.rdata")

# create some empty lists to hold answers
dtrList = list()
speedList = list()
altList = list()
dtrAicList = data.frame(model=NA, df=NA, rsq=NA, AIC=NA)
speedAicList = data.frame(model=NA, df=NA, rsq=NA, AIC=NA)
altAicList = data.frame(model=NA, df=NA, rsq=NA, AIC=NA)

# DTR ##########################################################################
# specify all the possible models, then add interaction term: (lunarIndex,tod)
# ones
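# A quick check (not part of the original script) that 5 candidate smooth terms
# give the 31 combinations counted in the comment above:
sum(choose(5, 1:5))   # 5 + 10 + 10 + 5 + 1 = 31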


325 dtrList[[1]]=dtr~s(temperature) dtrList[[2]]=dtr~s(magL) dtrList[[3]]=dtr~s(dirL,bs="cc") dtrList[[4]]=dtr~s(tod,bs="cc") dtrList[[5]]=dtr~s(lunarIndex,bs="cc") # ...ones with by=tagName dtrList[[6]]=dtr~s(temperature, by=tagName) dtrLis t[[7]]=dtr~s(magL, by=tagName) dtrList[[8]]=dtr~s(dirL,bs="cc", by=tagName) dtrList[[9]]=dtr~s(tod,bs="cc", by=tagName) dtrList[[10]]=dtr~s(lunarIndex,bs="cc", by=tagName) # twos dtrList[[11]]=dtr~s(temperature)+s(magL) dtrList[[12]]=dtr~s(temperature)+s( dirL,bs="cc") dtrList[[13]]=dtr~s(temperature)+s(tod,bs="cc") dtrList[[14]]=dtr~s(temperature)+s(lunarIndex,bs="cc") dtrList[[15]]=dtr~s(magL)+s(dirL,bs="cc") dtrList[[16]]=dtr~s(magL)+s(tod,bs="cc") dtrList[[17]]=dtr~s(magL)+s(lunarIndex,bs="cc") dtrList [[18]]=dtr~s(dirL,bs="cc")+s(tod,bs="cc") dtrList[[19]]=dtr~s(dirL,bs="cc")+s(lunarIndex,bs="cc") dtrList[[20]]=dtr~s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[21]]=dtr~s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod) # ... twos with by=tagName dt rList[[22]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName) dtrList[[23]]=dtr~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName) dtrList[[24]]=dtr~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[25]]=dtr~s(temperature, by=tagName)+s(l unarIndex,bs="cc", by=tagName) dtrList[[26]]=dtr~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName) dtrList[[27]]=dtr~s(magL, by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[28]]=dtr~s(magL, by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[29]]=dtr~s( dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[30]]=dtr~s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[31]]=dtr~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[32]]=dtr~s(tod,bs="cc", by=tagName) +s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # threes dtrList[[33]]=dtr~s(temperature)+s(magL)+s(dirL,bs="cc") dtrList[[34]]=dtr~s(temperature)+s(magL)+s(tod,bs="cc") dtrList[[35]]=dtr~s(temperature)+s(magL)+s(lunarIndex,bs="cc") dtrList[[36]]=dtr ~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc") dtrList[[37]]=dtr~s(temperature)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") dtrList[[38]]=dtr~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[39]]=dtr~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc" )+s(lunarIndex,tod)


326 dtrList[[40]]=dtr~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc") dtrList[[41]]=dtr~s(magL)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") dtrList[[42]]=dtr~s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[43]]=dtr~s(magL)+s(tod,bs="cc")+s(lunarInde x,bs="cc")+s(lunarIndex,tod) dtrList[[44]]=dtr~s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[45]]=dtr~s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod) # ...threes with by=tagName dtrList[[46]]=dtr~s(temperature, by=t agName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName) dtrList[[47]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[48]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[ 49]]=dtr~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[50]]=dtr~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[51]]=dtr~s(temperature, by=tagName)+s(tod,bs="cc" by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[52]]=dtr~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) dtrList[[53]]=dtr~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by= tagName) dtrList[[54]]=dtr~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[55]]=dtr~s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[56]]=dtr~s(magL, by=tagName)+s(tod,bs=" cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) dtrList[[57]]=dtr~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[58]]=dtr~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lun arIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # fours dtrList[[59]]=dtr~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc") dtrList[[60]]=dtr~s(temperature)+s(magL)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") dtrList[[61]]=dtr~s(temperature)+s(magL)+s(tod,bs ="cc")+s(lunarIndex,bs="cc") dtrList[[62]]=dtr~s(temperature)+s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarI ndex,tod) dtrList[[63]]=dtr~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[64]]=dtr~s(temperature)+s(dirL,bs=" cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s (lunarIndex,tod) dtrList[[65]]=dtr~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") dtrList[[66]]=dtr~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarI ndex,tod)


327 # ...fours with by= tagName dtrList[[67]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) dtrList[[68]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) d trList[[69]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[70]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", b y=tagName)+s(lunarIndex,tod) dtrList[[71]]=dtr~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[72]]=dtr~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(t od,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) dtrList[[73]]=dtr~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) dtrList[[74]]=dtr~s(magL, by= tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # fives dtrList[[75]]=dtr~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,b s="cc") dtrList[[76]]=dtr~s(temper ature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,b s="cc")+s(lunarIndex,tod) # ...fives with by=tagName dtrList[[77]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", b y=tagName) dtrList[[78]]=dtr~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # SPEED ############################################################## ######## ## # specify all the possible models then add interaction term: (lunarIndex,tod) # ones speedList[[1]]=speed~s(temperature) speedList[[2]]=speed~s(magL) speedList[[3]]=speed~s(dirL,bs="cc") speedList[[4]]=speed~s(tod,bs="cc") speedLis t[[5]]=speed~s(lunarIndex,bs="cc") # ...ones with by=tagName speedList[[6]]=speed~s(temperature, by=tagName) speedList[[7]]=speed~s(magL, by=tagName) speedList[[8]]=speed~s(dirL,bs="cc", by=tagName) speedList[[9]]=speed~s(tod,bs="cc", by=tagName)


328 speedList [[10]]=speed~s(lunarIndex,bs="cc", by=tagName) # twos speedList[[11]]=speed~s(temperature)+s(magL) speedList[[12]]=speed~s(temperature)+s(dirL,bs="cc") speedList[[13]]=speed~s(temperature)+s(tod,bs="cc") speedList[[14]]=speed~s(temperature)+s(lunarIndex,b s="cc") speedList[[15]]=speed~s(magL)+s(dirL,bs="cc") speedList[[16]]=speed~s(magL)+s(tod,bs="cc") speedList[[17]]=speed~s(magL)+s(lunarIndex,bs="cc") speedList[[18]]=speed~s(dirL,bs="cc")+s(tod,bs="cc") speedList[[19]]=speed~s(dirL,bs="cc")+s(lunarIndex, bs="cc") speedList[[20]]=speed~s(tod,bs="cc")+s(lunarIndex,bs="cc") speedList[[21]]=speed~s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod) # ... twos with by=tagName speedList[[22]]=speed~s(temperature, by=tagName)+s(magL, by=tagName) speedList[[23] ]=speed~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName) speedList[[24]]=speed~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName) speedList[[25]]=speed~s(temperature, by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[26]]=speed~s(magL, by=t agName)+s(dirL,bs="cc", by=tagName) speedList[[27]]=speed~s(magL, by=tagName)+s(tod,bs="cc", by=tagName) speedList[[28]]=speed~s(magL, by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[29]]=speed~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagNa me) speedList[[30]]=speed~s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[31]]=speed~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[32]]=speed~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s( lunarIndex,tod) # threes speedList[[33]]=speed~s(temperature)+s(magL)+s(dirL,bs="cc") speedList[[34]]=speed~s(temperature)+s(magL)+s(tod,bs="cc") speedList[[35]]=speed~s(temperature)+s(magL)+s(lunarIndex,bs="cc") speedList[[36]]=speed~s(temperature)+s(dir L,bs="cc")+s(tod,bs="cc") speedList[[37]]=speed~s(temperature)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") speedList[[38]]=speed~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc") speedList[[39]]=speed~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunar Ind ex,tod) speedList[[40]]=speed~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc") speedList[[41]]=speed~s(magL)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") speedList[[42]]=speed~s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc") speedList[[43]]=speed~s(magL)+s(tod,bs="cc")+s(lu narIndex,bs="cc")+s(lunarIndex,tod) speedList[[44]]=speed~s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")


329 speedList[[45]]=speed~s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarInd ex,tod) # ...threes with by=tagName speedList[[46]]=speed~ s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName) speedList[[47]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName) speedList[[48]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(lunarIndex,bs= "cc", by=tagName) speedList[[49]]=speed~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) speedList[[50]]=speed~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[51]]=speed~s (temperature, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[52]]=speed~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) speedList[[53]]=speed~s(magL, by=tagName) +s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) speedList[[54]]=speed~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[55]]=speed~s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=ta gName) speedList[[56]]=speed~s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) speedList[[57]]=speed~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[58]] =speed~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # fours speedList[[59]]=speed~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc") speedList[[60]]=speed~s(temperature)+s(magL)+s(dirL,bs="c c")+s(lunarIndex,bs="cc") speedList[[61]]=speed~s(temperature)+s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc") speedList[[62]]=speed~s(temperature)+s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+ s(lunarIndex,tod) speedList[[63]]=speed~s(temperature)+s(dirL,bs= "cc")+s(tod,bs="cc")+s(lunarIndex,bs= "cc") speedList[[64]]=speed~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs= "cc")+s(lunarIndex,tod) speedList[[65]]=speed~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") speedList[[66]]=speed~ s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+ s(lunarIndex,tod) # ...fours with by=tagName speedList[[67]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)


330 speedList[[68]]=speed~s(t emperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[69]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) spee dList[[70]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) speedList[[71]]=speed~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+ s(lunarIndex,bs="cc", by=tagName) speedList[[72]]=speed~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) speedList[[73]]=speed~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[74]]=speed~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # fives speedList[[75]]=speed~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarI ndex,bs="cc") speedList[[76]]=speed~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarI ndex,bs="cc")+s(lunarIndex,tod) # ...fives with by=tagName spee dList[[77]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) speedList[[78]]=speed~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod ,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # ALT ###################################################################### #### # specify all the possible models then add interaction term: (lunarIndex,tod) # ones altList[[1]]= altitude~s(temperature) altList[[2]]=altitude~s(magL) altList[[3]]=altitude~s(dirL,bs="cc") altList[[4]]=altitude~s(tod,bs="cc") altList[[5]]=altitude~s(lunarIndex,bs="cc") # ...ones with by=tagName altList[[6]]=altitude~s(temperature, by=tag Name) altList[[7]]=altitude~s(magL, by=tagName) altList[[8]]=altitude~s(dirL,bs="cc", by=tagName) altList[[9]]=altitude~s(tod,bs="cc", by=tagName)


331 altList[[10]]=altitude~s(lunarIndex,bs="cc", by=tagName) # twos altList[[11]]=altitude~s(temperature)+s(magL ) altList[[12]]=altitude~s(temperature)+s(dirL,bs="cc") altList[[13]]=altitude~s(temperature)+s(tod,bs="cc") altList[[14]]=altitude~s(temperature)+s(lunarIndex,bs="cc") altList[[15]]=altitude~s(magL)+s(dirL,bs="cc") altList[[16]]=altitude~s(magL)+s(tod,bs= "cc") altList[[17]]=altitude~s(magL)+s(lunarIndex,bs="cc") altList[[18]]=altitude~s(dirL,bs="cc")+s(tod,bs="cc") altList[[19]]=altitude~s(dirL,bs="cc")+s(lunarIndex,bs="cc") altList[[20]]=altitude~s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[21]]=altitud e~s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod) # ... twos with by=tagName altList[[22]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName) altList[[23]]=altitude~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName) altList[[24]]=altitude~ s(temperature, by=tagName)+s(tod,bs="cc", by=tagName) altList[[25]]=altitude~s(temperature, by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[26]]=altitude~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName) altList[[27]]=altitude~s(magL, by=tagName)+s(to d,bs="cc", by=tagName) altList[[28]]=altitude~s(magL, by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[29]]=altitude~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) altList[[30]]=altitude~s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by= tagName) altList[[31]]=altitude~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[32]]=altitude~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # threes altList[[33]]=altitude~s(temperature)+s(magL)+s(dir L,bs="cc") altList[[34]]=altitude~s(temperature)+s(magL)+s(tod,bs="cc") altList[[35]]=altitude~s(temperature)+s(magL)+s(lunarIndex,bs="cc") altList[[36]]=altitude~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc") altList[[37]]=altitude~s(temperature)+s(dirL,bs ="cc")+s(lunarIndex,bs="cc") altList[[38]]=altitude~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[39]]=altitude~s(temperature)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex, tod) altList[[40]]=altitude~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc ") altList[[41]]=altitude~s(magL)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") altList[[42]]=altitude~s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[43]]=altitude~s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod) altList[[44]]=altitude~s(dir L,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[45]]=altitude~s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex, tod) # ...threes with by=tagName


332 altList[[46]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName) altList[[47]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName) altList[[48]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[49]]=altitude~s(temperature, by=ta gName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) altList[[50]]=altitude~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[51]]=altitude~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName)+s(lun arIndex,bs="cc", by=tagName) altList[[52]]=altitude~s(temperature, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) altList[[53]]=altitude~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) altList[[54]]=altitude~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[55]]=altitude~s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[56]]=altitude~s(magL, by=tagName)+s(to d,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) altList[[57]]=altitude~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName) altList[[58]]=altitude~s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", b y=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod) # fours altList[[59]]=altitude~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc") altList[[60]]=altitude~s(temperature)+s(magL)+s(dirL,bs="cc")+s(lunarIndex,bs="cc") altList[[61]]=altitude~ s(temperature)+s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[62]]=altitude~s(temperature)+s(magL)+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(l unarIndex,tod) altList[[63]]=altitude~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc ") alt List[[64]]=altitude~s(temperature)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc ")+s(lunarIndex,tod) altList[[65]]=altitude~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc") altList[[66]]=altitude~s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lu narIndex,bs="cc")+s(lu narIndex,tod) # ...fours with by=tagName altList[[67]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName) altList[[68]]=altitude~s(temperature, by=tagName)+s(magL, by=tagNam e)+s(dirL,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)


altList[[69]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)
altList[[70]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod)
altList[[71]]=altitude~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)
altList[[72]]=altitude~s(temperature, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod)
altList[[73]]=altitude~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)
altList[[74]]=altitude~s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod)
# fives
altList[[75]]=altitude~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")
altList[[76]]=altitude~s(temperature)+s(magL)+s(dirL,bs="cc")+s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(lunarIndex,tod)
# ...fives with by=tagName
altList[[77]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)
altList[[78]]=altitude~s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName)+s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+s(lunarIndex,tod)

# For each model, fit the gam, calculate the AIC, save the gam externally, remove the gam
# ...and because you need two in the AIC command to get the df output...
library(mgcv)
seedGamFit = gam(dtrList[[1]], data=z0)

for (i in 78:78){ # 1:length(dtrList)
  # fit the gams
  dtrGamAns = gam(dtrList[[i]], data=z0)
  speedGamAns = gam(speedList[[i]], data=z0)
  # altGamAns = gam(altList[[i]], data=z0)
  # calculate AIC
  dtrAicAns = AIC(seedGamFit, dtrGamAns)
  speedAicAns = AIC(seedGamFit, speedGamAns)
  # altAicAns = AIC(seedGamFit, altGamAns)
  # store results (model number, d.f., adjusted r^2, AIC)


  dtrAicList[i,]   = c(i, dtrAicAns[[2,1]],   summary(dtrGamAns)$r.sq,   dtrAicAns[[2,2]])
  speedAicList[i,] = c(i, speedAicAns[[2,1]], summary(speedGamAns)$r.sq, speedAicAns[[2,2]])
  # altAicList[i,] = c(i, altAicAns[[2,1]], summary(altGamAns)$r.sq, altAicAns[[2,2]])

  # THE RESULT OF THIS GETS SAVED HERE
  save("dtrAicList", file="C:/zy/Telemetry/R summary files/dtrAicList 2011Mar30.rdata")
  save("speedAicList", file="C:/zy/Telemetry/R summary files/speedAicList 2011Mar30.rdata")
  # save("altAicList", file="C:/zy/Telemetry/R summary files/altAicList 2011Mar30.rdata")
} # end i loop over models

# THE RESULT OF THIS GETS SAVED HERE
save("dtrAicList", file="C:/zy/Telemetry/R summary files/dtrAicList 2011Mar30.rdata")
save("speedAicList", file="C:/zy/Telemetry/R summary files/speedAicList 2011Mar30.rdata")
save("altAicList", file="C:/zy/Telemetry/R summary files/altAicList 2011Mar30.rdata")

# THE RESULTS ARE READ IN HERE
load("C:/zy/Telemetry/R summary files/dtrAicList 2011Mar30.rdata")
load("C:/zy/Telemetry/R summary files/speedAicList 2011Mar30.rdata")
load("C:/zy/Telemetry/R summary files/altAicList 2011Mar30.rdata")

# calculate AIC values relative to the minimum AIC value
dtrAicList[,4]   = dtrAicList[,4]   - min(dtrAicList[,4])
speedAicList[,4] = speedAicList[,4] - min(speedAicList[,4])
altAicList[,4]   = altAicList[,4]   - min(altAicList[,4])

# pick the five best
dtrSortedAIC   = dtrAicList[order(dtrAicList[,4]),]
speedSortedAIC = speedAicList[order(speedAicList[,4]),]
altSortedAIC   = altAicList[order(altAicList[,4]),]

plot(altAicList$AIC, altAicList$rsq)

# top dtr models: 78, 72, 70, 52, 77
# ...remaining 'drop one' models: 67, 68, 74
# top speed models: 32, 31, 10, 21, 9
# top altitude models: 78, 70, 72, 52, 77
# ...remaining 'drop one' models: 67, 68, 74

library(mgcv)
# compare dtr models
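The delta-AIC values computed above can also be expressed as Akaike weights, which put the candidate models on a 0-1 support scale. The sketch below is illustrative only and is not part of the original scripts; it assumes a table such as dtrAicList whose fourth column already holds the delta-AIC values, and aicWeights is a hypothetical helper name.

# Akaike weights from delta-AIC values (assumes column 4 already holds the deltas)
aicWeights = function(deltas){
  w = exp(-0.5 * deltas)
  w / sum(w)
}
dtrWeights = aicWeights(dtrAicList[,4])
# view the five best-supported dtr models with their weights
head(cbind(dtrAicList, weight=dtrWeights)[order(dtrAicList[,4]), ], 5)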


model1 = gam(dtrList[[78]], data=z0)
model2 = gam(dtrList[[72]], data=z0)
model3 = gam(dtrList[[70]], data=z0)
model4 = gam(dtrList[[52]], data=z0)
model5 = gam(dtrList[[77]], data=z0)
model6 = gam(dtrList[[67]], data=z0)
model7 = gam(dtrList[[68]], data=z0)
model8 = gam(dtrList[[74]], data=z0)

RSStable = c(
  sum(residuals(model1)^2), sum(residuals(model2)^2),
  sum(residuals(model3)^2), sum(residuals(model4)^2),
  sum(residuals(model5)^2), sum(residuals(model6)^2),
  sum(residuals(model7)^2), sum(residuals(model8)^2)
)
RSStable = RSStable - min(RSStable)

AIC(seedGamFit, model1, model2)

# compare speed models
model9  = gam(speedList[[32]], data=z0)
model10 = gam(speedList[[31]], data=z0)
model11 = gam(speedList[[10]], data=z0)
model12 = gam(speedList[[21]], data=z0)
model13 = gam(speedList[[9]], data=z0)

RSStable = c(
  sum(residuals(model9)^2), sum(residuals(model10)^2),
  sum(residuals(model11)^2), sum(residuals(model12)^2),
  sum(residuals(model13)^2)
)
RSStable = RSStable - min(RSStable)

# compare alt models
model14 = gam(altList[[2]], data=z0)
model15 = gam(altList[[4]], data=z0)
model16 = gam(altList[[7]], data=z0)


model17 = gam(altList[[16]], data=z0)
model18 = gam(altList[[9]], data=z0)
bob = gam(altList[[32]], data=z0)

altR2List = cbind(altAicList, "r2"=rep(NA, nrow(altAicList)))
for (i in 1:nrow(altAicList)){
  # refit model i and keep its adjusted R^2
  altR2List$r2[[i]] = summary.gam(gam(altList[[i]], data=z0))$r.sq
}
summary.gam(model18)$r.sq

################################################################################
# start with the full model and remove each variable one at a time and see how
# much the AIC decreases
model1 = gam(modelList[[127]], data=z0)
model2 = gam(modelList[[120]], data=z0)
model3 = gam(modelList[[121]], data=z0)
model4 = gam(modelList[[122]], data=z0)
model5 = gam(modelList[[123]], data=z0)
model6 = gam(modelList[[124]], data=z0)
model7 = gam(modelList[[125]], data=z0)
model8 = gam(modelList[[126]], data=z0)

bob = AIC(model1, model2, model3, model4, model5, model6, model7, model8)
minaic = min(bob$AIC)
bob$deltaAIC = bob$AIC - minaic

bob1 = gam(dtr~s(tod,bs="cc")+s(lunarIndex,bs="cc")+s(temperature)+s(magL)+s(dirL,bs="cc"), data=z0)
bob2 = gam(dtr~s(tod,bs="cc", by=tagName)+s(lunarIndex,bs="cc", by=tagName)+
  s(temperature, by=tagName)+s(magL, by=tagName)+s(dirL,bs="cc", by=tagName), data=z0)

sum(z0$habType=="white", na.rm=T) + sum(is.na(z0$habType)) +
  sum(z0$habType=="black", na.rm=T)
# fraction of time over black
sum(z0$habType=="black", na.rm=T) / nrow(z0)

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# chapter 2 array validation tagList.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
################################################################################
################################################################################
### Testing Array Performance
################################################################################
################################################################################
# This file creates a list of tags to be used by the file:
# 'chapter 2 array validation.r'

### FIRST MAKE A LIST OF ALL THE TAGS I'LL USE...then in a separate file I'll
### actually do the cool calculations.

# 5 beacons in 2007 deployment
#   Don't do this for the single SDL detection trial...
# 5 beacons in 2008 deployment
# 3 fish tags, 2 beacons in 125m spacing trial
# 2 fish tags, 4 beacons, 1 sentinel in 150m spacing trial
# 2 fish tags, 4 beacons, 1 sentinel in 100m spacing trial
# 40 fish tags first internal performance. Actually, 8 tags at 40 places
#   (The 4 beacons, 1 sentinel will be included in first 2009 fish study)
# 10 fish tags first internal performance
#   (The 4 beacons, 1 sentinel will be included in first 2009 fish study)
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon fish study
# 4 beacon fish study
#
# or in other words
# 5+5+5+5+5+5+5+4+4 = 43 in fish studies
# 19 in spacing trials + 43 = 62


# 56 in internal performance + 62 = 118

# if you like what I already have, just load the saved file
save("tagList", file="C:/zy/Telemetry/R summary files/tagList 2011Apr09.rdata")
load("C:/zy/Telemetry/R summary files/tagList 2011Apr09.rdata")

source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")

################################################################################
# create the empty list structure
totalNumTags = 112
tagList = rep(list( list(tagName="", deployment="", tagLocation=c(),
  startUtime="", stopUtime="") ), totalNumTags)

################################################################################
# All fish study deployments
################################################################################
# I can fill the table automatically for things in md[[ ]] except for spacing
# trials 100 and 150m because they had two beacons at the same locations. They
# will have to be done by hand. So far I do the 9 fish study deployments.
numTagsDone = 0
for (z in 1:9){ # so far I can do this for the first 9 lists in md[[]]
  # which deployment, which list in md[[z]]
  cmd = z
  # gather the beacon/sentinel tag names
  cTags = c(md[[cmd]]$beaconNames, md[[cmd]]$sentinelNames)
  for (i in 1:length(cTags)){
    # which element of tagList are we on?
    listIndex = i + numTagsDone
    # set the tagName, deployment, startUtime, and stopUtime
    tagList[[listIndex]]$tagName = cTags[i]
    tagList[[listIndex]]$deployment = md[[cmd]]$deployment
    tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime


    tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
    # figure out where this beacon/sentinel was...
    # ...for all 6 possible locations...
    location = FALSE
    for (j in 1:length(md[[cmd]]$beaconEN$beaconID)){
      # ...figure out which SDL/reef the current beacon/sentinel was on
      if(cTags[i] == md[[cmd]]$beaconEN$beaconID[j]){
        location = as.character(md[[cmd]]$beaconEN$location[j])
      }
    } # end j for loop finding the tag location name
    # figure out the easting, northing if it was on the reef
    if(location == "reef"){
      tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting,
        md[[cmd]]$reefEN$northing)
    }
    # figure out the easting, northing if it was on an SDL
    for (j in 1:length(md[[cmd]]$sdlEN$ID)){
      if (location == md[[cmd]]$sdlEN$ID[j]){
        tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[j],
          md[[cmd]]$sdlEN$northing[j])
      }
    } # end j for loop finding the easting, northing
  } # end i for loop over all the tags in this deployment
  numTagsDone = numTagsDone + length(cTags)
} # end z loop over md[[]] elements

### Now because hb2008 is funky, fill in the tagLocations by hand...
# b80 = b79200
listIndex = 6 # tagList[[listIndex]]
tagList[[listIndex]]$tagLocation = c(8684.45, 686.0)
# b81 = b79200
listIndex = 7 # tagList[[listIndex]]
tagList[[listIndex]]$tagLocation = c(8684.45, 686.0)
# b85 = b79400
listIndex = 8 # tagList[[listIndex]]
tagList[[listIndex]]$tagLocation = c(8471.48, 699.5)
# b86 = b79400
listIndex = 9 # tagList[[listIndex]]
tagList[[listIndex]]$tagLocation = c(8471.48, 699.5)
# b130 = b79500
listIndex = 10 # tagList[[listIndex]]
tagList[[listIndex]]$tagLocation = c(8582.56, 699.2)
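The spacing-trial blocks that follow fill each tagList element one field at a time; a small helper like the sketch below could do the same with less repetition. It is only a sketch under the assumptions visible above (the md[[ ]] metadata fields and the five-field tagList element); makeTagEntry is a hypothetical name, not something used elsewhere in these scripts.

# hypothetical helper: build one tagList element from a deployment's metadata
# (assumes md[[cmd]] has $deployment, $startUtime, $stopUtime as used above)
makeTagEntry = function(cmd, tagName, tagLocation){
  list(tagName     = tagName,
       deployment  = md[[cmd]]$deployment,
       tagLocation = tagLocation,            # c(easting, northing)
       startUtime  = md[[cmd]]$startUtime,
       stopUtime   = md[[cmd]]$stopUtime)
}
# e.g. the 125m spacing trial beacon on the north SDL could then be written as
# tagList[[44]] = makeTagEntry(10, md[[10]]$beaconNames[1],
#   c(md[[10]]$sdlEN$easting[1], md[[10]]$sdlEN$northing[1]))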


################################################################################
# All Spacing Trials
################################################################################
# Now for the spacing trials...when there were stationary fish tags
numTagsDone # which element of tagList are we on? = 43

# 125m spacing trial: These tags in these places
#   b79400   north 41
#   b79500   center 45
#   f2       reef
#   f61000   reef
#   f61500   outer
cmd = 10
# b79400 on north 41
listIndex = 44
tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[1]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[1], md[[cmd]]$sdlEN$northing[1])
# b79500 on center 45
listIndex = 45
tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[2]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[5], md[[cmd]]$sdlEN$northing[5])
# f2 at reef
listIndex = 46
tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[1]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd]]$reefEN$northing)
# f61000 at reef
listIndex = 47


341 tagList[[listIndex]]$tagName = md [[cmd]]$fishNames[2] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd] ]$reefEN$northing) # f61500 at outer margin...I get this position estimate # from 'GPS position estimates 2010Nov12.r' listIndex = 48 tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[3] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[ [listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(8529.822,643.8664) # 150m spacing trial: These tags in these places # b1 center 45 # b2 north 41 # b7940 0 north 41 # b79500 center 45 # s79600 reef # f61000 reef # f61500 outer margin cmd = 11 # b1 at center 45 listIndex = 49 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[1] tagList[[listIndex]]$deployment = md[[cmd]]$deployment t agList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[5], md[[cmd]]$sdlEN$northing[5]) # b2 at north 41 listIndex = 50 tagList[[listIndex]] $tagName = md[[cmd]]$beaconNames[2] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$eas ting[1], md[[cmd]]$sdlEN$northing[1]) # b79400 at north 41


342 listIndex = 51 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[3] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[lis tIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[1], md[[cmd]]$sdlEN$northing[1]) # b79500 at center 45 listIndex = 52 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[4] tagList[[listIndex]]$dep loyment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[5], md[[cmd]]$sdlEN$northing[5]) # s79600 at reef li stIndex = 53 tagList[[listIndex]]$tagName = md[[cmd]]$sentinelNames[1] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$ tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd]]$reefEN$northing) # f61000 at reef listIndex = 54 tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[1] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd] ]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd]]$reefEN$northing) # f61500 at outer margin...I get this position estimate # from 'GPS position estimates 2010Nov12 .r' listIndex = 55 tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[2] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex ]]$tagLocation = c(8520.005,642.2417) # 100m spacing trial: These tags in these places # b1 center 45


343 # b2 north 41 # b79400 north 41 # b79500 center 45 # s79600 reef # f61000 reef # f61500 outer margin cmd = 12 # b1 at ce nter 45 listIndex = 56 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[1] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[list Index]]$tagLocation = c(md[[cmd]]$sdlEN$easting[5], md[[cmd]]$sdlEN$northing[5]) # b2 at north 41 listIndex = 57 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[2] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUt ime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[1], md[[cmd]]$sdlEN$northing[1]) # b79400 at north 41 listIndex = 58 tagList[[listIndex]]$tagName = md[[cmd]]$be aconNames[3] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[1], md[[cmd]]$sdlE N$northing[1]) # b79500 at center 45 listIndex = 59 tagList[[listIndex]]$tagName = md[[cmd]]$beaconNames[4] tagList[[listIndex]]$deployment = md[[cmd]]$deployment tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime tagList[[listIndex]]$stopUtime = m d[[cmd]]$stopUtime tagList[[listIndex]]$tagLocation = c(md[[cmd]]$sdlEN$easting[5], md[[cmd]]$sdlEN$northing[5]) # s79600 at reef


listIndex = 60
tagList[[listIndex]]$tagName = md[[cmd]]$sentinelNames[1]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd]]$reefEN$northing)
# f61000 at reef
listIndex = 61
tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[1]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(md[[cmd]]$reefEN$easting, md[[cmd]]$reefEN$northing)
# f61500 at outer margin...We didn't get any GPS data for this one, use the
# target location...
listIndex = 62
tagList[[listIndex]]$tagName = md[[cmd]]$fishNames[2]
tagList[[listIndex]]$deployment = md[[cmd]]$deployment
tagList[[listIndex]]$startUtime = md[[cmd]]$startUtime
tagList[[listIndex]]$stopUtime = md[[cmd]]$stopUtime
tagList[[listIndex]]$tagLocation = c(245128.6 - eastingOffset, 3262353 - northingOffset)

################################################################################
# All Internal Performance Trials
################################################################################
# These trials took place over two days at if4.3 at the beginning of hb1.
# Because each of these tags has its own start and stop time do them all by hand.
# They don't have an md[[cmd]]. Also note we didn't record their positions
# so here we just use their target locations.
#
# Instead of re-typing all the locations, I'll read them in and convert to UTM...
df1 = read.table("C:/zy/Telemetry/R summary files/internal performance locations used 2011Mar25.txt",
  header = TRUE)

library("PBSmapping") # for converting LL to UTM
targetLL = data.frame(X=df1$Longitude, Y=df1$Latitude)
attr(targetLL, "zone") <- 17
attr(targetLL, "projection") <- "LL"
targetUTM = convUL(targetLL, km=FALSE) # (longitude, latitude)


df1$easting = targetUTM$X
df1$northing = targetUTM$Y

# now convert date, time to utime
df1$startUtime = unclass(as.POSIXct(strptime(paste(df1$Date, df1$startTime, sep=" "),
  "%Y%B%d %H:%M:%S", tz="EST5EDT"), origin="1970-1-1", tz="EST5EDT"))
df1$stopUtime = unclass(as.POSIXct(strptime(paste(df1$Date, df1$stopTime, sep=" "),
  "%Y%B%d %H:%M:%S", tz="EST5EDT"), origin="1970-1-1", tz="EST5EDT"))

# lastly, I don't want to use f49 so remove them
df1 = df1[df1$tag != 49, ]

listIndex = 62 # where in the tagList did I stop
for (i in 1:nrow(df1)){
  tagList[[listIndex+i]]$tagName = paste("f", df1$tag[i], sep="")
  tagList[[listIndex+i]]$deployment = "hb1"
  tagList[[listIndex+i]]$startUtime = df1$startUtime[i]
  tagList[[listIndex+i]]$stopUtime = df1$stopUtime[i]
  if(df1$Location[i]=="reef"){
    tagList[[listIndex+i]]$tagLocation = c(md[[3]]$reefEN$easting, md[[3]]$reefEN$northing)
  } else {
    tagList[[listIndex+i]]$tagLocation = c(df1$easting[i] - eastingOffset,
      df1$northing[i] - northingOffset)
  }
}

################################################################################
# Some plots to check that all is well...YES, ALL IS WELL.
################################################################################
# sp150
plot(md[[11]]$sdlEN$easting, md[[11]]$sdlEN$northing, pch=19, col="red")
points(md[[11]]$reefEN$easting, md[[11]]$reefEN$northing, pch=19, col="red")
points(tagList[[49]]$tagLocation[1], tagList[[49]]$tagLocation[2], pch=17, col="green")
points(tagList[[50]]$tagLocation[1], tagList[[50]]$tagLocation[2], pch=17, col="green")
points(tagList[[51]]$tagLocation[1], tagList[[51]]$tagLocation[2], pch=17, col="blue")
points(tagList[[52]]$tagLocation[1], tagList[[52]]$tagLocation[2], pch=17, col="blue")
points(tagList[[53]]$tagLocation[1], tagList[[53]]$tagLocation[2], pch=17, col="green")
points(tagList[[54]]$tagLocation[1], tagList[[54]]$tagLocation[2], pch=17, col="green")
points(tagList[[55]]$tagLocation[1], tagList[[55]]$tagLocation[2], pch=17, col="green")
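The check plots above, and the similar blocks that follow for the other deployments, repeat the same plot()/points() calls; a hypothetical wrapper along these lines could draw any deployment's hydrophones, reef, and a chosen range of tagList entries. This is a sketch only, assuming the md[[ ]] and tagList structures defined above; checkTagLocations is not a function used in the original scripts.

# hypothetical convenience wrapper for the tagLocation check plots
checkTagLocations = function(mdIndex, tagIndices, tagCol="green"){
  plot(md[[mdIndex]]$sdlEN$easting, md[[mdIndex]]$sdlEN$northing, pch=19)
  points(md[[mdIndex]]$reefEN$easting, md[[mdIndex]]$reefEN$northing, pch=19)
  for (i in tagIndices){
    points(tagList[[i]]$tagLocation[1], tagList[[i]]$tagLocation[2],
      pch=17, col=tagCol)
  }
}
# e.g. checkTagLocations(11, 49:55)  # sp150, as plotted above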


346 # sp125 points(md[[10]]$sdlEN$easting, md[[10]]$sdlEN$northing, pch=19, col="black") points(md[[10]]$reefEN$easting, md[[10]]$reefEN$northing, pch=19, col="black") points(tagList[[4 4]]$tagLocation[1], tagList[[44]]$tagLocation[2], pch=17, col="brown") points(tagList[[45]]$tagLocation[1], tagList[[45]]$tagLocation[2], pch=17, col="brown") points(tagList[[46]]$tagLocation[1], tagList[[46]]$tagLocation[2], pch=17, col="brown") points(ta gList[[47]]$tagLocation[1], tagList[[47]]$tagLocation[2], pch=17, col="yellow") points(tagList[[48]]$tagLocation[1], tagList[[48]]$tagLocation[2], pch=17, col="brown") # sp100 points(md[[12]]$sdlEN$easting, md[[12]]$sdlEN$northing, pch=19, col="black") po ints(md[[12]]$reefEN$easting, md[[12]]$reefEN$northing, pch=19, col="black") points(tagList[[56]]$tagLocation[1], tagList[[56]]$tagLocation[2], pch=17, col="grey") points(tagList[[57]]$tagLocation[1], tagList[[57]]$tagLocation[2], pch=17, col="grey") point s(tagList[[57]]$tagLocation[1], tagList[[58]]$tagLocation[2], pch=17, col="yellow") points(tagList[[58]]$tagLocation[1], tagList[[59]]$tagLocation[2], pch=17, col="yellow") points(tagList[[60]]$tagLocation[1], tagList[[60]]$tagLocation[2], pch=17, col="gre y") points(tagList[[61]]$tagLocation[1], tagList[[61]]$tagLocation[2], pch=17, col="yellow") points(tagList[[62]]$tagLocation[1], tagList[[62]]$tagLocation[2], pch=17, col="grey") # hb2008 points(md[[2]]$sdlEN$easting, md[[2]]$sdlEN$northing, pch=19, col= "red", cex=2) points(md[[2]]$reefEN$easting, md[[2]]$reefEN$northing, pch=19, col="red", cex=2) points(tagList[[6]]$tagLocation[1], tagList[[6]]$tagLocation[2], pch=17, col="green") points(tagList[[7]]$tagLocation[1], tagList[[7]]$tagLocation[2], pch=17, c ol="green") points(tagList[[8]]$tagLocation[1], tagList[[8]]$tagLocation[2], pch=17, col="green") points(tagList[[9]]$tagLocation[1], tagList[[9]]$tagLocation[2], pch=17, col="green") points(tagList[[10]]$tagLocation[1], tagList[[10]]$tagLocation[2], pch=1 7, col="green") # hb2007 points(md[[1]]$sdlEN$easting, md[[1]]$sdlEN$northing, pch=19) points(md[[1]]$reefEN$easting, md[[1]]$reefEN$northing, pch=19) points(tagList[[1]]$tagLocation[1], tagList[[1]]$tagLocation[2], pch=17) points(tagList[[2]]$tagLocation [1], tagList[[2]]$tagLocation[2], pch=17) points(tagList[[3]]$tagLocation[1], tagList[[3]]$tagLocation[2], pch=17) points(tagList[[4]]$tagLocation[1], tagList[[4]]$tagLocation[2], pch=17) points(tagList[[5]]$tagLocation[1], tagList[[5]]$tagLocation[2], pch =17) # sp125 # hb1 plot(md[[3]]$sdlEN$easting, md[[3]]$sdlEN$northing, pch=19) points(md[[3]]$reefEN$easting, md[[3]]$reefEN$northing, pch=19) points(tagList[[11]]$tagLocation[1], tagList[[11]]$tagLocation[2], pch=17, col="red") points(tagList[[12]]$tagL ocation[1], tagList[[12]]$tagLocation[2], pch=17, col="red")


347 points(tagList[[13]]$tagLocation[1], tagList[[13]]$tagLocation[2], pch=17, col="red") points(tagList[[14]]$tagLocation[1], tagList[[14]]$tagLocation[2], pch=17, col="red") points(tagList[[15]]$ta gLocation[1], tagList[[15]]$tagLocation[2], pch=17, col="blue") for(i in 63:112){ points(tagList[[i]]$tagLocation[1], tagList[[i]]$tagLocation[2], pch=17, col="green") } # sb1 plot(md[[4]]$sdlEN$easting, md[[4]]$sdlEN$northing, pch=19) points(md[[4]]$re efEN$easting, md[[4]]$reefEN$northing, pch=19) points(tagList[[16]]$tagLocation[1], tagList[[16]]$tagLocation[2], pch=17, col="red") points(tagList[[17]]$tagLocation[1], tagList[[17]]$tagLocation[2], pch=17, col="red") points(tagList[[18]]$tagLocation[1], tagList[[18]]$tagLocation[2], pch=17, col="red") points(tagList[[19]]$tagLocation[1], tagList[[19]]$tagLocation[2], pch=17, col="red") points(tagList[[20]]$tagLocation[1], tagList[[20]]$tagLocation[2], pch=17, col="red") # sb2 plot(md[[5]]$sdlEN$easting, md[[5]]$sdlEN$northing, pch=19) points(md[[5]]$reefEN$easting, md[[5]]$reefEN$northing, pch=19) points(tagList[[21]]$tagLocation[1], tagList[[21]]$tagLocation[2], pch=17, col="red") points(tagList[[22]]$tagLocation[1], tagList[[22]]$tagLocation[2], pch=17, col="red") points(tagList[[23]]$tagLocation[1], tagList[[23]]$tagLocation[2], pch=17, col="red") points(tagList[[24]]$tagLocation[1], tagList[[24]]$tagLocation[2], pch=17, col="red") points(tagList[[25]]$tagLocation[1], tagList[[25]]$tagLocation[2], pch=1 7, col="red") # hb2 plot(md[[6]]$sdlEN$easting, md[[6]]$sdlEN$northing, pch=19) points(md[[6]]$reefEN$easting, md[[6]]$reefEN$northing, pch=19) points(tagList[[26]]$tagLocation[1], tagList[[26]]$tagLocation[2], pch=17, col="red") points(tagList[[27]]$tagL ocation[1], tagList[[27]]$tagLocation[2], pch=17, col="red") points(tagList[[28]]$tagLocation[1], tagList[[28]]$tagLocation[2], pch=17, col="red") points(tagList[[29]]$tagLocation[1], tagList[[29]]$tagLocation[2], pch=17, col="red") points(tagList[[30]]$ta gLocation[1], tagList[[30]]$tagLocation[2], pch=17, col="red") # sb3 plot(md[[7]]$sdlEN$easting, md[[7]]$sdlEN$northing, pch=19) points(md[[7]]$reefEN$easting, md[[7]]$reefEN$northing, pch=19) points(tagList[[31]]$tagLocation[1], tagList[[31]]$tagLocation [2], pch=17, col="red") points(tagList[[32]]$tagLocation[1], tagList[[32]]$tagLocation[2], pch=17, col="red") points(tagList[[33]]$tagLocation[1], tagList[[33]]$tagLocation[2], pch=17, col="red") points(tagList[[34]]$tagLocation[1], tagList[[34]]$tagLocati on[2], pch=17, col="red") points(tagList[[35]]$tagLocation[1], tagList[[35]]$tagLocation[2], pch=17, col="red") # hb3 plot(md[[8]]$sdlEN$easting, md[[8]]$sdlEN$northing, pch=19) points(md[[8]]$reefEN$easting, md[[8]]$reefEN$northing, pch=19)


348 points(tagLis t[[36]]$tagLocation[1], tagList[[36]]$tagLocation[2], pch=17, col="red") points(tagList[[37]]$tagLocation[1], tagList[[37]]$tagLocation[2], pch=17, col="red") points(tagList[[38]]$tagLocation[1], tagList[[38]]$tagLocation[2], pch=17, col="red") points(tagL ist[[39]]$tagLocation[1], tagList[[39]]$tagLocation[2], pch=17, col="red") # sb4 plot(md[[9]]$sdlEN$easting, md[[9]]$sdlEN$northing, pch=19) points(md[[9]]$reefEN$easting, md[[9]]$reefEN$northing, pch=19) points(tagList[[40]]$tagLocation[1], tagList[[40]] $tagLocation[2], pch=17, col="red") points(tagList[[41]]$tagLocation[1], tagList[[41]]$tagLocation[2], pch=17, col="red") points(tagList[[42]]$tagLocation[1], tagList[[42]]$tagLocation[2], pch=17, col="red") points(tagList[[43]]$tagLocation[1], tagList[[43 ]]$tagLocation[2], pch=17, col="red") ###################################################################### ######### # ONCE THE LIST IS FULLY POPULATED GO TO THE NEXT FILE AND DO THE CALCULATIONS... # NAMED SOMETHING LIKE "chapter 2 array validation.r" # ##################################################################### ######### # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # chapter 2 array validation.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@ ###################################################################### ########## ###################################################################### ########## ###################################################################### ######## ## ###################################################################### ########## ### Testing Array Performance ###################################################################### ########## ############################################################# ######### ########## ###################################################################### ########## ###################################################################### ########## ###################################################################### ### #######


### detections
################################################################################
#
# These are the data sets which can be used for this:
# 1. 7 Dec 2007 50m deployment, B79100-B79500
# 2. 22 July 2008 single SDL/single tag detection trials at 200m and 300m, T60800
# 3. 9 Oct 2008 125m deployment, B79100-79500
# 4. 23 April 2009 spacing trial, 125m
# 5. 7 May 2009 spacing trials, 150m and 100m
# 6. 1 June 2009 internal performance, 100m
# 7-13. 1 June 2009 - 30 Nov 2009 100m deployments
# From these days I want the distance between each tag and each SDL, then I
# want to calculate the hourly and total detection rate. For this use *.toa
# data.
#
# I also want to calculate the position solution rate for each tag depending
# on the array spacing, and how it changes through time.
# First read/execute another file which creates 'tagList[[ ]]'
# ('chapter 2 array validation tagList.r').
# This list has tag info for these tags from these deployments:
#
# 5 beacons in 2007 deployment
#   Don't do this for the single SDL detection trial...
# 5 beacons in 2008 deployment
# 3 fish tags, 2 beacons in 125m spacing trial
# 2 fish tags, 4 beacons, 1 sentinel in 150m spacing trial
# 2 fish tags, 4 beacons, 1 sentinel in 100m spacing trial
# 40 fish tags first internal performance. Actually, 8 tags at 40 places
#   (The 4 beacons, 1 sentinel will be included in first 2009 fish study)
# 10 fish tags first internal performance
#   (The 4 beacons, 1 sentinel will be included in first 2009 fish study)
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon, 1 sentinel fish study
# 4 beacon fish study
# 4 beacon fish study
#
# or in other words
# 5+5+5+5+5+5+5+4+4 = 43 in fish studies
# 19 in spacing trials + 43 = 62
# 56 in internal performance + 62 = 118
#
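The hourly detection-rate calculation described above is done inside testTheTag() below with split() and cut() on POSIX times. As a stand-alone illustration of just that binning step, the sketch below uses fabricated detection times and an assumed pingsPerHour value; none of these numbers come from the study data.

# toy example of the hourly binning used for detection fractions
# (detTimes is fabricated here purely to show the mechanics)
pingsPerHour = 180                      # e.g. a beacon pinging every 20 s
detTimes = as.numeric(Sys.time()) + sort(runif(500, 0, 3*3600))  # fake utimes
datiG = as.POSIXlt(detTimes, origin="1970-1-1", tz="GMT")
hourlyDetFrac = sapply(split(detTimes, cut(datiG, breaks="hour")),
  function(x) length(x) / pingsPerHour)
round(hourlyDetFrac, 2)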


# Each element in tagList has: tagName, deployment, tagLocation, startUtime, stopUtime
load("C:/zy/Telemetry/R summary files/tagList 2011Apr09.rdata")

# get the global code
source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")

library(ggplot2)
library(plotrix)       # for multhist
library(TeachingDemos) # for subplot

# get the info for internal performance
# ("C:/zy/Telemetry/R summary files/internal performance locations used 2011Mar25.txt")

tagList[[1]]

# to decide how to handle things, how long is each tag in the water
outTimes = data.frame(tag=NA, time=NA)
for (i in 1:length(tagList)){
  outTimes[i,] = c(i, (tagList[[i]]$stopUtime - tagList[[i]]$startUtime)/3660)
}
plot(outTimes, pch=19, ylim=c(0,5))
# I'll use 5hrs as a cut off for doing hourly calculations or just one, I'm
# thinking mostly of the internal array performance tags, not out for very long

################################################################################
# A function in which I specify one tag, one deployment and calculate the
# detection frequency at different distances through time, and maybe at
# different water conditions. I also calculate the position solution frequency
# at different array spacings, through time, and maybe at different water
# conditions.
testTheTag = function(cTag, showme=FALSE, filterMe=TRUE, psrMe=TRUE){
  # cTag = 1; tagName = tagList[[cTag]]$tagName; deployment = tagList[[cTag]]$deployment;
  # tagLocation = tagList[[cTag]]$tagLocation; startUtime=tagList[[cTag]]$startUtime;
  # stopUtime=tagList[[cTag]]$stopUtime
  # maybe later get the ADCP data

  tagName = cTag$tagName
  deployment = cTag$deployment
  tagLocation = cTag$tagLocation


  startUtime = cTag$startUtime
  stopUtime = cTag$stopUtime
  tagType = substr(tagName,1,1)
  tagID = substr(tagName,2,10)
  numSdl = 5
  showRed = FALSE # switch for plotting in red when fractions are above 1

  # get info from md[[]]
  for (i in 1:length(md)){ # i loops through all deployments
    if (deployment == md[[i]]$deployment){
      print(i)
      spacing = md[[i]]$spacing
      homeDir = paste(md[[i]]$homeDir, "/ALPS 2011Feb14", sep="")
      bestBeacon = substr(md[[i]]$bestBeacon,2,10)
      secondBestBeacon = substr(md[[i]]$secondBestBeacon,2,10)
      reefEN = md[[i]]$reefEN
      sdlEN = md[[i]]$sdlEN
      # is this beacon/sentinel at the center or not, find out which SDL it was on...
      location = NA # keep this if you're using a funky 2008 beacon code, or a
                    # non-beacon which probably doesn't have a position estimate.
                    # I won't do the sentinels now either.
      for (j in 1:nrow(md[[i]]$beaconEN)){
        if(tagName == md[[i]]$beaconEN$beaconID[j]){
          location = md[[i]]$beaconEN$location[j]
        } else {
          # you're probably on one of the problem 2008 tags, where codes
          # get used in place of beacons
          if ((tagName == "b80") | (tagName == "b81")) {location=42}
          if ((tagName == "b85") | (tagName == "b86")) {location=44}
          if (tagName == "b130") {location=45}
          # or you're using a fish tag in one of the trials
          if (tagType == "f"){location="inside"} # these tags are all over the place
          # now the sp150...read the metadata about this...
          # N41 had B2, B79400. C45 had B1, B79500.
          # The sentinels 79600 and T61000 were at the reef
          if ((tagName == "b2") | (tagName == "b79400")){location="41"}
          if ((tagName == "b1") | (tagName == "b79500")){location="45"}
        }
      }
      # ...then designate it as center or not
      center = ifelse(((location==45)|(location=="reef")|(location=="inside")), T, F)
    }
  }


  setwd(homeDir)
  # if cTag == bestBeacon, then use secondBestBeacon
  cBeacon = ifelse(grepl(bestBeacon,tagName), secondBestBeacon, bestBeacon)

  # calculate the number of transmissions this tag made during the entire deployment
  numSec = stopUtime - startUtime
  numMin = numSec/60 # this is for visually weighting data points
  if (tagType == "f"){
    numPings = numSec / 2
    pingsPerHour = 30 * 60
  } else if (tagType == "b"){
    numPings = numSec / 20
    pingsPerHour = 180
  } else if(tagType == "s"){
    # it's a sentinel: 5min on, 25min off, 2 sec burst interval = 300 pings/hr
    # Luckily all the sentinels were out longer than 1 hour...makes it easier
    numPings = (numSec / 3600) * 300
    pingsPerHour = 300
  } else {print("This tag isn't right")}

  # calculate the distance between this tag and 5 SDLs
  sdlDist = sqrt( (sdlEN$easting - tagLocation[1])^2 +
    (sdlEN$northing - tagLocation[2])^2 )

  # read in the *.toa files for this deployment
  fileNames = list.files(pattern=paste("TxId",tagID,".toa",sep=""),
    recursive=TRUE, ignore.case=TRUE)
  d1 = lapply(as.list(fileNames), read.table, header=FALSE)
  d2 = do.call("rbind", d1) # I'M NOT SURE THESE ARE IN CHRONOLOGICAL ORDER

  # gather the non -1 detection times during the interval by each SDL individually
  sdlList = list() # a list of 5 SDLs
  sdlList[[1]] = d2$V1[(d2$V1 != -1) & (d2$V1 > startUtime) & (d2$V1 < stopUtime)]
  sdlList[[2]] = d2$V6[(d2$V6 != -1) & (d2$V6 > startUtime) & (d2$V6 < stopUtime)]
  sdlList[[3]] = d2$V11[(d2$V11 != -1) & (d2$V11 > startUtime) & (d2$V11 < stopUtime)]
  sdlList[[4]] = d2$V16[(d2$V16 != -1) & (d2$V16 > startUtime) & (d2$V16 < stopUtime)]
  sdlList[[5]] = d2$V21[(d2$V21 != -1) & (d2$V21 > startUtime) & (d2$V21 < stopUtime)]

  # in a couple cases (tags 46 and 48) during the internal array performance
  # test there were no tag detections by some SDLs so check for and fix this
  # so that the logic test in the next for loop doesn't fail when it reaches
  # an NA
  for (i in 1:numSdl){
    if(length(sdlList[[i]]) == 0){sdlList[[i]][1] = startUtime}
  }


  # to make the 'cut()' results uniform add a detection at the start and stopUtimes
  # if they're not already there
  for (i in 1:numSdl){
    if(sdlList[[i]][1] != startUtime){sdlList[[i]] = c(startUtime, sdlList[[i]])}
    if(sdlList[[i]][length(sdlList[[i]])] != stopUtime){sdlList[[i]] = c(sdlList[[i]], stopUtime)}
  }

  ### DETECTIONS ###
  # Total detections over the entire deployment by each SDL
  detFrac = vector(length=numSdl)
  for (i in 1:numSdl){
    detFrac[i] = length(sdlList[[i]])/numPings
  }
  # a check that my fractions are less than 1
  redFlag = FALSE
  for (i in 1:numSdl){
    if((detFrac[i] > 1)&(showRed)) {redFlag=TRUE}
  }
  if(redFlag){cColors = "red"} else {cColors = "black"}

  par(mfrow=c(2,2))
  plot(sdlDist, detFrac, pch=19, main=paste(tagName, deployment), ylim=c(0,1),
    col=cColors)

  # Look at hourly detections
  datiG = list()
  dSplit = list()
  for (i in 1:numSdl){
    datiG[[i]] = as.POSIXlt(sdlList[[i]], origin="1970-1-1", tz="GMT")
    dSplit[[i]] = split(sdlList[[i]], cut(datiG[[i]], breaks="hour"))
  }
  # How many hours were in this deployment
  deploymentHour = 1:length(dSplit[[1]])
  #hourList = attr(dSplit[[1]], "names")

  # Create a list of lists to hold the hourly detection fraction:
  # ...a list of 5 elements, each element a list with one element for each hour
  # ...but the first and last hours are incomplete...drop them or figure the
  # ...fraction of the hour
  hourlyDetFrac = list()
  for (i in 1:numSdl){
    hourlyDetFrac[[i]] = vector(length=length(dSplit[[i]]))
  }

  # Calculate the hourly detection fraction of this tag by each SDL each hour
  # ...for tags out less than 5hrs (I'm thinking of the internal performance test)
  # ...just calculate one detection fraction. 5hrs = 18000sec
  if (numSec > 18000){ # if it's more than 5hrs...


    for (i in 1:numSdl){
      # the 'first' (and 'last') hour started mid-way through the hour...I could
      # find the fraction of the hour actually used...but with hundreds of
      # hours this won't change the answer much
      # Don't use this code for special first hour calculations
      #j=1
      ## look at the startUtime to figure out which hour it's in
      #fd = as.POSIXlt(startUtime, origin="1970-1-1", tz="GMT")
      ## now find the utime of the start of that hour...what a pain
      #temp1 = paste(fd$year+1900,"-",fd$mon+1,"-",fd$mday," ",fd$hour,":00:00", sep="")
      #startOfFirstHour = as.POSIXct(strptime(temp1, "%Y-%m-%d %H:%M:%S", tz="GMT"),
      #  origin="1970-1-1", tz="GMT")
      #notused = fd - startOfFirstHour # this is how much of the hour was not used
      ## how many pings during the partial hour
      #partialPings = floor(pingsPerHour * (60 - notused[[1]])/60)
      ## fraction of partialPings detected
      #hourlyDetFrac[[i]][[j]] = length(dSplit[[i]][[j]]) / partialPings

      # now calculate hourlyDetFrac
      for (j in 1:(length(dSplit[[1]]))){
        hourlyDetFrac[[i]][[j]] = length(dSplit[[i]][[j]]) / pingsPerHour
      }
    } # end for (i in 1:numSdl) loop
  } else { # else numSec < 18000 so calculate just one fracDet
    for (i in 1:numSdl){ # for each SDL calculate one overall detFrac
      # make hourlyDetFrac[[i]] be only one element long
      hourlyDetFrac[[i]] = hourlyDetFrac[[i]][1]
      j = 1 # the only one
      totalHours = numSec/3600
      totalPings = floor(pingsPerHour * totalHours)
      hourlyDetFrac[[i]][[j]] = length(sdlList[[i]]) / totalPings
    }
  } # end else (stopUtime-startUtime)<5hrs so calculate just one fracDet

  # a check that my fractions are less than 1
  redFlag = FALSE
  for (i in 1:numSdl){
    if((max(hourlyDetFrac[[i]]) > 1)&(showRed)) {redFlag=TRUE}
  }
  # show the hourlyDetFrac
  plot(hourlyDetFrac[[1]], type="n", xlab="hour of deployment", ylim=c(0,1))
  for (i in 1:numSdl){


355 if(redFlag){cColors = rep("red",n umSdl)} else {cColors = plotColors} points(hourlyDetFrac[[i]], type="b", col=cColors[i]) } # Read in ADCP data ad = importADCPdata() # Pick out data within the current deployment ad1 = ad[(ad$utime > startUtime) & (ad$utime < stopUtime ),] if(nrow(ad1)>0){ # To make ADCP data cover the same time as the detection data... # ...change the first and last time to start and stopUtime, make the data=NA ad2 = ad1 ad2$utime[1] = startUtime ad2$datiG[1] = as.POSIXlt(startUti me, origin="1970 1 1", tz="GMT") ad2$datiL[1] = as.POSIXlt(startUtime, origin="1970 1 1", tz="EST5EDT") ad2[1,4:16] = NA ad2$utime[nrow(ad2)] = stopUtime ad2$datiG[nrow(ad2)] = as.POSIXlt(stopUtime, origin="1970 1 1", tz="GMT") ad2$datiL[nrow(ad2)] = as.POSIXlt(stopUtime, origin="1970 1 1", tz="EST5EDT") ad2[nrow(ad2),4:16] = NA # calculate hourly averages for each deployment hour aSplit = split(ad2, cut(ad2$datiG, breaks="hour")) temp1 = length(aS plit) aHourlyMeans = data.frame("tem"=rep(NA,temp1), "dep"=rep(NA,temp1), "eaaL"=rep(NA,temp1), "eaaM"=rep(NA,temp1), "eaaU"=rep(NA,temp1), "magL"=rep(NA,temp1), "magM"=rep(NA,temp1), "magU"=rep(NA,temp1), "dirL"=rep(NA,temp1), "di rM"=rep(NA,temp1), "dirU"=rep(NA,temp1)) for (i in 1:temp1){ aHourlyMeans[i,] = mean(aSplit[[i]])[22:32] } # show water conditions v. hourlyDetFrac if (showme){ # I don't see any patterns in here par(mfrow=c(2,3)) plot( aHourlyMeans$tem, hourlyDetFrac[[1]], type="n", ylim=c(0,1)) for(i in 1:numSdl){ points(aHourlyMeans$tem,hourlyDetFrac[[i]],pch=19,cex=0.2,col=plotColors[i]) } plot(aHourlyMeans$dep, hourlyDetFrac[[1]], type="n", ylim=c(0,1)) for(i in 1:numSdl){ points(aHourlyMeans$dep,hourlyDetFrac[[i]],pch=19,cex=0.2,col=plotColors[i]) } plot(aHourlyMeans$eaaL, hourlyDetFrac[[1]], type="n", ylim=c(0,1)) for(i in 1:numSdl){


356 points(aHourlyMeans$eaaL,hourlyDet Frac[[i]],pch=19,cex=0.2,col=plotColors[i]) } plot(aHourlyMeans$magL, hourlyDetFrac[[1]], type="n", ylim=c(0,1)) for(i in 1:numSdl){ points(aHourlyMeans$magL,hourlyDetFrac[[i]],pch=19,cex=0.2,col=plotColors[i]) } plot( aHourlyMeans$dirL, hourlyDetFrac[[1]], type="n", ylim=c(0,1)) for(i in 1:numSdl){ points(aHourlyMeans$dirL,hourlyDetFrac[[i]],pch=19,cex=0.2,col=plotColors[i]) } } } else { # end if(nrow(ad1)>0) showme=FALSE aHourlyMeans = data.frame("tem"=NA, "dep"=NA, "eaaL"=NA, "eaaM"=NA, "eaaU"=NA, "magL"=NA, "magM"=NA, "magU"=NA, "dirL"=NA, "dirM"=NA, "dirU"=NA) } ##############################################33 ### POSITION SOLUTIONS ### d1 = impo rtALPSdata(deployment=cTag$deployment, tagName=cTag$tagName, psr=psrMe) # if there are any data in d1 then filter it if ( nrow(d1[1]$data) > 1 ){ if (filterMe){ d2 = filterALPSdata(df1=d1, cnF=1.5, speedF=0.8, minuteMean=FALSE)$data } els e { d2 = filterALPSdata(df1=d1, minuteMean=FALSE)$data } # gather the PS times during the interval d3 = d2$utime[(d2$utime > startUtime) & (d2$utime < stopUtime)] # to make the number of hours consistent with detection stuff above add # a PS at the start and stopUtime d3 = c(startUtime, d3, stopUtime) # Total PS over the entire deployment psFrac = length(d3)/numPings } else { # there are no data in d1, so make the answers = 0 d3 = c(startUtime, stopUtime) psFrac = 0 } # Look at hourly PS datiG = as.POSIXlt(d3, origin="1970 1 1", tz="GMT")


357 pSplit = split(d3, cut(datiG, breaks="hour")) # check that the number of hours is the same if (length(dSplit[[1]]) != length(pSplit)){ print("The nu mber of hours isn't right") } # Create a list to hold the hourly PS fraction: # ... with one element for each hour hourlyPsFrac = vector(length=length(pSplit)) # Calculate the hourly PS fraction of this tag each hour # ... for tags out less than 5hrs (I'm thinking of the internal performance test) # ... just calculate one detection fraction. 5hrs = 18000sec if (numSec > 18000){ # if it's more than 5hrs... # the first and last hours could be handled differently because they're # partial hours, but I won't do that now. # Calculate the hourly PS fraction of this tag by the array for (i in 1:length(pSplit)){hourlyPsFrac[i]=length(pSplit[[i]])/pingsPerHour} } else { # else numSec < 18000 so calculate just one PsFrac # make hourlyPSFrac be only one element long hourlyPsFrac = hourlyPsFrac[1] # calculate the one PsFrac i=1 # the only one # totalHours and totalPings from above hourlyPsFrac[i] = psFrac } # a check that my fractions are less t han 1 redFlag = FALSE if((max(hourlyPsFrac) > 1)&(showRed)) {redFlag=TRUE} # show the hourlyPsFrac if(redFlag){cColor="red"} else {cColor="black"} plot(hourlyPsFrac, ylim=c(0,1), xlab="hour of deployment", type="b", col=cColor) # show the average hourlyDetFrac v hourlyPsFrac meanHourlyDetFrac = rowMeans(cbind(hourlyDetFrac[[1]], hourlyDetFrac[[2]], hourlyDetFrac[[3]], hourlyDetFrac[[4]], hourlyDetFrac[[5]])) plot(meanHourlyDetFrac, hourlyPsFrac, pch=19, xlim=c(0,1), ylim=c (0,1)) # show water conditions v. hourlyPsFrac if (showme){ # I don't see any patterns in here


358 par(mfrow=c(2,3)) plot(aHourlyMeans$tem, hourlyPsFrac, pch=19,cex=0.2,ylim=c(0,1)) plot(aHourlyMeans$dep, hourlyPsFrac, pch=19,cex=0.2,ylim=c( 0,1)) plot(aHourlyMeans$eaaM, hourlyPsFrac, pch=19,cex=0.2,ylim=c(0,1)) plot(aHourlyMeans$magM, hourlyPsFrac, pch=19,cex=0.2,ylim=c(0,1)) plot(aHourlyMeans$dirM, hourlyPsFrac, pch=19,cex=0.2,ylim=c(0,1)) } # combine hourly data for ggplots later on hourlyMeans = data.frame( hour=1:length(hourlyDetFrac[[1]]), detFrac1=hourlyDetFrac[[1]], detFrac2=hourlyDetFrac[[2]], detFrac3=hourlyDetFrac[[3]], detFrac4=hourlyDetFrac[[4]], detFrac5=hourlyDetFrac[[5]], psFrac=hou rlyPsFrac, temperature=aHourlyMeans$tem, magL=aHourlyMeans$magL, eaaL=aHourlyMeans$eaaL, eaaM=aHourlyMeans$eaaM, eaaU=aHourlyMeans$eaaU ) ### RETURN RESULTS ### answer=list(hourlyMeans=hourlyMeans,tagName=tagName, deployment=de ployment, numMin=numMin, spacing=spacing, center=center, sdlDist=sdlDist, detFrac=detFrac, psFrac=psFrac) return(answer) } # end testTheTag() function bob = testTheTag(cTag=tagList[[46]], showme=F) allresults = list() filterMeNow = T ps rMeNow = F # 2007 deployment for (i in 1:5){ # cycle through all tags in this deployment allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # 2008 deployment...the problem child for (i in 6:10){ # cycle through all tags in this deployment


359 allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # cTag 6 and 7 are b80 and b81, which are b79200 # ...something wrong with these so I won't use them... # cTag 8 and 9 are b85 and b86, which are b79400 # c Tag 10 is b130, which is b79500 allresults[[6]]$detFrac = rep(NA,5) #allresults[[6]]$detFrac + allresults[[7]]$detFrac allresults[[7]]$detFrac = rep(NA,5) allresults[[6]]$psFrac = NA #allresults[[6]]$psFrac + allresults[[7]]$psFrac allresults[[7]]$psFrac = NA allresults[[8]]$detFrac = allresults[[8]]$detFrac + allresults[[9]]$detFrac allresults[[9]]$detFrac = rep(NA,5) allresults[[8]]$psFrac = allresults[[8]]$psFrac + allresults[[9]]$psFrac allresults[[9]]$psFrac = NA plot(allresults[[1]]$sdlDist, allresu lts[[1]]$detFrac, pch=19, xlim=c(0,250), ylim=c(0,1), type="n") for (i in 1:10){ points(allresults[[i]]$sdlDist, allresults[[i]]$detFrac, pch=19, col=plotColors[i]) } # hb1:11 15. sb1:16 20. sb2:21 25. hb2:26 30. sb3:31 35. hb3:36 39. sb4:40 43. # ..remember these are only the stationary tags for (i in 11:43){ # cycle through all tags in this deployment allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # spacing trials...44 62 # for (i in 44:62){ # cycle through al l tags in the spacing trials allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # inter performance trials... for (i in c(63:112)){ # cycle through all tags in the internal performance trial allresults[[i] ]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } ###### # # I want to do plots with unfiltered and filtered psFrac zz rawResults = allresults # This used to be 'rawResults' zz filteredResults = allresults # This used to be 'filteredR esults' # gather some things into one dataframe for regressions


360 # add regression lines # gather data into one place distVec = c(); detFrac = c(); detNumMin = c(); spacingVec = c(); psFracRaw = c(); psFracFiltered = c(); psNumMin = c(); for (i in 1:leng th(rawResults)){ # for each # detections distVec = c(distVec, rawResults[[i]]$sdlDist) detFrac = c(detFrac, rawResults[[i]]$detFrac) detNumMin = c(detNumMin, rep(rawResults[[i]]$numMin, length(rawResults[[i]]$detFrac))) # PS spacingVec = c(spacingVec, rawResults[[i]]$spacing) psFracRaw = c(psFracRaw, rawResults[[i]]$psFrac) psFracFiltered = c(psFracFiltered, filteredResults[[i]]$psFrac) psNumMin = c(psNumMin, rawResults[[i]]$numMin) } detections = data.frame(distVec,detFrac, detNumMin) solutions = data.frame(spacingVec,psFracRaw,psFracFiltered,psNumMin) ###################################################################### ######## # a plot of distance v detFrac plColors = rep("black",length(allresults)) plType = rep(19,length (allresults)) plSize = rep(1,length(allresults)) for(i in 1:length(allresults)){ if(allresults[[i]]$numMin < 3600){ plType[i]=4; plColors[i]="red"; plSize[i]=2; } } # plot 1 # a plot of distance v detFrac par(mar=c(5,5,1,1)+0.1) plot(allresults[[1 ]]$sdlDist, allresults[[1]]$detFrac, las=1, type="n", bty="l", cex.axis = 1.5, cex.lab=1.5, xlab="Distance Between Transmitter and Hydrophone (m)", ylab="", xlim=c(0,300), ylim=c(0,0.99)) for (i in 1:length(allresults)){ points(allresults[[i]]$ sdlDist, allresults[[i]]$detFrac, pch=plType[i], cex=plSize[i], col="black")


}
abline(h=0)
mtext("Detection Fraction", side=2, line=3.5, cex=1.7)

# now add the single HP single tag detection trial results... find them in
# "C:\zy\The closets\data closet\Telemetry\2008\2008 Jul 22 SDL detection trials"
# "SDL detection trial results from ALPS.xls"
# at 200m for 7 min, detection fraction is 0.679
# at 300m for 5 min, detection fraction is 0.245
points(c(200,300), c(0.679, 0.245), pch=4, cex=2, col="black")
# add a legend
legend(220,1,legend=c("Longer than 6hr","Shorter than 6hr"),pch=c(19,4))

################################################################################
# Look at detFrac through time, I looked at all tagList 1-43 and I like 26 the
# best because it's a central beacon, good detections, good PS, places when
# drops in detFracs agree among some sdls and places when some don't agree.
# pick one central and one marginal beacon...central i=26, marginal i=12
i = 12
# for shorthand pick out just the pertinent data and drop the first and last hours
r1 = filteredResults[[i]]$hourlyMeans[,c("hour","detFrac1","detFrac2",
  "detFrac3","detFrac4","detFrac5","psFrac")]
r1 = head(r1, -1)

### what about a spatial arrangement for the figure ###
par(oma=c(4,4,1,1))
par(mfrow=c(3,3))
figmargins = c(1,1,1,1)
showboxes = F
shadeboxes = F
shadecolor = rgb(190, 190, 190, alpha=150, maxColorValue=255)
shadeB = -1
shadeT = 1.0

# pane 1 empty
par(mar=figmargins+0.1)
plot(r1$hour, r1$detFrac1, type="n", xaxt="n", yaxt="n", bty="n", xlab="", ylab="")
mtext(text="a)", side=2, line=0, cex=1.5, las=1, padj=-3)
if(showboxes){box("figure"); box("plot");}

# pane 2. north sdl
par(mar=figmargins+0.1)
plot(r1$hour, r1$detFrac1, cex.axis=1.3, type="l", bty="l", las=1, xaxt="n",


  main="North Hydrophone", xlab="", ylab="")
#mtext("Fraction of Detections",side=2,line=2.8,cex=1.4)
axis(1,seq(0,350,by=50),cex.axis=1.3, labels=NA)
# add shaded regions
if(shadeboxes){
  rect(0,shadeB,30,shadeT,col=shadecolor, border=NA)
  rect(210,shadeB,280,shadeT,col=shadecolor, border=NA)
}
if(showboxes){box("figure"); box("plot");}
# I want the black lines on top
points(r1$hour, r1$detFrac1, type="l")

# pane 3 empty
par(mar=figmargins+0.1)
plot(r1$hour, r1$detFrac1, type="n", xaxt="n", yaxt="n", bty="n", xlab="", ylab="")
if(showboxes){box("figure"); box("plot");}

# pane 4. west sdl
par(mar=figmargins+0.1)
plot(r1$hour, r1$detFrac4, cex.axis=1.3, type="l", bty="l", las=1, xaxt="n",
  main="West Hydrophone", xlab="", ylab="")
mtext("Detection Fraction",side=2,line=2.8,cex=1.4)
#mtext("Hour of Deployment",side=1,line=2.8,cex=1.3)
axis(1,seq(0,350,by=50),cex.axis=1.3)
# add shaded regions
if(shadeboxes){
  rect(0,shadeB,30,shadeT,col=shadecolor, border=NA)
  rect(210,shadeB,280,shadeT,col=shadecolor, border=NA)
}
if(showboxes){box("figure"); box("plot");}
# I want the black lines on top
points(r1$hour, r1$detFrac4, type="l")

# pane 5. center sdl
par(mar=figmargins+0.1)
plot(r1$hour, r1$detFrac5, type="l", bty="l", las=1, xaxt="n", yaxt="n", main="",
  xlab="", ylab="")
#mtext("Hour of Deployment",side=1,line=2.8,cex=1.3)
axis(1,seq(0,350,by=50),cex.axis=1.3, labels=NA)
axis(2,seq(0,1,by=0.2),cex.axis=1.3, labels=NA)
# add shaded regions
if(shadeboxes){
  rect(0,shadeB,30,shadeT,col=shadecolor, border=NA)
  rect(210,shadeB,280,shadeT,col=shadecolor, border=NA)


363 } if(showboxes){box("figure"); box("plot");} # I want the black lines on top points(r1$hour, r1$detFrac5, type="l") # pane 6. east sdl par(mar=figmargins+0.1) plot(r1$hour, r1$detFrac2, type="l ", bty="l", las=1, xaxt="n", yaxt="n", main="East Hydrophone", xlab="", ylab="") axis(1,seq(0,350,by=50),cex.axis=1.3) axis(2,seq(0,1,by=0.2),cex.axis=1.3, labels=NA) # add shaded regions if(shadeboxes){ rect(0,shadeB,30,shadeT,col=shadecolor, border=N A) rect(210,shadeB,280,shadeT,col=shadecolor, border=NA) } if(showboxes){box("figure"); box("plot");} # I want the black lines on top points(r1$hour, r1$detFrac2, type="l") # pane 7 empty par(mar=figmargins+0.1) plot(r1$hour, r1$detFrac1, type="n", xaxt ="n", yaxt="n", bty="n", xlab="", ylab="") if(showboxes){box("figure"); box("plot");} # pane 8. south sdl par(mar=figmargins+0.1) plot(r1$hour, r1$detFrac3, type="l", bty="l", las=1, xaxt="n", yaxt="n", main="", xlab="", ylab="") mtext("Hour of Deploy ment",side=1,line=2.8,cex=1.3) axis(1,seq(0,350,by=50),cex.axis=1.3) axis(2,seq(0,1,by=0.2),cex.axis=1.3) # add shaded regions if(shadeboxes){ rect(0,shadeB,30,shadeT,col=shadecolor, border=NA) rect(210,shadeB,280,shadeT,col=shadecolor, border=NA) } if (showboxes){box("figure"); box("plot");} # I want the black lines on top points(r1$hour, r1$detFrac3, type="l") # pane 9 empty par(mar=figmargins+0.1) plot(r1$hour, r1$detFrac1, type="n", xaxt="n", yaxt="n", bty="n", xlab="", ylab="")

364 if(showboxes){box("f igure"); box("plot");} # plot for PS frac par(mar=c(4.5,4,1,1)+0.1) plot(r1$hour, r1$psFrac, cex.axis=1.3, cex.lab=1.5, type="l", bty="l", las=1, #xaxt="n",yaxt="n", main="", xlab="Hour of Deployment", ylab="Position Solution Fraction", ylim=c(0,1) ) text(0,1,"b)",cex=1.5) # add shaded regions if(shadeboxes){ rect(0,shadeB,30,shadeT 0.05,col=shadecolor, border=NA) rect(210,shadeB,280,shadeT,col=shadecolor, border=NA) } # I want the black lines on top points(r1$hour, r1$psFrac, type="l") par(mfr ow=c(1,2)) # plot X THIS HAS MORE THINGS ADDED TO IT BELOW... # a plot of spacing v psFrac plot(rawResults[[1]]$spacing, rawResults[[1]]$psFrac, las=1, type="n", bty="l", cex.axis=1.5, cex.lab=1.5, xaxt="n", xlab="Array Spacing (m)", ylab="Fraction o f Position Solutions", xlim=c(50,155), ylim=c(0,1)) # label the x axis axis(1, at = seq(50,225,by=25), labels = seq(50,225,by=25), tick = TRUE, cex.axis=1.5) # I want to jitter the 100m column more than other columns for (i in 1:length( rawResults)){ points(jitter(rawResults[[i]]$spacing, factor=2), rawResults[[i]]$psFrac, pch=plType[i], col="black")#plotColors[i]) } # now for plot 2, I see that the 50m spacing is lower than it should be, the # problem is that it has no psr (coll ected in symbol mode) and so doesn't really # compare with the psr results of all other deployments. So to account for that # I'll run 'testTheTag()' for everything except 2007 # (also, because 2008 if funky don't use it, and don't use spacing trials # or interal performance, they're too short, so just 2009 really) using non psr results, # then again using psr results. I'll calculate the percent increase in PS for # each point, find the mean percent increase, finally apply that increase to the

# 2007 points.
#
# I'll follow this process both for the unfiltered and filtered data.
#
# In the end, I'll run 'testTheTag()' (on the 2009 stuff):
#   unfiltered no psr
#   unfiltered psr
#   filtered no psr
#   filtered psr
# then ...
# run 2007 data:
#   unfiltered not psr adjusted
#   unfiltered psr adjusted
#   filtered not psr adjusted
#   filtered psr adjusted
#
# run the code above for each of the first 4 cases and save the results here...
zz unno = allresults         # in this I'll just be looking at 2008 data
zz unpsr = rawResults        # --OR-- allresults # in this I'll just be looking at 2008 data
zz filtno = allresults       # in this I'll just be looking at 2007 data
zz filtpsr = filteredResults # --OR-- allresults # in this I'll just be looking at 2007 data

# now that I have all the 'allresults' calculated for the four cases, calculate
# the percent increase for everything (but 2007)
unPercentIncrease = c()
filtPercentIncrease = c()
for (i in 11:43){ # 11 is the first 2009, 43 is the last before the spacing trials
  unPercentIncrease = c(unPercentIncrease,
    (unpsr[[i]]$psFrac - unno[[i]]$psFrac)/unno[[i]]$psFrac)
  filtPercentIncrease = c(filtPercentIncrease,
    (filtpsr[[i]]$psFrac - filtno[[i]]$psFrac)/filtno[[i]]$psFrac)
}
# now calculate the mean percent increase for unfiltered and filtered
meanun = mean(unPercentIncrease)     #=1.47, stdev = 2.108, range= (0.07, 10.07)
range(unPercentIncrease)
meanfilt = mean(filtPercentIncrease) #=5.27, stdev = 10.912, range= (-0.63, 58.71)
range(filtPercentIncrease)
# --or a shortcut--
meanun = 1.47
meanfilt = 5.27

# now apply this percentIncrease to the 2007 data, there are only five data points
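# Illustrative sketch (not part of the analysis above, and not called anywhere
# in it): the adjustment described here is a simple relative-increase correction,
#   psFrac_adjusted = min(psFrac * (1 + mean relative increase), 1),
# where the mean relative increase comes from deployments run both with and
# without PSR. The argument names ('psNoPsr', 'psWithPsr', 'ps2007') are
# hypothetical stand-ins, not objects created elsewhere in this code.
psrAdjustmentSketch = function(psNoPsr, psWithPsr, ps2007){
  # relative gain in position-solution fraction attributable to PSR
  relGain = (psWithPsr - psNoPsr) / psNoPsr
  # apply the mean gain to deployments lacking PSR, capping at 100%
  pmin(ps2007 * (1 + mean(relGain)), 1)
}
# example: psrAdjustmentSketch(c(0.20, 0.30), c(0.50, 0.60), c(0.10, 0.40))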

# I want to do plots with unfiltered and filtered psFrac
#
# to keep nomenclature consistent
zz rawResults = unpsr        # unfiltered with psr
zz filteredResults = filtpsr # filtered with psr

# now calculate how much the 2007 data might increase if it had PSR
# ... and when they go above 100%, limit them to 100%
un2007 = list()
filt2007 = list()
for (i in 1:5){ # for hb2007 unpsr = unno and filtpsr = filtno
  un2007[[i]] = min(unpsr[[i]]$psFrac * (1+meanun), 1)
  filt2007[[i]] = min(filtpsr[[i]]$psFrac * (1+meanfilt), 1)
}

# pick the symbol:
# ... closed circle: beacon on center sdl, longer than 360min
# ... open circle: beacon not on center, longer than 360min
# ...
plColors = rep("black",length(unpsr))
plType = rep(23,length(unpsr)) # none should end up with this symbol
plSize = rep(1,length(unpsr))
for(i in 1:length(unpsr)){ # use unpsr because the 'numMin' and 'center' won't change
  if((unpsr[[i]]$numMin < 360)&(unpsr[[i]]$center)){plType[i]=4}
  if((unpsr[[i]]$numMin < 360)&(!unpsr[[i]]$center)){plType[i]=4}
  if((unpsr[[i]]$numMin > 360)&(unpsr[[i]]$center)){plType[i]=19}
  if((unpsr[[i]]$numMin > 360)&(!unpsr[[i]]$center)){plType[i]=1}
}

# now reproduce figure 2 with the extra 2007 data points
par(mfrow=c(1,2))
par(mar=c(5,5,1,1)+0.1)
# plot 2a WITH ADDED 2007 WITH PRETEND PSR
# a plot of spacing v psFrac
plot(unpsr[[1]]$spacing, unpsr[[1]]$psFrac, las=1, type="n", bty="l",
  cex.axis=1.5, cex.lab=1.5, xaxt="n", xlab="Array Spacing (m)", ylab="",
  xlim=c(50,155), ylim=c(0,1.1))
# label the x axis
axis(1, at = seq(50,150,by=25), labels = seq(50,150,by=25), tick = TRUE,
  cex.axis=1.5)
# I want to jitter the 100m column more than other columns

367 for (i in 1:length(unpsr)){ points(jitter(unpsr[[i]]$spacing, factor=2), unpsr[[i]]$psFrac, pch=plType[i], col="black")#plotColors[i]) } # add the 2007 data that have been increased as if they used PSR po ints(jitter(rep(50,length(un2007)),2),un2007,pch=c(2,2,2,2,17)) abline(h=0) # add y axis label mtext(text="Position Solution Fraction", side=2, line=3.5, cex=1.5) # add the "a)" text(50,1.1,"a)", cex=1.5) # plot 2b # a plot of spacing v psFrac par(mar=c(5 ,3,1,1)+0.1) plot(filtpsr[[1]]$spacing, filtpsr[[1]]$psFrac, las=1, type="n", bty="l", cex.axis=1.5, cex.lab=1.5, xaxt="n", xlab="Array Spacing (m)", ylab="", xlim=c(50,155), ylim=c(0,1.1)) # label the x axis axis(1, at = seq(50,225,by=25), labels = seq(50,225,by=25), tick = TRUE, cex.axis=1.5) # I want to jitter the 100m column more than other columns for (i in 1:length(filtpsr)){ points(jitter(filtpsr[[i]]$spacing, factor=2), filtpsr[[i]]$psFrac, pch=plType[i], col="black")# plotColors[i]) } # add the 2007 data that have been increased as if they used PSR # ... make the center triangle filled in instead of open points(jitter(rep(50,length(filt2007)),2),filt2007,pch=c(2,2,2,2,17)) abline(h=0) # add the "b)" text(50,1.1,"b)", ce x=1.5) # add a legend legend(112, 1.14, legend=c( "Central, longer than 6hr", "Marginal, longer than 6hr", "All locations, shorter than 6hr", "Central, Deployment A", "adjusted for PSR", "Marginal, Deployment A", "Adjusted for PSR"),

368 pch= c(19,1,4,17,NA,2,NA) ) # in this plot I want to know how many points are actually on the zero line temp1 = c() for (i in 1:length(filtpsr)){ temp1 = c(temp1,filtpsr[[i]]$psFrac*100) } min(temp1,na.rm=T) # how much is lost with filtering? What percent decrease is there in the # number of PS, for central and marginal locations # From above we have... rawResults filteredResults percentRemovedCentral = vector(length=0) percentRemovedMarginal = vector(length=0) # loop through all and calculate the percen t decrease after filtering for (i in 1:length(rawResults)){ # extract psFrac and numMin, then calculate the number of PS rawPScount = rawResults[[i]]$numMin rawResults[[i]]$psFrac filteredPScount = filteredResults[[i]]$numMin filteredResults[[ i]]$psFrac if(rawResults[[i]]$center){ percentRemovedCentral[i] = 100*(rawPScount filteredPScount)/rawPScount } else if (!rawResults[[i]]$center){ percentRemovedMarginal[i] = 100*(rawPScount filteredPScount)/rawPScount } } range(percentRemov edCentral, na.rm=T) mean(percentRemovedCentral, na.rm=T) range(percentRemovedMarginal, na.rm=T) mean(percentRemovedMarginal, na.rm=T) ###################################################################### ######### # What is the position solution accuracy ? Two methods: # 1. match ALPS mean position solution with GPS best estimates # import and filter beacon, etc. data i=15 cTag = tagList[[i]]
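# Illustrative sketch of the two accuracy summaries computed by psAccuracy()
# below, using hypothetical inputs: 'east'/'north' are vectors of ALPS position
# solutions for a stationary transmitter and 'gpsE'/'gpsN' is its GPS-surveyed
# location. The quantile() call is a close stand-in for the index-based 5%/95%
# bounds used in psAccuracy(); this sketch does not replace that function.
accuracySketch = function(east, north, gpsE, gpsN){
  # offset between the mean position solution and the GPS estimate
  distOff = sqrt((mean(east, na.rm=TRUE) - gpsE)^2 +
                 (mean(north, na.rm=TRUE) - gpsN)^2)
  # size of the band containing the central 90% of northing estimates
  northingCloud = diff(quantile(north, probs=c(0.05, 0.95), na.rm=TRUE))
  c(distOff=distOff, northingCloudSize=unname(northingCloud))
}
# example: accuracySketch(rnorm(500, 443, 1), rnorm(500, 2068, 2), 443, 2068)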

369 psAccuracy = function(cTag, filterMe=TRUE, psrMe=TRUE){ # cTag = 1; tagName = tagList[[cTag]]$tagName; deploy ment = tagList[[cTag]]$deployment; tagLocation = tagList[[cTag]]$tagLocation; startUtime=tagList[[cTag]]$startUtime; stopUtime=tagList[[cTag]]$stopUtime # maybe later get the ADCP data tagName=cTag$tagName deployment=cTag$deployment tagLocatio n=cTag$tagLocation startUtime=cTag$startUtime stopUtime=cTag$stopUtime tagType = substr(tagName,1,1) tagID = substr(tagName,2,10) # numSdl = 5 # get info from md[[]] for (i in 1:length(md)){ # i loops through all deployments if (de ployment == md[[i]]$deployment){ print(i) spacing = md[[i]]$spacing homeDir = paste(md[[i]]$homeDir,"/ALPS 2011Feb14", sep="") bestBeacon = substr(md[[i]]$bestBeacon,2,10) secondBestBeacon = substr(md[[i]]$secondBestBeacon,2,1 0) reefEN = md[[i]]$reefEN sdlEN = md[[i]]$sdlEN beaconEN = md[[i]]$beaconEN plotLimits = md[[i]]$plotLimits } } setwd(homeDir) # # if cTag == bestBeacon, then use secondBestBeacon # cBeacon = ifelse(grepl(bestBeac on,tagName), secondBestBeacon, bestBeacon) ### POSITION SOLUTIONS ### d1 = importALPSdata(deployment=cTag$deployment, tagName=cTag$tagName, psr=psrMe) # if there are any data in d1 then filter it if ( nrow(d1[1]$data) > 1 ){ if (filterMe){ d2 = filterALPSdata(df1=d1, cnF=1.5, speedF=0.8, minuteMean=FALSE)$data } else { d2 = filterALPSdata(df1=d1, minuteMean=FALSE)$data } # gather the PS during the interval

370 d3 = d2[(d2$utime > startUtime) & (d2$utime < stopUtime),c( "datiL","easting","northing")] } else { # there are no data in d1, so make the answers = 0 print("No data imported") } # determine which SDL this beacon was on (if any) and the distance between the # GPS SDL position estimate and the mean PS location = NA # keep this if you're using a funky 2008 beacon code, or a # non beacon which # probably doesn't have a position estimate. I won't do the # sentinels now either. for (i in 1:nrow(beaconEN)){ if(tagName == beaconEN$beaconID[[i]]){ location = beaconEN$location[[i]] } else { # you're ptobably on one of the problem 2008 tags, where codes # get used in place of beacons if ((tagName == "b80") | (tagName == "b81" )) {location=42} if ((tagName == "b85") | (tagName == "b86" )) {location=44} if (tagName == "b130") {location=45} } } # ...then designate it as center or not center = ifelse(((location==45)|(location=="reef")),T,F) # now find the position of that location gpsEN = data.frame(easting=NA,northing=NA) # keep this if you don't find a match for (i in 1:length(sdlEN$ID)){ if (as.character(location) == sdlEN$ID[[i]]){ gpsEN = data.frame(easting=sdlEN$e asting[[i]], northing=sdlEN$northing[[i]]) } } # if it's the sentinel on the reef if(as.character(location) == reefEN$ID){ gpsEN = data.frame(easting=reefEN$easting, northing=reefEN$northing) } # now calculate the distance between the gps EN and the mean PS meanPS = data.frame(easting=mean(d3$easting, na.rm=T), northing=mean(d3$northing, na.rm=T)) distOff = sqrt( (gpsEN$easting meanPS$easting)^2 + (gpsEN$northing meanPS$northing)^2) # determine the size of the 90% cloud of P S locations around the mean PS using # just the Northing axis # ...gather all the northing PS and sort d4 = sort(d3$northing)

371 # find the bounds containing 90% of all PSs fiveP = max(floor(0.05*length(d4)),1) ninteyfiveP = floor(0.95*length(d4)) northingCloudSize = d4[ninteyfiveP] d4[fiveP] ### RETURN RESULTS ### answer=list(data=d3, tagName=tagName, deployment=deployment, spacing=spacing, meanPS=meanPS,northingCloudSize=northingCloudSize,gpsEN=gpsEN, distOff=distOff,reefEN=ree fEN, sdlEN=sdlEN, plotLimits=plotLimits,center=center) return(answer) } # end psAccuracy() function # now pick a deployment, gather all the tagList tags for that deployment # and run psAccuracy for each tag, then plot drawPlot = function(deploym ent, showClouds=F){ # pick deployemnt hb2007 hb2008 hb1 3, sb1 4 cDeployment = deployment # get tagList tags for that deployment shortList = list() # holding just the tags in this deployment tagCounter = 1 # how many tags have I found so far belonging to this deployment for (i in 1:length(tagList)){ if (tagList[[i]]$deployment == cDeployment){ shortList[[tagCounter]] = tagList[[i]] tagCounter = tagCounter + 1 } } # for all tags in shortList run psAccuracy p sList = list() # for holding psAccuracy output for (i in 1:length(shortList)){ psList[[i]] = psAccuracy(cTag=shortList[[i]]) } # now plot sdl, these are the same in all psLists plot(psList[[1]]$sdlEN$easting, psList[[1]]$sdlEN$northing, pch= 17, xlim=psList[[1]]$plotLimits$easting, ylim=psList[[i]]$plotLimits$northing ) # add clouds if desired for (i in 1:length(psList)){ # draw the clouds of ps if (showClouds){ points(psList[[i]]$data$easting, psList[[i]]$data$northing pch=19, cex=0.5, col=plotColors[i+1])

372 } # redraw the sdls points(psList[[1]]$sdlEN$easting, psList[[1]]$sdlEN$northing, pch=17) # draw the ALPS mean PSs points(mean(psList[[i]]$data$easting,na.rm=T), mean(psList[[i]]$data$no rthing,na.rm=T), pch=15 ) } } # end drawPlot() # pick deployemnt hb2007 hb2008 hb1 3, sb1 4 drawPlot(deployment="sb4", showCloud=T) # now make a histogram of all tags / all deployments of distance between # GPS position estimate and ALSP me an PS # for all tags in shortList run psAccuracy psList = list() # for holding psAccuracy output distOffVec = vector(length=0) centerVec = vector(length=0) northingCloudSizeVec = vector(length=0) for (i in c(1:5,8:43)){# only from fish deploym ents. not b80 or b81 from hb2008 psList[[i]] = psAccuracy(cTag=tagList[[i]]) distOffVec[i] = psList[[i]]$distOff centerVec[i] = psList[[i]]$center northingCloudSizeVec[i] = psList[[i]]$northingCloudSize } # how far of f was each of the beacon tags # now a plot for the paper ############################################ # histogram of "distance between position estiamtes" # Panel A par(mfrow=c(1,2)) par(mar=c(5,4,1,1)+0.1) # pick the right data centertags = distOffVec[ c enterVec==T ] offcentertags = distOffVec[ centerVec==F ] bob=multhist(list(centertags,offcentertags), breaks=80, cex.axis=1, cex.names=1, col=c("black","grey"), space=c(0,0), axis.lty=0, axes=F, xlim=c(1,60), ylim=c(0,9), legen d.text=c("Central transmitters", "Marginal transmitters"), xlab="",

373 ylab="", names.arg = seq(1,81,by=1) #c(1,"","","",5,"","","","",10,"","","","",15,"","","","",20,"","","","", #25,"","","","",30,"","","","",35,"","","","",40,"","","","",45, "","","","",50,"","","","", #55,"","","","",60,"","","","",65,"","","","",70,"","","","",75,"","","","",80,"") ) axis(2,seq(0,8,by=1),labels=seq(0,8,by=1), las=1, cex.axis=1.5) mtext("Difference Between Transmitter Position Estimates (m) ",1,3, cex= 1.5) mtext("Frequency",2,2.5,cex=1.7) text(1,8.5,"a)",cex=1.5) # histogram of 90% variation in PS locations...the size of the clouds # panel B # # now the histogram par(mar=c(5,3,1,1)+0.1) # pick the right centertags = northingCloudSizeVec[ centerVec==T ] offcentertags = northingCloudSizeVec[ centerVec==F ] sam=multhist(list(centertags,offcentertags), breaks=40, cex.axis=1.5, cex.names=1, col=c("black","grey"),space=c(0,0), axis.lty=0, xaxt="n", axes=F, #xlim=c(1,46), ylim=c(0,1 3), xlab="", ylab="", names.arg = seq(0,139,by=5) ) axis(2,seq(0,12,by=1),labels=seq(0,12,by=1), las=1, cex.axis=1.5) mtext("90% Range in Northing Position Estimates (m)",1,3, cex=1.5) mtext("Frequency",2,2.5,cex=1.7) text(1,12.4,"b)",cex=1.5) #### ################################################################## # Appendix Figure # look at accuracy over time, pick a tag from psList par(mfrow=c(1,2)) par(mar=c(5,5,1,1)+0.1) cex.pt = 0.6 i = 1 plot(psList[[i]]$data$datiL, psList[[i]]$data$northing, pch=1, cex=cex.pt, cex.lab=1.5, cex.axis=1.5, bty="l", yaxt="n", ylim=psList[[i]]$plotLimits$northing, xlab="Date", ylab="")

374 mtext("Northing (m)", side=2,line=3.5, cex=1.7) axis(2,seq(600,800,by=25),cex.axis=1.5,las=1) abline(h=psList[[i]]$sdl EN$northing[1],lwd=2) i=3 points(psList[[i]]$data$datiL, psList[[i]]$data$northing,pch=2,cex=cex.pt) abline(h=psList[[i]]$sdlEN$northing[3],lwd=2) i=5 points(psList[[i]]$data$datiL, psList[[i]]$data$northing,pch=0,cex=cex.pt) abline(h=psList[[i]]$sdlE N$northing[5],lwd=2) # on figure labels text(psList[[1]]$data$datiL[1],765,"a)",cex=1.5) text(psList[[1]]$data$datiL[29000],735,"North beacon",cex=1.5) text(psList[[1]]$data$datiL[29000],690,"Center beacon",cex=1.5) text(psList[[1]]$data$datiL[29000],645 ,"South beacon",cex=1.5) # details for the caption i=1 mean(psList[[i]]$data$northing,na.rm=T) # mean=742.8m northing, range=(738.8,755.7) psList[[i]]$sdlEN$northing[i] # = 742 abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) i=3 mean (psList[[i]]$data$northing,na.rm=T) # mean=630.0m northing, range=(621.8,637.0) psList[[i]]$sdlEN$northing[i] # = 639 abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) i=5 mean(psList[[i]]$data$northing,na.rm=T) # mean=698.3m northing range=(694.7,706.7) psList[[i]]$sdlEN$northing[i] # = 699 abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) # panel 2 par(mar=c(5,5,1,1)+0.1) i=21 plot(psList[[i]]$data$datiL, psList[[i]]$data$northing, pch=0, las=1, cex=cex.pt, cex. lab=1.5, cex.axis=1.5, bty="l", #yaxt="n", ylim=psList[[i]]$plotLimits$northing, xlab="Date", ylab="") mtext("Northing (m)", side=2,line=4.2, cex=1.7) axis(2,seq(600,800,by=25),cex.axis=1.5) abline(h=psList[[i]]$sdlEN$northing[1],lwd=2) i=22

375 point s(psList[[i]]$data$datiL, psList[[i]]$data$northing,pch=1,cex=cex.pt) abline(h=psList[[i]]$sdlEN$northing[3],lwd=2) i=24 points(psList[[i]]$data$datiL, psList[[i]]$data$northing,pch=2,cex=cex.pt) abline(h=psList[[i]]$sdlEN$northing[5],lwd=2) # on figure labels text(psList[[i]]$data$datiL[1],2190,"b)",cex=1.5) text(psList[[21]]$data$datiL[42000],2150,"North beacon",cex=1.5) text(psList[[21]]$data$datiL[42000],2050,"Center beacon",cex=1.5) text(psList[[21]]$data$datiL[42000],1970,"South beacon",cex=1.5) # details for the caption i=21 # center beacon mean(psList[[i]]$data$northing,na.rm=T) # mean=2068m northing, range=(1981,2088) psList[[i]]$sdlEN$northing[5] # = 2070 abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) i=22 #north beacon me an(psList[[i]]$data$northing,na.rm=T) # mean=2169m northing, range=(1980,2205) psList[[i]]$sdlEN$northing[1] # = 2165m abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) i=24 # south beacon range(psList[[i]]$data$northing,na.rm=T) # me an=1955m northing, range=(1927,2132) psList[[i]]$sdlEN$northing[3] # = 1956 abline(h=c(mean(psList[[i]]$data$northing,na.rm=T)),lty=2,lwd=2) ### internal array trial # SEE 'internal array test.r' ######################################################## ############## ######### ###################################################################### ######### ### How does the chosen sound speed affect accuracy, PS requency? # The 2008 deployment had a temperature range of almost 10 degC, but you can # only pi ck one sound speed. The question is how much does the wrong sound # speed/water temperature affect PS accuaracy and frequency? ### I'll use the 2009Aug03 OH41 deployment because it has a small temperature # range. I'll increase and decrease the temp by +/ 5 and +/ 10 degC. # I'll run ALPS five times total for the beacons (1,2,79400,79500) and # sentinel (79600). Then I'll compare the estimated positions. I'll also # compare the fraction of PS over time. # # 20 degC = 1521 m/s

376 # 25 degC = 1533 m/ s # 30 degC = 1545 m/s # 35 degC = 1554 m/s # 40 degC = 1562 m/s # read in all five data sets alltags = list(beacon1=list(),beacon2=list(),beacon3=list(),beacon4=list()) alltagsf = list(beacon1=list(),beacon2=list(),beacon3=list(),beacon4=list()) # minus10, minus5, normal, plus5, plus10 numTemps = 5 # this is the number of different temperatures I'm using # pick the deployment and tags cmd=md[[5]] cDeploymentNames = cmd$deployment cTagNames = c(cmd$beaconNames) al tDir = paste("C:/zy/The closets/data closet/Telemetry/2009/2009Aug03 OH41/", c("ALPS minus 10 degC","ALPS minus 5 degC", "ALPS normal", "ALPS plus 5 degC", "ALPS plus 10 degC"), sep="") numPings = (cmd$stopUtime cmd$startUtime)/20 for (i in 1:length(cTagNames)){ for (j in 1:numTemps){ # pick one temperature ALPS run, calculate sound speed and drop things I don't need temp1 = importALPSdata(deployment=cDeploymentNames,tagName=cTagNames[i], altDir=altDir[j]) temp2 = filterALPSdata(df1=temp1, minuteMean=F) temp3 = filterALPSdata(df1=temp1, cnF=1.5, speedF=0.8, minuteMean=F) # pick important columns temp2$data = temp2$data[,c('utime','datiL','easting','northing')] temp3$data = temp3$data[,c('u time','datiL','easting','northing')] # now get only data between start and stopUtimes temp2$data = temp2$data[(temp2$data$utime > cmd$startUtime) & (temp2$data$utime < cmd$stopUtime),] temp3$data = temp3$data[(temp3$data$utime > cmd$star tUtime) & (temp3$data$utime < cmd$stopUtime),] # calculate the psFrac temp2$psFrac = nrow(temp2$data)/numPings temp3$psFrac = nrow(temp3$data)/numPings # calculate the beacon position estimate temp2$easting = mean(temp2$data$east ing) temp2$northing = mean(temp2$data$northing) temp3$easting = mean(temp3$data$easting) temp3$northing = mean(temp3$data$northing)

377 # which temperature is this temp2$alpsdir = altDir[j] temp3$alpsdir = altDir[j] # save the answe r, ready for plotting alltags[[i]][[j]] = temp2 alltagsf[[i]][[j]] = temp3 } } # from these results how what's the biggest move in estimated beacon position # ... I see that the biggest temp range always gives the biggest distance # ... diffe rence...so find the dist between the extremes for each tag i = 1 sqrt( (alltagsf[[i]][[5]]$easting alltagsf[[i]][[1]]$easting)^2 + (alltagsf[[i]][[5]]$northing alltagsf[[i]][[1]]$northing)^2 ) # beacon position estimates moves 1.9m with over the e ntire temperature range # see below for how far from the nominal position any one point moves # a plot of how the psFrac changed for each beacon over 5 distances # extract data from alltagsf psresults = data.frame(b1=NA, b2=NA, b3=NA, b4=NA,temperatures =c( 10, 5,0,5,10), soundSpeeds=c(1521, 1533, 1545, 1554, 1562)) for (i in 1:length(cTagNames)){ for (j in 1:numTemps){ psresults[j,i] = alltagsf[[i]][[j]]$psFrac } } # how much does the psFrac drop (psresults[3,1] psresults[2,1]) / psresults[3 ,1] # ...so there's the worst is an 18% drop from nominal to 5degC # a plot of how each beacon's psFrac changes with temperature changes par(mfrow=c(2,1)) # inner beacon par(mar=c(1,6,1,1)+0.1) plot(psresults$soundSpeeds, psresults$b1, pch=19, las=1, bty ="l", cex.lab=1.5, cex.axis=1.3, xaxt="n", yaxt="n", ylim=c(0.55,0.75), xlab="", ylab="") axis(side=1, at=psresults$soundSpeeds, cex.axis=1.3, labels=NA) axis(side=2, at=seq(0.55,0.75,by=0.02), #labels=c(0.57,"",0.61,"",0.65,"",0.69,"",0.73), c ex.axis=1.3, las=1) # add a)

378 text(1521,0.744,labels="a)",cex=1.7) # legend legend(1525,0.75,legend=c("Central","North","East","South"),pch=c(19,2,3,4)) # outer beacons par(mar=c(5,6,0,1)+0.1) plot(psresults$soundSpeeds, psresults$b2, pch=2, las=1, b ty="l", cex.lab=1.5, cex.axis=1.5, xaxt="n", yaxt="n", ylim=c(0,0.065), xlab="Sound Speed (m/s)", ylab="") points(psresults$soundSpeeds, psresults$b3, pch=3) points(psresults$soundSpeeds, psresults$b4, pch=4) axis(side=1, at=psresults$soundSpeeds, c ex.axis=1.3) axis(side=2, at=c(0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06), cex.axis=1.3, las=1) abline(h=0) # add y axis label mtext(text="Position Solution Fraction", side=2, line=4.3, cex=1.7, adj= 1.5) # now that I've got the data, look at the picture par( mfrow=c(1,1)) par(mar=c(4,6,1,1)+0.1) plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", las=1, bty="l", cex.axis = 1.5, cex.lab=1.5, xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing, xlab="", ylab="") mtext("Easting (m)",side=1,line=2.5 ,cex=1.5) mtext("Northing (m)", side=2, line=4.5, cex=1.7) # add the reef and sdls points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=17, col="black", cex=1.5) #points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1.5) # add recorded posi tions # point types different for each temperature plType = c(3,4,20,2,0) for(i in 1:length(cTagNames)){ # for each tag for(j in 1:numTemps){ points(alltagsf[[i]][[j]]$easting, alltagsf[[i]][[j]]$northing, pch=plType[j], cex=2) } } text(325,219 0,"b)",cex=1.7) # legend legend(x=470,y=2200,legend=c("1521 m/s", "1533 m/s", "1545 m/s", "1554 m/s", "1562 m/s","GPS estimate"), pt.cex=2, pch=c(plType,17), cex=1.3) # redo this with smaller x and y limits for an inset
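# Illustrative sketch (an assumption, not from the original analysis): the
# sound speeds tabulated above (e.g., 20 degC -> ~1521 m/s, 30 degC -> ~1545 m/s)
# are consistent with Medwin's (1975) simplified seawater formula, assuming a
# salinity near 35 ppt and negligible depth. This only shows how the observed
# temperature range maps onto the sound-speed range used in the ALPS reruns.
soundSpeedSketch = function(tempC, salinity=35, depth=0){
  1449.2 + 4.6*tempC - 0.055*tempC^2 + 0.00029*tempC^3 +
    (1.34 - 0.010*tempC)*(salinity - 35) + 0.016*depth
}
# example: round(soundSpeedSketch(c(20, 25, 30, 35, 40)))
# gives approx. 1522 1534 1546 1555 1564 m/s, close to the values listed above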

379 # now that I've got the data, look at the picture par(mfrow=c(1,1)) par(mar=c(6,9.5,1,1.5)+0.1) plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", las=1, bty="l", cex.axis = 3, bty="o", xlim=c(441,445), ylim=c(2066,2070), xaxt="n", xlab="", ylab="") mtext("Easting (m)",side =1,line=2.5, cex=3, padj=1) mtext("Northing (m)", side=2, line=4.5, cex=3, padj= 1.6) axis(1,441:445,labels=c(441,"",443,"",445),cex.axis=3, padj=0.5) # add the reef and sdls points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=17, col="black", cex=4) #point s(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1.5) # add recorded positions # point types different for each temperature plType = c(3,4,20,2,0) for(i in 1:length(cTagNames)){ # for each tag for(j in 1:numTemps){ points(alltagsf[[i ]][[j]]$easting, alltagsf[[i]][[j]]$northing, pch=plType[j], cex=4) } } remember to stretch to the right size # but how far away do the center beacon mean position solutions move? centerbeacon = 1 alltagsf[[centerbeacon]][[1]]$northing # plus all tagsf[[centerbeacon]][[2]]$northing # x alltagsf[[centerbeacon]][[3]]$northing # dot alltagsf[[centerbeacon]][[4]]$northing # open triangle alltagsf[[centerbeacon]][[5]]$northing # square # pick the one farthest from the center, plus to dot x1 = alltag sf[[centerbeacon]][[1]]$easting # plus y1 = alltagsf[[centerbeacon]][[1]]$northing # plus x2 = alltagsf[[centerbeacon]][[3]]$easting # dot y2 = alltagsf[[centerbeacon]][[3]]$northing # dot distance = sqrt( (x2 x1)^2 + (y2 y1)^2 ) # so the 10degC chang e moves the mean ps the most and it it 0.9m ###################################################################### ######### ###################################################################### ######### ### How does the estimated SDL position affect acc uracy, PS frequency? ### I'll use the 2009Aug03 OH41 deployment because it has a small temperature

380 # range. I'll move the center sdl (and beacon) 2, 4, 6, 8, and 10m to the # east. # I'll run ALPS five times total for the beacons (1,2,79400,79500) # Then I'll compare the estimated positions. I'll also # compare the fraction of PS over time. # read in all five data sets alltags = list(beacon1=list(),beacon2=list(),beacon3=list(),beacon4=list(),sentinel=list()) alltagsf = list(beacon1=list(),bea con2=list(),beacon3=list(),beacon4=list(),sentinel=list()) # minus10, minus5, normal, plus5, plus10 numDistances = 6 # this is the number of different distances I'm using # pick the deployment and tags cmd=md[[5]] cDeploymentNames = cmd$deployment cTagNames = c(cmd$beaconNames,cmd$sentinelNames) altDir = c("C:/zy/The closets/data closet/Telemetry/2009/2009Aug03 OH41/ALPS 2011Feb14", paste( "C:/zy/The closets/data closet/Telemetry/2009/2009Aug03 OH41/ALPS c45 ", c("2m east","4m east","6m east","8m east","10m east"), sep="" )) numPingsB = ((cmd$stopUtime cmd$startUtime)/3600) 180 # 180 pings per hour numPingsS = ((cmd$stopUtime cmd$startUtime)/3600) 300 # 300 pings per hour for (i in 1:length(cTagNames)){ for (j in 1:numDistances){ # pick one temperature ALPS run, calculate sound speed and drop things I don't need temp1 = importALPSdata(deployment=cDeploymentNames,tagName=cTagNames[i], altDir=altDir[j]) temp2 = filterALPSdata(df1=temp1, minuteMean=F) temp3 = filterALPSdata(df1=temp1, cnF=1.5, speedF=0.8, minuteMean=F) # pick important columns temp2$data = temp2$data[,c('utime','datiL','easting','northing')] temp3$data = temp3$data[,c('uti me','datiL','easting','northing')] # now get only data between start and stopUtimes temp2$data = temp2$data[(temp2$data$utime > cmd$startUtime) & (temp2$data$utime < cmd$stopUtime),] temp3$data = temp3$data[(temp3$data$utime > cmd$startU time) & (temp3$data$utime < cmd$stopUtime),] # calculate the psFrac if ( substr(temp3$tagName,1,1)=="b" ){ temp2$psFrac = nrow(temp2$data)/numPingsB temp3$psFrac = nrow(temp3$data)/numPingsB

381 } else if ( substr(temp3$tagName,1 ,1)=="s" ) { temp2$psFrac = nrow(temp2$data)/numPingsS temp3$psFrac = nrow(temp3$data)/numPingsS } else {print("It's not a beacon or sentinel")} # calculate the position estimate temp2$easting = mean(temp2$data$easting) temp2$no rthing = mean(temp2$data$northing) temp3$easting = mean(temp3$data$easting) temp3$northing = mean(temp3$data$northing) # which temperature is this temp2$alpsdir = altDir[j] temp3$alpsdir = altDir[j] # save the answer, ready for plot ting alltags[[i]][[j]] = temp2 alltagsf[[i]][[j]] = temp3 } } # from these results how what's the biggest move in estimated beacon position # ... I see that the biggest artificial displacement always gives the biggest distance # ... differenc e...so find the dist between the extremes for each tag i = 1 sqrt( (alltagsf[[i]][[5]]$easting alltagsf[[i]][[1]]$easting)^2 + (alltagsf[[i]][[5]]$northing alltagsf[[i]][[1]]$northing)^2 ) # with a 10m artificial displacement the beacon mean ps mov es 3.2m # a plot of how the psFrac changed for each beacon over 5 distances # extract data from alltagsf psresults = data.frame(b1=NA, b2=NA, b3=NA, b4=NA,s79600=NA,distance=c(0,2,4,6,8,10)) for (i in 1:length(cTagNames)){ for (j in 1:numDistances){ psresults[j,i] = alltagsf[[i]][[j]]$psFrac } } # what percent decrease is there from nominal position to 10m artificial displacement (psresults[1,1] psresults[6,1])/psresults[1,1] # 16.5% # a plot of how each beacon's psFrac changes with c45 moves par(mfrow=c(2,1)) # inner beacon par(mar=c(1,6,1,1)+0.1) plot(psresults$distance, psresults$b1, pch=19, las=1, bty="l", cex.lab=1.5, cex.axis=1.3, xaxt="n", yaxt="n",

382 ylim=c(0.55,0.75), xlab="", ylab="") points(psresults$distance,psresults$s79600) # this doesn't actually show on the plot axis(side=1, at=psresults$distance, cex.axis=1.3, labels=NA) axis(side=2, at=seq(0.55,0.75,by=0.02), cex.axis=1.3, las=1) # add a) text(0,0.744,labels="a)",cex=1.7) # legend legend(8,0.75,legend=c("Central","North ","East","South"),pch=c(19,2,3,4)) par(mar=c(5,6,0,1)+0.1) plot(psresults$distance, psresults$b2, pch=2, las=1, bty="l", cex.lab=1.5, cex.axis=1.5, xaxt="n", yaxt="n", ylim=c(0,0.065), xlab="Artificial Displacement of Central Hydrophone (m)", yl ab="") points(psresults$distance, psresults$b3, pch=3) points(psresults$distance, psresults$b4, pch=4) axis(side=1, at=psresults$distance, cex.axis=1.3) axis(side=2, at=c(0, 0.01,0.02, 0.03,0.04,0.05,0.06), cex.axis=1.3, las=1) abline(h=0) # add y axis lab el mtext(text="Position Solutions Fraction", side=2, line=4.3, cex=1.7, adj= 1.5) # now that I've got the data, look at the picture par(mfrow=c(1,1)) par(mar=c(4,6,1,1)+0.1) plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", las=1, bty="l", cex.axis = 1.5, cex.lab=1.5, xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing, xlab="", ylab="") mtext("Easting (m)",side=1,line=2.5,cex=1.5) mtext("Northing (m)", side=2, line=4.5, cex=1.7) # add the reef and sdls points(cmd$sdlEN$easting, cmd$sdlE N$northing, pch=17, col="black", cex=1.5) # point types different for each distance plType = c(20,3,4,2,0,5) for(i in 1:(length(cTagNames) 1)){ # for each tag, but don't do the sentinel for(j in 1:numDistances){ points(alltagsf[[i]][[j]]$easting, al ltagsf[[i]][[j]]$northing, pch=plType[j],cex=2) } } text(325,2190,"b)",cex=1.7) legend(x=470,y=2200,legend=c("0 m","2 m","4 m","6 m","8 m","10 m","GPS estimate"),

383 pch=c(plType,17), cex=1.3, pt.cex=2) # redo this with smaller x and y limits for an inset THIS ISN'T RIGHT YET # now that I've got the data, look at the picture par(mfrow=c(1,1)) par(mar=c(6,9.5,1,1.5)+0.1) plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", las=1, bty="l", cex.axis = 3, bty="o", xlim=c(440,450), ylim=c( 2065,2075), #xaxt="n", xlab="", ylab="") mtext("Easting (m)",side=1,line=2.5, cex=3, padj=1) mtext("Northing (m)", side=2, line=4.5, cex=3, padj= 1.6) axis(1,441:445,labels=c(441,"",443,"",445),cex.axis=3, padj=0.5) # add the reef and sdls points(cmd$ sdlEN$easting, cmd$sdlEN$northing, pch=17, col="black", cex=4) #points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1.5) # add recorded positions # point types different for each temperature plType = c(3,4,20,2,0) for(i in 1:length(cTagN ames)){ # for each tag for(j in 1:numDistances){ points(alltagsf[[i]][[j]]$easting, alltagsf[[i]][[j]]$northing, pch=plType[j], cex=4) } } remember to stretch to the right size # but far away do the center beacon mean position solutions move ? centerbeacon = 1 alltagsf[[centerbeacon]][[1]]$northing # dot alltagsf[[centerbeacon]][[2]]$northing # plus alltagsf[[centerbeacon]][[3]]$northing # x alltagsf[[centerbeacon]][[4]]$northing # open triangle alltagsf[[centerbeacon]][[5]]$northing # sq uare alltagsf[[centerbeacon]][[6]]$northing # diamond # pick the one farthest from the center, plus to dot x1 = alltagsf[[centerbeacon]][[1]]$easting # plus y1 = alltagsf[[centerbeacon]][[1]]$northing # plus x2 = alltagsf[[centerbeacon]][[3]]$easting # dot y2 = alltagsf[[centerbeacon]][[3]]$northing # dot distance = sqrt( (x2 x1)^2 + (y2 y1)^2 ) # don't add this for the paper for(i in 1:length(cTagNames)){ # for each tag for(j in 1:numDistances){ points(alltags[[i]][[j]]$easting, alltags[[i]][[j ]]$northing, pch=plType[j],

384 cex=2, col="red") } } ### How does the number of position solutions change as the CN filtering cut off # decreases justFilter = function(cTag, psrMe=TRUE){ tagName=cTag$tagName deployment=cTag$deployment ta gLocation=cTag$tagLocation startUtime=cTag$startUtime stopUtime=cTag$stopUtime tagType = substr(tagName,1,1) tagID = substr(tagName,2,10) # get info from md[[]] for (i in 1:length(md)){ # i loops through all deployments if (deployment = = md[[i]]$deployment){ print(i) spacing = md[[i]]$spacing homeDir = paste(md[[i]]$homeDir,"/ALPS 2011Feb14", sep="") bestBeacon = substr(md[[i]]$bestBeacon,2,10) secondBestBeacon = substr(md[[i]]$secondBestBeacon,2,10) r eefEN = md[[i]]$reefEN sdlEN = md[[i]]$sdlEN # is this beacon/sentinel at the center or not, find out which SDL it was on... location = NA # keep this if you're using a funky 2008 beacon code, or a # non beacon which # probably doesn't have a position estimate. I won't do the # sentinels now either. for (j in 1:nrow(md[[i]]$beaconEN)){ if(tagName == md[[i]]$beaconEN$beaconID[j]){ location = md[[i]]$beaconEN$locat ion[j] } else { # you're probably on one of the problem 2008 tags, where codes # get used in place of beacons if ((tagName == "b80") | (tagName == "b81" )) {location=42} if ((tagName == "b85") | (tagName == "b8 6" )) {location=44} if (tagName == "b130") {location=45} # or you're using a fish tag in one of the trials if (tagType == "f"){location="inside"} # these tags are all over the place # now the sp150...read the metada ta about this... # N41 had B2, B79400. C45 had B1, B79500 # The sentinel s79600 and T61000 were at the reef if ((tagName == "b2") | (tagName == "b79400")){location="41"}

385 if ((tagName == "b1") | (tagName == "b79500") ){location="45"} } } # ...then designate it as center or not center = ifelse(((location==45)|(location=="reef")|(location=="inside")),T,F) } } setwd(homeDir) # import ALPS data d1 = importALPSdata(deployment=cTa g$deployment, tagName=cTag$tagName, psr=psrMe) # set the range of CN filters cnVec = seq(0.1,50,by=0.2) # create a data.frame to hold results psDecay = data.frame(cn=NA, psNum=NA, psFrac=NA) # if there are any data in d1 then filter it if ( nrow(d1[1]$data) > 1 ){ for (i in 1:length(cnVec)){ d2 = filterALPSdata(df1=d1, cnF=cnVec[i])$data psDecay[i,] = c(cnVec[i], nrow(d2), NA) # do the psFrac later } } else {print("No Data in d1")} # no action necesary, psDecay just sta ys empty # I really want the fraction not the number... psDecay$psFrac = psDecay$psNum / tail(psDecay$psNum,1) answer = list(tagName=tagName, deployment=deployment, center=center, psDecay=psDecay, dataUnfiltered=d1$data[,c("utime"," datiL","easting","northing","depth","cn","rn","dop")] ) return(answer) } # end just Filter # bob = justFilter(cTag) # now that justFilter works, apply it to the fish trials filterResults = list() # 2007 deployment for (i in 1:5){ # cycl e through all tags in this deployment filterResults[[i]]=justFilter(tagList[[i]]) print(i) } # 2008 deployment...the problem child

386 for (i in 6:10){ # cycle through all tags in t his deployment filterResults[[i]]=justFilter(tagList[[i]]) print(i) } # cTag 6 and 7 are b80 and b81, which are b79200 # ...something wrong with these so I won't use them... # cTag 8 and 9 are b85 and b86, which are b79400 # cTag 10 is b130, which i s b79500 #filterResults[[6]]$detFrac = rep(NA,5) #allresults[[6]]$detFrac + allresults[[7]]$detFrac #filterResults[[7]]$detFrac = rep(NA,5) #filterResults[[6]]$psFrac = NA #allresults[[6]]$psFrac + allresults[[7]]$psFrac #filterResults[[7]]$psFrac = NA # # filterResults[[8]]$detFrac = allresults[[8]]$detFrac + allresults[[9]]$detFrac #filterResults[[9]]$detFrac = rep(NA,5) #filterResults[[8]]$psFrac = allresults[[8]]$psFrac + allresults[[9]]$psFrac #filterResults[[9]]$psFrac = NA # ## hb1:11 15. sb1:16 20. s b2:21 25. hb2:26 30. sb3:31 35. hb3:36 39. sb4:40 43. # ...remember these are only the stationary tags for (i in 11:43){ # cycle through all tags in this deployment filterResults[[i]]=justFilter(tagList[[i]]) print(i) } plot(bob$psDecay$cn, bob$p sDecay$psFrac), #type="n", type="l", xlim=c(), ylim=c()) # now plot the results plType = vector() for (i in 1:(length(filterResults) 1)){plType[i]=ifelse(filterResults[[i]]$center,1,2)} par(mar=c(5,4.5,1,1)+0.1) # create the main plot plot(fi lterResults[[1]]$psDecay$cn, filterResults[[1]]$psDecay$psFrac, type="n", las=1, cex.lab=1.5, cex.axis=1.5, bty="l", xlim=c(0,20), ylim=c(0,1), xlab="Condition Number", ylab="Fraction of Unfiltered Position Solutions") for (i in 1:(length(filterRe sults) 1)){ points(filterResults[[i]]$psDecay$cn, filterResults[[i]]$psDecay$psFrac, type="l", lty=plType[[i]], las=1) } # add a a) text(0,1,"a)",cex=1.5) # now darken the line of the tag shown in panel A, which is tagList[[11]]. # note that tagList [[11]] ~ and filterResults[11]] i=11

387 points(filterResults[[i]]$psDecay$cn, filterResults[[i]]$psDecay$psFrac, type="l", lty=plType[[i]], lwd=3) # add a legend legend(13.2,0.65, c("Central trans.","Marginal trans.","Central trans. in b)"), lty = c(1 2, 1), lwd=c(1,1,2) ) # add an inset plot tmp = subplot( plot(filterResults[[1]]$psDecay$cn, filterResults[[1]]$psDecay$psFrac, type="n", las=1,xlim=c(0.8,2.5), ylim=c(0,1),xlab="",ylab=""), 17,0.29,size=c(2,2) ) op = par(no.readonly=TRU E) par(tmp) for (i in 1:(length(filterResults) 1)){ points(filterResults[[i]]$psDecay$cn, filterResults[[i]]$psDecay$psFrac, type="l", lty=plType[[i]], las=1) } # now darken the line of the tag shown in panel A, which is tagList[[11]]. # note that t agList[[11]] ~ and filterResults[11]] i=11 points(filterResults[[i]]$psDecay$cn, filterResults[[i]]$psDecay$psFrac, type="l", lty=plType[[i]], lwd=3) mtext("Condition Number",1,2) mtext("Fraction of Unfiltered",2,3.5) mtext("Position Solutions",2,2.5 ) par(op) ### more filtering # how does the northing position changes with CN of unfiltered data # get the raw data, probably best not to use a 2007 tag...no psr i = 11 cTag=tagList[[i]] d1 = importALPSdata(deployment=cTag$deployment, tagName=cTag$tagNam e, psr=psrMe) # plot(d1$data$cn, d1$data$northing, pch=19, cex=0.5) # a two panel plot, each with an insert # Panel A is here # a plot with an inset...first the main plot #split.screen(figs=c(1,2)) #screen(1)

388 # I CAN'T GET MFROW() OR SPLIT.SCREEN OR L AYOUT TO WORK SO JUST MAKE # TWO SEPERATE PLOTS par(mar=c(5,4.5,1,1)+0.1) plot(d1$data$cn, d1$data$northing, pch=19, cex=0.5, las=1, bty="l", cex.axis=1.5, xlab="", ylab="", xlim=c(1,3),ylim=c(428,436) ) abline(h=cTag$tagLocation[2]) mtext("C ondition Number",1,2.5, cex=1.5) mtext("Northing (m)",2,3.5,cex=1.5) text(1,436,"b)", cex=1.5) # because they don't work with the code below, add the inset axes labels noe text(2.78, 433, "Condition Number") mtext("Northing (m)",2, 18, adj=0.9) # add an inset plot subplot( plot(d1$data$cn, d1$data$northing, pch=19, cex=0.5, las=1, ylim=c(400,480), xlab="",ylab=""), x=2.8,y=435,size=c(1.5,1.5) ) # unfortunately these end up in panel b #op = par(no.readonly=TRUE) #par(tmp) #mtext("Conditio n Number",1,2) #mtext("Northing (m)",2,2.5) #par(op) # # examples of how filtering affects EN plots # remember these load("C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata") # pick a tag cTag = tagList[[4]] tagName=cTag$tagName deployment=cTag$dep loyment tagLocation=cTag$tagLocation

389 startUtime=cTag$startUtime stopUtime=cTag$stopUtime tagType = substr(tagName,1,1) tagID = substr(tagName,2,10) numSdl = 5 # get the raw data d1 = importALPSdata(deployment=cTag$deployment, tagName=cTag$tagName, p sr=TRUE) # filter or no d2 = filterALPSdata(df1=d1, minuteMean=FALSE)$data d3 = filterALPSdata(df1=d1, cnF=1.5, speedF=0.8, minuteMean=FALSE)$data # plots par(mfrow=c(1,2)) plot(d2$easting, d2$northing, pch=19, cex=0.5) plot(d3$easting, d3$northing, pch=1 9, cex=0.5) cTag$tagName nrow(d2) nrow(d3) ###################################################################### # # how do water conditions affect hourly detection fraction and hourly ps fraction # # run testTheTag again just for the main array deploymen ts lasting longer than # a day. Filter and PSR. allresults=list(); filterMeNow=T; psrMeNow=T; # 2007 deployment for (i in 1:5){ # cycle through all tags in this deployment allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # 2008 deployment...the problem child for (i in 6:10){ # cycle through all tags in this deployment allresults[[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # cTag 6 and 7 are b80 and b81, which are b79200 # ...something wrong w ith these so I won't use them... # cTag 8 and 9 are b85 and b86, which are b79400 # cTag 10 is b130, which is b79500 allresults[[6]]$detFrac = rep(NA,5) #allresults[[6]]$detFrac + allresults[[7]]$detFrac allresults[[7]]$detFrac = rep(NA,5) allresults[[6]]$ psFrac = NA #allresults[[6]]$psFrac + allresults[[7]]$psFrac
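# Illustrative sketch: the nrow() comparison above is easier to read as the
# percentage of position solutions removed by the CN/speed filter. 'd2'
# (unfiltered) and 'd3' (filtered) are the data frames created just above;
# the object name 'percentRemovedSketch' is new and used nowhere else.
percentRemovedSketch = 100 * (nrow(d2) - nrow(d3)) / nrow(d2)
percentRemovedSketch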

390 allresults[[7]]$psFrac = NA allresults[[6]]$hourlyMeans = allresults[[6]]$hourlyMeans[allresults[[6]]$hourlyMeans$detFrac1>999 ,] allresults[[7]]$hourlyMeans = allresults[[7]]$hourlyMeans[a llresults[[7]]$hourlyMeans$detFrac1>999 ,] allresults[[8]]$detFrac = allresults[[8]]$detFrac + allresults[[9]]$detFrac allresults[[9]]$detFrac = rep(NA,5) allresults[[8]]$psFrac = allresults[[8]]$psFrac + allresults[[9]]$psFrac allresults[[9]]$psFrac = NA allresults[[8]]$hourlyMeans$detFrac1 = allresults[[8]]$hourlyMeans$detFrac1 + allresults[[9]]$hourlyMeans$detFrac1 allresults[[8]]$hourlyMeans$detFrac2 = allresults[[8]]$hourlyMeans$detFrac2 + allresults[[9]]$hourlyMeans$detFrac2 allresults[[8]]$hourlyMe ans$detFrac3 = allresults[[8]]$hourlyMeans$detFrac3 + allresults[[9]]$hourlyMeans$detFrac3 allresults[[8]]$hourlyMeans$detFrac4 = allresults[[8]]$hourlyMeans$detFrac4 + allresults[[9]]$hourlyMeans$detFrac4 allresults[[8]]$hourlyMeans$detFrac5 = allresults[ [8]]$hourlyMeans$detFrac5 + allresults[[9]]$hourlyMeans$detFrac5 allresults[[8]]$hourlyMeans$psFrac = allresults[[8]]$hourlyMeans$psFrac + allresults[[9]]$hourlyMeans$psFrac allresults[[9]]$hourlyMeans = allresults[[9]]$hourlyMeans[allresults[[9]]$hour lyMeans$detFrac1>999 ,] # hb1:11 15. sb1:16 20. sb2:21 25. hb2:26 30. sb3:31 35. hb3:36 39. sb4:40 43. # ...remember these are only the stationary tags # ...40 41 has no ADCP data for (i in 11:39){ # cycle through all tags in this deployment allresults [[i]]=testTheTag(tagList[[i]], filterMe=filterMeNow, psrMe=psrMeNow) } # now that I've got all the hourly data in allresults, compile it all into # a single data.frame hd1 = data.frame(hour=NA, detFrac1=NA, detFrac2=NA, detFrac3=NA, detFrac4=NA, detFr ac5=NA, psFrac=NA, temperature=NA, magL=NA, eaaL=NA, eaaM=NA, eaaU=NA, tagName=NA, center=NA) for (i in c(1:5,8,10:39)){ #1:length(allresults)){ temp1 = allresults[[i]]$hourlyMeans temp1$tagName = allresults[[i]]$tagName temp1$center = allr esults[[i]]$center # append this one to the whole list

391 hd1 = rbind(hd1, temp1) } # finally ready for ggplot ggplot(hd1, aes(x=eaaU, y=eaaM, group=center, colour=center)) + geom_point(alpha=0.5) ggplot(z6, aes(x=magL, y=altitude, group=ID, colour=ID, fill=ID)) + geom_point(alpha=0.05) + geom_smooth(method="gam",formula=y~s(x),lwd=1.3) + #,bs="cc" geom_smooth(aes(group=1),colour="black",lwd =1.3,method="gam",formula=y~s(x)) + #,bs="cc" coord_cartesian(xlim=range(z0$magL,na.rm=T), ylim=c(0,10)) + # range(z0$temperature,na.rm=T) theme_bw() + scale_x_continuous("Current Speed (m/s)") + scale_y_continuous("Altitude (m)", breaks = c(0 ,1,2,3,seq(4,10,by=2))) ###################################################################### ##### BELOW HERE IS THE OLD WAY OF DOING THINGS. ###################################################################### ######### # First Dec 2007 deployment cmd = md[[2]] cmd$beaconNames cmd$sdlEN cTagNames = cmd$beaconNames -or -cTagNames = c("c80", "c81", "c85", "c86", "c130") # set the database and connect dbName = paste("db",cmd$deployment ,sep="")# connect to the database dbcon = dbConnect(MySQL(), user="root", password="zy0014", dbname=dbName) detections = list() # this will hold lists for all cTagNames for (i in 1:length(cTagNames)){ # grab data from dbTable dbphrase = paste("select from toa", cTagNames[i], ";", sep="") res = dbGetQue ry(dbcon, dbphrase) # calculate hourly detection frequencies and total detection frequency of this beacon by all SDLs

392 sdlNumber = 41:45 sdl = list() # list to compute detections of beacon i by all j sdls totalFrac = data.frame("t ag"=NA, "sdl"=NA, "distance"=NA, "totalFrac"=NA) # to hold the total detection fraction of tag i by sdl j for (j in 1:length(cmd$sdlEN$ID)){ # pick only detections at this SDL sdl[[j]] = res[res$sdlNumber == sdlNumber[j],2:4] # convert charac ter times to POSIXlt sdl[[j]]$datiG = as.POSIXlt(strptime(sdl[[j]]$datiG, "%Y %m %d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT") sdl[[j]]$datiL = as.POSIXlt(strptime(sdl[[j]]$datiG, "%Y %m %d %H:%M:%S", tz="EST5EDT"), origin="19 70 1 1", tz="EST5EDT") # Because hb2007 runs over the change of year... # create a vector of the index of the day of the run, 1 52 for example # ...there's got to be a prettier way of doing this, but... numUniqueDays = length(unique(sdl[ [j]]$datiL$yday)) startOfFirstDay = unclass(as.POSIXct( strptime(cmd$startDay, "%Y/%B/%d", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT"))[1] endOfAllHours = startOfFirstDay + 3600 1:(numUniqueDays*24) # something to hold hourly de tection frequencies for this sdl hrlyFrac = data.frame("hourOfDeployment"=NA, "frac"=NA) # calculate the hourly detection frequency of beacon i by sdl j for (k in 1:((numUniqueDays*24) 1)){ temp1 = sdl[[j]][ (sdl[[j]]$utime > endOfAllH ours[k]) & (sdl[[j]]$utime < endOfAllHours[k+1]), ] hrlyFrac[k,] = c(k, nrow(temp1)/180) } plot(hrlyFrac$hourOfDeployment, hrlyFrac$frac, type="l", main=paste("Beacon ", cTagNames[[i]], "; SDL ", sdlNumber[j], sep="")) # find the position of the beacon and distance to each SDL temp1 = paste("4", substr(cTagNames[i],4,4), sep="") for (k in 1:length(cmd$sdlEN$ID)){ if (temp1 == cmd$sdlEN$ID[k]){ tagPos = c(cmd$sdlEN$easting[k], cmd$sdlEN$northing[k]) } } distance = sqrt( (tagPos[1] cmd$sdlEN$easting[j])^2 + (tagPos[2] cmd$sdlEN$northing[j])^2 ) # now compute the total detection frequency of all hours with some detections

393 temp1 = hrlyFrac[hrlyFrac$frac > 0 ,] totalFrac[j,] = c(cTagNames[i], sdlNumber[j], distance, mean(temp1$frac)) } # end for j loop over all slds # # find the position of the beacon and distance to each SDL # temp1 = paste("4", substr(cTagNames[i],4,4), sep="") # # for (j in 1:l ength(cmd$sdlEN$ID)){ # if (temp1 == cmd$sdlEN$ID[j]){ # tagPos = c(cmd$sdlEN$easting[j], cmd$sdlEN$northing[j]) # } # totalFrac$distance[j] = sqrt( # (tagPos[1] cmd$sdlEN$easting[j])^2 + (tagPos[2] cmd$sdlEN$northing[j])^2 # ) # } # put everything into the detections list detections[[i]] = totalFrac } # pick one of the following to save the current detection results detections7 = detections detections8 = detections # while I'm with Ben, I won't have the db, so save detect ions as a file to take with me... write.csv(detections, "C:/zy/Telemetry/R summary text files/2007 detection distance trial 2010Oct28.csv", row.names=FALSE) ###################################################################### ######### # Second...22 July 2008 detection trials at 200m and 300m, T60800 # On this day we placed tag T60800 at these distances at these times: # # distance 300m (diver probably between tag and sdl # 13:29:00 EDT 13:34:00 EDT (1216747740 1216748040 utime GMT) # # distance 2 00m (diver not between tag and sdl # 13:52:00 EDT 13:57:00 EDT (1216749120 1216749420 utime GMT) # distance 200m (diver between tag and sdl) # 13:58:00 EDT 14:00:00 EDT (1216749480 1216749600 utime GMT) # # sdl data is in two files named "SN265045_22 Jul08.txt": one is the converted # *.bin file and one is the ALPS generated record of all detections by sdl45 # on 22Jul08.txt. # There's also "TxId60800.toa" ...the times T60800 has detections at sdl45

394 # # Although it would be interesting to see how these compare, the right thing to # do for comparisons of detections is to use the *.toa file, which only shows # detections when a full symbol was possible. No PSR is involved at the # detection stage. # read in the data cDir = setwd("E:/DATA/Te lemetry/2008/2008 Jul 22 SDL detection trials/ALPS output/20080722") filename = "TxId60800.toa" d1 = read.table(filename, header=FALSE, col.names = c("utime", "fraction", "power", "sType", "sValue")) # now calculate detection fractions during the three time periods...and also for # periods 2 and 3 together, just for fun # distance 300m (diver probably between tag and sdl utime1 = c(1216747740, 1216748040) # 5 min utime2 = c(1216749120, 1216749420) # 5 min utime3 = c(1216749480, 1216749600) # 2 min utim e4 = c(1216749120, 1216749600) # 8 min detections1 = nrow(d1[ (d1$utime > utime1[1]) & (d1$utime < utime1[2]) ,]) numTransmissiona1 = (utime1[2] utime1[1])/2 frac1 = detections1/numTransmissiona1 detections2 = nrow(d1[ (d1$utime > utime2[1]) & (d1$utim e < utime2[2]) ,]) numTransmissiona2 = (utime2[2] utime2[1])/2 frac2 = detections2/numTransmissiona2 detections3 = nrow(d1[ (d1$utime > utime3[1]) & (d1$utime < utime3[2]) ,]) numTransmissiona3 = (utime3[2] utime3[1])/2 frac3 = detections3/numTransmis siona3 detections4 = nrow(d1[ (d1$utime > utime4[1]) & (d1$utime < utime4[2]) ,]) numTransmissiona4 = (utime4[2] utime4[1])/2 frac4 = detections4/numTransmissiona4 # combine those points that are really meaningful for comparison oneDayDistance = c(300, 200, 200) oneDayFrac = c(frac1, frac2, frac3) ### look at differences between minutes...pick one time period at a time startTime = utime3[1] # change this to change time periods numMin = 2 # change this to change time periods
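# Illustrative sketch: each detection fraction above is simply
#   (number of detections in the window) / (expected transmissions in the window),
# where a tag pinging every 2 s is expected to transmit (t2 - t1)/2 times.
# This is a generic restatement of the frac1..frac4 calculations; the function
# name and arguments are new and are not called anywhere in the original code.
detFracSketch = function(toa, t1, t2, pingIntervalSec=2){
  nDetections = sum(toa$utime > t1 & toa$utime < t2)
  nExpected = (t2 - t1) / pingIntervalSec
  nDetections / nExpected
}
# example: detFracSketch(d1, utime2[1], utime2[2]) # should match frac2 above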

395 minuteBreaks = seq(from=0, by=60, length.out = numMin+1) bins = startTime + minuteBreaks # count detections each minute minFreqs = vector(mode="numeric", length=numMin) for (i in 1:numMin){ minFreqs[i] = nrow(d1[ (d1$utime > bins[i]) & (d1$utime < bins[i+1]) ,])/30 } plot(minFre qs, pch=19) abline(h= mean(minFreqs), col="red") ###################################################################### ######### # Third...Oct 2008 deployment...remember this one must work on codes not on full # symbol. So remember that the comparison w on't be exactly nice becasue # # BE CAREFUL WITH THESE, THERE ARE TIMES WHEN CODES COME EVERY 2s EVEN THOUGH # THEY ARE 20s BEACONS. ALSO, REMEMBER THAT INDIVIDUAL CODES ONLY REPRESENT # HALF OF A TAG. THEY WILL NEED TO BE COMBINED AFTER. # # These are the codes that uniquely identify a beacon # b79500 c130 # b79200 c80, c81 # b79400 c85, c86 # # Besides these... # ...when only beacons were in the water: # b79100 c78, c79 # ...and when all tags are in the water: # b79100 c79 occurs uniquely wi th b79100 every other transmission...every 40 sec # first do the 'normal' ones cmd = md[[2]] cmd$beaconNames cmd$sdlEN cTagNames = c("c80", "c81", "c85", "c86", "c130") #cmd$beaconNames # set the database and connect dbName = paste("db",cmd$deployment ,sep="")# connect to the database dbcon = dbConnect(MySQL(), user="root", password="zy0014", dbname=dbName) detections = list() # this will hold lists for all cTagNames for (i in 1:length(cTagNames)){

396 # grab data from dbTable dbphrase = paste("selec t from toa", cTagNames[i], ";", sep="") res = dbGetQuery(dbcon, dbphrase) # calculate hourly detection frequencies and total detection frequency of this code by all SDLs sdlNumber = 41:45 sdl = list() # list to compute detec tions of beacon i by all j sdls totalFrac = data.frame("tag"=NA, "sdl"=NA, "distance"=NA, "totalFrac"=NA) # to hold the total detection fraction of tag i by sdl j for (j in 1:length(cmd$sdlEN$ID)){ # pick only detections at this SDL sdl[[j]] = res[res$sdlNumber == sdlNumber[j],2:4] # convert character times to POSIXlt sdl[[j]]$datiG = as.POSIXlt(strptime(sdl[[j]]$datiG, "%Y %m %d %H:%M:%S", tz="GMT"), origin="1970 1 1", tz="GMT") sdl[[j]]$datiL = as.POSIXlt(strptime(sdl[[j]]$d atiG, "%Y %m %d %H:%M:%S", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT") # This is unnecessary in 2008 but easier to leave it # create a vector of the index of the day of the run, 1 52 for example # ...there's got to be a prettier w ay of doing this, but... numUniqueDays = length(unique(sdl[[j]]$datiL$yday)) startOfFirstDay = unclass(as.POSIXct( strptime(cmd$startDay, "%Y/%B/%d", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT"))[1] endOfAllHours = startOfFirstDay + 36 00 1:(numUniqueDays*24) # something to hold hourly detection frequencies for this sdl hrlyFrac = data.frame("hourOfDeployment"=NA, "frac"=NA) # calculate the hourly detection frequency of beacon i by sdl j for (k in 1:((numUniqueDay s*24) 1)){ temp1 = sdl[[j]][ (sdl[[j]]$utime > endOfAllHours[k]) & (sdl[[j]]$utime < endOfAllHours[k+1]), ] hrlyFrac[k,] = c(k, nrow(temp1)/180) } plot(hrlyFrac$hourOfDeployment, hrlyFrac$frac, type="l", main=paste("Beacon ", c TagNames[[i]], "; SDL ", sdlNumber[j], sep="")) # find the position of the beacon and distance to each SDL if( (cTagNames[i] == "c80") | (cTagNames[i] == "c81") ){ tagPos = c(cmd$sdlEN$easting[2], cmd$sdlEN$northing[2]) } else if ( ( cTagNames[i] == "c85") | (cTagNames[i] == "c86") ){ tagPos = c(cmd$sdlEN$easting[4], cmd$sdlEN$northing[4]) } else if (cTagNames[i] == "c130"){ tagPos = c(cmd$sdlEN$easting[5], cmd$sdlEN$northing[5])

397 } else { tagPos = NULL print("This tag appears to not be a beacon") } distance = sqrt( (tagPos[1] cmd$sdlEN$easting[j])^2 + (tagPos[2] cmd$sdlEN$northing[j])^2 ) # now compute the total detection frequency of all hours with some detections t emp1 = hrlyFrac[hrlyFrac$frac > 0 ,] totalFrac[j,] = c(cTagNames[i], sdlNumber[j], distance, mean(temp1$frac)) } # end for j loop over all slds # # find the position of the beacon and distance to each SDL # temp1 = paste("4", substr(cTagN ames[i],4,4), sep="") # # for (j in 1:length(cmd$sdlEN$ID)){ # if (temp1 == cmd$sdlEN$ID[j]){ # tagPos = c(cmd$sdlEN$easting[j], cmd$sdlEN$northing[j]) # } # totalFrac$distance[j] = sqrt( # (tagPos[1] cmd$sdlEN$easting[j])^2 + (tagPo s[2] cmd$sdlEN$northing[j])^2 # ) # } # put everything into the detections list detections[[i]] = totalFrac } # now combine 80/81 and 85/86 d1 = detections d2 = d1 # d2[[1]][,1] = "c80c81" d2[[1]][1,4] = as.numeric(d1[[1]][1,4]) + as.numeric(d 1[[2]][1,4]) d2[[1]][2,4] = as.numeric(d1[[1]][2,4]) + as.numeric(d1[[2]][2,4]) d2[[1]][3,4] = as.numeric(d1[[1]][3,4]) + as.numeric(d1[[2]][3,4]) d2[[1]][4,4] = as.numeric(d1[[1]][4,4]) + as.numeric(d1[[2]][4,4]) d2[[1]][5,4] = as.numeric(d1[[1]][5,4]) + as.numeric(d1[[2]][5,4]) # d2[[3]][,1] = "c85c86" d2[[3]][1,4] = as.numeric(d1[[3]][1,4]) + as.numeric(d1[[4]][1,4]) d2[[3]][2,4] = as.numeric(d1[[3]][2,4]) + as.numeric(d1[[4]][2,4]) d2[[3]][3,4] = as.numeric(d1[[3]][3,4]) + as.numeric(d1[[4]][3,4]) d2[[3 ]][4,4] = as.numeric(d1[[3]][4,4]) + as.numeric(d1[[4]][4,4])


d2[[3]][5,4] = as.numeric(d1[[3]][5,4]) + as.numeric(d1[[4]][5,4])

d3 = list(d2[[1]], d2[[3]], d2[[5]])
detections8 = d3

# while I'm with Ben, I won't have the db, so save detections as a file to take with me...
write.csv(detections8,
  "C:/zy/Telemetry/R summary text files/2008 detection distance trial 2010Oct28.csv",
  row.names=FALSE)

###############################################################################
# look at results so far
# put the interesting bits into one list
plot(detections7[[1]]$distance, detections7[[1]]$totalFrac, xlim=c(0,300),
  ylim=c(0,1.5), pch=19)
points(detections7[[2]]$distance, detections7[[2]]$totalFrac, pch=19, col="red")
points(detections7[[3]]$distance, detections7[[3]]$totalFrac, pch=19, col="blue")
points(detections7[[4]]$distance, detections7[[4]]$totalFrac, pch=19, col="green")
points(detections7[[5]]$distance, detections7[[5]]$totalFrac, pch=19, col="orange")
points(oneDayDistance, oneDayFrac, pch=15, col="yellow")
points(detections8[[1]]$distance, detections8[[1]]$totalFrac, pch=17, col="black")
points(detections8[[2]]$distance, detections8[[2]]$totalFrac, pch=17, col="red")
points(detections8[[3]]$distance, detections8[[3]]$totalFrac, pch=17, col="blue")

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# chapter 3 prelim data.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

# In this chapter I use 2007 and 2008 data to describe fish movement.

library(gdata)
library(hexbin)
library(rgl)
#library(lattice) #or lme4 for 'histogram()'
library(mgcv)
#library(plotrix)
#library(lme4)
library(ggplot2)
library(MASS)
library(reshape)


library(rimage)

source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")

# read in tagfm, z0, results
load("C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata")
load("C:/zy/Telemetry/R summary files/tagf 2011Mar16.rdata")
load("C:/zy/Telemetry/R summary files/tagfm 2011Mar16.rdata")
load("C:/zy/Telemetry/R summary files/z0 2011May02.rdata")
load("C:/zy/Telemetry/R summary files/results 2011Mar16.rdata")

# let's shift all the easting/northing plots to be close to (0,0)...see below
# for words on how I get these numbers
# ??? do I always want to do this
z lleft=8434
z bbottom=555
z z0$easting = z0$easting - lleft
z z0$northing = z0$northing - bbottom
# now the reef location is
z md[[1]]$reefEN$easting - lleft    # = 140.76m E
z md[[1]]$reefEN$northing - bbottom # = 136.2m N

# I want the legend to say tags 1-5 not 60300, etc...so change the tagName and
# the order they appear. DON'T FORGET THAT z0 IS NOW DIFFERENT.
z0$ID = rep(NA, nrow(z0))
z0$ID[z0$tagName == "f60200"] = 5
z0$ID[z0$tagName == "f60400"] = 4
z0$ID[z0$tagName == "f60900"] = 3
z0$ID[z0$tagName == "f60300"] = 1
z0$ID[z0$tagName == "f61100"] = 2
z0$ID = as.factor(z0$ID)

# for the 38/51 panel plots get the date in each box label
z0$Date = as.Date(z0$datiL)

# ...let's say the first 2/3 days are a tagging recovery period
z0short = z0[z0$dod > 3, ]

# for lunarIndex stuff...only the night
z0night = z0[z0$day=="night", ]
z0day = z0[z0$day=="day", ]
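
# Aside (not from the original script): the same relabelling can be written as a named
# lookup vector; 'idMap' is an illustrative name, and the check below assumes the five
# tagName values above are the only ones present in z0.
idMap = c(f60300=1, f61100=2, f60900=3, f60400=4, f60200=5)
all(idMap[as.character(z0$tagName)] == as.numeric(as.character(z0$ID))) # should be TRUE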


# for altitude stuff
z6 = z0[!is.na(z0$altitude), ]
z6 = drop.levels(z6[(z6$tagName!="f60300")|(z6$tagName!="f61100"),],reorder=FALSE)
z6night = z6[z6$day=="night", ]
z6day = z6[z6$day=="day", ]

#...or do you want one fish at a time
# main effects for one fish, change x and y for various relationships
z1 = drop.levels(subset(z0,tagName=="f60200"),reorder=FALSE)
z2 = drop.levels(subset(z0,tagName=="f60400"),reorder=FALSE)
z3 = drop.levels(subset(z0,tagName=="f60900"),reorder=FALSE)
z4 = drop.levels(subset(z0,tagName=="f60300"),reorder=FALSE)
z5 = drop.levels(subset(z0,tagName=="f61100"),reorder=FALSE)

cTagNames = c("f60200", "f60400", "f60900", "f60300", "f61100")
cDeploymentNames = c(rep("hb2007",3),rep("hb2008",2))

xLimits = md[[2]]$plotLimits$easting - lleft
yLimits = md[[2]]$plotLimits$northing - bbottom

################################################################################
# Figure 1. Two panels, first the habitat map, then habitat use index
# ... this code was originally worked out in 'habitat maps.r'

### figure 1, left panel ##################
# my own plotting function
# plot.imagematrix.zy now lives in 'global functions.r'
plot.imagematrix.zy = function (x, ...) {
  colvec <- switch(attr(x, "type"),
    grey = grey(x),
    rgb = rgb(x[, 1], x[, 2], x[, 3]))
  if (is.null(colvec)) stop("image matrix is broken.")
  colors <- unique(colvec)
  colmat <- array(match(colvec, colors), dim = dim(x)[1:2])
  image(x = 0:(dim(colmat)[2]), y = 0:(dim(colmat)[1]),
    z = t(colmat[nrow(colmat):1, ]), col = colors,
    bty="o", cex.lab=2, xlab = "Easting (m)", ylab = "",
    axes = FALSE, asp = 1, ...)
}

# import the image to use
rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg"


i1 = round(read.jpeg(rfile))
z plot(i1, useRaster=TRUE)

findHabType(e=245174.8, n=3262391, reference="if41",show=TRUE,pixels=T,
  dotCol="red", dotShape=4, dotSize=2)

# pick out just the portion I want
bc = c(1590+1, 1805+1) # bc = box center...location of reef
bw = 2600 # bw = box width, 2*130m = 2600 pixels
bh = 2600 # bh = box height
bl = function(){bc[2] - bw/2}
br = function(){bc[2] + bw/2}
bt = function(){bc[1] - bh/2}
bb = function(){bc[1] + bh/2}

catmap = i1[bt():bb(), bl():br(), 3]
catmap1 = i1[bt():bb(), bl():br(), ] # curse it, plotting needs 2D and findHabType needs 3D

# From this I calculate the eastings and northings of the edges of catmap, using
# 10 pix/m. Reef IF41/center is at (245174.8 easting, 3262391 northing). The
# image is 2600 pixels square, 260m square. So:
# ...the top row is 3262391 N + 130m = 3262521 N
# ...the bottom row is 3262391 N - 130m = 3262261 N
# ...the left column is 245175 E - 130m = 245045 E
# ...the right column is 245175 E + 130m = 245305 E
#
nx=bw+1
ny=bh+1
r=10 # 10 pixels = 1m

# look down to "plot fig 1" for the plotting code

### figure 1, right panel ##################
habmap = i1
# location of IF41 in easting/northing or row,column. row/column is right?
#ce = 245174.8 - eastingOffset # center (meters) in easting direction
#cn = 3262391 - northingOffset # center (meters) in northing direction
ce = 1805 # center column
cn = 1590 # center row
ne = dim(habmap)[2] # number of columns
nn = dim(habmap)[1] # number of rows

# compute the distance between pixel (ce,cn) and every other pixel
# ...the default is for outer to do the product, but this will add instead
d = sqrt(outer( (cn - (1:nn))^2, (ce - (1:ne))^2, "+"))


# which tags are we dealing with
cTagNames = as.character(unique(z0$tagName))
# list to hold (tagName, radius, # positions, % time over HB)
allfish = list()

d2 = data.frame(radius=NA, percentHB=NA)
annulusThickness = 1 # that is 5m
rings = seq(annulusThickness,150, by=annulusThickness)
# but only go out to 50m
rings = head(rings,50)

for (i in 1:length(rings)){ #start with 2 because of the [i-1]...but it seems to work with starting at 1?
  pixelsPerMeter = 10
  rOuter = rings[i] * pixelsPerMeter # r is in units of pixels, radius is in meters...10 pixels per meter
  if(i==1){rInner=0} else {rInner = rings[i-1] * pixelsPerMeter}
  kernel = function(z){(z > rInner) & (z < rOuter)} # if passed a matrix, this returns a T/F matrix
  # k is an nx by ny matrix with TRUE(=1) everywhere [within
  # rOuter of (i,j) and beyond rInner of (i,j)] and FALSE(=0) everywhere else
  k = kernel(d)
  #k = k/sum(k) # this normalizes k so that it sums to one, probably not necessary but a good habit
  percentHB = 1 - sum(k * habmap[,,2]) / sum(k)
  # some exploration
  #d[(cn-10):(cn+10),(ce-10):(ce+10)]
  #habmap[(cn-10):(cn+10), (ce-10):(ce+10), 2]
  #k[(cn-10):(cn+10), (ce-10):(ce+10)]
  d2[i,] = c(rings[i], percentHB)
  #points(c(ce,ce+r,ce,ce-r),c(cn+r,cn,cn-r,cn),pch=19,cex=0.5,col="green")
  #points(ce,cn,pch=10, cex=0.5,col="red")
} # end for loop

# If you want to see it...
# plot(d2,type="l",main="Landscape composition around IF41")

# for each fish, loop through all radii and calculate the % HB use
for (i in 1:length(cTagNames)){
  habUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA,
    HBpreference=NA)


  cFish = z0[z0$tagName == cTagNames[i], ]
  for (j in 1:length(rings)){
    rOuter = rings[j] # rOuter is in units of m
    if(j==1){rInner=0} else {rInner = rings[j-1]}

    cPositions = cFish[(cFish$dtr > rInner) & (cFish$dtr < rOuter), ]
    cHB = cPositions[cPositions$habType == "black", ]
    habUse[j,] = c(cTagNames[i], rings[j], nrow(cPositions),
      nrow(cHB)/nrow(cPositions),
      (nrow(cHB)/nrow(cPositions))/d2$percentHB[j]
    )
    allfish[[i]] = habUse
  }
}

# for all fish together calculate the %HB use at all radii
# THIS WAY THE 'ALL FISH TOGETHER' NUMBER IS OVERLY WEIGHTED TO FISH WITH MANY
# POSITION SOLUTIONS IN A GIVEN RING. I DON'T THINK THIS IS RIGHT
#i=6
#for (j in 1:length(rings)){
#  rOuter = rings[j] # rOuter is in units of m
#  if(j==1){rInner=0} else {rInner = rings[j-1]}
#
#  cPositions = z0[(z0$dtr > rInner) & (z0$dtr < rOuter), ]
#  cHB = cPositions[cPositions$habType == "black", ]
#  habUse[j,] = c("all", rings[j], nrow(cPositions),
#    nrow(cHB)/nrow(cPositions),
#    (nrow(cHB)/nrow(cPositions))/d2$percentHB[j]
#  )
#  allfish[[i]] = habUse
#}

# THIS WAY THE 'ALL FISH TOGETHER' NUMBER GIVES EVEN WEIGHT TO EACH FISH
i=6
habUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA,
  HBpreference=NA)
for (j in 1:length(rings)){
  # for each ring, calculate the mean of the habUse of all fish
  temp1 = vector(length=length(allfish))
  temp2 = vector(length=length(allfish))
  for (k in 1:length(allfish)){
    temp1[k] = as.numeric(allfish[[k]]$HBuse[j])
    temp2[k] = as.numeric(allfish[[k]]$HBpreference[j])


  }
  habUse[j,] = c("all", rings[j], NA, mean(temp1, na.rm=T), mean(temp2, na.rm=T))
}
allfish[[i]] = habUse

### plot fig 1 ... a double plot for the paper ##############################
par(mfrow=c(1,2)) #stretch this to be as wide as you want

# draw the plot
par(mar=c(5,6,3,2)+0.1)
plot.imagematrix.zy(imagematrix(catmap),useRaster=TRUE)
box(which = "plot", lty = "solid")
# add axes labels and numbers
mtext(text="Northing (m)", side=2, line=3.4, cex=2)
axis(1, at=seq(0,2600,by=500), cex.axis=1.5)#, labels=seq(0,260,by=50))#seq(8440,8700,by=50))
axis(2, at=seq(0,2600,by=500), cex.axis=1.5)#, las=1, labels=seq(0,260,by=50))#seq(560,820,by=50))

# I want to draw the locations of the reef and sdls on the image...
# ...these are my best estimates of the locations of the reef and sdls
# ... IF41 (245174.8 E, 3262391 N)
# ... 50m array spacing
md[[1]]$sdlEN
md[[1]]$sdlEN$easting + eastingOffset
md[[1]]$sdlEN$northing + northingOffset
# ... 125m spacing
md[[2]]$sdlEN
md[[2]]$sdlEN$easting + eastingOffset
md[[2]]$sdlEN$northing + northingOffset

# ...in the following you have to change the cpoint and the color
# 50m spacings - this is broken somehow...and I've changed how I calculate the 'all fish HBuse'
for (cpoint in 1:5){
  dotColors = c("black", "white", "black", "black", "black")
  findHabType(e=md[[1]]$sdlEN$easting[cpoint], n=md[[1]]$sdlEN$northing[cpoint],
    habmap=catmap1, reference="other", erange=c(245045,245305),
    nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2)
}
for (cpoint in 1:5){
  dotColors = c("black", "white", "black", "black", "black")
  findHabType(e=md[[1]]$sdlEN$easting[cpoint], n=md[[1]]$sdlEN$northing[cpoint],


    habmap=catmap1, reference="if41", show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2)
}

# 125m spacings
for (cpoint in 1:4){
  dotColors = c("black", "white", "black", "white", "black")
  findHabType(e=md[[2]]$sdlEN$easting[cpoint], n=md[[2]]$sdlEN$northing[cpoint],
    habmap=catmap1, reference="other", erange=c(245045,245305),
    nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2)
}
for (cpoint in 1:4){
  dotColors = c("black", "white", "black", "white", "black")
  findHabType(e=md[[2]]$sdlEN$easting[cpoint], n=md[[2]]$sdlEN$northing[cpoint],
    habmap=catmap1, reference="if41", show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2) #dotColors[cpoint]
}

# reef
cpoint=1
findHabType(e=md[[2]]$reefEN$easting[cpoint]+1, n=md[[2]]$reefEN$northing[cpoint],
  habmap=catmap1, reference="other", erange=c(245045,245305),
  nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
  dotCol="black", dotShape=4, dotSize=2)
findHabType(e=md[[2]]$reefEN$easting[cpoint]+1, n=md[[2]]$reefEN$northing[cpoint],
  habmap=catmap1, reference="if41", show=TRUE, crosshairs=FALSE, pixels=TRUE,
  dotCol="green", dotShape=4, dotSize=2)

reefEN = data.frame("ID" = "reef", "easting" = 8574.76, "northing" = 691.2323)

findHabType(e=8574.76, n=891.2323, reference="if41",show=TRUE, pixels=T,
  dotCol="red", dotShape=4, dotSize=2)

z lleft=8434
z bbottom=555


406 # label the reef with an arrow text(900, 900, "Reef", cex=1.5) arrows(950,950,1220,1220, length=0.1, lwd=2) # place the a) text(130,2500,"a)", cex=2) ### figure 1, right pan el ################## par(mar=c(5,6,3,2)+0.1) plot(d2,type="l",lwd=4, las=1, bty="l", cex.lab = 2, cex.axis=2, xlim=c(0,50), ylim=c(0,1), xlab="Ring Number", ylab="") # mtext(text="Fraction Hard bottom or Hard bottom Use", side=2, line=3.7, cex=1. 8) # add individual fish with dashed lines #for (i in 1:length(cTagNames)){ # points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l") # text(x=100,y=as.numeric(tail(allfish[[i]]$HBuse,1))+0.01, labels=tail(allfish[[i]]$tagName,1)) #} # f60200 i=1 poi nts(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=1, lwd=2) # f60400, lowest of triplet i=2 points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=2, lwd=2) # f60900, mid of triplet i=3 points(allfish[[i]]$radius, allfish[[i]]$HBuse, type= "l", lty=3, lwd=2) # f60300 i=4 points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=4, lwd=2) # f61100, highest of triplet i=5 points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=5, lwd=2) # all fish i=6 points(allfish[[i]]$radius, all fish[[i]]$HBuse, type="l", lty=2, lwd=4) # place the b) text(1,1,"b)", cex=2) # add a legend legend(4, 1.04, legend=c("Fraction hard bottom cover", "ID 1", "ID 2",


407 "ID 3", "ID 4", "ID 5", "All individuals"), seg.len=4, lty=c(1,4,5,3,2,1,2), lwd=c( 4,2,2,2,2,2,4)) # check the stretching to that the map x and y axes cross at the zero hash marks ###################################################################### ########## # Figure 2. ADCP data and lunar phase in ggplot ########################## ############################################ ### # ADCP plots # ...for this plot I want the lunar phase to show from the beginning of the # ...2007 depoyment even though there's no water flow data...to do that I'll # ...make up data lines which will be emp ty except for the lunar phase stuff library(reshape) library(ggplot2) # import ADCP data ad = importADCPdata() # get just 2007 and 2008 ad = ad[ad$utime < 1230768000, ] #1230768000 = 2009 Jan 1 midnight GMT ad$datiL = as.POSIXct(ad$datiL) # pick only s ome columns ad = ad[,c(3,6,11,14)] ad[,3] = ad[,3]/1000 # create empty data lines # tagging start on 9 Dec 16:01:00, ADCP starts on 19 Dec 16:01:00. That's # 11 days and 240 hours, with data every 10min...=1440 data lines dday = rep(9:19,each=144) hhour = rep(0:23,each=6,times=11) mmin = rep(seq(1,51, by=10),264) temp1 = paste("2007 12 ",dday," ",hhour,":",mmin,":00",sep="") temp2 = strptime(temp1, "%Y %m %d %H:%M:%S", tz="EST5EDT") temp3 = as.POSIXct(temp2, origin="1970 1 1", tz="EST5EDT") temp4 = data .frame(datiL = temp3[temp3 < ad$datiL[1]]) # now combine these newad=merge(ad, temp4, by="datiL", all=TRUE) # add a year indicator newad$year = as.factor(ifelse( newad$datiL < as.POSIXct("2008 05 01", origin="1970 1 1", tz="EST5EDT"), 2007,2008)) # no w add a column just for the figure captions, since I don't know how to change them manually


newad$deployment = as.factor(ifelse(
  newad$datiL < as.POSIXct("2008-05-01", origin="1970-1-1", tz="EST5EDT"),
  "2007 Deployment","2008 Deployment"))

# add a column for creating a sine wave indicating lunar phase
newad$moon = NA
# split the years apart while the moon is added
nad7 = newad[newad$year == 2007, ]
nad8 = newad[newad$year == 2008, ]

# add an index column 2007...
# ...and each day has 24hrs with 6 lines each = 144 lines, and there are 30 days
# ...in the lunar cycle...30*144 = 4320 lines per 2 pi radians (or 2160 lines = pi rads)
nad7$index = (1:nrow(nad7)*pi/2160) + pi # add pi to make it 1 at full moon and -1 at new
# ... and by chance 9 Dec (the first day) is a new moon so I don't have to
# ... shift left or right except to make the full moon be up at 1
# create the lunar curve
nad7$moon = (cos(nad7$index)+1)/2
# look and add lines where the new and full moons should be, as a check...looks good
plot(nad7$datiL, nad7$moon, type="l")
abline(v=as.POSIXct("2007-12-09", origin="1970-1-1", tz="EST5EDT")) #new moon
abline(v=as.POSIXct("2007-12-24", origin="1970-1-1", tz="EST5EDT")) #full
abline(v=as.POSIXct("2008-01-08", origin="1970-1-1", tz="EST5EDT")) #new

# add an index column 2008...
nad8$index = ((1:nrow(nad8))*(pi/2160)) - (497*(pi/2160))
# ... by looking at it I see that 14 Oct (the first full moon) is 497 lines down
#
# create the lunar curve
nad8$moon = (cos(nad8$index)+1)/2
plot(nad8$datiL, nad8$moon, type="l")
abline(v=as.POSIXct("2008-10-15", origin="1970-1-1", tz="EST5EDT")) # full moon
abline(v=as.POSIXct("2008-10-29", origin="1970-1-1", tz="EST5EDT")) # new
abline(v=as.POSIXct("2008-11-13", origin="1970-1-1", tz="EST5EDT")) # full
abline(v=as.POSIXct("2008-11-28", origin="1970-1-1", tz="EST5EDT")) # new
abline(v=as.POSIXct("2008-12-13", origin="1970-1-1", tz="EST5EDT")) # full
abline(v=as.POSIXct("2008-12-27", origin="1970-1-1", tz="EST5EDT")) # new
#this looks close enough

# put them back together without the index or the year
newerad = rbind(nad7[,c(-5,-8)], nad8[,c(-5,-8)])
names(newerad) = c("Date", "Temperature (Celsius)", "Current Speed (m/s)",
  "Current Direction", "Deployment", "Lunar Phase")
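
# Aside (a hedged sketch, not part of the original script): the lunar curve can also be
# computed directly from time rather than from row indices, using a reference new moon
# and the mean synodic month (about 29.53 days). The 9 Dec 2007 new moon is taken from
# the comment above; the function name is illustrative.
lunarPhase = function(dati, refNewMoon = as.POSIXct("2007-12-09", tz="EST5EDT")){
  synodic = 29.53 * 86400  # mean synodic month, in seconds
  frac = (as.numeric(difftime(dati, refNewMoon, units="secs")) %% synodic) / synodic
  (1 - cos(2*pi*frac)) / 2 # 0 at new moon, 1 at full moon
}
# e.g. plot(nad7$datiL, lunarPhase(nad7$datiL), type="l") to compare with nad7$moon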


409 meltedad = melt(newerad, id.v ars=c("Date", "Deployment")) mad2007 = subset(meltedad,Deployment=="2007 Deployment") mad2008 = subset(meltedad,Deployment=="2008 Deployment") names(mad2007) = c("2007 Deployment", "year", "variable", "value") names(mad2008) = c("2008 Deployment", "year", "variable", "value") ggplot(meltedad, aes(x=Date, y=value)) + geom_line() + facet_grid(variable~Deployment, space="fixed",scales="free") + scale_x_datetime(major = "14 days", format="%b %d") + # specifying these works around a bug theme_bw() + scale_y_continuous(' ') + # instead of...opts(axis.title.y = theme_text(colour = 'white')) + opts(axis.title.x = theme_text(size=20)) + opts(axis.text.x = theme_text(size = 15)) + opts(axis.text.y = theme_text(size = 15)) aaa Here's Ben's wo rk on this figure. see the original R code (freespace.R) in an email g1 < ggplot(meltedad,aes(x=Date,y=value))+geom_line()+theme_bw()+ scale_x_datetime(major="14 days") g1 + facet_grid(variable~year,scales="free",space="free") g1 + facet_grid(variabl e~year,scales="free",space="fixed") library(gridExtra) g2 < ggplot(subset(meltedad,year==2007), aes(x=Date,y=value))+geom_line()+theme_bw()+ scale_x_datetime(major="14 days")+ facet_grid(variable~year,scale="free") g3 < g2 %+% subset( meltedad,year==2008) ## suppress labels g2B < g2+opts(strip.background=theme_blank(), strip.text.x=theme_blank(),strip.text.y=theme_blank()) g3B < g3 + opts(axis.text.y=theme_blank(),axis.title.y=theme_blank()) n2007 < length(d2007) n2 008 < length(d2008) tot < n2007+n2008


410 grid.show.layout(grid.layout(1,2,widths=unit(c(n2007/tot,n2008/tot),"null"))) grid.arrange(g2B,g3B,ncol=2,widths=unit(c(n2007/tot,n2008/tot),"null")) bbb End Ben's work on this figure ############################# ######################################### ########## # Figure 3. rose plots of current directions ad = importADCPdata() # get just 2007 and 2008 ad = ad[ad$utime < 1230768000, ] #1230768000 = 2009 Jan 1 midnight GMT ad$datiL = as.POSIXct(ad$datiL) # pic k just what I want newad = ad[,c(3,14)] # add a year indicator #newad$year = as.factor(ifelse( # newad$datiL < as.POSIXct("2008 05 01", origin="1970 1 1", tz="EST5EDT"), # 2007,2008)) # now add a column just for the figure captions, since I don't know h ow to change them manually newad$deployment = as.factor(ifelse( newad$datiL < as.POSIXct("2008 05 01", origin="1970 1 1", tz="EST5EDT"), "2007 Deployment","2008 Deployment")) ggplot(newad,aes(x=dirL))+ geom_bar(binwidth=10)+ facet_wrap(~deploy ment) + theme_bw() + coord_polar(start= pi/20) + # I don't know why 0 isn't at top, but 'start' to fix it opts(axis.text.x = theme_text(size = 12)) + #opts(title="Water Flow Direction") + # scale_y_continuous(' ') + # instead of ...opts(axis.t itle.y = theme_text(colour = 'white')) + scale_x_continuous(' ') ggplot(newad,aes(x=dirL))+stat_bin(binwidth=10,aes(y=19*..density..))+ scale_x_continuous(limits=c(0,360),breaks=seq(0,360,by=45))+ #geom_bar(binwidth=10)+ facet_wrap(~deployment) + theme_bw() + coord_polar() + # I don't know why 0 isn't at top, but 'start' to fix it opts(axis.text.x = theme_text(size = 12)) + #opts(title="Water Flow Direction") + # scale_y_continuous(' ') + # instead of ...opts( axis.title.y = theme_text(colour = 'white')) +


  labs(x="",y="Proportion")

################################################################################
# Figure 4. two panel: a) Easting-Northing b) hourly fraction PS
cTag = 1

# E-N plots with colors telling something
# ... for altitude I want the shallower points on top, so sort by altitude
z1sorted = z1[order(z1$altitude), ]
z2sorted = z2[order(z2$altitude), ]
z3sorted = z3[order(z3$altitude), ]
z4 = z4
z5 = z5
# make legend have a capital 'A'
z1sorted$Altitude=z1sorted$altitude
z2sorted$Altitude=z2sorted$altitude
z3sorted$Altitude=z3sorted$altitude
z4$Altitude=z4$altitude
z5$Altitude=z5$altitude

# EN plot, pick different fish
d = z5

# calculate the contour lines...look in 'ggcont.R' from Ben...but this doesn't quite work now
# ... remember that the easting and northing have been shifted to show (0,0) in plot
# ... so shift the hrlims also...
z from above lleft=8434
z from above bbottom=555
hrlimsShifted = hrlims - c(lleft, lleft, bbottom, bbottom)
#
prob <- c(0.5,0.95) ## utilization regions to plot
dens <- kde2d(d$easting,d$northing,n=250,lims=hrlimsShifted)
dx <- diff(dens$x[1:2])
dy <- diff(dens$y[1:2])
sz <- sort(dens$z)
c1 <- cumsum(sz) * dx * dy
levels <- sapply(prob, function(x) {
  approx(c1, sz, xout = 1 - x)$y
})
dC <- contourLines(dens$x, dens$y, dens$z, level = levels)
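
# Aside (a hedged sketch, not from the original script): the 'levels' computed above are
# the density values whose contours enclose the requested probability mass; wrapping the
# same steps in a helper ('kdeLevel' is an illustrative name) makes that explicit.
kdeLevel = function(dens, prob){
  cellArea = diff(dens$x[1:2]) * diff(dens$y[1:2])
  sz = sort(dens$z)
  cdf = cumsum(sz) * cellArea        # probability mass at or below each density value
  approx(cdf, sz, xout = 1 - prob)$y # density threshold enclosing 'prob' of the mass
}
# e.g. contourLines(dens$x, dens$y, dens$z, levels = kdeLevel(dens, 0.5))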


412 dC2 = dC # I added this...ASK BEN IF THIS IS RIGHT ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ ## transform contourLines into a useful format for (i in 1:length(dC)) dC2[[i]] < with(dC[[i]],data.frame(x,y,level=prob[match(level,levels)],f=i)) dC3 < transform(do.call(rbind,dC2),level=factor(level)) d = z3sorted ggplot(d, aes(x=easting, y=northing, colour=Altitude)) + # geom_point(alpha=0.05) + #geom_path(data=dC3,aes(group=f,col="red")) theme_bw() + coord_cartesian(xlim=c(0,280), ylim=c(0,280)) + scale_y_continuous("Northing (m)", breaks = seq(0,280,by=40)) + scale_x_continuous("Easting (m)", breaks = seq(0,280,by=40)) + opts(axis.title.x = theme_text(size = 20)) + opts(axis.title.y = theme_text(size=20, angle=90)) + opts(axis.text.x = theme_text(size = 15)) + opts(axis.text.y = theme_text(size = 15)) # add a) grid.text("a)", x = uni t(0.15, "npc"), y = unit(0.96, "npc"), hjust=0, vjust=1, gp=gpar(fontsize=20)) ## right and top justified # add contour lines ggplot(z4, aes(x=easting, y=northing)) + geom_point(alpha=0.05) + theme_bw() + coord_cartesian(xlim=c(0,280), ylim=c(0,280)) + scale_y_continuous("Northing (m)", breaks = seq(0,280,by=40)) + scale_x_continuous("Easting (m)", breaks = seq(0,280,by=40)) + opts(axis.title.x = theme_text(size = 20)) + opts(axis.title.y = theme_text(size=20, angle=90)) + o pts(axis.text.x = theme_text(size = 15)) + opts(axis.text.y = theme_text(size = 15)) # add a) grid.text("a)", x = unit(0.15, "npc"), y = unit(0.96, "npc"), hjust=0, vjust=1, gp=gpar(fontsize=20)) ## right and top justified plot(z1$easting, z1$northing, pch=19, alpha=0.1) ### figure 4, right panel ################## # fraction of hourly PS before the minute mean...so re filter the data # read in 'tag' data then filter it without the minuteMean load("C:/zy/Telemetry/R summary files/tag 2011M ar16.rdata") ttf = list() # filtered tag data


for (i in 1:length(cTagNames)){
  print(cTagNames[i])
  ttf[[i]] = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8, minuteMean=F)
}
# now remove 'tag'
rm(tag)

cTag=5 # pick one
dat = ttf[[cTag]]$data
time = dat$datiL
dat <- subset(dat,select=c("northing","easting"))
timecat <- cut(time, breaks="hour")
datsplit <- split(dat,timecat)
hourlyFrac = length(datsplit)/(60*30)
fracHits = vector(length=length(datsplit))
for(i in 1:length(datsplit)){
  fracHits[i]=nrow(datsplit[[i]])/(60*30)
}

# draw the right plot of fig 4
# this code is for 2007 which does the dates nicely...
par(mar=c(5,6,2,2)+0.1)
dates = as.POSIXct(levels(timecat))
plot(dates,fracHits,type="l", las=1, bty="l", cex.lab=2, cex.axis=1.7,
  ylim=c(0,0.8), xlab="Date", ylab=""
)
text(as.POSIXlt(levels(timecat))[10], 0.8, "b)", cex=2)
mtext("Fraction of Transmissions", side=2, line=4, cex=2)

# this code is for 2008 which doesn't do dates nicely...
par(mar=c(5,6,2,2)+0.1)
dates = as.POSIXct(levels(timecat))
whichdates = seq(14,1204,by=336) # dates starting 18 Oct going every 14 days
plot(dates,fracHits,type="l", las=1, bty="l", cex.lab=2, cex.axis=1.7, xaxt="n",
  ylim=c(0,0.8), xlab="Date", ylab=""
)
axis(1,at=dates[whichdates], format(dates[whichdates], "%b %d"), cex.axis=1.5)
text(as.POSIXlt(levels(timecat))[10], 0.8, "b)", cex=2)
mtext("Fraction of Transmissions", side=2, line=4, cex=2)
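
# Aside (a hedged sketch, not from the original script): the 60*30 divisor above implies
# a nominal 2-s transmission interval (1800 transmissions per hour); a small helper makes
# that assumption explicit ('hourlyFraction' and 'pingInterval' are illustrative names).
hourlyFraction = function(dati, pingInterval = 2){
  perHour = table(cut(dati, breaks = "hour"))
  as.numeric(perHour) / (3600 / pingInterval)
}
# e.g. fracHits2 = hourlyFraction(time)  # 'time' as defined above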


414 #### figure 4, left panel ################## ## DON'T USE HEXBIN ANYMORE hexbin of PS for one fish # #cTag = 5 #cmd=md[[2]] # pick a tag ## I want the axes to start from zero or close...so subtract ## ... also I want to have the same scale for all 5 fish, so I'll pick a ## ... south east most point that I want the gra phing limits to be, then to make ## ... the plotting limits be what I want, and uniform among all 5 fish I'll add ## ... a point in each of the four corners ## ## I get my range by looking at tagfm manually, and at md[[2]]$plotLimits which I ## made to be square...easting (8434 8721), northing (555,842) ## First, add points to all corners: bottom left, top left, top right, bottom right #lleft=8434; rright=8721; bbottom=555; ttop=842; #easting = c(tagfm[[cTag]]$data$easting, lleft, lleft, rright, rright) # lleft ...already done at top #northing = c(tagfm[[cTag]]$data$northing, bbottom, ttop, ttop, bbottom) # bbottom ...already done at top #eLimits = NA #nLimits = NA ## In these plots the reef location is at #md[[1]]$reefEN$easting lleft #md[[1]]$reefEN $northing bbottom # ## for now, I can't figure out how to manually control the plotting limits in ## hexbin, so I'll add a point in each of the four corners # #bins = hexbin(easting, northing, xbins=40) # ##plot(bins,main="", xlab="Easting (m)", ylab="Northing (m)") ## is gplot.hexbin any better? # ## par(mar=c(1,1,1,1)+0.1) plot(bins,main="", xlab="Easting (m)", ylab="Northing (m)") hexbinplot(bins, xbnds=c(0,500), ybnds=c(0,500)) # To make Appendix figures like these two just change c Tag and re evaluate # I want an EN plot for the dissertation talk # pick a fish tag, don't start with shifted EN data # this code has been moved to 'plots for dissertation talk...it needs z1 z5 mostly # remember


415 # now the reef location is md[[1]]$r eefEN$easting # = 140.76m E md[[1]]$reefEN$northing # = 136.2mN xLimits = md[[2]]$plotLimits$easting yLimits = md[[2]]$plotLimits$northing d = z3 ###################################################################### ########## # Figure 5. four p anels: histograms of interval length, DTR, SPDG, ALT # dtr histogram par(mfrow=c(2,3)) bks = c(20,20,20,30,30) for (i in 1:length(tagfm)){ cTag=i; #cmd=md[[2]] hist(tagfm[[cTag]]$data$dtr, breaks=bks[i], freq=T, main=tagfm[[cTag]]$tagName, xlim= c(0,100)) abline(v=median(tagfm[[cTag]]$data$dtr), col="red") } hist(z0$dtr, breaks=90, freq=F, xlim=c(0,100), ylim=c(0,0.05), col="grey", las=1, main="", xlab="Distance to the Reef (m)") abline(v=median(z0$dtr),lwd=4) # speed histogram par(mfrow =c(2,3)) for (i in 1:length(tagfm)){ cTag=i; #cmd=md[[2]] hist(tagfm[[cTag]]$data$speed, breaks=30, freq=F, main=tagfm[[cTag]]$tagName) abline(v=mean(tagfm[[cTag]]$data$speed), col="red") } hist(z0$speed, breaks=30, freq=F, col="grey", las=1, mai n="", xlab="Gag Speed (m/s)") abline(v=median(z0$speed),col="black",lwd=4) # interval histogram # ... for this you want the filtered, but not minuteMeaned data # ... too bad...I have to get that fresh now...you need some code from 'chapter 3 xxxxx.r" cT agNames


416 # import the 'tag' dataset load("C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata") ttf = list() # filtered tag data for (i in 1:length(cTagNames)){ print(cTagNames[i]) ttf[[i ]] = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8, minuteMean=F) } # combine them tempInterval = c(ttf[[1]]$data$interval, ttf[[2]]$data$interval, ttf[[3]]$data$interval, ttf[[4]]$data$interval, ttf[[5]]$data$interval) par( mfrow=c(2,3)) bks = seq(0,max(tempInterval),by=2) for (i in 1:length(cTagNames)){ cTag=i; hist(ttf[[cTag]]$data$interval, breaks=bks, freq=T, main=ttf[[cTag]]$tagName, xlim=c(0,60)) abline(v=median(ttf[[cTag]]$data$interval), col="red") } par(mar=c(5,4,2,2)+0.1) hist(tempInterval, breaks=bks, freq=F, col="grey", las=1, main="", cex.lab=1.5, cex.axis=1.5, xlim=c(0,60), ylim=c(0,0.25), xlab="Interval Length (s)", y lab="") abline(v=median(tempInterval),lwd=4) mtext("Density", side = 2, line = 4, cex = 1.5) ### Altitude distribution par(mfrow=c(2,3)) possAlts = 1:13 for (i in 1:3){ # only the 2007 fish have altitudes # ...how do I want to round alt = ceiling(t agfm[[i]]$data$altitude[ tagfm[[i]]$data$altitude >= 0 ]) altVec = c() for (j in 1:length(possAlts)){ altVec = rbind(altVec, sum(alt == possAlts[j],na.rm=T)) } altVec = altVec/sum(altVec,na.rm=T) # remove altitudes 10 13 because they are or a re close to zero ... or not


417 # altVec = head(altVec,9) # possAlts = head(possAlts,9) barplot(altVec, horiz=T, beside=T, main=paste(tagfm[[i]]$tagName,"Altitude"), xlab="Frequency", ylab="Altitude (m)", cex.axis=1.5, cex.names=1.5, cex.lab= 1.5, names=possAlts, ylim=c(0,15)) # gap.barplot(altVec,gap=c(0.25, 0.6),xlab="Altitude (m)", # ytics=c(0,0.05, 0.1, 0.15, 0.2, 0.6383233), # xtics=possAlts, xaxlab=possAlts, # ylab="Frequency",horiz=T, # main=paste(tagfm[[i]]$tagName,"D istribution in the Water Column"), # col=rep("grey",length(altVec))) abline(h=mean(tagfm[[i]]$data$altitude, na.rm=T),col="red",lwd=2) } # now do all fish together alt = ceiling(z0$altitude[ z0$altitude >= 0 ]) altVec = c() for (j in 1:length(pos sAlts)){ altVec = rbind(altVec, sum(alt == possAlts[j],na.rm=T)) } altVec = altVec/sum(altVec) barplot(altVec, horiz=T, beside=T, col="grey", las=1, main="", xlab="Density", ylab="Gag Height Above Seafloor (m)", cex.axis=1.5, cex.names=1.5, cex. lab=1.5, names=possAlts, xlim=c(0,0.7), ylim=c(1,13) ) # get a regular histogram for altitude distribution, but group the 0 into 1 hist(z0$altitude, breaks=30, freq=F, main="", col="grey", las=1, xlim=c(0,13), #ylim=c(0,30000), xlab="Altitude (m)") abline(v=median(z0$altitude,na.rm=T),lwd=4) ###################################################################### ######## ###################################################################### ######## # a four panel figure...using stuff generated above


418 par(mfrow=c(2,2)) # 1. Time interval between detections par(mar=c(5,5,2,2)+0.1) bks = seq(0,max(tempInterval),by=2) hist(tempInterval, breaks=bks, freq=F, col="grey", las=1, main="", cex.lab=1.5, cex.axis=1.5, xlim=c(0,40), ylim=c(0,0.25), xlab="", ylab="") abline(v=median(tempInterval),lwd=3) mtext("Density", side = 2, line = 3.8, cex = 1.5) mtext("Interval Length (s)", side = 1, line = 3, cex = 1.5) text(0,0.25,"a)", cex=1.5) # 2. distance to reef par(mar=c(5,4,2,2)+0.1) bks = seq(0,max(z0$dtr),by =1) hist(z0$dtr, breaks=90, freq=F, col="grey", las=1, cex.lab=1.5, cex.axis=1.5, xlim=c(0,150), ylim=c(0,0.05), xlab="", ylab="", main="") abline(v=median(z0$dtr),lwd=3) mtext("Density", side = 2, line = 4, cex = 1.5) mtext("Distance from Reef (m)", s ide = 1, line = 3, cex = 1.5) text(0,0.05,"b)", cex=1.5) # 3. Gag Travel Speed par(mar=c(5,5,2,2)+0.1) hist(z0$speed, breaks=30, freq=F, col="grey", las=1, cex.lab=1.5, cex.axis=1.5, xlim=c(0,0.8), ylim=c(0,4.5), xlab="", ylab="", main="") abline(v=me dian(z0$speed),col="black",lwd=4) mtext("Density", side = 2, line = 2.8, cex = 1.5) mtext("Travel Speed (m/s)", side = 1, line = 3, cex = 1.5) text(0,4.5,"c)", cex=1.5) # 4. altitude...this one from below par(mar=c(5,4,2,2)+0.1) barplot(altVec, horiz=T, b eside=T, col="grey", las=1, xlab="", ylab="", cex.axis=1.5, cex.names=1.5, cex.lab=1.5, names=c(1,"",3,"",5,"",7,"",9,"",11,"",13), #possAlts, xlim=c(0,0.7), ylim=c(1,13) ) mtext("Altitude (m)", side = 2, line = 3, cex = 1.5) mtext("Density", side = 1, line = 3, cex = 1.5) text(0.02,13,"d)", cex=1.5)


419 ###################################################################### ########## # Figure 6. two panels: KDE stabilization curves # ...because I put 'results' in order of size and this wants them in original order results = results[c(5,4,3,1,2) ,] # something to hold the changing HR answers for all fish allFishHR = list() cProb = 0.5 for (cTag in 1:length(tagfm)){ if(results$deployment[cTag] == "hb20 07"){ cmd = md[[1]] numUniqueDays = length(unique(tagfm[[cTag]]$data$datiL$yday)) } else { cmd = md[[2]] numUniqueDays = length(unique(tagfm[[cTag]]$data$datiL$yday)) } # get only the easting/northing data d1 = subset(tagfm[[cTag]] $data, select=c(utime,datiL,easting,northing)) # create a vector of the index of the day of the run, 1 52 for example # ...there's got to be a prettier way of doing this, but... startOfFirstDay = unclass(as.POSIXct( strptime(cmd$taggingDay, "%Y/% B/%d", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT"))[1] endOfAllDays = startOfFirstDay + 86400 1:numUniqueDays # something to hold the changing HR for one fish hrVSdays = data.frame("numDays" = NA, "hrSize" = NA) for (i in 3:numUni queDays){ # zyzyzy Try this without the first two days # grab only position solutions during the first i days d2 = d1[d1$utime < endOfAllDays[i],] # ...but drop data from the first two days d2 = d 2[d2$utime > endOfAllDays[2] ,] print(i) if(nrow(d2) > 0){ # calculate the home range for these PS hrSize = homeRange( easting = d2$easting, northing = d2$northing, tagName = paste(tagfm[[cTag]]$tagName, ", ", i, days", sep=""), lims = hrlims, #c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing),


420 reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE ) } else {hrSize = 0} # save the answer hrVSdays[i ,] = c(i, hrSize) } # end for loop # a plot plot(hrVSdays$numDays, hrVSdays$hrSize, pch=19, main=paste(cTagNames[cTag], ": ", cProb*100, "% HR", sep="") ) # save the answer for this fish allFishHR[[cTag]] = hrVSdays } # end cTag fo r loop z hr50 = allFishHR z hr95 = allFishHR # a two panel plot, par(mfrow=c(1,2)) # left pane, 50% KDE par(mar=c(4,5.5,1,0.5)+0.1) plot(hr50[[4]],type="l", las=1, bty="l", cex.lab=1.5, cex.axis=1.5, xlim=c(2.3,60), ylim=c(46,1200), # these are to make (0,0) at the corner xlab="", ylab="" ) for (cTag in 1:length(hr50)){ points(hr50[[cTag]], type="l", lwd=2) } mtext("Number of Days", side=1, line=2.5, cex=1.5) mtext("50% KDE ( )", side=2, line=3.9, cex=1.7) mtext(expression(m^2),side=2 line=4, adj=0.621, cex=1.7) text(x=c(38,38,38,51,51)+4, y=results$kde50, labels=c("ID 5","ID 4", "ID 3","ID 1","ID 2"), cex=1.5) #c(results$tagName)) # add the a) text(2.3,1200,"a)",cex=1.5) # f60200 = ID5, f60400=ID4, f60900=ID3, f60300=ID1, f61100 =ID2 #abline(h=results$kde50) # left pane, 95% KDE par(mar=c(4,5.5,1,0.5)+0.1)


421 plot(hr95[[4]],type="l", las=1, bty="l", cex.lab=1.5, cex.axis=1.5, xlim=c(2.3,60), ylim=c(245,6400), # these are to make (0,0) the corner xlab="", ylab="" ) for (cTag in 1:length(hr95)){ points(hr95[[cTag]], type="l", lwd=2) } mtext("Number of Days", side=1, line=2.5, cex=1.5) mtext("95% KDE ( )", side=2, line=4.4, cex=1.7) mtext(expression(m^2),side=2, line=4.5, adj=0.621, cex=1.7) text(x=c(38,38,38,51,51)+4, y=results$kde95, labels=c("ID 5","ID 4", "ID 3","ID 1","ID 2"), cex=1.5) #c(results$tagName)) # add the b) text(2.3,6400,"b)",cex=1.5) # ...also create these same curves for 2008 fish as if there had onl y been 50m arrays # ...you'll need 'rotate()' and 'chop()' which are in 'testing array spacing.r' for now # ...the two rish of interest are 2008 f60300, f61100, tagfm[[4]], tagfm[[5]] cmd=md[[2]] s60300 = subset(tagfm[[4]]$data, select=c(utime,easting,nor thing)) r60300 = rotate(s60300$utime,s60300$easting, s60300$northing, spin=45) c60300 = chop(r60300$fisht, r60300$fishx, r60300$fishy, spacing=50) u60300 = rotate(c60300$fisht,c60300$fishx,c60300$fishy, spin= 45) plot(s60300$easting,s60300$northing,pch=19) points(u60300$fishx,u60300$fishy,pch=19,col="blue") s61100 = subset(tagfm[[5]]$data, select=c(utime,easting,northing)) r61100 = rotate(s61100$utime,s61100$easting, s61100$northing) c61100 = chop(r61100$fisht,r61100$fishx, r61100$fishy, spacing=50) u611 00 = rotate(c61100$fisht,c61100$fishx,c61100$fishy, spin= 45) plot(s61100$easting,s61100$northing,pch=19) points(u61100$fishx,u61100$fishy,pch=19,col="blue") # for 2008 fish see how things would have changed if the array spacing had been different cProb = 0.5 for (cTag in 4:5){ cmd = md[[2]] numUniqueDays = length(unique(tagfm[[4]]$data$datiL$yday)) # data for 2008 deployment # get only the easting/northing data if(cTag==4){d1=u60300} else {d1=u61100}


422 names(d1) = c("utime","easting","northing ") # create a vector of the index of the day of the run, 1 52 for example # ...there's got to be a prettier way of doing this, but... startOfFirstDay = unclass(as.POSIXct( strptime(cmd$taggingDay, "%Y/%B/%d", tz="EST5ED T"), origin="1970 1 1", tz="EST5EDT"))[1] endOfAllDays = startOfFirstDay + 86400 1:numUniqueDays # something to hold the changing HR for one fish hrVSdays = data.frame("numDays" = NA, "hrSize" = NA) for (i in 3:numUniqueDays){ # zyzyzy Try this without the first two days # grab only position solutions during the first i days d2 = d1[d1$utime < endOfAllDays[i],] # ...but drop data from the first two days d2 = d2[d2$utime > endO fAllDays[2] ,] print(i) if(nrow(d2) > 0){ # calculate the home range for these PS hrSize = homeRange( easting = d2$easting, northing = d2$northing, tagName = paste("u60300 or u61100", ", ", i, days", sep=" "), lims = hrlims, #c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing), reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE ) } else {hrSize = 0} # save the answer hrVSdays[i,] = c(i, hrSize) } # end for loop # a plot plot(hrVSdays$numDays, hrVSdays$hrSize, pch=19, main=paste(cTagNames[cTag], ": ", cProb*100, "% HR", sep="") ) # save the answer for this fish allFishHR[[cTag+2]] = hrVSdays } # end cTag for loop z hr50[6] = allFishHR[6]; hr50[7] = allFishHR[7]; z hr95[6] = allFishHR[6]; hr95[7] = allFishHR[7];


423 # a two panel plot, par(mfrow=c(1,2)) # left pane, 50% KDE par(mar=c(4,5.5,1,0.5)+0.1) plot(hr50[[4]],type="l", las=1, bty="l", cex.lab=1. 5, cex.axis=1.5, xlim=c(2.3,60), ylim=c(46,1200), # these are to make (0,0) at the corner xlab="", ylab="" ) for (cTag in 1:5){ points(hr50[[cTag]], type="l", lwd=2) } mtext("Number of Days", side=1, line=2.5, cex=1.5) mtext("50% KDE ( )", side= 2, line=3.9, cex=1.7) mtext(expression(m^2),side=2, line=4, adj=0.621, cex=1.7) text(x=c(38,38,38,51,51)+4, y=results$kde50, labels=c("ID 5","ID 4", "ID 3","ID 1","ID 2"), cex=1.5) #c(results$tagName)) # add the a) text(2.3,1200,"a)",cex=1.5) # f60200 = ID5, f60400=ID4, f60900=ID3, f60300=ID1, f61100=ID2 # add the dashed lines for hr50 estimates with smaller array spacing points(hr50[[6]], type="l", lty="dashed", lwd=2) points(hr50[[7]], type="l", lty="dashed", lwd=2) # right pane, 95% KDE par(mar=c(4,5.5,1,0.5)+0.1) plot(hr95[[4]],type="l", las=1, bty="l", cex.lab=1.5, cex.axis=1.5, xlim=c(2.3,60), ylim=c(245,6400), # these are to make (0,0) the corner xlab="", ylab="" ) for (cTag in 1:5){ points(hr95[[cTag]], type="l", lwd=2) } mtext("Number of Days", side=1, line=2.5, cex=1.5) mtext("95% KDE ( )", side=2, line=4.4, cex=1.7) mtext(expression(m^2),side=2, line=4.5, adj=0.621, cex=1.7) text(x=c(38,38,38,51,51)+4, y=results$kde95, labels= c("ID 5","ID 4", "ID 3","ID 1","ID 2"), cex=1.5) #c(results$tagName)) # add the b) text(2.3,6400,"b)",cex=1.5) points(hr95[[6]], type="l", lty="dashed", lwd=2) points(hr95[[7]], type="l", lty="dashed", lwd=2) # add one more "ID1" label


text(x=51+4,y=4300,labels="ID 1",cex=1.5)

# now calculate how much the final HR estimate decreased for the 2008 fish
# from using the 125m array to using the simulated 50m array
# ...in hr50, list elements 4 and 6 go together
# ... and elements 5 and 7 go together
( hr50[[4]]$hrSize[51] - hr50[[6]]$hrSize[51] ) / hr50[[4]]$hrSize[51] #=3.8%
( hr50[[5]]$hrSize[51] - hr50[[7]]$hrSize[51] ) / hr50[[5]]$hrSize[51] #=1.5%
( hr95[[4]]$hrSize[51] - hr95[[6]]$hrSize[51] ) / hr95[[4]]$hrSize[51] #=12.5%
( hr95[[5]]$hrSize[51] - hr95[[7]]$hrSize[51] ) / hr95[[5]]$hrSize[51] #=6.6%

################################################################################
################################################################################
### Bootstrapping home range estimates
################################################################################
################################################################################
bootHR <- function(dat, by="day", nboot=100, prob=0.95, progressbar=FALSE,
  bootplot=FALSE, pts=FALSE, drawplot=FALSE, ...
  # 'by' will subset the data by whatever you choose, say 'day', then the
  # bootstrap will pick randomly from the 'days'
  ) {
  if (progressbar) {
    require(tcltk)
    pb <- tkProgressBar("hr bootstrap",min=0,max=nboot)
  }
  time <- dat$datiL
  dat <- subset(dat,select=c("northing","easting"))
  timecat <- cut(time, breaks=by)
  datsplit <- split(dat,timecat)
  nt <- length(levels(timecat))
  bootres <- numeric(nboot)
  if (bootplot) with(dat,plot(easting,northing,pch="."))
  for (i in 1:nboot) {
    bootsamp <- sample(nt,size=nt,replace=TRUE)
    bootdat <- do.call(rbind,datsplit[bootsamp])
    if (bootplot) with(bootdat,points(easting,northing,pch=".",col=i+1))
    bootres[i] <- with(bootdat,
      homeRange(easting, northing, prob=prob, pts=pts, drawplot=drawplot, ...)
    )
    if (progressbar) setTkProgressBar(pb,i)
  }


  if (progressbar) close(pb)
  bootres
} # end bootHR

system.time(boottest <- bootHR(dat=z0, by="day", #by = "2 days" or "week"
  prob=0.95, lims=hrlims, nboot=5, progressbar=TRUE, bootplot=TRUE))

# now to put this all together and run it 1000 times for each tag
# create a list to hold the results
hrboots = list(
  list(
    tagName = NA,     # which tag
    originalHR = NA,  # a single number, HR estimate using all data
    bootResults = NA  # a vector holding the 1000 bootstrapped HR estimates
  ),
  list(tagName = NA, originalHR = NA, bootResults = NA),
  list(tagName = NA, originalHR = NA, bootResults = NA),
  list(tagName = NA, originalHR = NA, bootResults = NA),
  list(tagName = NA, originalHR = NA, bootResults = NA),
  list(tagName = NA, originalHR = NA, bootResults = NA) # this one for all fish combined
)

#####################################################################
cProb = 0.95
for (i in 1:length(tagfm)){
  # grab things pertinent to this tag
  for (j in 1:length(md)){
    if(tagfm[[i]]$deployment == md[[j]]$deployment){
      cmd = md[[j]]
    }
  } # end for j loop
  # name
  hrboots[[i]]$tagName = tagfm[[i]]$tagName
  # find HR estimate with all data
  # ...except I don't want to use days 1 or 2
  hrUtime = tagfm[[i]]$data$utime
  hrEasting = tagfm[[i]]$data$easting
  hrNorthing = tagfm[[i]]$data$northing
  # ... figure out the end of day 2...pick out the day and add 2
  tempday = cmd$taggingDay
  substr(tempday,10,12) = as.character(as.numeric(substr(cmd$taggingDay,10,12))+2)
  tempUtime=as.POSIXct(strptime(tempday, "%Y/%b/%d", tz="EST5EDT"), origin="1970-1-1")
  # drop all data before tempUtime
  hrEasting = hrEasting[hrUtime > tempUtime ]


426 hrNorthing = hrNorthing[hrUtime > tempUtime ] hrboots[[i]]$originalHR = homeRange(hrEasting, hrNorthing, prob=cProb, lims=hrlims, tagName=tagfm[[i]]$tagName, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=T RUE, drawplot=TRUE) # now the boot strapping...first put the hrUtime, hrEasting, hrNorthing into a dataframe hrDF = data.frame(easting=hrEasting, northing=hrNorthing) hrboots[[i]]$bootResults = bootHR(tagfm[[i]]$data, #by = "day" or "2 days" or "w eek" prob=cProb, lims=hrlims, nboot=1000, progressbar=TRUE, bootplot=FALSE, tagName=tagfm[[i]]$tagName, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE ) } # now calculate the 95% KDE for all fish combined...z0 is required i=6 everyfish = the answer I get with z0 seems funky...do it again with this = hrboots[[i]]$originalHR = bob=homeRange(z0$easting, z0$northing, prob=cProb, lims=hrlims, tagName="All", reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE) hrboots[[i]]$bootResults = sam=bootHR(z0, #by = "day" or "2 days" or "week" prob=cProb, lims=hrlims, nboot=1000, progressbar=TRUE, bootplot=FALSE, tagName="All", reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE ) cTag = 1 plot(rep( cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19, xlim=c(0,6), ylim=c(1000,6000)) cTag = 2 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 3 points(rep(cTag,length(hrboots[ [cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 4 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 5 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hr boots[[cTag]]$bootResults, pch=19) cTag = 6 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults,


427 pch=19) points(1:6,c(hrboots[[1]]$originalHR, hrboots[[2]]$originalHR,hrboots[[3]]$originalHR, hrboots[[4]]$ originalHR, hrboots[[5]]$originalHR, hrboots[[6]]$originalHR), pch=19,col="red") # now pick out the 95% confidence interval (2.5% and 97.5% points), that is # 25, 975 our of 1000 different runs cTag=4 plot(hrboots[[cTag]]$bootResults[order(hrboots[[cT ag]]$bootResults)]) abline(v=c(25,975)) abline(h=hrboots[[cTag]]$originalHR) print(hrboots[[cTag]]$tagName) c(hrboots[[cTag]]$bootResults[order(hrboots[[cTag]]$bootResults)][25], hrboots[[cTag]]$bootResults[order(hrboots[[cTag]]$bootResults)][975] ) sav e("hrboots", file="C:/zy/Telemetry/R summary files/hrboots 2010April08.rdata") load("C:/zy/Telemetry/R summary files/hrboots 2011April08.rdata") ###################################################################### ########## # GGPLOT FIGURES # Use this code for making many figures # main effects change x and y for various relationships # notes: # tod x axis breaks = c(0,6,8,12,17,19,24) cyclic # dirL x axis breaks = c(0,90,180,270,360) cyclic # magL x axis breaks = seq(0,0.35, by=0.05) # luna rIndex x axis breaks = c(1,7,15,23,30) cyclic # gag speed y axis breaks = seq(0.08,0.26, by=0.02) # temperature x axis breaks=seq(14,26,by=2) # # altitude coord_cartesian(xlim=c(1,30), ylim=c(0,10)) # altitude breaks=c(0,1,2,3,seq(4,10,by=2))) # dtr breaks = seq(6,28, by=2) # z0 is all fish # z6, z6day, z6night are just 2007 fish with altitude data ggplot(z6night, aes(x=lunarIndex, y=altitu de, group=ID, colour=ID, fill=ID)) + geom_point(alpha=0.05) + geom_smooth(method="gam",formula=y~s(x,bs="cc"),lwd=1.3) + #,bs="cc"


  geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam",formula=y~s(x,bs="cc")) + #,bs="cc"
  coord_cartesian(xlim=c(1,30), ylim=c(0,10)) + # range(z0$temperature,na.rm=T) range(z0$magL,na.rm=T)
  theme_bw() +
  scale_x_continuous("Lunar Index",breaks = c(1,7,15,23,30)) +
  scale_y_continuous("Altitude (m)", breaks=c(0,1,2,3,seq(4,10,by=2))) +
  opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) +
  opts(axis.title.x = theme_text(size=15),
    axis.title.y = theme_text(size=15, angle=90))

# EXAMPLES
geom_text(x=22, y=27, label="a)")
opts(axis.title.y = theme_text(colour = 'red', angle = 45, size = 10,
  hjust = 0.2, vjust = 0.5, face = 'italic'))
opts(axis.ticks.margin = unit(1, "cm"))
opts(axis.title.y = theme_text(hjust = 0.2)) +
opts(axis.ticks.margin = unit(1, "cm"))

# full time series of individuals dtr and alt
ggplot(z5, aes(x=tod, y=speed))+ #, group=tagName, colour=tagName, fill=tagName)) +
  geom_point(size=1, alpha=0.2) +
  #geom_smooth(method="gam",formula=y~s(x)) +
  #geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam",formula=y~s(x)) +
  coord_cartesian(xlim=c(0,24), ylim=c(0,0.8)) +
  theme_bw() +
  facet_wrap(~Date) +
  scale_y_continuous("Travel Speed (m/s)", breaks=seq(0,0.80,by=0.20)) +
  scale_x_continuous("Time of Day", breaks = seq(0,18,by=6)) +
  opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) +
  opts(axis.title.x = theme_text(size=15),
    axis.title.y = theme_text(size=15, angle=90))

ggplot(z6night, aes(x=lunarIndex, y=altitude, group=tagName, colour=tagName,
    fill=tagName))+
  geom_point(alpha=0.05) +
  geom_smooth(method="gam",formula=y~s(x,bs="cc")) + #,bs="cc"
  geom_smooth(aes(group=1),colour="black", fill="black",
    lwd=1.3,method="gam",formula=y~s(x,bs="cc")) +
  coord_cartesian(xlim=c(1,30), ylim=c(0,10)) +
  theme_bw() +
  #facet_wrap(~dod) +
  scale_x_continuous("Lunar Index", breaks = c(1,7,15,23,30)) +
  scale_y_continuous("Altitude (m)",breaks=c(0,1,2,3,4,6,8,10))
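
# Aside (not from the original script): bs="cc" in the smooths above requests mgcv's
# cyclic cubic spline, which forces the fitted curve to join smoothly at the two ends of
# a circular predictor such as lunar index or time of day. Fit on its own it would look
# something like the following (the knot range spanning the 30-day cycle is assumed):
# gam(altitude ~ s(lunarIndex, bs="cc"), knots=list(lunarIndex=c(0,30)), data=z6night)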


429 # look at daily EN plots of individual fish ggplot(z5, aes(x=east ing, y=northing))+ geom_point(alpha=0.05) + #coord_cartesian(xlim=c(0,60), ylim=c(0,10)) + theme_bw() + facet_wrap(~dod) + scale_x_continuous("Easting (m)")+#, breaks = seq(0,60,by=20)) + scale_y_continuous("Northing (m)")#,breaks=c(0,1,2,3,4 ,6,8,10)) ###################################################################### ######## # Otolith work, using Deb's otolith data results biometrics = importBiometricData() # for some reason the very first element (year) has strange characters, so # repla ce it and drop the bad factor level. biometrics[1,1] = biometrics[2,1] biometrics$year1 = drop.levels(biometrics$year1, reorder=FALSE) # pick out the relevant data b1 = biometrics[,c("year1","month1","reefID1","HBSB","replicate","weight1", "girth1", TL1","tagID","mmNumber","recoveredTagID","year2","month2", "TL2","weight2","girth2","debAnnuli","debGrowth", "debAgeclass","resolvedAgeclass","otoRadius","ultimateAnnulus", "penultimateAnnulus","growthIncrement")] # get some prettier names names(b1[4 ]) < "treatment" names(b1) # that didn't work # narrow it to the right rows and columns hb = b1[(b1$HBSB=="HB") & (!is.na(b1$HBSB)) & (!is.na(b1$mmNumber)) ,] sb = b1[(b1$HBSB=="SB") & (!is.na(b1$HBSB)) & (!is.na(b1$mmNumber)) ,] # some plots Deb a sked for # Is there a difference in the length weight relationship between the treatments plot(hb$TL2, hb$weight2, pch=19, xlab="Total Length (mm)", ylab="Weight (g)") points(sb$TL2, sb$weight2, pch=2, col="blue") # fit length weight curves to thes e using nls nlhb = nls(weight2 ~ aa*TL2^bb, data=hb, start=list(aa=0.00001, bb=2.7)) nlsb = nls(weight2 ~ aa*TL2^bb, data=sb, start=list(aa=0.00001, bb=2.7)) # add curves to plot


fnhb = function(TL){coef(nlhb)[1] * TL^coef(nlhb)[2]}
fnsb = function(TL){coef(nlsb)[1] * TL^coef(nlsb)[2]}
curve(fnhb(x), add=TRUE)
curve(fnsb(x), add=TRUE, lty=2)

# Is there a difference between treatments in age class
plot(as.numeric(hb$resolvedAgeclass), hb$TL2, pch=19, col="red",
  xlab="Age Class", ylab="Total Length (mm)")
points(sb$resolvedAgeclass, sb$TL2, pch=3, col="blue")

# Is there a difference between treatments in back calculated TL
plot(hb$TL2, hb$la, pch=19, xlab="Length at Capture (mm)",
  ylab="Back calculated Length (mm)")
points(sb$TL2, sb$la, pch=2)
# fit lines to each treatment
reghb2 = lm(la ~ TL2, hb)
regsb2 = lm(la ~ TL2, sb)
abline(a=reghb2$coefficients[1], b=reghb2$coefficients[2])
abline(a=regsb2$coefficients[1], b=regsb2$coefficients[2], lty=2)

# look at TL and oto increments by treatment and rep
plot(x=rep(1,nrow(hb1)), y=hb1$TL2, pch=19, xlim=c(1,6), ylim=range(b1$TL2,na.rm=T))
points(x=rep(2,nrow(hb2)), y=hb2$TL2, pch=19)
points(x=rep(3,nrow(hb3)), y=hb3$TL2, pch=19)
points(x=rep(4,nrow(sb1)), y=sb1$TL2, pch=19)
points(x=rep(5,nrow(sb2)), y=sb2$TL2, pch=19)
points(x=rep(6,nrow(sb3)), y=sb3$TL2, pch=19)

# I want to compare the size of my fish, but they were all caught at different
# times so I can't really compare the length measurements I have. I will
# back calculate their lengths to 31 Dec using the equation in Deb's paper.
#
# Here are two equations
#   la = [ (a + b*ra) / (a + b*rc) ] * lc
#   lc = a + b*rc
#
# la = back calculated length to opaque zone 'a'
# a  = intercept from the linear regression of total length as a function of
#      otolith radius, rc
# b  = slope from the same linear regression
# ra = otolith radius to opaque zone 'a'
# rc = total otolith radius at time of capture
# lc = total length at time of capture

# do the length otolith radius regression for each group
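
# Aside (not from the original script): the back-calculation above written as a small
# helper, with made-up numbers as a sanity check. 'a' and 'b' here are illustrative
# values only, not the fitted coefficients computed below.
backCalcLength = function(lc, rc, ra, a, b){ lc * (a + b*ra) / (a + b*rc) }
backCalcLength(lc=600, rc=1.1, ra=0.9, a=100, b=450) # = 600*505/595 = about 509 mm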


# hard bottom
reghb = lm(TL2 ~ otoRadius, hb)
summaryhb = summary(reghb)
ahb = summaryhb$coefficients[1]
bhb = summaryhb$coefficients[2]
# soft bottom
regsb = lm(TL2 ~ otoRadius, sb)
summarysb = summary(regsb)
asb = summarysb$coefficients[1]
bsb = summarysb$coefficients[2]

# a plot
plot(hb$otoRadius, hb$TL2, pch=19, xlim=c(0.5,1.5), ylim=c(300,800),
  xlab="Otolith Radius at Capture (mm)", ylab="Total Length at Capture (mm)")
abline(a=ahb, b=bhb)
points(sb$otoRadius, sb$TL2, pch=2)
abline(a=asb, b=bsb, lty=2)

reg1 = lm(TL2 ~ otoRadius, b1)
sum1 = summary(reg1)
aa = sum1$coefficients[1]
bb = sum1$coefficients[2]
plot(b1$TL2, b1$otoRadius, pch=19)
abline(a=aa, b=bb)

# back calculate length at age
hb$la = ( (ahb + bhb*hb$ultimateAnnulus)/(ahb + bhb*hb$otoRadius) ) * hb$TL2
sb$la = ( (asb + bsb*sb$ultimateAnnulus)/(asb + bsb*sb$otoRadius) ) * sb$TL2
b1$la = ( (aa + bb*b1$ultimateAnnulus)/(aa + bb*b1$otoRadius) ) * b1$TL2

# how do these compare to each other
plot(b1$TL2, b1$la, pch=19, xlab="Total Length at Capture",
  ylab="Back calculated Total Length")
abline(0,1)

# OLD STUFF BELOW HERE
par(mfrow=c(2,3))
cTag = 1; cmd=md[[1]]
plot(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing,pch=19,cex=0.1,
  xlim=xLimits, ylim=yLimits,


432 main=tagfm[[cTag]]$tagName) points(cmd$reefEN$easting, cmd$reefEN$northing,pch=19,col=2) points(cmd$sdlEN$easting, cmd$sdlEN$northing,pch=19,col=4) cTag = 2; cmd=md[[1]] plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing,pch=19,cex=0.1, xlim=xLimits, ylim=yLimits, main=tagfm[[cTag]]$tagName) points(cmd$reefEN$easting, cmd$reefEN$northing,pch=19,col=2) points(cmd$sdlEN$easting, cmd$sdlEN$northing,pch=19,col=4) cTag = 3; cmd=md[[1]] plot(tagf m[[cTag]]$data$easting,tagfm[[cTag]]$data$northing,pch=19,cex=0.1, xlim=xLimits, ylim=yLimits, main=tagfm[[cTag]]$tagName) points(cmd$reefEN$easting, cmd$reefEN$northing,pch=19,col=2) points(cmd$sdlEN$easting, cmd$sdlEN$northing,pch=19,col=4) cTag = 4; cmd=md[[2]] plot(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing,pch=19,cex=0.1, xlim=xLimits, ylim=yLimits, main=tagfm[[cTag]]$tagName) points(cmd$reefEN$easting, cmd$reefEN$northing,pch=19,col=2) points(cmd$sdlEN$easting, cmd$sdlEN$northin g,pch=19,col=4) cTag = 5; cmd=md[[2]] plot(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing,pch=19,cex=0.1, xlim=xLimits, ylim=yLimits, main=tagfm[[cTag]]$tagName) points(cmd$reefEN$easting, cmd$reefEN$northing,pch=19,col=2) points(cmd$sdlEN$ea sting, cmd$sdlEN$northing,pch=19,col=4) # more pictures par(mfrow=c(2,3)) cTag = 1; cmd=md[[1]] plot(hexbin(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing), main=tagfm[[cTag]]$tagName) cTag = 2; cmd=md[[1]] plot(hexbin(tagfm[[cTag]]$data$easting ,tagfm[[cTag]]$data$northing),main=tagfm[[cTag ]]$tagName) cTag = 3; cmd=md[[1]] plot(hexbin(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing),main=tagfm[[cTag ]]$tagName)


433 cTag = 4; cmd=md[[2]] bins = hexbin(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$da ta$northing, xbins=30,xbnds=xLimits, ybnds=yLimits) gplot.hexbin plot(bins, main=paste("Positions of ",tagfm[[cTag]]$tagName,sep=""), xlab="Easting (m)", ylab="Northing (m)" ) text(0, 0, labels="bob") mtext("Easting (m)", side=1, line=0, cex=2) m text("Northing (m)", side=2, line=0, cex=2) xLimits yLimits cTag = 5; cmd=md[[2]] plot(hexbin(tagfm[[cTag]]$data$easting,tagfm[[cTag]]$data$northing),main=tagfm[[cTag ]]$tagName) ###################################################################### # ## # ADCP plots # ...for this plot I want the lunar phase to show from the beginning of the # ...2007 depoyment even though there's no water flow data...to do that I'll # ...make up data lines which will be empty except for the lunar phase stuff # import ADCP data ad = importADCPdata() # get just 2007 and 2008 ad = ad[ad$utime < 1230768000, ] #1230768000 = 2009 Jan 1 midnight GMT ad$datiL = as.POSIXct(ad$datiL) # pick only some columns ad = ad[,c(3,6,11)] ad[,3] = ad[,3]/1000 # create empty data lines # tagging start on 9 Dec 16:01:00, ADCP starts on 19 Dec 16:01:00. That's # 11 days and 240 hours, with data every 10min...=1440 data lines dday = rep(9:19,each=144) hhour = rep(0:23,each=6,times=11) mmin = rep(seq(1,51, by=10),264) temp1 = paste("2007 12 ",dday," ",hhour,":",mmin,":00",sep="") temp2 = strptime(temp1, "%Y %m %d %H:%M:%S", tz="EST5EDT")

temp3 = as.POSIXct(temp2, origin="1970-1-1", tz="EST5EDT")
temp4 = data.frame(datiL = temp3[temp3 < ad$datiL[1]])
# now combine these
newad = merge(ad, temp4, by="datiL", all=TRUE)
# add a year indicator
newad$yr = as.factor(ifelse(
  newad$datiL < as.POSIXct("2008-05-01", origin="1970-1-1", tz="EST5EDT"),
  2007, 2008))
# add a column for creating a sine wave indicating lunar phase
newad$moon = NA
# split the years apart while the moon is added
nad7 = newad[newad$yr == 2007, ]
nad8 = newad[newad$yr == 2008, ]

# add an index column 2007...
# ...each day has 24 hrs with 6 lines each = 144 lines, and there are 30 days
# ...in the lunar cycle...30*144 = 4320 lines per 2 pi radians (2160 lines = pi rads)
nad7$index = (1:nrow(nad7)*pi/2160) + pi # add pi to make it 1 at full moon and -1 at new
# ...and by chance 9 Dec (the first day) is a new moon so I don't have to
# ...shift left or right except to make the full moon be up at 1
# create the lunar curve
nad7$moon = cos(nad7$index)
# look and add lines where the new and full moons should be, as a check...looks good
plot(nad7$datiL, nad7$moon, type="l")
abline(v=as.POSIXct("2007-12-09", origin="1970-1-1", tz="EST5EDT")) # new moon
abline(v=as.POSIXct("2007-12-24", origin="1970-1-1", tz="EST5EDT")) # full
abline(v=as.POSIXct("2008-01-08", origin="1970-1-1", tz="EST5EDT")) # new

# add an index column 2008...
# ...by looking at it I see that 14 Oct (the first full moon) is 497 lines down
nad8$index = ((1:nrow(nad8))*(pi/2160)) - (497*(pi/2160))
# create the lunar curve
nad8$moon = cos(nad8$index)
plot(nad8$datiL, nad8$moon, type="l")
abline(v=as.POSIXct("2008-10-15", origin="1970-1-1", tz="EST5EDT")) # full moon
abline(v=as.POSIXct("2008-10-29", origin="1970-1-1", tz="EST5EDT")) # new
abline(v=as.POSIXct("2008-11-13", origin="1970-1-1", tz="EST5EDT")) # full
abline(v=as.POSIXct("2008-11-28", origin="1970-1-1", tz="EST5EDT")) # new
abline(v=as.POSIXct("2008-12-13", origin="1970-1-1", tz="EST5EDT")) # full
abline(v=as.POSIXct("2008-12-27", origin="1970-1-1", tz="EST5EDT")) # new

435 #this looks close enough # put them back together without the index newerad = rbind(nad7[, 6], nad8[, 6]) names(newerad) = c("datiL", "Temperature (Celcius)", "Water Speed (m/s)", "yr", "Lunar Phase") meltedad = melt(newerad, id.vars=c("datiL", "yr")) names(meltedad) = c("Date", "yr", "variable", "value") ggplot(meltedad, aes(x=Date, y=value)) + geom_line() + facet_grid(v ariable~yr, space="fixed",scales="free") + scale_x_datetime(major = "14 days") + # specifying these works around a bug theme_bw() + opts(axis.title.y = theme_text(colour = 'white')) + opts(axis.title.x = theme_text(size=20)) + opts(axis.text. x = theme_text(size = 15)) + opts(axis.text.y = theme_text(size = 15)) # ztemp = z0[!is.na(z0$dirL),] don't use this so lunarIndex is on same scale ztemp = z0 ztemp$lunarIndex = ztemp$lunarIndex/15 ggplot(ztemp, aes(x=datiL,y=magL)) + geom_line() + fa cet_grid(.~yr, scales="free_x") + #opts(title="Water Temperature") + xlab("Date") + ylab("Water Speed (m/s)") + theme_bw() ggplot(z0,aes(x=utime,y=magL))+geom_point()+facet_grid(.~yr,scales="free_x") opts(title="All Fish") # dtr histogram par(m frow=c(2,3)) bks = c(20,20,20,30,30) for (i in 1:length(tagfm)){ cTag=i; #cmd=md[[2]] hist(tagfm[[cTag]]$data$dtr, breaks=bks[i], freq=T, main=tagfm[[cTag]]$tagName, xlim=c(0,100)) abline(v=median(tagfm[[cTag]]$data$dtr), col="red") } hist(z0$dt r, breaks=30, freq=F, xlim=c(0,80), ylim=c(0,0.04),

436 col="grey", las=1, main="", xlab="Distance to the Reef (m)") abline(v=median(z0$dtr),lwd=4) # speed histogram par(mfrow=c(2,3)) for (i in 1:length(tagfm)){ cTag=i; #cmd=md[[2]] hist(tagfm[[cTag ]]$data$speed, breaks=30, freq=F, main=tagfm[[cTag]]$tagName) abline(v=mean(tagfm[[cTag]]$data$speed), col="red") } hist(z0$speed, breaks=30, freq=F, col="grey", las=1, main="", xlab="Gag Speed (m/s)") abline(v=median(z0$speed),col="black",lwd=4) # interval histogram...for this you want the filtered, but not minuteMeaned data # ... too bad...I have to get that fresh now...you need some code from 'chapter 3 xxxxx.r" cTagNames # import the 'tag' dataset load("C:/zy/Telemetry/R summary f iles/tag 2011Mar16.rdata") ttf = list() # filtered tag data for (i in 1:length(cTagNames)){ print(cTagNames[i]) ttf[[i]] = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8, minuteMean=F) } # combine them tempInterval = c(ttf[[1]]$data$interval, ttf[[2]]$data$interval, ttf[[3]]$data$interval, ttf[[4]]$data$interval, ttf[[5]]$data$interval) par(mfrow=c(2,3)) bks = seq(0,max(tempInterval),by=2) for (i in 1:length(cTagNames)){ cTag=i; hist(ttf[ [cTag]]$data$interval, breaks=bks, freq=T, main=ttf[[cTag]]$tagName, xlim=c(0,60)) abline(v=median(ttf[[cTag]]$data$interval), col="red") } par(mar=c(5,4,2,2)+0.1) hist(tem pInterval, breaks=bks, freq=F, col="grey", las=1, main="",

437 cex.lab=1.5, cex.axis=1.5, xlim=c(0,60), ylim=c(0,0.25), xlab="Interval Length (s)", ylab="") abline(v=median(tempInterval),lwd=4) mtext("Density", side = 2, line = 4, cex = 1.5) # ... whic h element in temp1 represents the first 99% of all elements temp1 = tempInterval[order(tempInterval)] c1 = round(0.50*length(temp1)) c2 = round(0.95*length(temp1)) abline(v=c(temp1[c1],temp1[c2]), col=c("blue","blue"), lwd=2) ###################################################################### ######## # I want to calcualte the mean number of position solutions per day...I've lost # the code where I first did this...so redo it here...do this before the minuteMean str(ttf) temp1 = list() # each element will be a list for one fish...each element of that # sublist will be the number of hits each day #...don't use day 1, the tagging day for (cTag in 1:length(ttf)){ dat = ttf[[cTag]]$data time = dat$datiL timecat = cut(time, breaks="day") datsplit = split(dat,timecat) temp2 = c() for (i in 2:length(datsplit)){ temp2 = c(temp2,nrow(datsplit[[i]])[1]) } temp1[[cTag]] = temp2 mean(temp2) } # now l ook at the results for individual fish mean(temp1[[5]]) # now look at results for all fish together...on day 1 how many hits did everyone get # ...what's that for every day, what's the mean temp3 = list() # the first 37 days (38 first day) for (i in 1:3 7){ temp3[[i]] = sum(temp1[[1]][[i]]+temp1[[2]][[i]]+temp1[[3]][[i]]+ temp1[[4]][[i]]+temp1[[5]][[i]])

438 } # days with only 2008 fish for (i in 38:50){ temp3[[i]] = sum(temp1[[4]][[i]]+temp1[[5]][[i]]) } # find mean of all numbers in temp3 temp4 = c() for (i in 1:length(temp3)){ temp4 = c(temp4, temp3[[i]]) } temp5 = mean(temp4/5) # f=5 fish ### I've checked this answer two ways and get the same thing, but the number seems wrong mean(temp3, rm.na=T) # now do thi s for all fish combined alltime = data.frame(ttf[[1]]$data$datiL, ttf[[2]]$data$datiL) ###################################################################### ######## ###################################################################### ######## # a four p anel figure...using stuff generated above par(mfrow=c(2,2)) # 1. Time interval between detections par(mar=c(5,4,2,2)+0.1) bks = seq(0,max(tempInterval),by=2) hist(tempInterval, breaks=bks, freq=F, col="grey", las=1, main="", cex.lab=1.5, cex.axis=1.5, xlim=c(0,40), ylim=c(0,0.25), xlab="", ylab="") abline(v=median(tempInterval),lwd=4) mtext("Density", side = 2, line = 3.8, cex = 1.5) mtext("Interval Length (s)", side = 1, line = 3, cex = 1.5) text(40,0.25,"a)", cex=1.5) # 2. distance to reef par(ma r=c(5,4,2,2)+0.1) bks = seq(0,max(z0$dtr),by=1) hist(z0$dtr, breaks=90, freq=F, col="grey", las=1, cex.lab=1.5, cex.axis=1.5, xlim=c(0,60), ylim=c(0,0.05), xlab="", ylab="", main="") abline(v=median(z0$dtr),lwd=4) mtext("Density", side = 2, line = 4, c ex = 1.5)

439 mtext("Distance from Reef (m)", side = 1, line = 3, cex = 1.5) text(60,0.05,"b)", cex=1.5) # 3. Gag Travel Speed hist(z0$speed, breaks=30, freq=F, col="grey", las=1, cex.lab=1.5, cex.axis=1.5, xlim=c(0,0.6), ylim=c(), xlab="", ylab="", main= "") abline(v=median(z0$speed),col="black",lwd=4) mtext("Density", side = 2, line = 3.8, cex = 1.5) mtext("Gag Travel Speed (m/s)", side = 1, line = 3, cex = 1.5) text(0.6,3.8,"c)", cex=1.5) # 4. altitude...this one from below barplot(altVec, horiz=T, besi de=T, col="grey", las=1, xlab="", ylab="", cex.axis=1.5, cex.names=1.5, cex.lab=1.5, names=c(1,"",3,"",5,"",7,"",9), #possAlts, xlim=c(0,0.7), ylim=c(1,10) ) mtext("Altitude (m)", side = 2, line = 4, cex = 1.5) mtext("Density", side = 1, line = 3, cex = 1.5) text(0.6,10,"d)", cex=1.5) ### a figure cTag = 5 cmd = md[[2]] par(mfrow=c(1,3)) par(mar=c(5,4,2,2)+0.1) plot(hexbin(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, xbnds=cmd$plotLimits$easting, ybnd s=cmd$plotLimits$northing, xbins=40), #cex.lab=1.5, cex.axis=1.5, bty="l", xlab="Easting (m)", ylab="Northing (m)") gplot.hexbin( hexbin(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, xbnds=cmd$plotLimits$easting, ybnds=cmd$plotLimi ts$northing, xbins=40), lcex=1, xlab="Easting (m)", ylab="Northing (m)" ) plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19, cex=0.2, xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing) points(cmd$reefEN$easting, cmd $reefEN$northing, pch=19, cex=1, col=2)

points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, cex=1, col=4)
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$northing, pch=19, cex=0.2,
  ylim=cmd$plotLimits$northing, main=tagfm[[cTag]]$tagName)
abline(h=cmd$reefEN$northing, col=2)
abline(h=cmd$sdlEN$northing, col=4)
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$depth, pch=19, cex=0.2,
  ylim=rev(range(tagfm[[cTag]]$data$depth)))

################################################################################
# fraction of detections by hour
cTag = 3
dat = ttf[[cTag]]$data
time = dat$datiL
dat <- subset(dat, select=c("northing","easting"))
timecat <- cut(time, breaks="hour")
datsplit <- split(dat, timecat)
hourlyFrac = length(datsplit)/(60*30)
fracHits = vector(length=length(datsplit))
for(i in 1:length(datsplit)){
  fracHits[i] = nrow(datsplit[[i]])/(60*30)
}
par(mar=c(5,4,2,2)+0.1)
plot(as.POSIXlt(levels(timecat)), fracHits, type="l", las=1, bty="l",
  cex.lab=1.5, cex.axis=1.5, ylim=c(0,0.8),
  xlab="Date", ylab="Fraction of Recorded Positions")

################################################################################
### Altitude distribution
par(mfrow=c(2,3))
possAlts = 1:13
for (i in 1:3){ # only the 2007 fish have altitudes
  # ...how do I want to round
  alt = ceiling(tagfm[[i]]$data$altitude[ tagfm[[i]]$data$altitude >= 0 ])
  altVec = c()
  for (j in 1:length(possAlts)){
    altVec = rbind(altVec, sum(alt == possAlts[j], na.rm=T))

441 } # # make all the 'zero' va lues into 'one' values, and get rid of 'zero' element # altVec[2] = altVec[1] + altVec[2]; altVec[1]=0 # altVec = altVec[2:14] # possAlts = 1:13 altVec = altVec/sum(altVec,na.rm=T) # remove altitudes 10 13 because they are or are close to zero altV ec = head(altVec,9) possAlts = head(possAlts,9) barplot(altVec, horiz=T, beside=T, main=paste(tagfm[[i]]$tagName,"Altitude"), xlab="Frequency", ylab="Altitude (m)", cex.axis=1.5, cex.names=1.5, cex.lab=1.5, names=possAlts, ylim=c(0,15) ) # gap.barplot(altVec,gap=c(0.25, 0.6),xlab="Altitude (m)", # ytics=c(0,0.05, 0.1, 0.15, 0.2, 0.6383233), # xtics=possAlts, xaxlab=possAlts, # ylab="Frequency",horiz=T, # main=paste(tagfm[[i]]$tagName,"Distribution in the Water Column"), # col=rep("grey",length(altVec))) abline(h=mean(tagfm[[i]]$data$altitude, na.rm=T),col="red",lwd=2) } # now do all fish together alt = ceiling(z0$altitude[ z0$altitude >= 0 ]) altVec = c() for (j in 1:length(possAlts)){ altVec = rbind(altVec, su m(alt == possAlts[j],na.rm=T)) } # # make all the 'zero' values into 'one' values, and get rid of 'zero' element # altVec[2] = altVec[1] + altVec[2]; altVec[1]=0 # altVec = altVec[2:14] # possAlts = 1:13 altVec = altVec/sum(altVec) barplot(altVec, ho riz=T, beside=T, col="grey", las=1, main="", xlab="Density", ylab="Gag Height Above Seafloor (m)", cex.axis=1.5, cex.names=1.5, cex.lab=1.5, names=possAlts, xlim=c(0,0.7), ylim=c(1,13) )

# get a regular histogram for altitude distribution, but group the 0 into 1
hist(z0$altitude, breaks=30, freq=F, main="", col="grey", las=1,
  xlim=c(0,13), #ylim=c(0,30000),
  xlab="Altitude (m)")
abline(v=median(z0$altitude, na.rm=T), lwd=4)
barplot(z0$altitude, beside=T, col="grey", las=1, breaks=30)

# LINEAR REGRESSION?????
# TL - FL
tlfl = lm(formula=results$TL ~ results$FL)
tlfl$coef
summary(tlfl)
fltl = lm(formula=results$FL ~ results$TL)
fltl$coef

# W - TL
# this is the standard relationship
# W = a * TL^b   or   log(W) = log(a) + b*log(TL)
logwtl = lm(formula = log(results$weight) ~ log(results$TL))
logwtl$coef
summary(logwtl)
plot(log(results$TL), log(results$weight))
abline(a = -22.124, b = 3.607)
plot(results$FL, results$TL)
abline(a=tlfl$coef[[1]], b=tlfl$coef[[2]])
abline(tlfl)
summary(tlfl)$r.squared
tlfl$r.squared

# kde95 - kde50
hrhr = lm(results$kde95 ~ results$kde50)
summary(hrhr)
plot(results$kde50, results$kde95)
abline(hrhr)
# kde95 - TL
hrtl = lm(results$hr95 ~ results$TL)
summary(hrtl)
plot(results$TL, results$hr95)
abline(hrtl)
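# A minimal, self-contained sketch (illustration only; the vectors below are
# made up, not the 'results' data) of how the allometric coefficients in
# W = a * TL^b are recovered from the log-log regression above: the fitted
# intercept is log(a) and the fitted slope is b.
set.seed(1)
exTL = c(450, 500, 550, 600, 650, 700)                      # example total lengths (mm)
exW  = 1e-8 * exTL^3.1 * exp(rnorm(length(exTL), 0, 0.05))  # example weights (kg)
exfit = lm(log(exW) ~ log(exTL))
a.hat = exp(coef(exfit)[1])   # multiplicative coefficient a
b.hat = coef(exfit)[2]        # allometric exponent b
predW = a.hat * 575^b.hat     # predicted weight at a new length, back on the original scale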

443 # KDE95 weight h rwt = lm(results$kde95 ~ results$weight) summary(hrwt) plot(results$weight, results$kde95) abline(hrwt) # KDE50 weight hrwt = lm(results$kde50 ~ results$weight) summary(hrwt) plot(results$weight, results$kde50) abline(hrwt) # dtr TL dtrtl = lm(goodResult s$dtr~goodResults$TL) summary(dtrtl) plot(goodResults$TL, goodResults$dtr) abline(dtrtl) # dtr kde50 dtrhr = lm(results$medianDtr~results$kde50) summary(dtrhr) plot(results$kde50, results$medianDtr) abline(dtrhr) # dtr kde9 5 dtrhr = lm(results$medianDtr~results$kde95) summary(dtrhr) plot(results$kde95, results$medianDtr) abline(dtrhr) # dtr weight dtrwt = lm(results$dtr~results$weight) summary(dtrwt) plot(results$weight, results$dtr) abline(dtrwt) # speed weight speedwt = lm(results$meanSpeed~results$weight) summary(speedwt) plot(results$weight, results$meanSpeed) abline(speedwt) # dtr temp dtrtemp = lm(z0$dtr~z0$temperature) plot(z0$temperature, z0$dtr)

444 plot(dtrtemp) summary.lm(dtrtemp)$r.squared # dtr hod dtrhod = lm(z0 $dtr~z0$hod) plot(dtrhod) plot(z0$hod, z0$dtr) summary.lm(dtrhod)$r.squared par(mfrow=c(2,3)) plot(results$weight,results$dtr, pch=19, col="red") points(goodResults$weight,goodResults$dtr, pch=19) plot(results$weight,results$meanSpeed, pch=19, col="red") points(goodResults$weight,goodResults$meanSpeed, pch=19) plot(results$weight,results$homeRange95, pch=19, col="red") points(goodResults$weight,goodResults$homeRange95, pch=19) plot(results$dtr,results$homeRange95, pch=19, col="red") points(goodResults$ dtr,goodResults$homeRange95, pch=19) plot(results$dtr,results$meanSpeed, pch=19, col="red") points(goodResults$dtr,goodResults$meanSpeed, pch=19) plot(results$weight,results$TL, pch=19, col="red") points(goodResults$weight,goodResults$TL, pch=19) # som e box plots asking if the two years are different plot(results$deployment, results$dtr, pch=19) boxplot(list(results$deployment, results$dtr), notch=FALSE) ### a big combo plot for the paper library(hexbin) cTag=1 str(tagfm[[cTag]]) plot(hexbin(tagfm[[ cTag]]$data$easting, tagfm[[cTag]]$data$northing)) par(mfrow=c(3,2)) plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$easting, type="l") plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$depth, type="l")

################################################################################
################################################################################
### Home Ranges
################################################################################
################################################################################
#
# THE LIMITS YOU USE WHEN CALCULATING THE KDE AFFECT THE ANSWER, SO FOR ALL
# FISH MAKE SURE TO USE THE SAME LIMITS ON EASTING AND NORTHING.
#
# DO SOME WORK TO CHECK THE HR ESTIMATE SENSITIVITY TO THE LIMITS AND n GRID
# CELLS IN EACH DIRECTION.
#
# USE BOOTSTRAPPING TO FIND CONFIDENCE INTERVALS ON HR ESTIMATES. ABOUT 1000
# RUNS OF THE BOOTSTRAP IS ABOUT RIGHT. BE SURE TO CHECK THAT THE MEAN OF
# ALL THE BOOTSTRAPS IS ABOUT EQUAL TO THE HR ESTIMATE WITH ALL THE DATA. USE
# 95% QUANTILES TO APPROXIMATE THE 95% CI.

# calculate the home range
cTag = 5
if(results$deployment[cTag] == "hb2007"){cmd = md[[1]]} else {cmd = md[[2]]}
cProb = 0.95
HRsize = homeRange(
  easting = tagfm[[cTag]]$data$easting,
  northing = tagfm[[cTag]]$data$northing,
  tagName = cTagNames[cTag],
  lims = hrlims, # c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing),
  reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=TRUE
)

################################################################################
# how does the HR change with number of days used...home range stabilization
# ...because I put 'results' in order of size and this wants them in original order
results = results[c(5,4,3,1,2), ]
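# A minimal, self-contained sketch (illustration only; this is not the project's
# homeRange() function) of why the evaluation limits matter when a kernel-density
# home range is computed on a grid with MASS::kde2d: the same positions evaluated
# over different limits give slightly different 95% contour areas.
library(MASS)
set.seed(1)
ex = data.frame(easting = rnorm(500, 0, 10), northing = rnorm(500, 0, 10))
kdeArea = function(x, y, lims, n, prob=0.95){
  k = kde2d(x, y, n=n, lims=lims)
  cellArea = diff(k$x[1:2]) * diff(k$y[1:2])
  p = k$z * cellArea                # approximate probability mass in each cell
  p = p / sum(p)                    # renormalize on this grid
  ord = sort(p, decreasing=TRUE)
  thresh = ord[which(cumsum(ord) >= prob)[1]]
  sum(p >= thresh) * cellArea       # area of cells inside the prob contour
}
kdeArea(ex$easting, ex$northing, lims=c(-50,50,-50,50), n=100)
kdeArea(ex$easting, ex$northing, lims=c(-100,100,-100,100), n=100) # same data, different answer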

# something to hold the changing HR answers for all fish
allFishHR = list()
cProb = 0.5
for (cTag in 1:length(tagfm)){
  if(results$deployment[cTag] == "hb2007"){
    cmd = md[[1]]
    numUniqueDays = length(unique(tagfm[[cTag]]$data$datiL$yday))
  } else {
    cmd = md[[2]]
    numUniqueDays = length(unique(tagfm[[cTag]]$data$datiL$yday))
  }
  # get only the easting/northing data
  d1 = subset(tagfm[[cTag]]$data, select=c(utime,datiL,easting,northing))
  # create a vector of the index of the day of the run, 1-52 for example
  # ...there's got to be a prettier way of doing this, but...
  startOfFirstDay = unclass(as.POSIXct(
    strptime(cmd$taggingDay, "%Y/%B/%d", tz="EST5EDT"),
    origin="1970-1-1", tz="EST5EDT"))[1]
  endOfAllDays = startOfFirstDay + 86400*(1:numUniqueDays)
  # something to hold the changing HR for one fish
  hrVSdays = data.frame("numDays" = NA, "hrSize" = NA)
  for (i in 1:numUniqueDays){
    # grab only position solutions during the first i days
    d2 = d1[d1$utime < endOfAllDays[i],]
    print(i)
    if(nrow(d2) > 0){
      # calculate the home range for these PS
      hrSize = homeRange(
        easting = d2$easting, northing = d2$northing,
        tagName = paste(tagfm[[cTag]]$tagName, ", ", i, " days", sep=""),
        lims = hrlims, #c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing),
        reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE
      )
    } else {hrSize = 0}
    # save the answer
    hrVSdays[i,] = c(i, hrSize)
  } # end for loop
  # a plot

  plot(hrVSdays$numDays, hrVSdays$hrSize, pch=19,
    main=paste(cTagNames[cTag], ": ", cProb*100, "% HR", sep="")
  )
  # save the answer for this fish
  allFishHR[[cTag]] = hrVSdays
} # end cTag for loop

plot(allFishHR[[4]], type="l", xlim=c(0,60),
  main=paste(cProb*100, "% KDEs", sep=""),
  xlab="Number of Days", ylab=paste(cProb*100, "% KDE", sep="")
)
for (cTag in 1:length(allFishHR)){
  points(allFishHR[[cTag]], type="l")
}
text(x=c(38,38,38,51,51)+4, y=results$kde95, labels=c(results$tagName))

# ...also create these same curves for 2008 fish as if there had only been 50m arrays
# ...you'll need 'rotate()' and 'chop()' which are in 'testing array spacing.r' for now
# ...the two fish of interest are 2008 f60300, f61100, tagfm[[4]], tagfm[[5]]
cmd = md[[2]]
s60300 = subset(tagfm[[4]]$data, select=c(utime,easting,northing))
r60300 = rotate(s60300$utime, s60300$easting, s60300$northing, spin=45)
c60300 = chop(r60300$fisht, r60300$fishx, r60300$fishy, spacing=50)
u60300 = rotate(c60300$fisht, c60300$fishx, c60300$fishy, spin=-45)
plot(s60300$easting, s60300$northing, pch=19)
points(u60300$fishx, u60300$fishy, pch=19, col="blue")

s61100 = subset(tagfm[[5]]$data, select=c(utime,easting,northing))
r61100 = rotate(s61100$utime, s61100$easting, s61100$northing)
c61100 = chop(r61100$fisht, r61100$fishx, r61100$fishy, spacing=50)
u61100 = rotate(c61100$fisht, c61100$fishx, c61100$fishy, spin=-45)
plot(s61100$easting, s61100$northing, pch=19)
points(u61100$fishx, u61100$fishy, pch=19, col="blue")

for (cTag in 4:5){
  cmd = md[[2]]
  numUniqueDays = length(unique(tagfm[[4]]$data$datiL$yday)) # data for 2008 deployment
  # get only the easting/northing data
  if(cTag==4){d1=u60300} else {d1=u61100}
  names(d1) = c("utime", "easting","northing")

448 # create a vector of the index of the day of the run, 1 52 for example # ...there's got to be a prettier way of doing this, but... startOfFirstDay = unclass(as.POSIXct( strptime(cmd$taggingDay, "%Y/%B/%d", tz="EST5EDT"), origin="1970 1 1", tz="EST5EDT"))[1] endOfAllDays = startOfFirstDay + 86400 1:numUniqueDays # something to hold the changing HR for one fish hrVSdays = data.frame("numDays" = NA, "hrSize" = NA) for (i in 1:numUniqueDays){ # grab only position solutions during the first i days d2 = d1[d1$utime < endOfAllDays[i],] print(i) if(nrow(d2) > 0){ # calculate the home range for these PS hrSize = homeRange( easting = d2$easting, northing = d2$northing, tagName = paste("u60300 or u61100", ", ", i, days", sep=""), lims = c(md[[2]]$plotLimits$easting, md[[2]]$plotLimits$northing), reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb, drawplot=FALSE ) } else {hrSize = 0} # save the answer hrVSdays[i,] = c(i, hrSize) } # end for loop # a plot plot(hrVSdays$numDays, hrVSdays$hrSize, pch=19, main=paste(cTagNames[cTag], ": ", cProb*100, "% HR", sep="") ) # save the answer for this fish allFish HR[[cTag+2]] = hrVSdays } # end cTag for loop plot(allFishHR[[4]],type="l", xlim=c(0,60), main="Home Range Stabilization", xlab="Number of Days", ylab=paste(cProb*100, "% KDE", sep="") ) for (cTag in 1:5){ points(allFishHR[[cTag]], type="l", lw d=2) } points(allFishHR[[6]], type="l", lty="dashed", lwd=2)

449 points(allFishHR[[7]], type="l", lty="dashed", lwd=2) text(x=c(38,38,38,51,51)+4, y=results$kde50, labels=c(results$tagName)) ################################################################### ### ######### ###################################################################### ######### ### Bootstrapping home range estimates ###################################################################### ######### ############################################ ########################## ######### bootHR < function(dat, by="day", nboot=100, prob=0.95, progressbar=FALSE, bootplot=FALSE, pts=FALSE, drawplot=FALSE, ... # 'by' will subset the data by whatever you choose, say 'day', then the # bootstrap will pick randomly from the 'days' ) { if (progressbar) { require(tcltk) pb < tkProgressBar("hr bootstrap",min=0,max=nboot) } time < dat$datiL dat < subset(dat,select=c("northing","easting")) timecat < cut(time, breaks=by) datsplit < split(dat,timecat) nt < length(levels(timecat)) bootres < numeric(nboot) if (bootplot) with(dat,plot(easting,northing,pch=".")) for (i in 1:nboot) { bootsamp < sample(nt,size=nt,replace=TRUE) bootdat < do.call(rbind,datsplit[bootsamp]) if (bootplot) with(bootdat,points(easting,northing,pch=".",col=i+1)) bootres[i] < with(bootdat, homeRange(easting, northing, prob=prob, pts=pts, drawplot=drawplot, ...) ) if (progressbar) setTkPr ogressBar(pb,i) } if (progressbar) close(pb) bootres } # end bootHR system.time(boottest < bootHR(dat=z0, by="day", #by = "2 days" or "week" prob=0.95, lims=hrlims,nboot=5, progressbar=TRUE, bootplot=TRUE)) # now to put this all together and run it 1000 for each tag # create a list to hold the results

450 hrboots = list( list( tagName = NA, # which tag originalHR = NA, # a single number, HR estimate using all data bootResults = NA # a vector holding the 1000 boot strapped HR estimat es ), list(tagName = NA, originalHR = NA, bootResults = NA), list(tagName = NA, originalHR = NA, bootResults = NA), list(tagName = NA, originalHR = NA, bootResults = NA), list(tagName = NA, originalHR = NA, bootResults = NA), list(tagName = NA, originalHR = NA, bootResults = NA) # this one for all fish combined ) ##################################################################### cProb = 0.95 for (i in 1:length(tagfm)){ # grab things pertinent to this tag for (j in 1:length(md)){ if( tagfm[[i]]$deployment == md[[j]]$deployment){ cmd = md[[j]] } } # end for j loop # name hrboots[[i]]$tagName = tagfm[[i]]$tagName # find HR estimate will all data hrboots[[i]]$originalHR = homeRange(tagfm[[i]]$data$easting, tagfm[[i]]$dat a$northing, prob=cProb, lims=hrlims, tagName=tagfm[[i]]$tagName, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=TRUE, drawplot=TRUE) # now the boot strapping hrboots[[i]]$bootResults = bootHR(tagfm[[i]]$data, #by = "day" or "2 days" or "week" prob=cProb, lims=hrlims, nboot=1000, progressbar=TRUE, bootplot=FALSE, tagName=tagfm[[i]]$tagName, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE ) } # now calculate the 95% KDE for all fish combined...z0 is required i=6 everyfi sh = the answer I get with z0 seems funky...do it again with this = hrboots[[i]]$originalHR = bob=homeRange(z0$easting, z0$northing, prob=cProb, lims=hrlims, tagName="All", reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE)

451 hrbo ots[[i]]$bootResults = sam=bootHR(z0, #by = "day" or "2 days" or "week" prob=cProb, lims=hrlims, nboot=1000, progressbar=TRUE, bootplot=FALSE, tagName="All", reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, pts=FALSE, drawplot=FALSE ) cTag = 1 plot(rep(cTag,len gth(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19, xlim=c(0,6), ylim=c(1000,6000)) cTag = 2 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 3 points(rep(cTag,length(hrboots[[cTag]]$ bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 4 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) cTag = 5 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[c Tag]]$bootResults, pch=19) cTag = 6 points(rep(cTag,length(hrboots[[cTag]]$bootResults)), hrboots[[cTag]]$bootResults, pch=19) points(1:6,c(hrboots[[1]]$originalHR, hrboots[[2]]$originalHR,hrboots[[3]]$originalHR, hrboots[[4]]$originalH R, hrboots[[5]]$originalHR, hrboots[[6]]$originalHR), pch=19,col="red") # now pick out the 95% confidence interval (2.5% and 97.5% points), that is # 25, 975 our of 1000 different runs cTag=6 plot(hrboots[[cTag]]$bootResults[order(hrboots[[cTag]]$boot Results)]) abline(v=c(25,975)) abline(h=hrboots[[cTag]]$originalHR) print(hrboots[[cTag]]$tagName) c(hrboots[[cTag]]$bootResults[order(hrboots[[cTag]]$bootResults)][25], hrboots[[cTag]]$bootResults[order(hrboots[[cTag]]$bootResults)][975] ) save("hrboot s", file="C:/zy/Telemetry/R summary files/hrboots 2010April08.rdata") load("C:/zy/Telemetry/R summary files/hrboots 2011April08.rdata") ###################################################################### #########
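# A minimal sketch (illustration only; it assumes the 'hrboots' list filled in
# above) of the same 95% interval pulled out with quantile() instead of sorting
# the bootstrap results and indexing the 25th and 975th values by hand:
cTag = 6
quantile(hrboots[[cTag]]$bootResults, probs=c(0.025, 0.975), na.rm=TRUE)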

################################################################################
################################################################################
### Variance Decomposition
################################################################################
################################################################################
#
# This code comes from examples by Ben in 'var_decomp_BMB.r'
ggplot(z0, aes(x=tod, y=speed, colour=tagName, fill=tagName)) + geom_point()

(a1 <- lmer(dtr ~ 1 + (1|tag)+(1|dod)+(1|hod), z0))
## variances
estvar = c(unlist(VarCorr(a1)), err=lme4:::sigma(a1)^2)
percentVariances = 100 * estvar / sum(estvar)
dd <- data.frame(comp=names(estvar), val=estvar, perVar=percentVariances)
ggplot(dd, aes(x=val, y=comp)) + geom_point()

################################################################################
# ggplots
#
ggplot(z0, aes(x=dtr, y=speed, group=tagName, colour=tagName, fill=tagName)) +
  geom_point(alpha=0.1) +
  geom_smooth() +
  #facet_wrap(~tagName) +
  xlab("Time of Day") +
  coord_cartesian(ylim=c(0.0,0.4), xlim=c(0,60)) +
  scale_x_continuous(breaks = seq(0,40,by=5)) +
  scale_y_continuous(breaks = seq(0.05,0.2,by=0.01)) +

# main effects - change x and y for various relationships
ggplot(z0, aes(x=temperature, y=speed, group=tagName, colour=tagName, fill=tagName)) +
  geom_point(alpha=0.03) +
  geom_smooth(method="gam", formula=y~s(x)) +
  geom_smooth(aes(group=1), colour="black", lwd=1.3, method="gam", formula=y~s(x)) +
  coord_cartesian(ylim=c(0.1,0.2)) +
  scale_y_continuous(breaks = seq(0.1,0.2,by=0.01)) +

453 theme_bw() + scale_x_continuous("Temperature (Celcius)") + scale_y_continuous("Travel Speed (m)") + opts(axis.text.x = theme_text(size = 20)) + opts(axis.text.x = theme_text(size = 20), axis.text.y = theme_text(size = 20)) + opts(axis.ticks.x = theme_text(size = 20)) + geom_text() # this for making my main archives of pictures # main effects change x and y for various relationships ggplot(z 0, aes(x=tod, y=dtr, group=tagName, colour=tagName, fill=tagName)) + geom_point(alpha=0.1) + geom_smooth(method="gam",formula=y~s(x)) + geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam",formula=y~s(x)) + coord_cartesian(ylim=c(6,28)) + scale_y_continuous(breaks = seq(0,30,by=1)) + theme_bw() + facet_wrap(~dod) ggplot(z0night, aes(x=lunarIndex, y=altitude)) + geom_point(alpha=0.1) + geom_smooth(method="gam",formula=y~s(x)) + #coord_cartesian(ylim=c(0.1,0.2)) + #scale_y_ continuous(breaks = seq(0.1,0.2,by=0.010)) + facet_wrap(~lunarIndex) # main effects by fish, change x and y for various relationships ggplot(z0, aes(x=tod, y=altitude, group=tagName, col our=tagName, fill=tagName)) + geom_point(alpha=0.1, cex=1) + geom_smooth() + #geom_smooth( method="gam",formula=y~s(x)) + geom_smooth(aes(group=1),colour="black",lwd=1.1, method="gam",formula=y~s(x)) + theme_bw() + coord_cartesian(ylim=c(0,13)) + scale_y_continuous(breaks = seq(0,13,by=2)) + facet_wrap(~dod) # for altitude and depth # what I want to look at #altitude v tod, lunarIndex, temperature, dirL, magL, d od # magL, dirL. temp no pattern. z6 = z0[!is.na(z0$altitude), ] z6 = drop.levels(z6[(z6$tagName!="f60300")|(z6$tagName!="f61100"),],reorder=FALSE) z6night = z6[z6$day=="night", ]

454 z6day = z6[z6$day=="day", ] z6$date = as.Date(z6$datiL) z7 = z0[!is.na(z0$depth), ] z7 = drop.levels(z7[(z7$tagName!="f60300")|(z7$tagName!="f61100"),],reorder=FALSE) setwd("C:/zy/Ch 3 Preliminary Data/Figures/temp figures") ggplot(z6, aes(x=tod, y=altitude)) +#, group=tagName, colour=tagName, fill=tagName)) + geom_point(alpha=0.4, cex=1) + #geom_smooth(lwd=1) + #geom_smooth( method="gam", formula=y~s(x)) + #geom_smooth(aes(group=1),colour="black",lwd=1.1,method="gam",formula=y~s(x)) + theme_bw() + coord_cartesian(ylim=c(0,13)) + scale_x_continuous(breaks = seq(6,24,by=6)) + scale_y_continuous (breaks = seq(0,10,by=5)) + facet_wrap(~date) + opts(axis.label.x = "Time of Day") + opts(axis.label.y = "Altitude from Seafloor (m)") + opts(axis.text.x = theme_text(size = 20), axis.text.y = theme_text(size = 20)) + opts(axis.ti cks.x = theme_text(size = 20)) # interactions, change x and y for various relationships ggplot(z0, aes(x=tod, y=dtr, colour=tagName)) + theme_bw() + geom_point(alpha=0.3) + geom_smo oth( method="gam",formula=y~s(x,bs="cc"), colour="black") + geom_smooth(aes(group=1,colour=NA),colour="black",method="gam",formula=y~s(x,b s="cc")) + facet_wrap(~dod) + coord_cartesian(ylim=c(0,13)) + scale_y_co ntinuous(breaks = seq(0,13,by=4)) # z4$date = as.Date(z4$datiL) z5$date = as.Date(z5$datiL) # one fish at a time ggplot(z5, aes(x=tod, y=dtr), xlab="s") + geom_point(alpha=0.1) + geom_smooth(method="gam",formula=y~s(x),colour="black") + #geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam",formula=y~s(x)) +

4 55 coord_cartesian(ylim=c(0,100)) + scale_x_continuous(breaks = seq(6,24,by=6)) + scale_y_continuous(breaks = seq (30,90,by=60)) + theme_bw() + facet_wrap(~date) + opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) #opts(axis.ticks.x = theme_text(size = 20)) ## for looking at tod and lunarIndex interactions z8 = z0 [z0$day=="night",] # main effects by fish, change x and y for various relationships ggplot(z8, aes(x=lunarIndex, y=speed, group=tagName, colour=tagName, fill=tagName)) + facet_wrap(~hod ) + geom_point(alpha=0.1, cex=1) + geom_smooth() + #geom_smooth( method="gam",formula=y~s(x)) + #geom_smooth(aes(group=1),colour="black",lwd=1.1,method="gam",formula=y~s( x)) + theme_bw() + coord_cartesian(ylim=c(0,0.3)) + scale_y_continuous(breaks = seq(0,0.3,by=0.1)) ggplot(z0, aes(x=hod, y=spd, group=dod)) + #geom_point(alpha=0.01) + geom_smooth( method="gam",formula=y~s(x)) + #theme_bw() + coord_cart esian(ylim=c(0,50)) + scale_y_continuous(breaks = seq(0,50,by=5)) + facet_wrap(~dod) + opts(title="All Fish") ##### water flow direction...dirL ### ...use raw adcp data not what's attached to z0 ad=importADCPdata() ad$datiL = as.POSIXct(ad$datiL) ad $yr = as.factor(ifelse(ad$utime>1.21e9,2008,2007)) library(CircStats) bob = ad[!is.na(ad$dirL),] hist(ad$dirL) rose.diag((bob$dirL)*pi/180, bins=36, prop=3, main="distribution of dirL")

456 ggplot(ad,aes(x=dirL))+geom_bar(binwidth=10)+facet_wrap(~yr) + theme_bw() + coord_polar() + opts(title="Water Flow Direction") z0$btr = as.numeric(z0$btr) ggplot(z0, aes(x=dirL, y=btr)) plot(z0$dirL,z0$btr,pch=19,cex=0.1) bob=as.numeric(z0$btr) ggplot(z0,aes(x=as.numeric(btr)) )+geom_bar(binwidth=10)+facet_wrap(~tagName) + theme_bw() + coord_polar() + opts(title="Water Flow Direction") ggplot(z0,aes(x=dtr, y=as.numeric(btr))) + geom_point(alpha=0.01) + facet_wrap(~tagName) + theme_bw() plot(hexbin(z0$dtr, a s.numeric(z0$btr))) ggplot(z5, aes(x=easting, y=northing)) + coord_cartesian(ylim=c(500,800), xlim=c(8450,8750)) + #stat_binhex() + stat_binhex(bins=40) + #stat_binhex(binwidth=c(10,10)) #geom_point(x=md[[2]]$reefEN$easting, y=md[[2]]$reefEN$ northing) + geom_point(x=8500, y=550, alpha=10) # individual fish movement by day ggplot(z5, aes(x=tod,y=speed)) +# geom_point() + geom_smooth(method="gam",formula=y~s(x)) + #coord_cartesian(ylim=c(0,130)) + #scale_y_continuous(breaks = seq(0,130,by=40)) + facet_wrap(~dod) + theme_bw() + opts(title = z5$tag[1]) # altitude v tod ggplot(z0, aes(x=tod,y=altitude)) + geom_line() +

457 geom_smooth() + ## geom_smooth(method="gam",formula=y~s(x)) + ## theme_bw() head(z1) junk=head (z3[!is.na(z3$depth) ,],10000) plot(junk$datiL,junk$depth,type="b") # individual fish altitude ggplot(z0, aes(x=magL,y=altitude)) + geom_point(alpha=0.1) + geom_smooth() + ## geom_smooth(method="gam",formula=y~s(x)) + ## coord_cartesian(ylim=c(0,5 )) + theme_bw() # individual fish altitude by day ggplot(z2, aes(x=tod,y=altitude)) +# geom_line() + geom_smooth() + ## geom_smooth(method="gam",formula=y~s(x)) + ## coord_cartesian(ylim=c(0,5)) + facet_wrap(~dod) + theme_bw() zz = rbin d(z1,z2) plot(z1$datiL,z1$depth,type="l",col="red") points(z2$datiL,z2$depth,type="l",col="blue") points(z2$datiL,z2$waterDepth,type="l",col="green") plot(z2$datiL,z2$altitude,type="b",col="green") # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # chapter 4 ADCP.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ ###################################################################### ########## # In this file I look at 2009 ADCP data and c reate plots ###################################################################### ### # ADCP plots library(reshape) library(ggplot2) ###################################################################### ##########

458 # deployment info # md3 hb1 if43 2 009Jun01 # md4 sb1 of43 2009Jul10 # md5 sb2 oh41 2009Aug03 # md6 hb2 if41 2009Aug24 # md7 sb3 os43 2009Sep14 # md8 hb3 if42 2009Oct12 # md9 sb4 of43 2009Nov16 numexpt = 3:9 # look at a list of all the tags fo r(i in numexpt){ print(md[[i]]$deployment) print(md[[i]]$fishNames) } # import ADCP data ad = importADCPdata() # get just 2009 ad = ad[ad$utime > 1230768000, ] #1230768000 = 2009 Jan 1 midnight GMT ad$datiL = as.POSIXct(ad$datiL) # pick only some c olumns ad = ad[,c("datiL","tem","magL","dirL")] ad[,3] = ad[,3]/1000 ad1 = ad # add a deployment indicator, hb1, sb2, etc. and NA for all else # add a year indicator ad1$year = as.factor(as.POSIXlt(ad1$datiL)$year+1900) ad1$deployment = NA for (i in 1: length(numexpt)){ keepers = (ad1$datiL > md[[numexpt[i]]]$startUtime) & (ad1$datiL < md[[numexpt[i]]]$stopUtime) ad1[keepers,]$deployment = md[[numexpt[i]]]$deployment } # drop data not during a deployment, also drop sb1 ad1 = ad1[!is.na(ad1$depl oyment),] ad1 = ad1[ad1$deployment != "sb1",] # Now I want the lunar curve to cover the whole deployment even if there's no # ADCP data. To do this I'll add some lines with only dates in them. # ... let's see, which deployments need this...

# ...ADCP in water: 1 June-20 Aug. 24 Aug-1 Oct. 13 Oct-27 Oct. 18 Nov-26 Nov.
# ...hb1: 3-17 June. sb2: 4-20 Aug. hb2: 25 Aug-8 Sept. sb3: 16 Sept-1 Oct.
# ...hb3: 13-27 Oct. sb4: 18-28 Nov.
#
# So sb4 is missing two days at the end...add dates for this
# ...it ends on 2009-11-26 10:59:00 and should go every 10 min
# create empty data lines from the end to 2009-11-28 23:59:00
# 10 min = 600 sec for about 2.5 days or 360 10-min intervals
# ...also, add the year and deployment indicators
st = 1259251740 # utime = first made-up time
temp1 = data.frame(datiL = as.POSIXlt(seq(from=st, by=600, length.out=360),
  origin="1970-1-1", tz="EST5EDT"), year=2009, deployment="sb4")
# this is about right, batteries died on Saturday
# now combine these dates with ad1
ad1 = merge(ad1, temp1, by=c("datiL","year","deployment"), all=TRUE)

# now I want to add a sine wave showing the phase of the moon, but because
# 2009 ADCP data is not continuous the method I used for 2007/2008 won't work.
# I'll have to get the sine wave directly from the fractional day of the year
# FYI...from importALPSdata()
# read in the lunarIndex for each day of 2009
fn1 = "C:/zy/Telemetry/R summary files/lunar phases 2009.csv"
lunar2009 = read.table(file=fn1, header=T, sep=",",
  col.names=c("month","day","doy","lunarIndex"),
  colClasses=c("character",rep("numeric",3))
)
# now pick the day of each datum in d9 and determine the lunarIndex
# luckily the order of lunar2009 is the same as the order as yday
#
# I want the plot to be a sine wave between 0 and 1 with a period equal to 31
# days. I already have a lunar index associated with each day (1 to 31) so I
# need to convert that to radians (0 to 2pi). 31 lunarIndex = 2pi rads.
# find the lunarIndex from lunar2009 for each datum
ad1$doy = as.POSIXlt(ad1$datiL)$yday
ad1$lunarIndex = lunar2009$lunarIndex[ad1$doy]
# to smooth the curve, make it a fraction of how many seconds since midnight
ad1$lunarIndexFrac = ad1$lunarIndex + (as.POSIXlt(ad1$datiL)$hour*3600 +
  as.POSIXlt(ad1$datiL)$min*60)/(24*60*60)
ad1$moonCurve = (sin(ad1$lunarIndexFrac*(2*pi/30) - 0.5*pi) + 1)/2
plot(ad1$datiL, ad1$moonCurve, type="p")
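# A minimal, self-contained sketch (illustration only, not part of the analysis
# above) of the same idea without a lookup table: turn days elapsed since a known
# new moon into radians with a 29.5-day synodic period, so the cosine is -1 at
# new moon and +1 at full moon.
newMoon = as.POSIXct("2007-12-09", tz="EST5EDT")        # a known new moon
exDates = seq(newMoon, by="6 hours", length.out=200)    # example timestamps
daysSince = as.numeric(difftime(exDates, newMoon, units="days"))
exMoon = cos(2*pi*daysSince/29.5 + pi)                  # -1 at new, +1 at full
plot(exDates, exMoon, type="l")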

# rename the deployments
ad1$deployment[ ad1$deployment == "sb2" ] = "B Soft bottom 1"
ad1$deployment[ ad1$deployment == "sb3" ] = "D Soft bottom 2"
ad1$deployment[ ad1$deployment == "sb4" ] = "F Soft bottom 3"
ad1$deployment[ ad1$deployment == "hb1" ] = "A Hard bottom 1"
ad1$deployment[ ad1$deployment == "hb2" ] = "C Hard bottom 2"
ad1$deployment[ ad1$deployment == "hb3" ] = "E Hard bottom 3"

# drop what I don't want plotted
ad2 = ad1[,c("datiL","tem","deployment","moonCurve")]
ad3 = ad1[,c("datiL","tem","magL","dirL","deployment","moonCurve")]
# rename them pretty for the figure
names(ad2) = c("Date", "Temperature (Celsius)", "Deployment", "Lunar Phase")
names(ad3) = c("Date", "Temperature (Celsius)", "Current Speed (m/s)",
  "Current Direction", "Deployment", "Lunar Phase")
meltedad2 = melt(ad2, id.vars=c("Date", "Deployment"))
meltedad3 = melt(ad3, id.vars=c("Date", "Deployment"))

ggplot(meltedad3, aes(x=Date, y=value)) + geom_line() +
  facet_grid(variable~Deployment, space="fixed", scales="free") +
  scale_x_datetime(major = "7 days", format="%b %d") + # specifying these works around a bug
  theme_bw() +
  scale_y_continuous(' ') + # instead of...opts(axis.title.y = theme_text(colour = 'white')) +
  opts(axis.title.x = theme_text(size=20)) +
  #opts(axis.text.x = theme_text(size = 15)) +
  opts(axis.text.y = theme_text(size = 15))

# aaa - Here's Ben's work on this figure. See the original R code (freespace.R) in an email
g1 <- ggplot(meltedad, aes(x=Date, y=value)) + geom_line() + theme_bw() +
  scale_x_datetime(major="14 days")
g1 + facet_grid(variable~year, scales="free", space="free")
g1 + facet_grid(variable~year, scales="free", space="fixed")
library(gridExtra)
g2 <- ggplot(subset(meltedad, year==2007),
  aes(x=Date, y=value)) + geom_line() + theme_bw() +
  scale_x_datetime(major="14 days") +
  facet_grid(variable~year, scale="free")

461 g3 < g2 %+% subset(meltedad,year==2008) ## suppress labels g2B < g2+opts(strip.background=theme_blank(), strip.text.x=theme_blank(),strip.text.y=theme_blank()) g3B < g3 + opts(axis.text.y=theme_blank(),axis.title.y=theme_blank()) n2007 < length(d2007) n2008 < length(d2008) t ot < n2007+n2008 grid.show.layout(grid.layout(1,2,widths=unit(c(n2007/tot,n2008/tot),"null"))) grid.arrange(g2B,g3B,ncol=2,widths=unit(c(n2007/tot,n2008/tot),"null")) bbb End Ben's work on this figure ################################################### ################### ########## # Figure 3. rose plots of current directions ad = importADCPdata() # get just 2007 and 2008 ad = ad[ad$utime < 1230768000, ] #1230768000 = 2009 Jan 1 midnight GMT ad$datiL = as.POSIXct(ad$datiL) # pick just what I want new ad = ad[,c(3,14)] # add a year indicator #newad$year = as.factor(ifelse( # newad$datiL < as.POSIXct("2008 05 01", origin="1970 1 1", tz="EST5EDT"), # 2007,2008)) # now add a column just for the figure captions, since I don't know how to change them manu ally newad$deployment = as.factor(ifelse( newad$datiL < as.POSIXct("2008 05 01", origin="1970 1 1", tz="EST5EDT"), "2007 Deployment","2008 Deployment")) ggplot(newad,aes(x=dirL))+ geom_bar(binwidth=10)+ facet_wrap(~deployment) + theme_bw() + coord_polar(start= pi/20) + # I don't know why 0 isn't at top, but 'start' to fix it opts(axis.text.x = theme_text(size = 12)) + #opts(title="Water Flow Direction") +

  # scale_y_continuous(' ') + # instead of ...opts(axis.title.y = theme_text(colour = 'white')) +
  scale_x_continuous(' ')

ggplot(newad, aes(x=dirL)) + stat_bin(binwidth=10, aes(y=19*..density..)) +
  scale_x_continuous(limits=c(0,360), breaks=seq(0,360,by=45)) +
  #geom_bar(binwidth=10)+
  facet_wrap(~deployment) +
  theme_bw() +
  coord_polar() + # I don't know why 0 isn't at top, but 'start' to fix it
  opts(axis.text.x = theme_text(size = 12)) +
  #opts(title="Water Flow Direction") +
  # scale_y_continuous(' ') + # instead of ...opts(axis.title.y = theme_text(colour = 'white')) +
  labs(x="", y="Proportion")

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# chapter 4 movement.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# In this chapter I use 2009 experimental data
library(ggplot2)
library(mgcv)
library(plotrix) # for multhist
source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")

# read in tagfm, z0, results, and
# all depData files which hold tag9, tagf9, tagfm9
# depList holds the good tagfm9 files
load("C:/zy/Telemetry/R summary files/z9 2011Jun25.rdata")
load("C:/zy/Telemetry/R summary files/results9 2011Jun25.rdata")
load("C:/zy/Telemetry/R summary files/depList 2011June25.rdata") # all tagfm data

## add some things to z9 and results9
################################################################################
# paperID  tagName  deployment
#  1       f26      sb2
#  2       f28      sb2
#  3       f29      sb2
#  4       f30      sb2
#  5       f31      sb2
#  6       f39      sb3
#  7       f40      sb3
#  8       f42      sb3
#  9       f43      sb3
# 10       f54      sb4
# 11       f56      sb4
# 12       f57      sb4
# 13       f59      sb4
# 14       f62100   sb4
# 15       f13      hb1
# 16       f14      hb1
# 17       f16      hb1
# 18       f33      hb2
# 19       f34      hb2
# 20       f35      hb2
# 21       f36      hb2
# 22       f37      hb2
# 23       f38      hb2
# 24       f47      hb3
# 25       f48      hb3
# 26       f51      hb3
# 27       f52      hb3

# add a column to z9 for paper ID so they're labeled correctly in figures
tempID = 1:27
tempTagName = c("f26","f28","f29","f30","f31","f39","f40","f42","f43","f54",
  "f56","f57","f59","f62100","f13","f14","f16","f33","f34","f35","f36","f37",
  "f38","f47","f48","f51","f52")
z9$ID = 7777777
for (i in 1:length(tempID)){
  z9$ID[ z9$tagName == tempTagName[i] ] = tempID[i]
}
z9$ID = as.factor(z9$ID)

# add a column for a pretty treatment name
z9$ttmt = 7777777
z9$ttmt[z9$treatment == "sb"] = "Sand bottom Landscapes"
z9$ttmt[z9$treatment == "hb"] = "Hard bottom Landscapes"
z9$ttmt = as.factor(z9$ttmt)
z9$Date = as.Date(z9$datiL)
results9$ttmt = 7777777

464 results9$ttm t[grepl("sb", results9$deployment)] = "Sand bottom Landscapes" results9$ttmt[grepl("hb", results9$deployment)] = "Hard bottom Landscapes" results9$ttmt = as.factor(results9$ttmt) results9$ID = c(15:17, 1:5, 18:23, 6:9, 24:27, 10:14) ##################### ################################################# ########## ###################################################################### ########## ###################################################################### ########## ################################## #################################### ########## # gather all the data # md3 hb1 if43 2009Jun01 # md4 sb1 of43 2009Jul10 # md5 sb2 oh41 2009Aug03 # md6 hb2 if41 2009Aug24 # md7 sb3 os43 2009Sep14 # md8 hb3 if42 2 009Oct12 # md9 sb4 of43 2009Nov16 numexpt = 3:9 # look at a list of all the tags for(i in numexpt){ print(md[[i]]$deployment) print(md[[i]]$fishNames) } # get fish biometric data. This contains data recorded in the field on tagging # day and any recaptures. It also contains the otolith data. biometrics = importBiometricData() ###################################################################### ########### # some plots Deb suggested for the 2009 collected fish # pick only relevant data b1 = biometrics # only data with MM numbers...those caught and kept and otoliths extracted # ...and from one of the experimental reefs b1 = b1[(!is.na(b1$mmNumberD) & !is.na(b1$HBSB)),] # only keep some information about these b1 = b1[, c('reefID1','HBSB','replicate','weight1','girth1','TL1','FL1','tagged',

  'tagID','mmNumber',
  'recoveredTagID','year2','reefID2','TL2','FL2','weight2','girth2',
  #'lOto', 'lOtoWeightUseable','lOtoLengthAUseable','lOtoLengthBUseable',
  #'lOtoLengthCUseable',
  #'rOto','rOtoWeightUseable','rOtoLengthAUseable','rOtoLengthBUseable',
  #'rOtoLengthCUseable',
  'monthD','resolvedAgeclass','otoRadius', 'ultimateAnnulus',
  'penultimateAnnulus','growthIncrement') ]

# for learning aov and lm keep even fewer data
b1 = b1[,c('HBSB','TL2','weight2', 'otoRadius', 'ultimateAnnulus',
  'penultimateAnnulus','growthIncrement') ]
names(b1) = c("treatment","tl","weight","otoRadius", "ultimateAnnulus",
  "penultimateAnnulus","growthIncrement")

# back-calculate length using the two equations in Deb and Daryl's paper
# ...each treatment should be done separately
#
# la = [ (a + b*ra)/(a + b*rc) ] * lc
# lc = a + b*rc
#
# la = back-calculated length to opaque zone 'a'
# a  = intercept from the linear regression of total length as a function of
#      otolith radius
# b  = slope from same regression
# ra = otolith radius to opaque zone 'a'
# rc = total otolith radius at time of capture
# lc = total length at time of capture

# Sand bottom treatment
sbreg1 = lm(TL2 ~ otoRadius, sb)
plot(sb$otoRadius, sb$TL2, pch=19)
abline(a=sbreg1$coef[1], b=sbreg1$coef[2])
# hard bottom treatment
hbreg1 = lm(TL2 ~ otoRadius, hb)
plot(hb$otoRadius, hb$TL2, pch=19)
abline(a=hbreg1$coef[1], b=hbreg1$coef[2])
# plot together
plot(sb$otoRadius, sb$TL2, pch=2, xlim=c(0.5,1.5), ylim=c(300,800))
abline(a=sbreg1$coef[1], b=sbreg1$coef[2], lty=2)
points(hb$otoRadius, hb$TL2, pch=19)
abline(a=hbreg1$coef[1], b=hbreg1$coef[2])
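# A minimal, self-contained sketch (illustration only; 'backCalcLength' is a
# hypothetical helper, not a function used elsewhere in this appendix) that wraps
# the same back-calculation so the regression coefficients and the formula
# la = [(a + b*ra)/(a + b*rc)] * lc stay in one place per treatment:
backCalcLength = function(dat){
  # dat needs columns TL2 (length at capture), otoRadius (total otolith radius),
  # and ultimateAnnulus (radius to the last opaque zone)
  fit = lm(TL2 ~ otoRadius, data=dat)
  a = coef(fit)[1]
  b = coef(fit)[2]
  ( (a + b*dat$ultimateAnnulus) / (a + b*dat$otoRadius) ) * dat$TL2
}
# e.g. hb$la = backCalcLength(hb); sb$la = backCalcLength(sb)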

# compare regression lines using
# ...following root/fruit/grazing example in http://www.scribd.com/doc/50843947/ANCOVA in R
#attach(b1)
# using lm()
ancova = lm(otoRadius ~ HBSB * TL2)
summary(ancova)
# this shows that TL2 has an effect on otoRadius, but there is no indication of
# difference in the slope of this relationship between the two treatments
anova(ancova)

################################################################################
# some summaries about tagged fish, results9
sbr2 = results9[results9$deployment == "sb2",]
sbr3 = results9[results9$deployment == "sb3",]
sbr4 = results9[results9$deployment == "sb4",]
hbr1 = results9[results9$deployment == "hb1",]
hbr2 = results9[results9$deployment == "hb2",]
hbr3 = results9[results9$deployment == "hb3",]

################################################################################
## look at distributions of dtr, speed, interval
# pick one and this gives you depData, then make those tag9, tagf9, and tagfm9 normal
load("C:/zy/Telemetry/R summary files/Experiment tagfm9 and figs/dep_hb1 2011June08.rdata")
tag = depData$tag9
tagf = depData$tagf9
tagfm = depData$tagfm9

################################################################################
# histograms of tagged and non-tagged fish observed at time of tagging
# get fish biometric data. This contains data recorded in the field on tagging
# day and any recaptures. It also contains the otolith data.
b1 = importBiometricData()
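# A minimal sketch (illustration only; made-up data, not the biometrics above) of
# how the interaction term in the ANCOVA-style lm() is what tests for a slope
# difference between treatments, while the main effect compares intercepts:
set.seed(2)
exDat = data.frame(trt = rep(c("HB","SB"), each=20), x = runif(40, 0.5, 1.5))
exDat$y = 100 + 400*exDat$x + rnorm(40, 0, 20)   # same slope in both groups
fit = lm(y ~ trt * x, data=exDat)
summary(fit)   # the trt:x row is the slope-difference test
anova(fit)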

467 # pick only relevant data, 2009 fish on the reef at time of tagging... # ...b1 has those fish plus 2007/8 and otolith collection fish at end... # ...to separate...luckily sizeRange is not had for otolith collection fish b2 = b1[((b1$year1 == "2009")|(b1$year1 == "2010"))& !is.na(b1$sizeRange), 1:14] # make b1$sizeRange numeric b2$sizeRange = as.numeric(levels(b2$sizeRange))[b2$size Range] # now split this by treatment b2sb = b2[b2$HBSB == "SB",] b2hb = b2[b2$HBSB == "HB",] # pick out only the tagged fish on 6 good reefs and split by treatment b3 = b2[b2$tagged == "yes",] b3 = b3[!((b3$HBSB == "SB") & (b3$replicate == 1)),] b3sb = b 3[b3$HBSB == "SB",] b3hb = b3[b3$HBSB == "HB",] # histogram of all fish observed on reefs at time of tagging par(mfrow=c(1,2)) par(mar=c(4.5,4,1,1)+0.1) bob = multhist(list(b2sb$sizeRange,b2hb$sizeRange),freq=T,breaks=seq(10,70,by=10), cex.axis=1, cex.n ames=1.5, space=c(0,0.5), axes=F, ylim=c(0,80), #legend.text=c("Gag in sand bottom landscapes","Gag in hard bottom landscapes"), names.arg=c("20 30","30 40","40 50","50 60","60 70","70 80") ) axis(2,seq(0,80,by=5), las=1, cex.axis=1.5) mtext("Total L ength Category (cm) ",1,3, cex=1.5) mtext("Frequency",side=2,line=2.7,cex=1.7) text(x=0.5,y=78,labels="a)",cex=1.5) # histogram of all fish tagged on 6 used reefs at time of tagging par(mar=c(4.5,4,1,1)+0.1) sam = multhist(list(b3sb$TL1,b3hb$TL1),freq= T,breaks=seq(300,800,by=20), cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F, ylim=c(0,4), legend.text=c("Gag in sand bottom landscapes","Gag in hard bottom landscapes"), names.arg=tail(sam$breaks, 1) ) axis(2,seq(0,8,by=1), las=1, cex.axis=1.5) mtext("Total Length (mm) ",1,3, cex=1.5) mtext("Frequency",side=2,line=2.7,cex=1.7) text(x=0.5,y=3.9,labels="b)",cex=1.5) ###################################################################### #######

# t-tests of tagged and non-tagged fish observed at time of tagging
#
# Is there a difference in the number of total fish seen on reefs?
sbr1 = b2[((b2$HBSB == "SB") & (b2$replicate == 1)),]
sbr2 = b2[((b2$HBSB == "SB") & (b2$replicate == 2)),]
sbr3 = b2[((b2$HBSB == "SB") & (b2$replicate == 3)),]
sbr4 = b2[((b2$HBSB == "SB") & (b2$replicate == 4)),]
hbr1 = b2[((b2$HBSB == "HB") & (b2$replicate == 1)),]
hbr2 = b2[((b2$HBSB == "HB") & (b2$replicate == 2)),]
hbr3 = b2[((b2$HBSB == "HB") & (b2$replicate == 3)),]
sbcounts = c(nrow(sbr1), nrow(sbr2), nrow(sbr3), nrow(sbr4))
hbcounts = c(nrow(hbr1), nrow(hbr2), nrow(hbr3))
# test for equal variance
var.test(sbcounts, hbcounts) # variances are equal
# t-test
t.test(sbcounts, y=hbcounts, alternative="t", var.equal=TRUE)
# ...so were not different total numbers

# using the lower bound of each size class, are there differences in the mean
# size of all fish observed on reefs
sb1m = mean(sbr1$sizeRange)
sb2m = mean(sbr2$sizeRange)
sb3m = mean(sbr3$sizeRange)
sb4m = mean(sbr4$sizeRange)
hb1m = mean(hbr1$sizeRange)
hb2m = mean(hbr2$sizeRange)
hb3m = mean(hbr3$sizeRange)
sbmeans = c(sb1m, sb2m, sb3m, sb4m)
hbmeans = c(hb1m, hb2m, hb3m)
# test for equal variance
var.test(sbmeans, hbmeans) # variances are equal
# t-test
t.test(sbmeans, y=hbmeans, alternative="t", var.equal=TRUE)
# ...so were not different total lengths

# is there a difference in the sizes of tagged individuals
# ...pick only tagged
s1 = sbr1[sbr1$tagged == "yes",]
s2 = sbr2[sbr2$tagged == "yes",]
s3 = sbr3[sbr3$tagged == "yes",]
s4 = sbr4[sbr4$tagged == "yes",]
h1 = hbr1[hbr1$tagged == "yes",]

469 h2 = hbr2[hbr2$tagged == "yes",] h3 = hbr3[hbr3$tagged == "yes",] s1m = mean(s1$TL1) s2m = mean(s2$TL1) s3m = mean(s3$TL1) s4m = mean(s4$TL1) h1m = mean(h1$TL1) h2m = mean(h2$TL1) h3m = mean(h3$TL1) sbmeans = c(s1m,s2m, s 3m, s4m) hbmeans = c(h1m, h2m, h3m) # test for equal variance var.test(sbmeans, hbmeans) # variances are equal # t test t.test(sbmeans, y=hbmeans, alternative="t", var.equal=TRUE) ...so were not different total lengths # what are the relationships betwe en LT, LF, and weight of all tagged individuals aaa an example from chapter 3 # LINEAR REGRESSION????? # TL FL tlfl = lm(formula=results$TL ~ results$FL) tlfl$coef summary(tlfl) fltl = lm(formula=results$FL ~ results$TL) fltl$coef # W TL # this is the s tandard relationship # W = a TL^b or log(W) = log (a) + b Log(TL) logwtl = lm(formula = log(results$weight) ~ log(results$TL)) logwtl$coef summary(logwtl) plot(log(results$TL), log(results$weight)) abline (a = 22.124, b=3.607) plot(results$FL, resul ts$TL) abline(a=tlfl$coef[[1]], b=tlfl$coef[[2]]) abline(tlfl) summary(tlfl)$r.squared

tlfl$r.squared
# bbb

################################################################################
# look at tagged gag
#
sbres9 = results9[grepl("sb", results9$deployment),]
hbres9 = results9[grepl("hb", results9$deployment),]
sb2res9 = results9[results9$deployment == "sb2",]
sb3res9 = results9[results9$deployment == "sb3",]
sb4res9 = results9[results9$deployment == "sb4",]
hb1res9 = results9[results9$deployment == "hb1",]
hb2res9 = results9[results9$deployment == "hb2",]
hb3res9 = results9[results9$deployment == "hb3",]

# kde ranges
range(sbres9$kde50)
range(sbres9$kde95)
range(hbres9$kde50)
range(hbres9$kde95)

# mean 50% kde on each reef and t-tests
sb2kde50 = mean(sb2res9$kde50)
sb3kde50 = mean(sb3res9$kde50)
sb4kde50 = mean(sb4res9$kde50)
hb1kde50 = mean(hb1res9$kde50)
hb2kde50 = mean(hb2res9$kde50)
hb3kde50 = mean(hb3res9$kde50)
sbmeans = c(sb2kde50, sb3kde50, sb4kde50)
hbmeans = c(hb1kde50, hb2kde50, hb3kde50)
sbmean = mean(sbmeans)
hbmean = mean(hbmeans)
sbsd = sd(sbmeans)
hbsd = sd(hbmeans)
# test for equal variance
var.test(sbmeans, hbmeans) # variances are unequal
# t-test
t.test(sbmeans, y=hbmeans, alternative="l", var.equal=F)
# ...were different
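# A minimal sketch (illustration only; 'reefMeans' is a made-up data frame, not
# an object used elsewhere) of the same reef-level comparison written with the
# formula interface, which keeps the reef means and their treatment labels together:
reefMeans = data.frame(
  treatment = c("sb","sb","sb","hb","hb","hb"),
  kde50 = c(60, 48, 55, 430, 390, 415)   # made-up reef-mean 50% KDE areas (m^2)
)
var.test(kde50 ~ treatment, data=reefMeans)   # check the equal-variance assumption first
# note: with the formula interface the groups follow factor-level order ("hb" then "sb"),
# so a one-sided alternative must be stated in that order
t.test(kde50 ~ treatment, data=reefMeans, alternative="greater", var.equal=FALSE)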

471 # mean kde on each reef and t.tests # size of all fish observed on reefs sb2kde95 = mean(sb2res9$kde95) sb3kde95 = mean(sb3res9$kde95) sb4kde95 = mean(sb4res9$kde95) hb1kde95 = mean(hb1res9$kde95) hb2kde95 = mean(hb2res9$kde95) hb 3kde95 = mean(hb3res9$kde95) sbmeans = c(sb2kde95,sb3kde95,sb4kde95) hbmeans = c(hb1kde95,hb2kde95,hb3kde95) # test for equal variance var.test(sbmeans, hbmeans) # variances are nequal # t test t.test(sbmeans, y=hbmeans, alternative="l", var.equal=T) .. were different # mean median speed on each reef and t.tests sb2spd = mean(sb2res9$medianSpeed) sb3spd = mean(sb3res9$medianSpeed) sb4spd = mean(sb4res9$medianSpeed) hb1spd = mean(hb1res9$medianSpeed) hb2spd = mean(hb2res9$medianSpeed) hb3spd = mean(hb 3res9$medianSpeed) sbmeans = c(sb2spd,sb3spd,sb4spd) hbmeans = c(hb1spd,hb2spd,hb3spd) # test for equal variance var.test(sbmeans, hbmeans) # variances are equal # t test t.test(sbmeans, y=hbmeans, alternative="t", var.equal=T) ...were not different # mean median dtr on each reef and t.tests sb2dtr = mean(sb2res9$medianDtr) sb3dtr = mean(sb3res9$medianDtr) sb4dtr = mean(sb4res9$medianDtr) hb1dtr = mean(hb1res9$medianDtr) hb2dtr = mean(hb2res9$medianDtr) hb3dtr = mean(hb3res9$medianDtr) sbmeans = c(sb2 dtr,sb3dtr,sb4dtr) hbmeans = c(hb1dtr,hb2dtr,hb3dtr)

472 # test for equal variance var.test(sbmeans, hbmeans) # variances are unequal # t test t.test(sbmeans, y=hbmeans, alternative="l", var.equal=F) ...were different ###################################### ################################ ##### # a pairs plot of tagged individuals pairs(results9[c(3:5,11:14)], pch=21, bg=c("red","blue")[unclass(results9$ttmt)], #cex.labels=1, #labels=c("Weight","Total Length", "Fork Length", "Median DFR","Median Speed", "50% KDE", "95% KDE") labels=c("W","LT", "LF", "DFR","SPD", "50%", "95%") ) ###################################################################### ###### # plot to display 50% and 95% KDE ... like the poster figure # x axis positio ns for each reef xpps = c(2,3,4,6,7,8) #MAKE SURE TO STRETCH THE WINDOW OUT TO GET ALL THE X AXIS WORDS # first plot the individual fish HR50 estimates... par(mar=c(5, 6, 1, 2) + 0.1) plot(1,1, type="n", las=1, bty="l", cex.lab=1.5, cex.axis=1.5, xlim=c (1.5,8.5), xaxt="n", yaxt="n", xlab="", ylab="", ylim=c(0,1000)) axis(1, at = c(xpps), tick = TRUE, cex.axis=1.1, labels=c("Deployment 1","Deployment 2","Deployment 3", "Deployment 2","Deployment 3","Deployment 4") ) axis(2,at=seq(0,1000,by=10 0),las=1,cex.axis=1.5) # add the x axis category labels mtext(text="Hard bottom Landscapes", side=1, line=3, at = 3, cex=1.5) mtext(text="Sand bottom Landscapes", side=1, line=3, at = 7, cex=1.5) mtext("50% KDE Area ( )", side=2, line=4.4, cex=1.7) mtext(expression(m^2), side=2, line=4.5, adj=0.685, cex=1.7) # add the points points(rep(xpps[1], nrow(hb1res9)), hb1res9$kde50, pch=19)


points(rep(xpps[2], nrow(hb2res9)), hb2res9$kde50, pch=19)
points(rep(xpps[3], nrow(hb3res9)), hb3res9$kde50, pch=19)
points(rep(xpps[4], nrow(sb2res9)), sb2res9$kde50, pch=19)
points(rep(xpps[5], nrow(sb3res9)), sb3res9$kde50, pch=19)
points(rep(xpps[6], nrow(sb4res9)), sb4res9$kde50, pch=19)
# now add the mean values of all fish on a reef
points(x=c(xpps[1]-0.2,xpps[1]+0.2), rep(hb1kde50,2), col="red", type="l", lwd=3)
points(x=c(xpps[2]-0.2,xpps[2]+0.2), rep(hb2kde50,2), col="red", type="l", lwd=3)
points(x=c(xpps[3]-0.2,xpps[3]+0.2), rep(hb3kde50,2), col="red", type="l", lwd=3)
points(x=c(xpps[4]-0.2,xpps[4]+0.2), rep(sb2kde50,2), col="red", type="l", lwd=3)
points(x=c(xpps[5]-0.2,xpps[5]+0.2), rep(sb3kde50,2), col="red", type="l", lwd=3)
points(x=c(xpps[6]-0.2,xpps[6]+0.2), rep(sb4kde50,2), col="red", type="l", lwd=3)
# now add the mean value of the reef means (n=3) for each treatment
points(x=c(xpps[1]-0.5,xpps[3]+0.5), rep(hbmean,2), col="blue", type="l", lwd=3)
points(x=c(xpps[4]-0.5,xpps[6]+0.5), rep(sbmean,2), col="blue", type="l", lwd=3)
# now boxes showing standard deviations
points(x=c(xpps[1]-0.3,xpps[3]+0.3,xpps[3]+0.3,xpps[1]-0.3,xpps[1]-0.3),
  y=c(hbmean+hbsd,hbmean+hbsd,hbmean-hbsd,hbmean-hbsd,hbmean+hbsd),
  type="l", col="black")
points(x=c(xpps[4]-0.3,xpps[6]+0.3,xpps[6]+0.3,xpps[4]-0.3,xpps[4]-0.3),
  y=c(sbmean+sbsd,sbmean+sbsd,sbmean-sbsd,sbmean-sbsd,sbmean+sbsd),
  type="l", col="black")
# now some text reporting means
# hb label
text(4.5,hbmean, expression(paste("412 m"^2)), cex=1.2, pos=4, col="blue")
text(4.5,hbmean-50, "(n=3)", cex=1.2, pos=4, col="blue")
# sb label
text(4.9,sbmean, expression(paste("54 m"^2)), cex=1.2, pos=4, col="blue")
text(4.9,sbmean-50, "(n=3)", cex=1.2, pos=4, col="blue")
# add n's for each column
text(xpps[1],100, paste("n=",nrow(hb1res9),sep=""),cex=1.2,col="red")
text(xpps[2],100, paste("n=",nrow(hb2res9),sep=""),cex=1.2,col="red")
text(xpps[3],100, paste("n=",nrow(hb3res9),sep=""),cex=1.2,col="red")
text(xpps[4],120, paste("n=",nrow(sb2res9),sep=""),cex=1.2,col="red")
text(xpps[5],120, paste("n=",nrow(sb3res9),sep=""),cex=1.2,col="red")
text(xpps[6],260, paste("n=",nrow(sb4res9),sep=""),cex=1.2,col="red")
# a legend
xref = 5.5; yref=150; boxWidth=3;
points(xref+0.8, yref+800, pch=19)
text(xref+1, yref+800, "Single fish estimate", pos=4)
points(c(xref+0.6,xref+1), c(yref+740,yref+740), col="red", type="l", lwd=3)
text(xref+1, yref+740, "Mean of all fish on one reef", pos=4)
points(c(xref+0.6,xref+1), c(yref+680,yref+680), col="blue", type="l", lwd=3)
text(xref+1, yref+680, "Mean of 3 reefs", pos=4)


points(c(xref+0.6,xref+1,xref+1,xref+0.6,xref+0.6),
  c(yref+620,yref+620,yref+620-80,yref+620-80,yref+620),
  type="l", col="black", lwd=1)
text(xref+1, yref+620-40, "Standard deviation of 3 reefs", pos=4)
points(c(xref+0.4, xref+boxWidth, xref+boxWidth, xref+0.4, xref+0.4),
  c(yref+840,yref+840,yref+840-340,yref+840-340,yref+840),
  col="black", type="l", lwd=2)
################################################################################
# histograms of DTR, SPD
# dtr histogram
par(mfrow=c(2,4))
bks = c(20,20,20,30,30)
for (i in 1:length(tagfm)){
  cTag=i; #cmd=md[[2]]
  hist(tagfm[[cTag]]$data$dtr, freq=T, main=tagfm[[cTag]]$tagName, #breaks=bks[i],
    xlim=c(0,100))
  abline(v=median(tagfm[[cTag]]$data$dtr), col="red")
}
hist(z9$dtr, breaks=90, freq=F, #xlim=c(0,100),
  ylim=c(0,0.05), col="grey", las=1, main="",
  xlab="Distance to the Reef (m)")
abline(v=median(z9$dtr),lwd=4)
# speed histogram
par(mfrow=c(2,4))
for (i in 1:length(tagfm)){
  cTag=i; #cmd=md[[2]]
  hist(tagfm[[cTag]]$data$speed, breaks=30, freq=F, main=tagfm[[cTag]]$tagName)
  abline(v=mean(tagfm[[cTag]]$data$speed), col="red")
}
hist(z9$speed, breaks=30, freq=F, col="grey", las=1, main="",
  xlab="Gag Speed (m/s)")
abline(v=median(z9$speed),col="black",lwd=4)
# I want histograms of all good fish dtr and spd
# gather dtr of all good fish in all deployments
alldtrsb = c(); alldtrhb = c(); allspeedsb = c(); allspeedhb = c()


475 for (i in 1:length(depList)){ for (j in 1:length(depList[[i]])){ if(grepl("hb",depList[[i]][[j]]$deployment)){ alldtrhb = c( alldtrhb,depList[[i]][[j]]$data$dtr) allspeedhb = c(allspeedhb,depList[[i]][[j]]$data$speed) } else if (grepl("sb",depList[[i]][[j]]$deployment)){ alldtrsb = c(alldtrsb,depList[[i]][[j]]$data$dtr) allspeedsb = c(allspeedsb,depL ist[[i]][[j]]$data$speed) } else { print("The deployment isn't HB or SB") } } } # histograms par(mfrow=c(1,2)) par(mar=c(5,5.5,2,0)) bks = seq(0,130,by=2) bob = hist(alldtrsb,freq=F,col="grey",las=1,main="Sand bottom Landscapes", cex.lab=1.5, ce x.axis=1.5, cex.main=2, breaks=bks, yaxt="n", xlim=c(0,130), ylim=c(0,0.16), xlab="Distance From the Reef (m)", ylab="") abline(v=median(alldtrsb),col="black",lwd=4) # median = 4.23m mtext("Density",side=2,line=4,cex=1.7) text(10,0.155,labels="a)",cex= 1.5) axis(1,seq(0,120,by=10), cex.axis=1.5) axis(2,seq(0,0.16,by=0.02), las=1, cex.axis=1.5) par(mar=c(5,4.5,2,1)) hist(alldtrhb,freq=F,col="grey",las=1,main="Hard bottom Landscapes", cex.lab=1.5, cex.axis=1.5, cex.main=2, breaks=bks, yaxt="n", xlim=c( 0,130), ylim=c(0,0.05), xlab="Distance From the Reef (m)", ylab="") abline(v=median(alldtrhb),col="black",lwd=4) # median = 17.69 # all fish combined =7.56m mtext("Density",side=2,line=4,cex=1.7) text(1,0.049,labels="b)",cex=1.5) axis(1,seq(0,120,by=1 0), cex.axis=1.5) axis(2,seq(0,0.05,by=0.01), las=1, cex.axis=1.5) ### speed par(mfrow=c(1,2)) par(mar=c(5,5.5,2,0)) bks = seq(0,0.8,by=0.01) hist(allspeedsb,freq=F,col="grey",las=1,main="Sand bottom Landscapes", cex.lab=1.5, cex.axis=1.5, cex.main=2 #xlim=c(0,130), breaks=bks,


476 xlab="Travel Speed (m/s)", ylab="") abline(v=median(allspeedsb),col="black",lwd=4) # median = 0.146475 mtext("Density",side=2,line=3,cex=1.7) text(0.04,3.9,labels="a)",cex=1.5) axis(1,seq(0,0.8,by=0.1), cex.axis=1.5) par(mar=c(5,4.5,2,1)) hist(allspeedhb,freq=F,col="grey",las=1,main="Hard bottom Landscapes", cex.lab=1.5, cex.axis=1.5, cex.main=2, #xlim=c(0,130), ylim=c(0,0.04), breaks=bks, xlab="Travel Speed (m/s)", ylab="") abline(v=median(allspeedhb),col="b lack",lwd=4) # median = 0.14846 # all fish combined =0.14727m mtext("Density",side=2,line=3,cex=1.7) text(0.04,4.9,labels="b)",cex=1.5) axis(1,seq(0,0.8,by=0.1), cex.axis=1.5) ###################################################################### ###### ## # Now I want EN plots...I've looked at all of them and picked a few # to go in the paper #hb1 f16 depList[[1]][[3]] md[[3]] good hb centered on the reef paper ID 17 #sb2 f26 depList[[2]][[1]] md[[5]] good sb centered on the reef ID 1 #hb2 f38 depList[[3]][[6]] md[[6]] good hb wider off the reef ID 23 #sb4 f57 depList[[6]][[3]] md[[9]] good sb wider off reef ID 12 # BUT in ggplot I can't show the reef location, so instead I'll shift the E N # numbers so that the reef is at (0,0) and make sure each one shows the same scale # # select things that go together for one tag cTag=c(); cReef=list(); prettyNames = c(); cTag[1] = "f16"; cReef[[1]] = md[[3]]$reefEN; prettyNames[1]="Fish ID 17. Hard bottom Landscape "; cTag[2] = "f26"; cReef[[2]] = md[[5]]$reefEN; prettyNames[2]="Fish ID 1. Sand bottom Landscape"; cTag[3] = "f38"; cReef[[3]] = md[[6]]$reefEN; prettyNames[3]="Fish ID 23. Hard bottom Landscape"; cTag[4] = "f57"; cReef[[4]] = md[[9]]$reefEN; prettyNames[ 4]="Fish ID 12. Sand bottom Landscape"; # pick these fish from z9 and shift EN to center on the right reef z1 = z9[(z9$tagName==cTag[1] | z9$tagName==cTag[2] | z9$tagName==cTag[3] | z9$tagName==cTag[4]),]


z2 = z1
z2$title = "z"
# center each one on its reef and make the names pretty
for(i in 1:length(cTag)){
  z2[z2$tagName == cTag[i],]$easting = z2[z2$tagName == cTag[i],]$easting - cReef[[i]]$easting
  z2[z2$tagName == cTag[i],]$northing = z2[z2$tagName == cTag[i],]$northing - cReef[[i]]$northing
  z2[z2$tagName == cTag[i],]$title = prettyNames[i]
}
# dissertation stuff...I use these in 'plots for dissertation talk.r'
z38 = z1[z1$tagName==cTag[3],] # I see that sometimes I want z1 and sometimes z2
z26 = z1[z1$tagName==cTag[2],]
# look at daily EN plots of individual fish
ggplot(z2, aes(x=easting, y=northing)) +
  geom_point(alpha=0.03) +
  #geom_path(data=dC3,aes(group=f,col="red"))
  theme_bw() +
  facet_wrap(~title) +
  coord_cartesian(xlim=c(-100,100), ylim=c(-100,100)) +
  scale_y_continuous("Northing (m)", breaks = seq(-50,100,by=50)) +
  scale_x_continuous("Easting (m)", breaks = seq(-50,100,by=50)) +
  opts(axis.title.x = theme_text(size = 20)) +
  opts(axis.title.y = theme_text(size=20, angle=90)) +
  opts(axis.text.x = theme_text(size = 15)) +
  opts(axis.text.y = theme_text(size = 15))
# add a) etc
grid.text("a)", x = unit(0.15, "npc"), y = unit(0.93, "npc"),
  hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified
grid.text("b)", x = unit(0.56, "npc"), y = unit(0.93, "npc"),
  hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified
grid.text("c)", x = unit(0.15, "npc"), y = unit(0.5, "npc"),
  hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified
grid.text("d)", x = unit(0.56, "npc"), y = unit(0.5, "npc"),
  hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified
################################################################################
################################################################################


478 # some time series i=27 z3 = z9[z9$tagName==results9$tagName[i],] # full time series of individuals dtr and alt ggplo t(z3, aes(x=tod, y=dtr))+#, group=tagName, colour=tagName, fill=tagName)) + geom_point(size=1, alpha=0.2) + #geom_smooth(method="gam",formula=y~s(x)) + #geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam",formula=y~s(x)) + coord_cartesian (xlim=c(0,24), ylim=c(0,130)) + theme_bw() + facet_wrap(~Date) + scale_y_continuous("Distance From the Reef (m)", breaks=seq(0,130,by=20)) + scale_x_continuous("Time of Day", breaks = seq(0,18,by=6)) + opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) + opts(axis.title.x = theme_text(size=15), axis.title.y = theme_text(size=15, angle=90)) # look at daily EN plots of individual fish ggplot(z3, aes(x=easting, y=northing))+ geom_point(alpha=0.05) + #coord_cartesian(xlim=c(0,60), ylim=c(0,10)) + theme_bw() + facet_wrap(~dod) + scale_x_continuous("Easting (m)")+#, breaks = seq(0,60,by=20)) + scale_y_continuous("Northing (m)")#,breaks=c(0,1,2,3,4,6,8,10)) plot(z9$dod) ###################### ################################################ ####### ###################################################################### ####### # GAMS z9hb = z9[z9$treatment == "hb",] z9sb = z9[z9$treatment == "sb",] ggplot(z9, aes(x=lunarIndex, y=dtr, group=ID, colour=ID, fill=ID)) + geom_point(alpha=0.05) + geom_smooth(method="gam",formula=y~s(x,bs="cc"),lwd=1.3) + #,bs="cc" geom_smooth(aes(group=1),colour="black",lwd=1.3,method="gam ",formula=y~s(x,bs=" cc")) + #,bs="cc"


479 coord_cartesian(xlim=c(1,30), ylim=c(0,100)) + # range(z0$temperature,na.rm=T) range(z0$magL,na.rm=T) theme_bw() + facet_wrap(~ttmt) + scale_x_continuous("Lunar Index",breaks = c(7,15,23,30)) + scale_y _continuous("Distance From the Reef (m)", breaks=seq(0,100,by=10)) + opts(axis.text.x = theme_text(size = 15), axis.text.y = theme_text(size = 15)) + opts(axis.title.x = theme_text(size=15), axis.title.y = theme_text(size=15, angle=90)) ###################################################################### ########## # KDE stabilization curves results9 # something to hold the changing HR answers for all fish allFishKDE = list() cProb50 = 0.5 cProb95 = 0.95 for (cTag in 1:nrow(results9)){ # which tag and pick out that data for just that tag whichTag = results9$tagName[cTag] d1 = z9[z9$tagName == whichTag,c("utime","datiL","easting","northing","dod")] # drop the first t wo days d1 = d1[d1$dod > 2,] # fetch deployment specific informaiton for (k in 1:length(md)){ if(results9$deployment[cTag] == md[[k]]$deployment){ cmd = md[[k]] } # end if statement } # end for k loop # something to hold the changing HR for one fish hrVSdays = data.frame("numDays" = NA, "hr50Size" = NA, "hr95Size" = NA) for (i in 3:results9$numDays[cTag]){ # start with 3 because we dropped the first two days # grab only position solutions du ring the first i days d2 = d1[d1$dod <= i,] # ...but drop data from the first two days #d2 = d2[d2$utime > endOfAllDays[2] ,] print(i) if(nrow(d2) > 0){


480 # calculate the 50% kde for these PS hr50Size = hom eRange( easting = d2$easting, northing = d2$northing, n=250, tagName = paste(results9$tagName[cTag], ", ", i, days", sep=""), lims = c(cmd$reefEN$easting hrRange[1],cmd$reefEN$easting+hrRange[1], cmd$ree fEN$northing hrRange[2],cmd$reefEN$northing+hrRange[2]), reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb50, drawplot=FALSE ) # calculate the 95% kde for these PS hr95Size = homeRange( easting = d2$easting, northing = d2$northing, n=250, tagName = paste(results9$tagName[cTag], ", ", i, days", sep=""), lims = c(cmd$reefEN$easting hrRange[1],cmd$reefEN$easting+hrRange[1], cmd$reefEN$northing hrRange[2],cmd$reefEN$northing+hr Range[2]), reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=cProb95, drawplot=FALSE ) } else {hr50Size = 0; hr95Size = 0;} # save the answer hrVSdays[i,] = c(i, hr50Size, hr95Size) } # end for loop over all days for one tag # a plot plot(hrVSdays$numDays, hrVSdays$hr50Size, pch=19, main=paste(results9$tagName[cTag], ": ", cProb50*100, "% HR", sep="") ) points(hrVSdays$numDays, hrVSdays$hr50Size, pch=19, col="red") # save the answer for this fish allFi shKDE[[cTag]] = hrVSdays } # end cTag for loop # divide by treatment hbs = allFishKDE[grepl("hb",results9$deployment)] sbs = allFishKDE[grepl("sb",results9$deployment)] # a four panel plot for HB reefs par(mfrow=c(2,2)) # top left pane, SB 50% KD E par(mar=c(3,6,2,0)+0.1) plot(1,1,type="n", las=1, cex.axis=1.5, cex.lab=1.5, cex.main=2, bty="l", xlim=c(3,17),ylim=c(20,220), xaxt="n",


481 xlab="", ylab="", main="Sand bottom Landscapes") axis(1,at=seq(3,17,by=2), cex.axis=1.5) mtext("50% KDE ( )", side=2, line=4.1, cex=1.7) mtext(expression(m^2), side=2, line=4.2, adj=0.79, cex=1.7) text(3,220,"a)",cex=1.5) # add the lines for(i in 1:length(hbs)){ points(tail(sbs[[i]]$numDays, 2), tail(sbs[[i]]$hr50Size, 2), type="l", lwd=2) } # top right p ane, HB 50% KDE par(mar=c(3,5,2,1)+0.1) plot(1,1,type="n", las=1, cex.axis=1.5, cex.lab=1.5, cex.main=2, bty="l", xlim=c(3,17),ylim=c(0,1100), xaxt="n", xlab="", ylab="", main="Hard bottom Landscapes") axis(1,at=seq(3,17,by=2), cex.axis=1.5) mtext("50% KDE ( )", side=2, line=3.8, cex=1.7) mtext(expression(m^2),side=2, line=3.9, adj=0.79, cex=1.7) text(3,1100,"b)",cex=1.5) # add the lines for(i in 1:length(hbs)){ points(tail(hbs[[i]]$numDays, 2), tail(hbs[[i]]$hr50Size, 2), type="l", lwd=2) } # bottom right pane, SB 95% KDE par(mar=c(4,6,0,1)+0.1) plot(1,1,type="n", las=1, cex.axis=1.5, cex.lab=1.5, bty="l", xlim=c(3,17),ylim=c(0,2000), xaxt="n", xlab="", ylab="") axis(1,at=seq(3,17,by=2), cex.axis=1.5) mtext("Number of Days", side=1, li ne=2.5, cex=1.5) mtext("95% KDE ( )", side=2, line=4.1, cex=1.7) mtext(expression(m^2), side=2, line=4.2, adj=0.78, cex=1.7) text(3,2000,"c)",cex=1.5) # add the lines for(i in 1:length(hbs)){ points(tail(sbs[[i]]$numDays, 2), tail(sbs[[i]]$hr 95Size, 2), type="l", lwd=2) } # bottom right pane, HB 95% KDE par(mar=c(4,5,0,1)+0.1) plot(1,1,type="n", las=1, cex.axis=1.5, cex.lab=1.5, bty="l", xlim=c(3,17),ylim=c(0,6300), xaxt="n",


482 xlab="", ylab="") axis(1,at=seq(3,17,by=2), cex.axis=1.5) mtext ("Number of Days", side=1, line=2.5, cex=1.5) mtext("95% KDE ( )", side=2, line=4.5, cex=1.7) mtext(expression(m^2),side=2, line=4.6, adj=0.78, cex=1.7) text(3,6300,"d)",cex=1.5) # add the lines for(i in 1:length(hbs)){ points(tail(hbs[[i]]$numDays, 2), tail(hbs[[i]]$hr95Size, 2), type="l", lwd=2) } # compare these results with results9 for (i in 1:nrow(results9)){ print(paste(results9$tagName[i]," ",results9$deployment[i],": ", "results9 50% =", results9$kde50[i], ?= allFishKDE = ", round(tail(allFishKDE[[i]]$hr50Size,1),0),sep="")) } # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # chapter 4 otoliths.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # In this chap ter I use 2009 experimental data source("C:/zy/Telemetry/R Data Processing/global variables.r") source("C:/zy/Telemetry/R Data Processing/global functions.r") source("C:/zy/Telemetry/R Data Processing/global metadata.r") library(ggplot2) library(date) lib rary(plotrix) # for multhist # get fish biometric data. This contains data recorded in the field on tagging # day and any recaptures. It also contains the otolith data. biometrics = importBiometricData() # pick only relevant data b1 = biometrics # only data with MM numbers...those caught and kept and otoliths extracted # ...and from one of the experimental reefs b1 = b1[(!is.na(b1$mmNumberD) & !is.na(b1$HBSB)),] # only keep some information about these b1 = b1[,c('reefID1','HBSB',' replicate','weight1','girth1','TL1','FL1','tagged', 'tagID','mmNumber',


  'recoveredTagID','year2','month2','day2','reefID2','TL2','FL2','weight2','girth2',
  #'lOto', 'lOtoWeightUseable','lOtoLengthAUseable','lOtoLengthBUseable',
  #'lOtoLengthCUseable'
  #'rOto','rOtoWeightUseable','rOtoLengthAUseable','rOtoLengthBUseable',
  #'rOtoLengthCUseable',
  'monthD','debAgeclassCorrected','debAnnuli', #'resolvedAgeclass',
  'otoRadius', 'ultimateAnnulus','penultimateAnnulus','growthIncrement')
]
# again
b1 = b1[,c('TL1','tagID','mmNumber','HBSB','replicate','year2','month2','day2',
  'TL2','FL2','weight2','debAgeclassCorrected','debAnnuli',
  'otoRadius','ultimateAnnulus','penultimateAnnulus','growthIncrement')
]
names(b1) = c("tl1","tagID","number","treatment","replicate","year","month","day",
  "tl","fl","weight","ageClass","numAnnuli",
  "otoRadius", "ultAnnul", "penultAnnul","growthInc")
# now calculate the fractional age using 1 April as everyone's b-day
# ... I'll get fractional age as: numAnnuli + (num days since 1 April)/(365)
temp1 = paste(b1$year,b1$month,b1$day)
temp2 = as.POSIXlt(strptime(temp1, "%Y %m %d"), origin="1970-1-1")$yday
birthday = as.POSIXlt(strptime("2009 4 1", "%Y %m %d"), origin="1970-1-1")$yday
# now we have to treat differently fish caught in early 2010 than late 2009
# ...because $yday starts over again and I need a correct count of days since b-day
# first get the ageFrac for fish caught late in year
b1$ageFrac = b1$numAnnuli + (temp2 - birthday)/365
# now fix ageFrac for fish caught early in year
b1$ageFrac[temp2 < birthday] = b1$numAnnuli[temp2 < birthday] +
  (365 - birthday + temp2[temp2 < birthday])/365
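# Cross-check of the fractional-age calculation above: a minimal sketch using
# Date arithmetic instead of $yday, which sidesteps the year-boundary special
# case. Assumes the same 1 April 2009 birthday convention; fracAgeCheck is a
# hypothetical name, not part of the original analysis.
fracAgeCheck = function(numAnnuli, year, month, day, bday = as.Date("2009-04-01")) {
  capture = as.Date(paste(year, month, day, sep="-"))
  numAnnuli + as.numeric(capture - bday)/365
}
# e.g. fracAgeCheck(b1$numAnnuli, b1$year, b1$month, b1$day) should closely match b1$ageFrac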

# ...the desired order, paste into Word
b1
# calculate the relative weight. I got this equation from Doug.
# a = 9.21744 x 10^-6; b = 3.04;
# (standard weight, g) = a * (length, mm)^b
# relative weight = (actual weight / standard weight) * 100
stdwt = function(tl){9.21744e-6 * tl^3.04}
b1$stdwt = stdwt(b1$tl)
b1$relwt = (b1$weight / stdwt(b1$tl)) * 100
# separate by treatment
sb = b1[b1$treatment=="SB",]
hb = b1[b1$treatment=="HB",]
# do a little checking
with(b1,plot(tl~fl, col=treatment, pch=ptType))
abline(0,1)
##### size and age distributions of fish caught for otolith work
par(mfrow=c(1,2))
par(mar=c(4.5,4,1,1)+0.1)
bob=multhist(list(sb$tl,hb$tl), freq=T, breaks=28,
  cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F,
  #legend.text=c("Gag in sand-bottom landscapes","Gag in hard-bottom landscapes"),
  names.arg=tail(bob$breaks, -1)
)
axis(2,0:4, las=1, cex.axis=1.5)
mtext("Total Length (mm)",1,3, cex=1.5)
mtext("Frequency",side=2,line=2.7,cex=1.7)
text(x=1,y=3.9,labels="a)",cex=1.5)
# age histogram
ben=multhist(list(sb$ageFrac,hb$ageFrac), freq=T, breaks=15,
  cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F,
  legend.text=c("Gag in sand-bottom landscapes","Gag in hard-bottom landscapes"),
  names.arg=tail(ben$breaks, -1)
)
axis(2,0:6, las=1, cex.axis=1.5)
mtext("Age (years)",1,3, cex=1.5)
mtext("Frequency",side=2,line=2.7,cex=1.7)
text(x=1,y=5.8,labels="b)", cex=1.5)
# weight histogram
sam=multhist(list(sb$weight/1000,hb$weight/1000), freq=T, breaks=14,
  cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F,


485 legend.text=c("Gag in sand bottom landscapes","Gag in hard bottom landscapes"), names.arg=tail(sam$breaks, 1) ) axis(2,seq(0,20,by=2), las=1, cex.axis= 1.5) mtext("Weight (kg) ",1,3, cex=1.5) mtext("Frequency",side=2,line=2.7,cex=1.7) # a simple t test to see of there's a difference in total length, weight, age # total length ################### # test for equal variance var.test(sb$tl,hb$tl) # varia nces are equal # t test t.test(sb$tl, y=hb$tl, alternative="t", var.equal=TRUE) ...so were not different total lengths # weight ################### # test for equal variance var.test(sb$weight,hb$weight) # variances are equal # t test t.test(sb$weight, y =hb$weight, alternative="t", var.equal=TRUE) ...so were not different total weights # fractional age ################### # test for equal variance var.test(sb$ageFrac,hb$ageFrac) # variances are equal # t test t.test(sb$ageFrac, y=hb$ageFrac, alternative ="t", var.equal=TRUE) ...so were not different total lengths # now look at the difference b1$diffL = b1$tl b1$fl plot(b1$diffL, pch=19) # --or --ggplot(b1,aes(x=tl,y=diffL,colour=treatment)) + geom_text(aes(label=b1$number),size=3) + ##geom_point() + scale_x_continuous("MM Number")+ scale_y_continuous("TL FL") + theme_bw() # look b1[,c('numAnnuli','ageFrac')] # now look at them with(b1,plot(tl~ageFrac, col=treatment, pch=ptType)) # --or --ggplot(b1,aes(x=ageFrac,y=tl,colour=treatment)) +


486 geom_text(aes(label=b1$number),size=3) + ##geom_point() + scale_x_continuous("Fractional Age")+ scale_y_continuous("Total Length at Capture") + theme_bw() # There are two things I want to do: # 1. Ask if there are there differences between the a ge total length curves between treatments # 2. Ask if there are differences between the length weight curves of two treatments. # # Now I want to compare the linear tl~ageFrac relationships # between treatments # first get the regression parameters # regression of all data b1reg4 = lm(tl ~ ageFrac, b1) # regression of Sand bottom treatment sbreg4 = lm(tl ~ ageFrac, sb) # regression of hard bottom treatment hbreg4 = lm(tl ~ ageFrac, hb) # plot toget her with fractional age # THIS WILL BE A FIGURE PLOT par(mar=c(5,5,1,1)) with(b1,plot(tl~ageFrac, pch=ptType, las=1, col=ptCol, cex.lab=1.5, cex.axis=1.5, bty="l", xaxt="n", yaxt="n", xl ab="Fractional Age at Capture", ylab="", xlim=c(1.5,5) )) #abline(b1reg4) abline(sbreg4, lty=1, lwd=2) abline(hbreg4, lty=2, lwd=2, col="red") mtext(text="Total Length (mm)", side=2, line=3.5, cex=1.7) axis(1,seq(1.5,5,by=0.5),cex.axis=1.5) axis(2,seq(35 0,750,by=50),las=1,cex.axis=1.5) legend(x=1.5,y=750,legend=c("Sand bottom","Sand bottom regression","Hard bottom", "Hard bottom regression"), pch=c(19,NA,17,NA), lty=c(NA,1,NA,2), col=c("black","black","red","red"), lwd=2) # or do this with ggplot # this is panel a) of paper figure 5 12 ggplot(b1,aes(x=ageFrac,y=tl,colour=Treatment, shape=Treatment, fill=Treatment))+


  geom_point()+
  geom_smooth(method="lm") +
  theme_bw() +
  #coord_cartesian(xlim=c(-100,100), ylim=c(-100,100)) +
  scale_y_continuous("Total Length (mm)") +
  scale_x_continuous("Fractional Age (yr)") +
  opts(axis.title.x = theme_text(size = 20)) +
  opts(axis.title.y = theme_text(size=20, angle=90, vjust=0.3)) +
  opts(axis.text.x = theme_text(size = 15)) +
  opts(axis.text.y = theme_text(size = 15))
# add a) etc
grid.text("a)", x = unit(0.15, "npc"), y = unit(0.95, "npc"),
  hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified
# Now compare regression lines using lm() and interpret as before
ancova = lm(tl ~ treatment*ageFrac, data=b1)
summary(ancova)
################################################################################
# 2. ARE THERE DIFFERENCES BETWEEN THE LENGTH-WEIGHT CURVES OF TWO TREATMENTS
# Now I want to do the same thing to the length-weight curves.
# After log-transforming I'll follow steps similar to above...
# START WITH: weight = c * tl^d
# log(weight) = log(c) + d * log(tl)
# ...so y = a + d * x
# ... and 'a' = log(c), 'b' = 'd', y = log(weight), x = log(tl)
# plot together
plot(sb$tl, sb$weight, pch=2, xlim=c(300,800), ylim=c(500,7000))
points(hb$tl, hb$weight, pch=19)
# -- or --
with(b1, plot(weight~tl, col=treatment, pch=ptType))
# all data
b1reg2 = lm(log(weight) ~ log(tl), b1)
# Sand-bottom treatment
sbreg2 = lm(log(weight) ~ log(tl), sb)
# hard-bottom treatment
hbreg2 = lm(log(weight) ~ log(tl), hb)
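# The back-transformation implied above: the lm() on logged data returns
# a = log(c) and b = d, so the power-curve parameters are recovered as below.
# A minimal sketch, assuming the pooled fit b1reg2 from just above; the 500 mm
# value is only an illustrative input.
cHat = exp(coef(b1reg2)[1])   # multiplicative constant c in weight = c * tl^d
dHat = coef(b1reg2)[2]        # allometric exponent d
cHat * 500^dHat               # predicted weight (g) of a 500 mm fish, pooled fit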


488 # look at the log transformed data plot(log(b1$tl), log(b1$weight), xlim=c(0,7), ylim=c( 15,10)) abline(b1reg2) abline(h= 11.27,v=0) # look at the un transformed data plot(sb$tl, sb$weight, pch=2, xlim=c(300,800), ylim=c(500,7000)) weightsb = function(tl){exp(sbreg2$coef[1]) tl^(sbreg2$coef[2])} curve(weightsb, add=TRUE, lty=2) points(hb$tl, hb$weight, pch=19) weighthb = function(tl){exp(hbreg2$coef [1]) tl^(hbreg2$coef[2])} curve(weighthb, add=TRUE) weightb1 = function(tl){exp(b1reg2$coef[1]) tl^(b1reg2$coef[2])} curve(weightb1, add=TRUE, lwd=2) # --or --# THIS PLOT FOR THE PAPER par(mar=c(5,5,1,1)) with(b1, plot(weight/ 1000~tl, pch=ptType, las=1, col=ptCol, cex.lab = 1.5, cex.axis=1.5, bty="l", xaxt="n", xlab="Total Length (mm)", ylab="Weight (kg)", xlim=c(300,750) )) axis(1,seq(300,750,by=50),cex.axis=1.5) # change units form g to kg weightsbkg = function(tl){ exp(sbreg2$coef[1]) tl^(sbreg2$coef[2])/1000} weighthbkg = function(tl){exp(hbreg2$coef[1]) tl^(hbreg2$coef[2])/1000} curve(weightsbkg, add=TRUE, lty=1, lwd=2) curve(weighthbkg, add=TRUE, lty=2, lwd=2, col="red") legend(x=300,y=6.2,legend=c("Sand bottom","Sand bottom regression","Hard bottom", "Hard bottom regression"), pch=c(19,NA,17,NA), lty=c(NA,1,NA,2), col=c("black","black","red","red"), lwd=2) # try it with ggplot # this is panel b) of paper figure 5 12 ggplot(b1,aes(x=log(tl),y=log(we ight/1000),colour=Treatment, shape=Treatment, fill=Treatment))+ geom_point()+ geom_smooth(method="lm") + theme_bw() + #coord_cartesian(xlim=c( 100,100), ylim=c( 100,100)) + scale_y_continuous("Log Weight (kg)") +


489 scale_x_continuous("Log Total L ength (mm)") + opts(axis.title.x = theme_text(size = 20)) + opts(axis.title.y = theme_text(size=20, angle=90)) + opts(axis.text.x = theme_text(size = 15)) + opts(axis.text.y = theme_text(size = 15)) grid.text("b)", x = unit(0.15, "npc"), y = uni t(0.95, "npc"), hjust=0, vjust=1, gp=gpar(fontsize=15)) ## right and top justified wtancova < lm(log(weight)~log(tl)*treatment,data=b1) rwtancova < MASS::rlm(log(weight)~log(tl)*treatment,data=b1) ## robust version -makes little differen ce ## provided that logging the response variable (tl) doesn't mess up the variance ## structure (i.e. variance is more or less independent of mean for logged data) ## then everything proceeds as before summary(wtancova) summary(rwtancova) par(mfrow=c(2 ,2)) plot(wtancova,col=b1$treatment) MASS::boxcox(wtancova) ## BMB> technically this says we should further transform the data, ## but I don't really believe it ... may also be driven by outliers ## also note that log(tl) coeffi cient is very close to 3 ## (2.97 +/ 0.14 SE) -allometry makes perfect sense (i.e. ## fish grow isometrically in length/width/depth) b1[as.character(c(301:302,320)),] ggplot(b1,aes(x=tl,y=weight,colour=treatment)) + geom_text(aes(label=number),size =3) + ##geom_point()+ scale_x_log10()+scale_y_log10() + geom_smooth(method="lm") ggplot(b1,aes(x=tl,y=weight)) +#,colour=treatment)) + geom_text(aes(label=number),size=3) + ##geom_point()+ #scale_x_log10()+scale_y_log10() + #geom_smooth(method=" lm") scale_x_continuous("Total Length at Capture")+ scale_y_continuous("Weight at Capture") + theme_bw()


################################################################################
################################################################################
################################################################################
### tagged fish biometrics
################################################################################
################################################################################
b1
b2 = b1[,1:14]
b3 = b2[b2$year1 == 2009,]
b4 = b3[b3$tagged == "yes",]
b5 = b4[!is.na(b4$tagged),]
names(b5) = c("year","month","day","reef","treatment","rep","method","weight",
  "girth","tl","fl","sizeclass","tagged","ID")
plot(b5$girth, b5$weight, pch=19)
plot(b5$tl, b5$weight, pch=as.numeric(b5$treatment))
plot(b5$tl, b5$girth, pch=as.numeric(b5$treatment))
ptType = as.numeric(b5$treatment)
# there are some missing weights... I want to use TL to estimate weights
# First I have to fit curves to hb and sb separately to say they're the same,
# then pool them to get fitting parameters for the equation to predict
# weight from tl
# After log-transforming I'll follow steps similar to above...
# START WITH: weight = c * tl^d
# log(weight) = log(c) + d * log(tl)
# ...so y = a + d * x
# ... and 'a' = log(c), 'b' = 'd', y = log(weight), x = log(tl)
with(b5, plot(weight~tl, col=treatment, pch=ptType))
b1reg2 = lm(log(weight) ~ log(tl), b1)
# look at the log-transformed data
plot(log(b5$tl), log(b5$weight))
# ANOVA to see if treatment has an effect
wtancova <- lm(log(weight)~log(tl)*treatment,data=b5)
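# If the treatment term turns out to be unnecessary (checked just below), the
# pooled log-log fit can fill the missing weights. A minimal sketch only:
# pooledFit and weightImputed are hypothetical names, and the script below uses
# its own fitted object and column for this step.
pooledFit = lm(log(weight) ~ log(tl), data = b5)
missing = is.na(b5$weight)
b5$weightImputed = b5$weight
# back-transform predictions from the log scale to grams
b5$weightImputed[missing] = exp(predict(pooledFit, newdata = b5[missing, ]))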


491 rwtancova < MASS::rlm(log(weight)~log(tl)*treatment,data=b1) ## robust version -makes little difference ## provide d that logging the response variable (tl) doesn't mess up the variance ## structure (i.e. variance is more or less independent of mean for logged data) ## then everything proceeds as before summary(wtancova) # CONCLUSION: treatment doesn't make a differ ence summary(rwtancova) par(mfrow=c(2,2)) plot(wtancova,col=b5$treatment) MASS::boxcox(wtancova) ## BMB> technically this says we should further transform the data, ## but I don't really believe it ... may also be driven by outl iers ## also note that log(tl) coefficient is very close to 3 ## (2.97 +/ 0.14 SE) -allometry makes perfect sense (i.e. ## fish grow isometrically in length/width/depth) # use this equation to predict weight from tl weightb5 = function(tl){exp(b5reg2 $coef[1]) tl^(b5reg2$coef[2])} b5$weightPredicted = weightb5(b5$tl) # now make a column holding the usable weights: measured ones plus filled in missing ones b5$weightUseable = b5$weight b5$weightUseable[is.na(b5$weight)] = b5$weightPredicted[is.na(b5$ weight)] plot(b5$weight,b5$weightPredicted) abline(0,1) ggplot(b1,aes(x=tl,y=weight,colour=treatment)) + geom_text(aes(label=number),size=3) + ##geom_point()+ scale_x_log10()+scale_y_log10() + geom_smooth(method="lm") ggplot(b1,aes(x=tl,y=weight)) +#,colour=treatment)) + geom_text(aes(label=number),size=3) + ##geom_point()+ #scale_x_log10()+scale_y_log10() + #geom_smooth(method="lm") scale_x_continuous("Total Length at Capture")+ scale_y_continuous("Weight at Capture") + theme_bw() # t otal length histogram hbfish = b5$tl[ b5$treatment=="HB" ]


sbfish = b5$tl[ b5$treatment=="SB" ]
par(mar=c(4.5,4,1,1)+0.1)
bob=multhist(list(sbfish,hbfish), freq=T, breaks=28,
  cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F,
  legend.text=c("Gag in sand-bottom landscapes","Gag in hard-bottom landscapes"),
  names.arg=tail(bob$breaks, -1)
)
axis(2,seq(0,20,by=2), las=1, cex.axis=1.5)
mtext("Total Length at Capture (mm)",1,3, cex=1.5)
mtext("Frequency",side=2,line=2.7,cex=1.7)
# weight histogram
hbfish = b5$weightUseable[ b5$treatment=="HB" ]
sbfish = b5$weightUseable[ b5$treatment=="SB" ]
bob=multhist(list(sbfish,hbfish), freq=T, breaks=14,
  cex.axis=1, cex.names=1.5, space=c(0,0.5), axes=F,
  legend.text=c("Gag in sand-bottom landscapes","Gag in hard-bottom landscapes"),
  names.arg=tail(bob$breaks, -1)
)
axis(2,seq(0,20,by=2), las=1, cex.axis=1.5)
mtext("Weight (kg)",1,3, cex=1.5)
mtext("Frequency",side=2,line=2.7,cex=1.7)
# a simple t test to see if there's a difference in sizes
# test for equal variance
var.test(sbfish,hbfish)
# t test
t.test(sbfish, y=hbfish, alternative="t", var.equal=FALSE) # ...so fish on hb are bigger than fish on sb
################################################################################
### compare my fish relative weights to the gag standard relative weight curve ##
################################################################################
plot(b1$tl, b1$relwt, pch=19)
abline(h=100)
plot(b1$tl, b1$weight, pch=19)
curve(stdwt, add=TRUE)
# a plot for the paper
par(mar=c(5,5,1,1))
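# Quick numeric check of the relative-weight formula defined earlier, before the
# paper figure below. The 500 mm / 1600 g fish is made up for illustration; the
# values are computed by R, not taken from the data.
exampleTL = 500                       # total length, mm
exampleWt = 1600                      # observed weight, g
stdwt(exampleTL)                      # standard weight (g) for a 500 mm gag
(exampleWt / stdwt(exampleTL)) * 100  # relative weight; > 100 means heavier than standard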


493 with(b1,plot(relwt~tl, pch=ptType, las=1, col=ptCol, cex.lab=1.5, cex.axis=1.5, bty="l", xaxt="n", yaxt="n", xlim=c(350,750), ylim=c(80,135), xlab="Total Length (mm)", ylab="" )) abline(h=100, lwd=2) # add horizontal or linear regression lines abline(h=mean(b1[b1$treatment=="SB",]$relwt, na.rm=T),lty=2,lwd=2) abline(h=mean(b1[b1$treatment=="HB",]$relwt, na.rm=T),lty=2,col="red",lwd=2) # for writing mean(b1$relwt,na.rm=T) # abline(lm(relwt~tl, sb), lty=2, col="red", lwd=2) # abline(lm(relwt~tl, hb), lty=1, lwd=2) axis(1,seq(350,750,by=50), cex.axis=1.5) axis(2,seq(80,135,by=5), las=1, cex.axis=1.5) mtext("Relative Weight (percent)",side=2,line=3.5,cex=1.7) legend(x=575,y=94,legend=c("Sand bottom","Sand bottom mean" ,"Hard bottom", "Hard bottom mean","Population relative weight"), pch=c(19,NA,17,NA,NA), lty=c(NA,2,NA,2,1), lwd=c(NA,2,NA,2,2), col=c("black","black","red","red","black")) # Is there a difference in the relative weight of fish between treatments? sb$relwt hb$relwt # test for equal variance var.test(sb$relwt, hb$relwt) # variances are not equal # t test t.test(sb$relwt, y=hb$relwt, alternative="t", var.equal=FALSE) ...so were not different total numbers ########################################## ############################ ########### ###################################################################### ########### ### an example of how to interpret anova results ############################## #################################################### ################## ########### ###################################################################### ###########


# In these my questions generally group as:
# 1a. Am I correctly using lm() to compare two regression lines?
# 1b. Am I correctly interpreting lm() output?
# 2a. Am I correctly log-transforming length-weight data to be linear so I can
#     fit curves to the data
# 2b. How do you test if two curved lines are equal?
# 1. BACK-CALCULATE LENGTHS
# back-calculate total length using the two equations in Deb and Daryl's paper
#
# la = [ (a + b*ra)/(a + b*rc) ] * lc
# lc = a + b*rc
#
# la = back-calculated length to opaque zone 'a'
# a  = intercept from the linear regression of total length as a function of
#      otolith radius
# b  = slope from same regression
# ra = otolith radius to opaque zone 'a' = b1$ultimateAnnulus
# rc = total otolith radius at time of capture = b1$otoRadius
# lc = total length at time of capture = b1$tl
#
# first determine if the regressions of lc = a + b*rc are the same for both
# treatments...using ANCOVA...
sb = b1[b1$treatment=="SB",]
hb = b1[b1$treatment=="HB",]
# look at all data
with(b1,plot(tl~otoRadius, col=treatment, pch=19))
# regression of all data
b1reg1 = lm(tl ~ otoRadius, b1)
# regression of Sand-bottom treatment
sbreg1 = lm(tl ~ otoRadius, sb)
# regression of hard-bottom treatment
hbreg1 = lm(tl ~ otoRadius, hb)
# plot together
with(b1,plot(tl~otoRadius, col=treatment, pch=ptType,
  xlim=c(0.5,1.5), ylim=c(300,800),
  xlab="Otolith Radius at Capture", ylab="Total Length at Capture"))
# -- or --
plot(sb$otoRadius, sb$tl, pch=2, xlim=c(0.5,1.5), ylim=c(300,800))
points(hb$otoRadius, hb$tl, pch=19)
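# The back-calculation equation above, written as a function for clarity before
# returning to the regression overlays below. A minimal sketch: backCalcLength
# is a hypothetical name, and a and b are the intercept and slope taken from
# b1reg1, which is extracted later in this script.
backCalcLength = function(lc, rc, ra, a, b) {
  ((a + b*ra) / (a + b*rc)) * lc   # length at the annulus laid down at radius ra
}
# e.g. backCalcLength(lc = b1$tl, rc = b1$otoRadius, ra = b1$ultAnnul,
#                     a = coef(b1reg1)[1], b = coef(b1reg1)[2])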


495 ## BMB> equivalent to above: abline(b1reg1,lwd=2) abline(sbreg1) abline(hbreg1, lty=2) # --or --ggpl ot(b1,aes(x=otoRadius,y=tl)) +#,colour=treatment)) + geom_text(aes(label=b1$number),size=3) + ##geom_point() + #geom_smooth(method="lm")+ geom_smooth(method="lm",aes(group=NA)) + scale_x_continuous("Otolith Radius at Capture")+ scale_y_continuou s("Total Length at Capture") + theme_bw() # compare regression lines using # ...following root/fruit/grazing example in http://www.scribd.com/doc/50843947/ANCOVA in R # using lm() ancova = lm(tl ~ treatment otoRadius,data=b1) summary(an cova) ## BMB> 'intercept' is for the first treatment (alphabetically) -the expected average ## BMB> total length of a HB fish with otolith radius zero (not really a sensible number) # there is no effect of 'treatment' (p=0.7663) ...BUT WHY IS 'SB' AD DED TO THE NAME? ## BMB> because this is the effect of level "SB" relative to the baseline level "HB" ## BMB> if there were a third treatment (say "MB" for medium bottom) its difference ## BMB> from the baseline level would be listed as treatmentMB # t here is an effect of otoRadius on tl, p=0.0416 ... the slope is greater than zero ## BMB> yes # there is no interaction between treatment and otoRadius ... the slopes are equal ## BMB> yes, or not significantly different ... # The next step is to remov e the non significant terms, manually or automatically # using step(). step(ancova) # ...gives the following results with my interpretations # first the full model and its AIC value... Start: AIC=303.08 tl ~ treatment otoRadius


                      Df Sum of Sq    RSS    AIC
- treatment:otoRadius  1    1445.2 324915 301.22
<none>                             323470 303.08

# remove the most complicated term...
Step:  AIC=301.22
tl ~ treatment + otoRadius

              Df Sum of Sq    RSS    AIC
- treatment    1       534 325450 299.27
<none>                     324915 301.22
- otoRadius    1    122911 447826 309.49

# ...the AIC goes down so the removal is justified; there is no interaction
# between treatment and otoRadius, or the slopes are equal
## BMB yes.

# remove another term...
Step:  AIC=299.27
tl ~ otoRadius

             Df Sum of Sq    RSS    AIC
<none>                     325450 299.27
- otoRadius   1    123936 449385 307.60

# ...the AIC goes down so the removal is justified, meaning that there is no
# difference between treatments, so the slopes and the intercepts are the same

# now do the regression using best model, which is all data together...
Call:
lm(formula = tl ~ otoRadius)

Coefficients:
(Intercept)    otoRadius
      196.6        316.4

# this matches b1reg1
## BMB> I don't think it's a particularly big deal in this case, but there
## BMB> are certainly situations in which the stepwise approach is bad - it almost
## BMB> certainly underestimates the uncertainty on the slope ... but it is convenient

# Now use these fitted parameters in the back-calculation equation
a <- coef(b1reg1)[1]
b <- coef(b1reg1)[2]

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


# chapter 4 reef map.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# In this chapter I make a map figure of the reef system
library(rimage)
library(PBSmapping)
source("C:/zy/Telemetry/R Data Processing/global variables.r")
source("C:/zy/Telemetry/R Data Processing/global functions.r")
source("C:/zy/Telemetry/R Data Processing/global metadata.r")
# read in tagfm, z0, results
# load("C:/zy/Telemetry/R summary files/tag9 2011Mar16.rdata")
# load("C:/zy/Telemetry/R summary files/tagf9 2011Mar16.rdata")
# load("C:/zy/Telemetry/R summary files/tagfm9 2011Mar16.rdata")
load("C:/zy/Telemetry/R summary files/z9 2011Jun25.rdata")
# load("C:/zy/Telemetry/R summary files/results9 2011Mar16.rdata")
# read in data
lrs = read.table("C:/zy/Telemetry/R summary files/LRS locations.txt", header=T)
# add columns for UTM; UTM increases northward and eastward
temp2 = data.frame(X = lrs$Longitude, Y = lrs$Latitude)
attr(temp2, "zone") <- 17
attr(temp2, "projection") <- "LL"
temp3 = convUL(temp2, km=FALSE)  # X is easting in m, Y is northing in m
lrs$easting = temp3$X - eastingOffset
lrs$northing = temp3$Y - northingOffset
# specify which reefs I used: if41, if42, if43, oh41, of43, os43
# ...in lrs these are reefs c(10,14,18,30,44,52)
lrs$used = FALSE
lrs$used[c(10,14,18,30,44,52)] = TRUE
# drop the 0 reefball reefs
lrs1 = subset(lrs, Treatment > 0,
  select = c(Mosaic,Habitat_type,Treatment,easting,northing,used))
# hard or soft bottom for the experimental reefs
lrs1$bottom = NA                 # make everything NA
lrs1$bottom[lrs1$used] = "sand"  # change all used reefs


498 lrs1$bottom[lrs1$used & (lrs1$Mosaic == "Inner")] = "hard" # change all hard bottom reefs blue #select symbols, sizes, and colors lrs1$symbol = 19 lrs1$symbol[lrs1$used] = 17 lrs1$size = 1 lrs1$size[lrs1$Treatme nt == 4] = 1.5 lrs1$color = "black" lrs1$color[lrs1$bottom == "hard"] = "blue" lrs1$color[lrs1$bottom == "sand"] = "red" lrsf[lrsf$size==1,] # now draw the map par(mar=c(5,6,1,1)+0.1) plot(lrs1$easting, lrs1$northing, pch=lrs1$symbol, col=lrs1$color, ce x=lrs1$size, las = 1, cex.lab=1.5, cex.axis=1.5, xaxt="n", yaxt="n", xlim=c(0,10000), ylim=c( 300,3000), xlab="Easting (m)", ylab="" ) mtext("Northing (m)", side=2, line=4.5, cex=1.7) axis(1,at=seq(0,10000,by=1000), cex.axis=1.5) axis(2,at=seq(0,3000 ,by=1000), las=1, cex.axis=1.5) legend(4400,3000, legend=c("1 Unit reefs","4 Unit reefs", "4 Unit sand bottom reefs used in this study", "4 Unit hard bottom reefs used in this study"), cex=1.5, col=c("black","black","red","blue"), pch=c(19, 19,17,17), pt.cex=c(1,1.5,1.5,1.5)) # pick one reef and find the nearest neighbors # these are the indices checkers = which(lrs1$used) # 7 10 12 20 30 35 chosenone = list() for(i in 1:length(checkers)){ chosenone[[i]]=lrs1[checkers[i],c("easting","nort hing")] } answer = data.frame(chosenone = NA, nn=NA) for (i in 1:length(chosenone)){ nn = 777777777777 for (j in 1:length(lrs1)){ temp1 = sqrt( (chosenone[[i]]$easting lrs1[j,"easting"])^2 + (chosenone[[i]]$northing lrs1[j,"northing" ])^2


499 ) if(temp1>0){nn = min(nn,temp1)} } answer[i,] = c(checkers[i],nn) } ###################################################################### ######### ###################################################################### ##### # Create a t wo panel plot, catagorical habitat map on the left, individual # habitat preference curves on the right # habitat plots # pick one image at a time... # gather all the data # md3 hb1 if43 2009Jun01 # md4 sb1 of43 2009Jul10 # md5 sb2 oh41 2009Aug03 # md6 hb2 if41 2009Aug24 # md7 sb3 os43 2009Sep14 # md8 hb3 if42 2009Oct12 # md9 sb4 of43 2009Nov16 # if41 IF41_IF42_lines_aligned_HBandSB_bluebox.jpg # bc = c(1590+1, 1805+1) # IF41/center is at (24517 4.8 easting, 3262391 northing) # if42 IF41_IF42_lines_aligned_HBandSB_bluebox.jpg # bc = c(1455+1, 3345+1) # IF42/center is at (245318.6 easting, 3262407 northing) # if43 IF43_lines_aligned_HB_SB_bluebox.jpg # bc = c(1387+1, 2515+1) # IF43/ce nter is at (245478 easting, 3262128 northing) # of43 OF43_lines_SBonly_bluebox.jpg # bc = c(1473+1, 1966+1) # OF43/center is at (237897.6 easting, 3263128 northing) # oh41 OH41_OS43_lines_SBonly_bluebox.jpg # bc = c(2288+1, 3232+1) # OH41/cen ter is at (237034.9 easting, 3263760 northing) # os43 OH41_OS43_lines_SBonly_bluebox.jpg # bc = c(1472+1, 2413+1) # OS43/center is at (236946.8 easting, 3263839 northing) ### SEE THAT I'VE ALREADY GOT THE gimpX AND gimpY BUILT INTO 'findHabType()'


500 numexpt = 3:9 gimpX = c(2515,1966,3232,1805,2413,3345,1966) # from above, make sure they're in the right order gimpY = c(1387,1473,2288,1590,1472,1455,1473) # from above, make sure they're in the right order sites = vector(length=length(numexpt)) fns = v ector(length=length(numexpt)) reefE = vector(length=length(numexpt)) reefN = vector(length=length(numexpt)) deployment = vector(length=length(numexpt)) goodFish = list() cmd = vector(length=length(numexpt)) for (i in 1:length(numexpt)){ sites[i] = md[[n umexpt[i]]]$site fns[i] = md[[numexpt[i]]]$habmapFileName reefE[i] = md[[numexpt[i]]]$reefEN$easting reefN[i] = md[[numexpt[i]]]$reefEN$northing deployment[i] = md[[numexpt[i]]]$deployment goodFish[[i]] = md[[numexpt[i]]]$goodFishNames cmd[i] = numexpt[i] } # check a plot that everything looks good...It all looks good whichSite = 4 # import the image to use rfile = fns[whichSite] i1 = round(read.jpeg(rfile)) plot(i1, useRaster=TRUE) findHabType(e=reefE[whichSite], n=reefN[whichSite], refe rence = sites[whichSite], show=T, crosshairs=T, habmap=i1) rm(i1) # It all looks good. ###################################################################### ########### ## make a figure plot...do this once for each hb deployment, the sb deployments ## .. had only sand bottom around them ## ... hb deployments correspond to whichSite = c(1,4,6) whichSite = 4 # import the image to use rfile = fns[whichSite] i1 = round(read.jpeg(rfile)) plot(i1, useRaster=TRUE)


# pick out just the portion I want
bc = c(gimpY[whichSite]+1, gimpX[whichSite]+1)  # bc = box center...location of reef
bw = 2600   # bw = box width, 2*130m = 2600 pixels
bh = 2600   # bh = box height
bl = function(){bc[2] - bw/2}
br = function(){bc[2] + bw/2}
bt = function(){bc[1] - bh/2}
bb = function(){bc[1] + bh/2}
catmap = i1[bt():bb(), bl():br(), 3]
catmap1 = i1[bt():bb(), bl():br(), ]  # curse it, plotting needs 2D and findHabType needs 3D
# From this I calculate the eastings and northings of the edges of catmap, using
# 10 pix/m.
nx = bw+1
ny = bh+1
r = 10   # 10 pixels = 1m
# look down to "plot fig 1" for the plotting code
### figure 1, right panel ##################
habmap = i1
rm(i1)
# location of reef in easting/northing or row,column. row/column is right?
# i.e. ce = 245174.8 - eastingOffset   # center (meters) in easting direction
# i.e. cn = 3262391 - northingOffset   # center (meters) in northing direction
ce = gimpX[whichSite]   # gimpX center column
cn = gimpY[whichSite]   # gimpY center row
ne = dim(habmap)[2]     # number of columns
nn = dim(habmap)[1]     # number of rows
# compute the distance between pixel (ce,cn) and every other pixel
# ...the default is for outer to do the product, but this will add instead
d = sqrt(outer( (cn - (1:nn))^2, (ce - (1:ne))^2, "+"))
# which tags are we dealing with
# ... just use the tags for the current whichSite / deployment...
# ... and don't do this for sb sites, their maps are all white sand bottom
cTagNames = goodFish[whichSite][[1]]
# list to hold (tagName, radius, # positions, % time over HB)
allfish = list()
d2 = data.frame(radius=NA, percentHB=NA)
annulusThickness = 1   # that is 1m
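# Toy illustration of the distance-matrix / ring-mask idea used in the loop
# below, on a tiny 5 x 5 map with the reef at row 3, column 3. Everything here
# is made up for the example; unlike habmap, toyMap codes hard bottom directly
# as 1, so no "1 minus" step is needed.
toyMap = matrix(rbinom(25, 1, 0.4), nrow = 5)         # 1 = hard bottom pixel
toyD   = sqrt(outer((3 - 1:5)^2, (3 - 1:5)^2, "+"))   # pixel distances from the reef cell
ring   = (toyD > 1) & (toyD <= 2)                     # annulus between 1 and 2 pixels out
sum(ring * toyMap) / sum(ring)                        # fraction of hard bottom in that ring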


rings = seq(annulusThickness, 100, by=annulusThickness)  # but only go out to 50m
#rings = head(rings,50)
# cycle through each ring and count the fraction of HB pixels
for (i in 1:length(rings)){
  # start with 2 because of the [i-1]...but it seems to work with starting at 1?
  pixelsPerMeter = 10
  rOuter = rings[i] * pixelsPerMeter  # r is in units of pixels, radius is in meters...10 pixels per meter
  if(i==1){rInner=0} else {rInner = rings[i-1] * pixelsPerMeter}
  kernel = function(z){(z > rInner) & (z < rOuter)}
  # if passed a matrix, this returns a T/F matrix: T if within ring, F if not
  # k is an nx by ny matrix with TRUE(=1) everywhere [within
  # rOuter of (i,j) and beyond rInner of (i,j)] and FALSE(=0) everywhere else
  k = kernel(d)
  #k = k/sum(k)  # this normalizes k so that it sums to one, probably not necessary but a good habit
  percentHB = 1 - sum(k * habmap[,,2]) / sum(k)
  # some exploration
  #d[(cn-10):(cn+10),(ce-10):(ce+10)]
  #habmap[(cn-10):(cn+10), (ce-10):(ce+10), 2]
  #k[(cn-10):(cn+10), (ce-10):(ce+10)]
  d2[i,] = c(rings[i], percentHB)
  #points(c(ce,ce+r,ce,ce-r),c(cn+r,cn,cn-r,cn),pch=19,cex=0.5,col="green")
  #points(ce,cn,pch=10, cex=0.5,col="red")
} # end for loop
# If you want to see it...
# plot(d2,type="l",main="Landscape composition around reef")
# for each fish, loop through all radii and calculate the %HB use
for (i in 1:length(cTagNames)){
  habUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA, HBpreference=NA)
  cFish = z9[z9$tagName == cTagNames[i], ]
  for (j in 1:length(rings)){
    rOuter = rings[j]  # rOuter is in units of m
    if(j==1){rInner=0} else {rInner = rings[j-1]}


503 cPositions = cFish[(cFish$dtr > rInner) & (cFish$dtr < rOuter), ] cHB = cPositions[cPositions$habType == "black", ] habUse[j,] = c(cTagNames[i], rings[j], nrow(cPositions), nrow(cHB)/nrow(cPositions), (nrow(cHB)/nrow(cPositions))/d2$percentHB[j] ) allfish[[i]] = habUse } } # for all fish together calculate the %HB use at all radii # THIS WAY THE 'ALL FISH TOGETHER' NUMBER IS OVERLY WEIGHTED TO FISH WITH MANY # POSITION SOLUTIONS IN A GIVEN RING. #i=length(allfish)+1 #for (j in 1:length(rings)){ # rOuter = rings[j] # rOuter is in units of m # if(j==1){rInner=0} else {rInner = rings[j 1]} # # pick out of z9 only those fish in this deployment # cFish = z9[z9$deployment == deployment[whichSite], ] # # cPositions = cFish[(cFish$dtr > rInner) & (cFish$dtr < rOuter), ] # cHB = cPositions[cPositions$habType == "black", ] # habUse[j,] = c("all" rings[j], nrow(cPositions), # nrow(cHB)/nrow(cPositions), # (nrow(cHB)/nrow(cPositions))/d2$percentHB[j] # ) # allfish[[i]] = habUse #} # THIS WAY THE 'ALL FISH TOGETHER' NUMBER IS GIVES EVEN WEIGHT TO EACH FISH i=length(allfish)+1 habUse = d ata.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA, HBpreference=NA) for (j in 1:length(rings)){ # for each ring, calculate the mean of the habUse of all fish temp1 = vector(length=length(allfish)) temp2 = vector(length=length(allfish)) for (k in 1:length(allfish)){ temp1[k] = as.numeric(allfish[[k]]$HBuse[j]) temp2[k] = as.numeric(allfish[[k]]$HBpreference[j]) } habUse[j,] = c("all", rings[j], NA, mean(temp1, na.rm=T), mean(temp2, na.rm=T)) }


504 allfish[[i]] = habUse ### plot fig 1 ... a double plot for the paper ############################3 par(mfrow=c(1,2)) #stretch this to be as wide as you want # draw the plot par(mar=c(5,6,3,2)+0.1) plot.imagematrix.zy(imagematrix(catmap),useRaster=TRUE) box(which = "plot", lty = "so lid") # add axes labels and numbers mtext(text="Northing (m)", side=2, line=3.4, cex=2) axis(1, at=seq(0,2600,by=500), cex.axis=1.5, labels=seq(0,260,by=50))#seq(8440,8700,by=50)) axis(2, at=seq(0,2600,by=500), cex.axis=1.5, las=1, labels=seq(0,260,by=50)) #seq(560,820,by=50)) # I want to draw on the reef and sdl locations # I CAN'T GET THIS TO DRAW IN THE RIGHT PLACE...SKIP IT # reef findHabType(e=md[[cmd[whichSite]]]$reefEN$easting[1], n=md[[cmd[whichSite]]]$reefEN$northing[1], habmap=catmap1, refere nce=sites[whichSite],# erange=c(245045,245305), nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE, dotCol="red", dotShape=4, dotSize=2) # this one's for if42 ... whichSite = 6 findHabType(e=8514, n=722, habmap=catmap1, reference =sites[whichSite], show=TRUE, crosshairs=FALSE, pixels=TRUE, dotCol="black", dotShape=4, dotSize=2) text(950, 1100, "Reef", cex=1.5, col="white") arrows(1110,1120,1230,1250, length=0.1, lwd=2, col="white") text(130,2400,"a)",cex=2, col="white") # this one's for if43 ... whichSite = 1 findHabType(e=8757, n=437, habmap=catmap1, reference=sites[whichSite], show=TRUE, crosshairs=FALSE, pixels=TRUE, dotCol="black", dotShape=4, dotSize=2) text(1100, 1100, "Reef", cex=1.5) arrows(1110,1120,1230 ,1250, length=0.1, lwd=2) text(130,2300,"a)",cex=2) # this one's for if41 ...whichSite = 4 findHabType(e=8525, n=720,


505 habmap=catmap1, reference=sites[whichSite], show=TRUE, crosshairs=FALSE, pixels=TRUE, dotCol="black", dotShape=4, dotSize=2) text(900, 900, "Reef", cex=1.5) arrows(950,950,1220,1220, length=0.1, lwd=2) text(130,2500,"a)",cex=2) ### figure 1, right panel ################## par(mar=c(5,6,3,2)+0.1) plot(d2,lwd=4, las=1, bty="l", cex.lab = 2, cex.axis=2, type="l", xlim=c(0 ,100), ylim=c(0,1), xlab="Ring Number", ylab="") # mtext(text="Fraction Hard bottom or Hard bottom Use", side=2, line=3.7, cex=1.8) # add individual fish with dashed lines #for (i in 1:length(cTagNames)){ # points(allfish[[i]]$radius, allfish[[i]]$H Buse, type="l") # text(x=100,y=as.numeric(tail(allfish[[i]]$HBuse,1))+0.01, labels=tail(allfish[[i]]$tagName,1)) #} for (i in 1:(length(allfish) 1)){ points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=1, lwd=1) } # now add the average of all fish points(allfish[[length(allfish)]]$radius, allfish[[length(allfish)]]$HBuse, type="l", lty=2, lwd=4) # place the b) text(0,1,"b)", cex=2) ## add a legend #legend( 3, 0.95, legend=c("Fraction hard bottom", "cover", "Individual fish", # "All indiv iduals"), seg.len=4, # lty=c(1,0,1,2), lwd=c(4,1,2,4)) # z check the stretching to that the map x and y axes cross at the zero hash marks ###################################################################### ########## # Calculate the percent HB covera ge and habitat preference index for each # landscape and each fish # # recall that d is a matrix of distances from the reef. the correct reef


506 # is specified when d is made # count all the HB pixels in habmap[] within 100m of the reef ... similar to # the rings work above pixelsPerMeter = 10 rOuter = 100 pixelsPerMeter kernel100 = function(z){z < rOuter} # if passed a matrix, this returns a T/F matrix # T if within 100m, F if not k = kernel100(d) percentHB100 = 1 sum(k habmap[,,2]) / sum(k) # this is fractional HB cover within 100m # list to hold (tagName, radius, # positions, % time over HB) allfish100 = list() # for each fish calculate the fractional HB use within 100m for (i in 1:length(cTagNames)){ h abUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA, HBpreference=NA) cFish = z9[z9$tagName == cTagNames[i], ] for (j in 100){ # I want everything within 100m rOuter = 100 # rOuter is in units of m because it's compared to dFish$ dtr which is in mm cPositions = cFish[(cFish$dtr < rOuter), ] cHB = cPositions[cPositions$habType == "black", ] habUse[1,] = c(cTagNames[i], rOuter, nrow(cPositions), nrow(cHB)/nrow(cPositions), (nrow(cHB)/nrow(cPosition s))/percentHB100 ) allfish100[[i]] = habUse } } mean(as.numeric(c(allfish100[[1]]$HBpreference, allfish100[[2]]$HBpreference,allfish100[[3]]$HBpreference, allfish100[[4]]$HBpreference,allfish100[[5]]$HBpreference, allfish100[[6]]$HBprefere nce))) # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # filtering ALPS.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


507 # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # this file is for exploring how best to filter ALPS data # # first spend time with beacons, then look at fish tags library(ggplot2) # get the global code source("C:/zy/Telemetry/R Data Processing/global variables.r") source("C:/zy/Telemetry/R Data Processing/global functions.r") source("C:/zy/Telemetry/R Data Processing /global metadata.r") # read in tagfm, z0, results load("C:/zy/Telemetry/R summary files/tag 2011Mar16.rdata") ###################################################################### ########## # BEACON TAGS ################################################# ##################### ########## # pick a deployment and a beacon depNum = 1 tagNum = 3 cnF = 1.6 rnF = 3.5 #4.8 dopF = 999 cmd = md[[depNum]] cmd$deployment cmd$beaconNames cbeacon = cmd$beaconNames[tagNum] # get the data tag = importALPSdata(deployment =cmd$deployment,tagName=cbeacon) # drop the things I don't want tag = tag$data[ ,c('utime','datiL','easting','northing','cn','rn','dop','hid')] # cut the bad times tag = tag[(tag$utime > cmd$startUtime) & (tag$utime < cmd$stopUtime),] # look #plot(tag$ea sting, tag$northing, pch=19, cex=0.5, # xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing) #points(cmd$reefEN$easting,cmd$reefEN$northing, pch=19, col="red") #points(cmd$sdlEN$easting,cmd$sdlEN$northing, pch=19, col="blue") #


508 #plot(tag$cn, tag$do p, pch=19, cex=0.5) # filter cn tagf1 = tag[tag$cn < cnF,] tagf2 = tagf1[tagf1$rn > rnF,] tagf3 = tagf2[tagf2$dop > dopF,] # look plot(tag$easting, tag$northing, pch=19, xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing) points(tagf1$easting, tagf1$northing, pch=19, cex=0.7, col="red") points(tagf2$easting, tagf2$northing, pch=19, cex=0.5, col="blue") points(tagf3$easting, tagf3$northing, pch=19, cex=0.3, col="green") #points(cmd$reefEN$easting,cmd$reefEN$northing, pch=17, col="red") points(cm d$sdlEN$easting,cmd$sdlEN$northing, pch=17, col="yellow") ggplot(tagf2,aes(x=easting, y=northing, colour=rn))+ geom_point(alpha=0.5) + coord_cartesian(xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing) plo t(tagf2$datiL, tagf2$rn, pch=19, ylim=c(0,6), cex=0.5) # filter using depth, etc. if (depthF) {d1 = d1[d1$depth < depthF,]} if (cnF) {d1 = d1[d1$cn < cnF,]} if (rnF) {d1 = d1[d1$rn < rnF,]} if (dopF) {d1 = d1[d1$dop < dopF,]} if (hidF ) {print("I don't yet know how to filter on 'hid'")} if (hcountF) {d1 = d1[d1$hcount >= hcountF,]} ###################################################################### ########## # FISH TAGS ########################################################### ########### ########## c2007TagNames = md[[1]]$fishNames c2008TagNames = c("f60100", "f60300", "f61100", "f61300") cTagNames = c(c2007TagNames, c2008TagNames) filteringSummary = data.frame( "tagName" = cTagNames, "deployment" = as.factor(c(rep("hb2007" ,length(c2007TagNames)), rep("hb2008",length(c2008TagNames)))), "totalPS" = rep(NA, length(cTagNames)),


509 stringsAsFactors=FALSE ) # counts of unfiltered data for (i in 1:length(tag)){ filteringSummary$totalPS[i] = nrow(tag[[i]]$data) } # create plots of all 2007, 2008 fish unfiltered data... # to make all plots the same size plot using 2008 plotting limits minEast = 7777777; maxEast = 7777777; minNorth = 7777777; maxNorth = 7777777; maxTime = tag[[1]]$data$datiL[1] for (i in 1:length(tag)){ if(min(tag[[i]]$data$easting) < minEast) minEast = min(tag[[i]]$data$easting) if(max(tag[[i]]$data$easting) > maxEast) maxEast = max(tag[[i]]$data$easting) if(min(tag[[i]]$data$northing) < minNorth) minNorth = min(tag[[i]]$data$northing) if(max( tag[[i]]$data$northing) > maxNorth) maxNorth = max(tag[[i]]$data$northing) if(tag[[i]]$data$datiL[nrow(tag[[i]]$data)] > maxTime) maxTime = tag[[i]]$data$datiL[nrow(tag[[i]]$data)] } #print(paste("Easting Range: ", minEast, to ", maxEast, sep="")) # print(paste("Northing Range: ", minNorth, to ", maxNorth, sep="")) maxElimits = c(minEast, maxEast) maxNlimits = c(minNorth, maxNorth) mediumElimits = c(8200,9000) mediumNlimits = c(300,1100) smallElimits = c(8500,8650) smallNlimits =c(630,750) allTime = c(tag[[1]]$data$datiL[1],maxTime) # a EN plot with limits showing all PS par(mfrow=c(3,3)) for (i in 1:length(tag)){ plot(tag[[i]]$data$easting, tag[[i]]$data$northing, pch=19, cex=0.1, xlim = maxElimits, ylim=maxNlimits, main=tag[[i]]$tagName ) j=ifelse(i<6, 1, 2) points(md[[j]]$reefEN$easting, md[[j]]$reefEN$northing, pch=19, cex=1, col="red") points(md[[j]]$sdlEN$easting, md[[j]]$sdlEN$northing, pch=19, cex=1, col="blue") } # an EN plot zoomed in par(mfrow=c(3,3))


510 for (i in 1:length(ta g)){ plot(tag[[i]]$data$easting, tag[[i]]$data$northing, pch=19, cex=0.1, #xlim = md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northing, xlim=mediumElimits, ylim=mediumNlimits, main=tag[[i]]$tagName ) j=ifelse(i<6, 1, 2) points( md[[j]]$reefEN$easting, md[[j]]$reefEN$northing, pch=19, cex=1, col="red") points(md[[j]]$sdlEN$easting, md[[j]]$sdlEN$northing, pch=19, cex=1, col="blue") } # a time easting plot with limits showing all unfiltered points par(mfrow=c(3,3)) for (i in 1:length(tag)){ plot(tag[[i]]$data$datiL, tag[[i]]$data$easting, pch=19, cex=1, #xlim = allTime, ylim=maxElimits, main=tag[[i]]$tagName ) j=ifelse(i<6, 1, 2) abline(h=md[[j]]$reefEN$easting, col="red") abline(h=md[[j]]$sdlEN$easting, col ="blue") } # a time easting plot zoomed in showing all unfiltered points par(mfrow=c(3,3)) for (i in 1:length(tag)){ plot(tag[[i]]$data$datiL, tag[[i]]$data$easting, pch=19, cex=1, #xlim = md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northin g, #xlim=, ylim=mediumElimits, main=tag[[i]]$tagName ) j=ifelse(i<6, 1, 2) abline(h=md[[j]]$reefEN$easting, col="red") abline(h=md[[j]]$sdlEN$easting, col="blue") } # a time easting plot zoomed in showing filtered points par(mfrow=c (3,3)) for (i in 1:length(tagfm)){ plot(tagfm[[i]]$data$datiL, tagfm[[i]]$data$easting, pch=19, cex=0.3, #xlim = md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northing, #xlim=, ylim=smallElimits, main=tagfm[[i]]$tagName )


  j=ifelse(i<6, 1, 2)
  abline(h=md[[j]]$reefEN$easting, col="red")
  abline(h=md[[j]]$sdlEN$easting, col="blue")
}

# a time northing plot with limits showing all unfiltered points
par(mfrow=c(3,3))
for (i in 1:length(tag)){
  plot(tag[[i]]$data$datiL, tag[[i]]$data$northing, pch=19, cex=1,
    #xlim = allTime,
    ylim=maxNlimits, main=tag[[i]]$tagName
  )
  j=ifelse(i<6, 1, 2)
  abline(h=md[[j]]$reefEN$northing, col="red")
  abline(h=md[[j]]$sdlEN$northing, col="blue")
}

# a time northing plot zoomed in showing all unfiltered points
par(mfrow=c(3,3))
for (i in 1:length(tag)){
  plot(tag[[i]]$data$datiL, tag[[i]]$data$northing, pch=19, cex=1,
    #xlim = md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northing,
    #xlim=,
    ylim=mediumNlimits, main=tag[[i]]$tagName
  )
  j=ifelse(i<6, 1, 2)
  abline(h=md[[j]]$reefEN$northing, col="red")
  abline(h=md[[j]]$sdlEN$northing, col="blue")
}

# a time northing plot zoomed in showing filtered points
par(mfrow=c(3,3))
for (i in 1:length(tagfm)){
  plot(tagfm[[i]]$data$datiL, tagfm[[i]]$data$northing, pch=19, cex=0.3,
    #xlim = md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northing,
    #xlim=,
    ylim=smallNlimits, main=tagfm[[i]]$tagName
  )
  j=ifelse(i<6, 1, 2)
  abline(h=md[[j]]$reefEN$northing, col="red")
  abline(h=md[[j]]$sdlEN$northing, col="blue")
}
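# A standalone sketch of the kind of threshold filtering explored above, written
# against a generic data.frame of position solutions; this is only an
# illustration of the idea, not the filterALPSdata() defined in
# 'global functions.r'.
filterPositions = function(d, cnF=NA, rnF=NA, dopF=NA, hcountF=NA){
  if (!is.na(cnF))     d = d[d$cn < cnF, ]          # keep low condition numbers
  if (!is.na(rnF))     d = d[d$rn < rnF, ]          # keep low residuals
  if (!is.na(dopF))    d = d[d$dop < dopF, ]        # keep low dilution of precision
  if (!is.na(hcountF)) d = d[d$hcount >= hcountF, ] # require enough SDLs in the solution
  d
}
# made-up example data: only the first row survives these thresholds
fake = data.frame(cn=c(0.8,5,1.2), rn=c(1,2,9), dop=c(2,50,3), hcount=c(4,3,5))
filterPositions(fake, cnF=1.5, rnF=3, hcountF=4)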


512 ###################################################################### ########## ###################################################################### ########## ######################## ############################################## ########## ###################################################################### ########## # pick one beacon to work with and look at the raw data to get a feel for # what I think the true fish path was. cta g = importALPSdata(deployment="hb2007", tagName="b79500") cmd=md[[1]] plot(ctag$data$easting, ctag$data$northing, pch=19, cex=0.2, xlim=mediumElimits, ylim=mediumNlimits) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red") points(cmd$sdlE N$easting, cmd$sdlEN$northing, pch=19, col="blue") ctagf0 = filterALPSdata(ctag) ctagf1 = filterALPSdata(ctag, cnF=30) ctagf2 = filterALPSdata(ctag, cnF=15) ctagf3 = filterALPSdata(ctag, cnF=10) ctagf4 = filterALPSdata(ctag, cnF=7) ctagf5 = filterALPS data(ctag, cnF=5) ctagf6 = filterALPSdata(ctag, cnF=3) ctagf7 = filterALPSdata(ctag, cnF=2) ctagf8 = filterALPSdata(ctag, cnF=1.5) ctagf9 = filterALPSdata(ctag, cnF=1.3) # now look at a snip of these unfiltered and filtered points par(mfrow=c(1,1)) temptag = ctagf9 plot(ctag$data$easting, ctag$data$northing, pch=19, cex=1, xlim=c(8550,8600), ylim=c(680,720)) points(temptag$data$easting, temptag$data$northing, pch=19, cex=0.5, col="green") points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col= "red") points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue") # now look at a snip of these unfiltered and filtered points par(mfrow=c(2,1)) temptag = ctagf0 plot(ctag$data$datiL, ctag$data$easting, pch=19, cex=1,ylim=mediumElimits) points(tem ptag$data$datiL, temptag$data$easting, pch=19, cex=0.5, col="green") abline(h=cmd$reefEN$easting, col="red")


513 abline(h=cmd$sdlEN$easting, col="blue") plot(ctag$data$datiL, ctag$data$northing, pch=19, cex=1,ylim=mediumNlimits) points(temptag$data$datiL, tem ptag$data$northing, pch=19, cex=0.5, col="green") abline(h=cmd$reefEN$northing, col="red") abline(h=cmd$sdlEN$northing, col="blue") ############## another thing # now look at a snip of these unfiltered and filtered points par(mfrow=c(2,1)) temptag = cta gf7 utimeLims = c(1197300000, 1197300000+15*60*60) plot(ctag$data$utime, ctag$data$easting, pch=19, cex=0.7, xlim=utimeLims, ylim=cmd$plotLimits$easting) points(temptag$data$utime, temptag$data$easting, pch=19, cex=0.5, col="green") abline(h=cmd$reefEN$ easting, col="red") abline(h=cmd$sdlEN$easting, col="blue") plot(ctag$data$utime, ctag$data$northing, pch=19, cex=1, xlim=utimeLims, ylim=cmd$plotLimits$northing) points(temptag$data$utime, temptag$data$northing, pch=19, cex=0.5, col="green") abline( h=cmd$reefEN$northing, col="red") abline(h=cmd$sdlEN$northing, col="blue") ################# try filtering on the number of SDLs involved, hcount par(mfrow=c(2,2)) plot(ctag$data$hcount, ctag$data$cn) plot(ctag$data$hcount, ctag$data$rn) plot(ctag$data$hc ount, ctag$data$dop) plot(ctag$data$cn, ctag$data$rn) plot(ctag$data$cn, ctag$data$dop, xlim=c(0,10), ylim=c(0,10)) plot(ctag$data$cn, ctag$data$rn, xlim=c(0,2), ylim=c(0,4)) ctagf1 = filterALPSdata(ctag, hcountF=3) ctagf2 = filterALPSdata(ctag, hcountF= 4) ctagf3 = filterALPSdata(ctag, hcountF=5) # now look at a snip of these unfiltered and filtered points par(mfrow=c(1,1)) temptag = ctagf3 plot(ctag$data$easting, ctag$data$northing, pch=19, cex=1, xlim=c(8550,8600), ylim=c(680,720)) points(temptag$data $easting, temptag$data$northing, pch=19, cex=0.5, col="green")


514 points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red") points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue") ########################################################### ########### ########## ###################################################################### ########## ###################################################################### ########## ###################################################################### ## ######## # pick one fish to work with and look at the raw data to get a feel for # what I think the true fish path was. For speed, just use some of the data. ctag = tag[[1]] cmd=md[[1]] ctag$data = head(ctag$data,50000) # about 5 days plot(ctag$data$ea sting, ctag$data$northing, pch=19, cex=0.2, xlim=mediumElimits, ylim=mediumNlimits) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red") points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue") ctagf0 = filterALPSdata(df1=ctag) ctagf1 = filterALPSdata(df1=ctag, cnF=5) ctagf2 = filterALPSdata(df1=ctag, cnF=3) ctagf3 = filterALPSdata(df1=ctag, cnF=2) ctagf4 = filterALPSdata(df1=ctag, cnF=1.5) # now look at a snip of these unfiltered and filtered points plot(ctag$data$eastin g, ctag$data$northing, pch=19, cex=1,xlim=mediumElimits, ylim=mediumNlimits) points(ctagf3$data$easting, ctagf3$data$northing, pch=19, cex=0.5, col="green") points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red") points(cmd$sdlEN$easting, cmd$sd lEN$northing, pch=19, col="blue") # now look at a snip of these unfiltered and filtered points par(mfrow=c(2,1)) temptag = ctagf3 utimeLims = c(1197300000, 1197300000+1*60*60) plot(ctag$data$utime, ctag$data$easting, pch=19, cex=0.7, xlim=utimeLims, ylim =cmd$plotLimits$easting) points(temptag$data$utime, temptag$data$easting, pch=19, cex=0.5, col="green") abline(h=cmd$reefEN$easting, col="red") abline(h=cmd$sdlEN$easting, col="blue")


515 plot(ctag$data$utime, ctag$data$northing, pch=19, cex=1, xlim=utimeLims ylim=cmd$plotLimits$northing) points(temptag$data$utime, temptag$data$northing, pch=19, cex=0.5, col="green") abline(h=cmd$reefEN$northing, col="red") abline(h=cmd$sdlEN$northing, col="blue") ############################################################# ######### ########### # NOW ABOUT THE 1 MINUTE FILTERING THING... ctagf0 = filterALPSdata(df1=ctag) ctagf1 = filterALPSdata(df1=ctag, cnF=1.5) ctagf2 = filterALPSdata(df1=ctag, cnF=1.5, minuteMean=TRUE) # now look at a snip of these unfiltered and fil tered points par(mfrow=c(2,1)) temptag1 = ctagf1 temptag2 = ctagf2 utimeLims = c(1200000500, 1200000500+30*60) plot(ctag$data$utime, ctag$data$easting, pch=19, cex=0.7, xlim=utimeLims, ylim=c(8520,8580)) points(temptag1$data$utime, temptag1$data$easting, type="l", cex=1, col="green") points(temptag1$data$utime, temptag1$data$easting, pch=19, cex=1, col="green") points(temptag2$data$utime, temptag2$data$easting, type="l", cex=1, col="red") points(temptag2$data$utime, temptag2$data$easting, pch=19, cex=1, c ol="red") abline(h=cmd$reefEN$easting, col="red") abline(h=cmd$sdlEN$easting, col="blue") plot(ctag$data$utime, ctag$data$northing, pch=19, cex=1, xlim=utimeLims, ylim=c(670,700)) points(temptag1$data$utime, temptag1$data$northing, type="l", cex=1, col="g reen") points(temptag1$data$utime, temptag1$data$northing, pch=19, cex=1, col="green") points(temptag2$data$utime, temptag2$data$northing,type="l", cex=1, col="red") points(temptag2$data$utime, temptag2$data$northing, pch=19, cex=1, col="red") ablin e(h=cmd$reefEN$northing, col="red") abline(h=cmd$sdlEN$northing, col="blue") # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # habitat maps.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # In this file I want to do a couple exploratory things with categorical


# habitat maps. Including answering the question of what the %HB cover is
# for circles of different radius around the reef

library(rimage)

rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_bluebox.jpg"
i1 = round(read.jpeg(rfile))
plot.imagematrix(i1, useRaster=T)
#par(mar=c(0.2,0.2,0.2,0.2)); plot(i1); box("plot", col="red")

# locations of IF41 and IF42
# IF41 (245174.8 E, 3262391 N) GIMP yellow box pixel range=(1796 column,1580 row)
#   to (1815,1600), center=(1805,1590), findHabType pixel location=(1806,1561)
# IF42 (245318.1 E, 3262407 N) GIMP yellow box pixel range=(3335,1446)
#   to (3355,1466), center=(3345,1455), findHabType pixel location=(3340,1701)
# GIMP COUNTS ROWS DOWN. R COUNTS ROWS UP.
# some checks
points(1805, nn-1590, pch=19, cex=0.5, col="green")
points(3345, nn-1455, pch=19, cex=0.5, col="green")
findHabType(e=245174.8-eastingOffset, n=3262391-northingOffset, habmap=i1,
  reference="if41", show=TRUE, crosshairs=TRUE, pixels=TRUE)
findHabType(e=245318.1-eastingOffset, n=3262407-northingOffset, habmap=i1,
  reference="if42", show=TRUE, crosshairs=TRUE, pixels=TRUE)
# 1 = white = sand, 0 = black = HB

################################################################################
# Now...given the EN location of the reef calculate the %HB of the surrounding
# area within different radii.
radius = 1:150 #seq(10,50,by=3)#1:10#c(1,50,100)#1:130
d1 = data.frame(radius=NA, percentHB=NA)
habmap = i1 #testmap
# location of IF41 in easting/northing or row,column. row/column is right?
#ce = 245174.8 - eastingOffset # center (meters) in easting direction
#cn = 3262391 - northingOffset # center (meters) in northing direction
ce = 1806 # center column
cn = 1561 # center row
ne = dim(habmap)[2] # number of columns
nn = dim(habmap)[1] # number of rows
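# A tiny self-contained check of the outer() trick used just below: for a 5 x 5
# grid with its center at row 3, column 3, every entry of ddSmall is that
# pixel's distance to the center. The small grid and the *Small names are made
# up for illustration so they do not overwrite the real ce, cn, ne, nn, d.
nnSmall = 5; neSmall = 5; cnSmall = 3; ceSmall = 3
ddSmall = sqrt(outer((cnSmall - (1:nnSmall))^2, (ceSmall - (1:neSmall))^2, "+"))
ddSmall[3,3] # 0, the center pixel
ddSmall[3,5] # 2, two columns from the center
ddSmall[1,1] # sqrt(8), the corner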


# compute the distance between pixel (ce,cn) and every other pixel
d = sqrt(outer((cn-(1:nn))^2, (ce-(1:ne))^2, "+"))
# the default is for outer to do the product, but this will add instead

# make another array layer with values of 0 or 1, 1 if within a certain
# distance of the reef
for (i in 1:length(radius)){
  pixelsPerMeter = 10
  r = radius[i] * pixelsPerMeter # r is in units of pixels, radius is in meters...10 pixels per meter
  kernel = function(z){z < r} # if passed a matrix, this returns a T/F matrix
  k = kernel(d) # k is an nx by ny matrix with TRUE(=1) everywhere within r of (i,j) and FALSE(=0) everywhere else
  #k = k/sum(k) # this normalizes k so that it sums to one, probably not necessary but a good habit
  percentHB = 1 - sum(k * habmap[,,2]) / sum(k)
  # some exploration
  #d[(cn-10):(cn+10),(ce-10):(ce+10)]
  #habmap[(cn-10):(cn+10), (ce-10):(ce+10), 2]
  #k[(cn-10):(cn+10), (ce-10):(ce+10)]
  d1[i,] = c(radius[i], percentHB)
  #points(c(ce,ce+r,ce,ce-r),c(cn+r,cn,cn-r,cn),pch=19,cex=0.5,col="green")
  #points(ce,cn,pch=10, cex=0.5,col="red")
} # end for loop
plot(d1,type="l",main="Landscape composition around IF41")

### make a plot with the percent HB v distance from reef...
### ... then add fish data...using all points within 'distance x' from reef
### ... what percent of the time were they over HB
load("C:/zy/Telemetry/R summary files/z0 2011Mar16.rdata")

# which tags are we dealing with
cTagNames = as.character(unique(z0$tagName))
# list to hold (tagName, radius, # positions, % time over HB)
allfish = list()
# for each fish, loop through all radii and calculate the %HB use
for (i in 1:length(cTagNames)){
  habUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA)
  cFish = z0[z0$tagName == cTagNames[i], ]
  for (j in 1:length(radius)){


    cPositions = cFish[cFish$dtr < radius[j], ]
    cHB = cPositions[cPositions$habType == "black", ]
    habUse[j,] = c(cTagNames[i], radius[j], nrow(cPositions),
      nrow(cHB)/nrow(cPositions))
    allfish[[i]] = habUse
  }
}

# look at them
allfish[[1]]$HBpreference[50]
plot(allfish[[1]]$HBuse)

# for all fish together calculate the %HB use at all radii
for (j in 1:length(radius)){
  cPositions = z0[z0$dtr < radius[j], ]
  cHB = cPositions[cPositions$habType == "black", ]
  habUse[j,] = c("all", radius[j], as.numeric(nrow(cPositions)),
    as.numeric(nrow(cHB)/nrow(cPositions)))
  allfish[[6]] = habUse
}

## now all these fish curves go flat after radius = ~70...so chop them off there
for (i in 1:length(allfish)){
  allfish[[i]] = head(allfish[[i]],70)
}

# now a plot
plot(d1,type="l",lwd=3, las=1, xlab="Radius (m)",
  ylab="Percent Hard bottom or Hard bottom Use")
text(x=110, 0.26, labels="Percent hard bottom of landscape")
# add individual fish with dashed lines
#for (i in 1:length(cTagNames)){
#  points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
#  text(x=100,y=as.numeric(tail(allfish[[i]]$HBuse,1))+0.01, labels=tail(allfish[[i]]$tagName,1))
#}

xpos = 70
# f60200
i=1
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
text(x=xpos,y=as.numeric(allfish[[i]]$HBuse[70]), pos=4,
  labels=tail(allfish[[i]]$tagName,1))
# f60400, lowest of triplet
i=2
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
text(x=xpos,y=as.numeric(allfish[[i]]$HBuse[70])-0.011, pos=4,


  labels=tail(allfish[[i]]$tagName,1))
# f60900, mid of triplet
i=3
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
text(x=xpos,y=as.numeric(allfish[[i]]$HBuse[70]), pos=4,
  labels=tail(allfish[[i]]$tagName,1))
# f60300
i=4
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
text(x=xpos,y=as.numeric(allfish[[i]]$HBuse[70]), pos=4,
  labels=tail(allfish[[i]]$tagName,1))
# f61100, highest of triplet
i=5
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
text(x=xpos,y=as.numeric(allfish[[i]]$HBuse[70])+0.009, pos=4,
  labels=tail(allfish[[i]]$tagName,1))
# all fish
i=6
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=2, lwd=3)
text(x=xpos, y=as.numeric(allfish[[i]]$HBuse[70]), pos=4, labels="combined")

whichx = 50
abline(v=whichx)

# what are the habitat preferences at radius=60
# %HB use / %HB available
temp1 = vector(length=6)
for (i in 1:5){
  temp1[i] = as.numeric(allfish[[i]]$HBuse[whichx]) / d1$percentHB[whichx]
}

################################################################################
# Now look at habitat preference using annuli...dividing the landscape into
# rings...what's the %HB in that ring?...and of the PS in that ring what % are
# over HB
# ... something similar to above
load("C:/zy/Telemetry/R summary files/z0 2011Mar16.rdata")

d2 = data.frame(radius=NA, percentHB=NA)
habmap = i1 #testmap
annulusThickness = 1 # that is 5m
rings = seq(annulusThickness,150, by=annulusThickness)


for (i in 1:length(rings)){ #start with 2 because of why...?
  pixelsPerMeter = 10
  rOuter = rings[i] * pixelsPerMeter # r is in units of pixels, radius is in meters...10 pixels per meter
  if(i==1){rInner=0} else {rInner = rings[i-1] * pixelsPerMeter}
  kernel = function(z){(z > rInner) & (z < rOuter)} # if passed a matrix, this returns a T/F matrix
  # k is an nx by ny matrix with TRUE(=1) everywhere [within
  # rOuter of (i,j) and beyond rInner of (i,j)] and FALSE(=0) everywhere else
  k = kernel(d)
  #k = k/sum(k) # this normalizes k so that it sums to one, probably not necessary but a good habit
  percentHB = 1 - sum(k * habmap[,,2]) / sum(k)
  # some exploration
  #d[(cn-10):(cn+10),(ce-10):(ce+10)]
  #habmap[(cn-10):(cn+10), (ce-10):(ce+10), 2]
  #k[(cn-10):(cn+10), (ce-10):(ce+10)]
  d2[i,] = c(rings[i], percentHB)
  #points(c(ce,ce+r,ce,ce-r),c(cn+r,cn,cn-r,cn),pch=19,cex=0.5,col="green")
  #points(ce,cn,pch=10, cex=0.5,col="red")
} # end for loop
plot(d2,type="l",main="Landscape composition around IF41")

# which tags are we dealing with
cTagNames = as.character(unique(z0$tagName))
# list to hold (tagName, radius, # positions, % time over HB)
allfish = list()
# for each fish, loop through all radii and calculate the %HB use
for (i in 1:length(cTagNames)){
  habUse = data.frame(tagName=NA, radius=NA, numPositions=NA, HBuse=NA,
    HBpreference=NA)
  cFish = z0[z0$tagName == cTagNames[i], ]
  for (j in 1:length(rings)){
    rOuter = rings[j] # rOuter is in units of m
    if(j==1){rInner=0} else {rInner = rings[j-1]}
    cPositions = cFish[(cFish$dtr > rInner) & (cFish$dtr < rOuter), ]
    cHB = cPositions[cPositions$habType == "black", ]


    habUse[j,] = c(cTagNames[i], rings[j], nrow(cPositions),
      nrow(cHB)/nrow(cPositions),
      (nrow(cHB)/nrow(cPositions))/d2$percentHB[j]
    )
    allfish[[i]] = habUse
  }
}

# for all fish together calculate the %HB use at all radii
i=6
for (j in 1:length(rings)){
  rOuter = rings[j] # rOuter is in units of m
  if(j==1){rInner=0} else {rInner = rings[j-1]}
  cPositions = z0[(z0$dtr > rInner) & (z0$dtr < rOuter), ]
  cHB = cPositions[cPositions$habType == "black", ]
  habUse[j,] = c("all", rings[j], nrow(cPositions),
    nrow(cHB)/nrow(cPositions),
    (nrow(cHB)/nrow(cPositions))/d2$percentHB[j]
  )
  allfish[[i]] = habUse
}

# now a plot ...this is bob3
par(mar=c(5,6,3,2)+0.1)
xpos = 50 # how many m out do you want to plot
d3 = head(d2,xpos)
allfishlong = allfish # preserve the original list
for(i in 1:length(allfish)){
  allfish[[i]] = head(allfish[[i]],50)
}

plot(d3,type="l",lwd=4, las=1, bty="l", cex.lab = 2, cex.axis=2,
  xlim=c(0,50), ylim=c(0,1), xlab="Annulus (m)", ylab="") #
mtext(text="Percent Live bottom or Live bottom Use", side=2, line=3.7, cex=1.8)
# add individual fish with dashed lines
#for (i in 1:length(cTagNames)){
#  points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l")
#  text(x=100,y=as.numeric(tail(allfish[[i]]$HBuse,1))+0.01, labels=tail(allfish[[i]]$tagName,1))
#}

# f60200
i=1


points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=1, lwd=2)
#text(x=17, y=0.4, labels=i, cex=1.5)
# f60400, lowest of triplet
i=2
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=2, lwd=2)
#text(x=40, y=0.9, labels=i, cex=1.5)
# f60900, mid of triplet
i=3
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=3, lwd=2)
#text(x=27, y=0.6, labels=i, cex=1.5)
# f60300
i=4
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=4, lwd=2)
#text(x=38, y=0.6, labels=i, cex=1.5)
# f61100, highest of triplet
i=5
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=5, lwd=2)
#text(x=42, y=0.6, labels=i, cex=1.5)
# all fish
i=6
points(allfish[[i]]$radius, allfish[[i]]$HBuse, type="l", lty=2, lwd=4)

# this is bob4
# another plot of the index of habitat preference in each ring
ltylist = c(1,2,3,4,5,2)
lwdlist = c(2,2,2,2,2,4)
plot(0,0,type="n", xlim=c(15,50), ylim=c(0,3), las=1,
  xlab="Distance from Reef (m)", ylab="Habitat Preference Index")
for(i in 1:length(allfish)){
  points(as.numeric(allfish[[i]]$radius), allfish[[i]]$HBpreference,
    type="l", lty=ltylist[i], lwd=lwdlist[i])
}

################################################################################
################################################################################
# I want small sections of all three RGB layers centered on some point...


# this is my modified version of the plotting command...I want to change the
# x and y axis values and show them, etc.
# This is tailored specifically to 'catmap' below...to match the x and y ranges
plot.imagematrix.zy = function (x, ...) {
  colvec <- switch(attr(x, "type"),
    grey = grey(x),
    rgb = rgb(x[, 1], x[, 2], x[, 3]))
  if (is.null(colvec)) stop("image matrix is broken.")
  colors <- unique(colvec)
  colmat <- array(match(colvec, colors), dim = dim(x)[1:2])
  image(x = 0:(dim(colmat)[2]), y = 0:(dim(colmat)[1]),
    z = t(colmat[nrow(colmat):1, ]), col = colors,
    bty="o", cex.lab=2, xlab = "Easting (m)", ylab = "", axes = FALSE, asp = 1,
    ...)
}

# import the image to use
rfile = "C:/zy/Telemetry/R summary files/IF41_IF42_lines_aligned_HBandSB_3.jpg"
i3 = round(read.jpeg(rfile))
#plot(i3, useRaster=TRUE)

# pick out just the portion I want
bc = c(1590+1, 1805+1) # bc = box center...location of reef
bw = 2600 # bw = box width, 2*130m = 2600 pixels
bh = 2600 # bh = box height
bl = function(){bc[2] - bw/2}
br = function(){bc[2] + bw/2}
bt = function(){bc[1] - bh/2}
bb = function(){bc[1] + bh/2}

catmap = i3[bt():bb(), bl():br(), 3]
catmap1 = i3[bt():bb(), bl():br(), ] # curse it, plotting needs 2D and findHabType needs 3D

# From this I calculate the eastings and northings of the edges of catmap, using
# 10pix/m. Reef IF41/center is at (245174.8 easting, 3262391 northing). The
# image is 2600 pixels square, 260m square. So:
# ...the top row is 3262391 N + 130m = 3262521 N
# ...the bottom row is 3262391 N - 130m = 3262261 N
# ...the left column is 245175 E - 130m = 245045 E
# ...the right column is 245175 E + 130m = 245305 E
#
nx=bw+1
ny=bh+1
r=10 # 10 pixels = 1m
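# A small standalone sketch of the easting/northing-to-pixel arithmetic the
# comments above describe (10 pixels per metre, catmap spanning
# 245045-245305 E and 3262261-3262521 N, rows counted from the bottom in R).
# enToPixel() is only an illustration, not a function used elsewhere in these scripts.
enToPixel = function(easting, northing, eLeft=245045, nBottom=3262261, pixPerM=10){
  c(column = (easting - eLeft) * pixPerM,
    row    = (northing - nBottom) * pixPerM)
}
enToPixel(245174.8, 3262391) # the reef: roughly column 1298, row 1300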


# draw it ...this is bob1
par(mar=c(5,6,3,2)+0.1)
plot.imagematrix.zy(imagematrix(catmap),useRaster=TRUE)
box(which = "plot", lty = "solid")
# add axes labels and numbers
mtext(text="Northing (m)", side=2, line=4.5, cex=2)
axis(1, at=seq(0,2600,by=500), labels=seq(0,260,by=50), cex.axis=2)
axis(2, at=seq(0,2600,by=500), labels=seq(0,260,by=50), las=1, cex.axis=2)

# I want to draw the locations of the reef and sdls on the image...
# ...these are my best estimates of the locations of the reef and sdl
# ... IF41 (245174.8 E, 3262391 N)
# ... 50m array spacing
md[[1]]$sdlEN
md[[1]]$sdlEN$easting + eastingOffset
md[[1]]$sdlEN$northing + northingOffset
# ... 125m spacing
md[[2]]$sdlEN
md[[2]]$sdlEN$easting + eastingOffset
md[[2]]$sdlEN$northing + northingOffset

# ...in the following you have to change the cpoint and the color
# 50m spacings
for (cpoint in 1:5){
  dotColors = c("black", "white", "black", "black", "black")
  findHabType(e=md[[1]]$sdlEN$easting[cpoint], n=md[[1]]$sdlEN$northing[cpoint],
    habmap=catmap1, reference="other", erange=c(245045,245305),
    nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2)
}
# 125m spacings
for (cpoint in 1:4){
  dotColors = c("black", "white", "black", "white", "black")
  findHabType(e=md[[2]]$sdlEN$easting[cpoint], n=md[[2]]$sdlEN$northing[cpoint],
    habmap=catmap1, reference="other", erange=c(245045,245305),
    nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
    dotCol=dotColors[cpoint], dotShape=3, dotSize=2)
}
# reef
cpoint=1
findHabType(e=md[[2]]$reefEN$easting[cpoint]+1, n=md[[2]]$reefEN$northing[cpoint],
  habmap=catmap1, reference="other", erange=c(245045,245305),
  nrange=c(3262261,3262521), show=TRUE, crosshairs=FALSE, pixels=TRUE,
  dotCol="black", dotShape=4, dotSize=2)


525 # label the reef with an arrow text(900, 900, "Reef", cex=1.5) arrows(950,950,1220,1220, length=0.1, lwd=2) # this is bob2 # ...the top row is 3262391 N + 130m = 3262521 N # ...the bottom row is 3262391 N 130m = 3262261 N # ...the left column is 245175 E 130m = 245045 E # ...the right column is 245175 E + 130m = 245305 E # I don't know why findHabType isn't working, so I'll put the marks on the # figure the hard way, I know that the reef is at (1300 column, 1300 row) and # that 10pixels = 1m...so these are the (column, row) locations of everything # reef = (8575 E, 691 N) at (1300 c, 1300 r) # center SDL = (8583 E, 699 N) -> 8m east, 8m north -> (1380, 1380) # 2007N41 = (8569E, 742N) -> 6m west, 51m north -> (1240, 1810) # 2007E42 = (8626E, 694N) -> 51m east, 3m north -> (1810, 1330) # 2007S43 = (8570, 639N) -> 5m west, 52m south -> (1250, 780) # 2007W44 = (8522, 699) -> 53m west, 8m north -> (770 530, 1380) # 2008N41 = (8575, 817) -> 0m east,126m north -> (1300, 2560) # 2008E42 = (8684, 686) -> 109m east, 5m south -> (2390, 1250) # 2008S43 = (8568, 580) -> 7m west, 111m south -> (1230, 190) # 2008W44 = (8471, 700) -> 104m west, 9m north -> (260, 1390) text(x=10 0, y=100, labels="Reef") ###################################################################### ######### ###################################################################### ######### ###################################################################### ######### ###################################################################### ######### ###################################################################### ######### ###################################################################### ######### # @@@ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # home range bootstrapping.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
################################################################################
################################################################################
### Home Ranges
################################################################################
################################################################################
#
# THE LIMITS YOU USE WHEN CALCULATING THE KDE AFFECT THE ANSWER, SO FOR ALL
# FISH MAKE SURE TO USE THE SAME LIMITS ON EASTING AND NORTHING.
#
# DO SOME WORK TO CHECK THE hr ESTIMATE SENSITIVITY TO THE LIMITS AND n GRID
# CELLS IN EACH DIRECTION.
#
# USE BOOTSTRAPPING TO FIND CONFIDENCE INTERVALS ON HR ESTIMATES. ABOUT 1000
# RUNS OF THE BOOTSTRAP IS ABOUT RIGHT. BE SURE TO CHECK THAT THE MEAN OF
# ALL THE BOOTSTRAPS IS ABOUT EQUAL TO THE HR ESTIMATE WITH ALL THE DATA. USE
# 95% QUANTILES TO APPROXIMATE THE 95% CI.
# I worked this out someplace already. I'm not sure where. I did it during my
# first visit to McMaster. I found this in 'chapter 3 part 2.r'.
################################################################################
################################################################################
### Bootstrapping home range estimates
################################################################################
################################################################################
bootHR <- function(dat, by="day", nboot=100, prob=0.95,
  progressbar=FALSE, bootplot=FALSE, pts=FALSE, drawplot=FALSE, ...
  # 'by' will subset the data by whatever you choose, say 'day', then the
  # bootstrap will pick randomly from the 'days'
) {
  if (progressbar) {


    require(tcltk)
    pb <- tkProgressBar("hr bootstrap",min=0,max=nboot)
  }
  time <- dat$datiL
  dat <- subset(dat,select=c("northing","easting"))
  timecat <- cut(time, breaks=by)
  datsplit <- split(dat,timecat)
  nt <- length(levels(timecat))
  bootres <- numeric(nboot)
  if (bootplot) with(dat,plot(easting,northing,pch="."))
  for (i in 1:nboot) {
    bootsamp <- sample(nt,size=nt,replace=TRUE)
    bootdat <- do.call(rbind,datsplit[bootsamp])
    if (bootplot) with(bootdat,points(easting,northing,pch=".",col=i+1))
    bootres[i] <- with(bootdat,
      homeRange(easting, northing, prob=prob, pts=pts, drawplot=drawplot, ...)
    )
    if (progressbar) setTkProgressBar(pb,i)
  }
  if (progressbar) close(pb)
  bootres
} # end bootHR

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# internal array test.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
################################################################################
### This file evaluates the performance of the array inside the array.
### We deployed the array at 100m and placed ten tags at many different
### locations in order to determine whether different locations within the
### array result in different frequencies of "positions recorded"
###
###
################################################################################
### Needs: #'global variables.r'
library("PBSmodelling")
library("PBSmapping") # for converting LL to UTM
source("C:/zy/Telemetry/R Data Processing/global variables.r")


528 source("C:/zy/Telemetry/R Data Processing/global functions.r") source("C:/zy/Telemetry/R Data Processing/global metadata.r") # This all happened during one d eployment... # 9 Tags: 43 52, but no 49 because it broke cmd = md[[3]] cDeployment = cmd$deployment cTagNames = c(cmd$otherNames,cmd$beaconNames,cmd$sentinelNames) # nine buoys with tags as follows: # Orange 12 had tag 43 # GPS had tags 44 (~3ft up) and 45 (~6ft up) # O 26 had tag 46 # O 5 had tag 47 # Yellow 4 had tag 48 # O 6 had tag 49, this tag died and had no detections # Y 8 had tag 50 # O 24 had tag 51 # Y 11 had tag 52 ### which tags went with which buoys d1 = data.frame( tagID = c("43","44"," 45","46","47","48","49","50","51","52"), buoyNumber = c(1,2,2:9), buoyName = c("O12", "GPS", "GPS", "O26", "O5", "Y4", "O6", "Y8", "O24", "Y11")# as they correspond to tagIDs ) ### get the dates and times matched startDate = c(rep("01/06/ 2009",46),rep("03/06/2009",10)) startTime = c( # EDT = GMT 4hrs. The times shown are EDT "13:57:27", #time of deployment at the reef on first day, named reef1 "14:03:26","14:03:26", # GPS buoy with tags 44 and 45 "14:06:03", "14:15:26", "14:10:5 8", "14:19:43", "14:23:23", "14:25:27", "14:27:50", "15:18:14", "15:18:14", # GPS buoy with tags 44 and 45 "15:24:00", "15:27:40", "15:32:20", "15:34:25", "15:37:43", "15:41:00", "15:45:30", "16:12:45", "16:12:45", # GPS buoy with tags 44 and 45 "16:16:59", "16:22:30", "16:25:00", "16:28:20", "16:31:50", "16:35:30", "16:42:47", "17:14:05", "17:14:05", # GPS buoy with tags 44 and 45 "17:17:42", "17:21:34", "17:24:42", "17:28:08",


529 "17:31:03", "17:34:10", "17:36:25", "17:54:11", 17:54:11", # GPS buoy with tags 44 and 45 "17:56:36", "17:59:32", "18:02:17", "18:05:00", "18:06:28", "18:09:31", "18:11:27", "09:28:07", # time of deployment at the reef on second day, named reef2 "09:29:39", "09:31:06", "09:32:03", "09:33:06", "09:34:20", "09:34:52", "09:35:44", "09:36:59", "09:36:59" # GPS buoy with tags 44 ad 45 ) stopTime = c( # EDT = GMT 4hrs, Times shown are EDT "18:52:15", #reef1 "15:14:50", "15:14:50", # GPS buoy with tags 44 and 45 "15:22:29", "15:29:30", 15:25:10", "15:33:00", "15:35:10", "15:38:50", "15:42:20", "16:10:43", "16:10:43", # GPS buoy with tags 44 and 45 "16:14:15", "16:18:56", "16:23:30", "16:26:30", "16:29:45", "16:32:50", "16:36:00", "17:11:00", "17:11:00", # GPS buoy with tags 44 and 45 "17:14:40", "17:18:30", "17:22:21", "17:25:52", "17:29:02", "17:32:10", "17:35:10", "17:51:29", "17:51:29", # GPS buoy with tags 44 and 45 "17:55:02", "17:57:22", "18:00:49", "18:02:56", "18:05:36", "18:07:52", "18:10:44", "18:46 :20", "18:46:20", # GPS buoy with tags 44 and 45 "18:47:16", "18:48:48", "18:50:10", "18:50:54", "18:48:09", "18:51:25", "18:53:18", # "13:40:00", # reef2 # "13:40:00", "13:40:00", "13:40:00", "13:40:00", "13:40:00", "13:40:00", # "13:40:00", "13 :40:00", "13:40:00" "13:00:00", # reef2 "13:00:00", "13:00:00", "13:00:00", "13:00:00", "13:00:00", "13:00:00", "13:00:00", "13:00:00", "13:00:00" ) d2start = paste(startDate, startTime) d2stop = paste(startDate, stopTime) d2datiStart = strptime( d2start, "%d/%m/%Y %H:%M:%S") d2datiStop = strptime(d2stop, "%d/%m/%Y %H:%M:%S") ### now match the set, location, buoy name, start time, and stop time d2 = data.frame( set=c("reef1", rep(1,8),rep(2,8),rep(3,8),rep(4,8),rep(5,8),"reef2",rep(6,8)), location=c("reef1",1:40,"reef2",41:48),


530 # note that the sets 1 and 6 have the buoys in different orders than 2 5 buoyName = c( "O12", # reef1 "GPS", "O26", "Y4", "O5", "O6", "Y8", "O24", "Y11 ", # set 1 "GPS", "O26", "O5", "Y4", "O6", "Y8", "O24", "Y11", # set 2 "GPS", "O26", "O5", "Y4", "O6", "Y8", "O24", "Y11", # set 3 "GPS", "O26", "O5", "Y4", "O6", "Y8", "O24", "Y11", # set 4 "GPS", "O26", "O5", "Y4", "O6", "Y8", "O24", "Y11", # set 5 "O6", # reef2 "O24", "O5", "Y11", "O12", "Y4", "Y8", "O26", "GPS" # set 6 ) ) # end d2 ### now get the target locations into R latitude = c( 29.462783 # reef1 29 .46305317, 29.46297413, 29.46278333, 29.46259253, 29.4625135, 29.46259253, 29.46278333, 29.46297413, 29.46323305, 29.4631728, 29.46300819, 29.46278333, 29.46255847, 29.46239386, 29.46233361, 29.46239386, 29.46255847, 29.46278333, 29.46300819, 29.4631728, 29.46341294, 29.46332859, 29.46309814, 29.46278333, 29.46246853, 29.46223808, 29.46215372, 29.46223808, 29.46246853, 29.46278333, 29.46309814, 29.46332859, 29.46287562, 29.46294629, 29.46316493, 29.46339149, 29.46304397, 29.46278333, 29.46278333, 29.46350015, 29.462783, # reef2 29.462119, 29.46231597, 29.4625135, 29.46266619, 29.46290047, 29.46305317, 29.4632507, 29.46344767 ) longitude = c( 83.624433, # reef1 83.62443333, 83.62462413, 83.62470317, 83.62462413, 8 3.62443333, 83.62424253, 83.6241635, 83.62424253, 83.62443333, 83.62465819, 83.6248228, 83.62488305, 83.6248228, 83.62465819, 83.62443333, 83.62420847, 83.62404386, 83.62398361, 83.62404386, 83.62420847, 83.62443333, 83.62474814, 83.62497859, 83.6 2506294, 83.62497859, 83.62474814, 83.62443333, 83.62411853, 83.62388808, 83.62380372, 83.62388808, 83.62411853, 83.62417977, 83.62382518, 83.62405173, 83.62427038, 83.6243635, 83.62362384, 83.62429842, 83.62437062, 83.624433, # reef1 83.6 2431619, 83.6241635, 83.62396597, 83.623769, 83.623769, 83.62396597, 83.6241635, 83.62431619) d3 = data.frame(


531 location = c("reef1", 1:40, "reef2", 41:48), # lat long as decimal degrees latDM=latitude, longDM=longitude, # show lat itude, longitude as degree decimal minutes latDDM = paste("(",floor(latitude), "_", round((latitude floor(latitude))*60, 3),")", sep=""), longDDM = paste("(",floor(longitude), "_", round((longitude floor(longitude))*60, 3),")", sep="") ) # l at long as UTM tagsLL =data.frame(X = longitude, Y = latitude) attr(tagsLL, "zone") < 17 attr(tagsLL, "projection") < "LL" tagsUTM = convUL(tagsLL,km=FALSE) #(longitude, latitude) d3a = cbind(d3, targetNorthing = tagsUTM$Y northingOffset, targetEast ing = tagsUTM$X eastingOffset) # now merge d1, d2, and d3 d4 = merge(d1, d2, by="buoyName", sort=FALSE) d5 = merge(d3a, d4, by="location", sort=FALSE) d5a = cbind(d5, datiStart=d2datiStart, datiStop=d2datiStop) d5b = cbind(d5a, elapsedTime = d5a$datiS top d5a$datiStart) d6 = d5b[,c(1,6,7,9,12,13,14)] # drop tag 49, it was broken d6 = d6[d6$tagID != 49,] # and add columns for the fraction of positions at each tag spot, with unfiltered # ... and filtered data d6$psFracy = NA d6$psFracn = NA # ... also c olumns for the best estimate position for filtered data d6$foundNorthing = NA d6$foundEasting = NA pause this work...it's taking too long for the benefit, maybe come back to it... # now add more lines to d6 for the beacons and sentinel tags # b1 on C45, b 2 on N41, b79400 on E42, b79500 on S44 i=1 # first beacon d6$location[nrow(d6)+i] = "center" d6$targetNorthing[nrow(d6)+i] = NA d6$targetEasting[nrow(d6)+i] = NA


532 d6$tagID[nrow(d6)+i] = 1 d6$datiStart[nrow(d6)+i] = ...pause this work...it's taking too lo ng for the benefit, maybe come back to it ### now import data for a particular tag used during the test, the tag # numbers are listed at the top of this file. The data file names (ALPS output) # are named like "T61000B1.txt". ## import, filter ALPS data itag = list() # raw tag data itagfn = list() # unfiltered tag data itagfy = list() # filtered tag data for (i in 1:length(cTagNames)){ print(cTagNames[i]) itag[[i]] = importALPSdata(deployment=cDeployment,tagName=cTagNames[i], chopTimes=FALSE) i tagfn[[i]] = filterALPSdata(df1=itag[[i]], cnF=0, minuteMean=F) itagfy[[i]] = filterALPSdata(df1=itag[[i]], cnF=1.5, speedF=0.8, minuteMean=F) } # now pair these down to just the things I want # ... I end up with just utime, datiL, easting, and northing in $data tagn = itagfn tagy = itagfy for (i in 1:length(cTagNames)){ tagn[[i]]$data = itagfn[[i]]$data[,c(1,3,8,9)] tagy[[i]]$data = itagfy[[i]]$data[,c(1,3,8,9)] } # now for each location, or spot calculate the number of detections and the # fractio n of position solutions, detPS # lists to hold the easting northing data for each tag spot spoty = list() spotn = list() # lists to hold the PS for each tag spot psFracy = list() psFracn = list() for (i in 1:nrow(d6)){ # d6 is a data.frame of each deplo yment spot # for each spot ... # ... pick out only the data from tagy and tagn which are for the whichTag whichTag = as.character(d6[i,]$tagID) for (j in 1:length(tagy)){ if(grepl(paste("o",whichTag,sep=""), tagy[[j]]$tagName)){ spoty[[i ]] = tagy[[j]]$data spotn[[i]] = tagn[[j]]$data


533 spoty[[i]]$tagName = tagy[[j]]$tagName spotn[[i]]$tagName = tagn[[j]]$tagName } } # now that I have all the data for this tag... # ... pick out only the data between startDati an d stopDati for this spot spoty[[i]] = spoty[[i]][ (spoty[[i]]$datiL > d6$datiStart[i]) & (spoty[[i]]$datiL < d6$datiStop[i]),] spotn[[i]] = spotn[[i]][( spotn[[i]]$datiL > d6$datiStart[i]) & (spotn[[i]]$datiL < d6$datiStop[i]),] numPings = a s.numeric(d6$elapsedTime[i]) 30 # num min* 30 pings per min # calculate the fraction of PS for this tag/spot during this interval d6$psFracy[i] = nrow(spoty[[i]]) / numPings d6$psFracn[i] = nrow(spotn[[i]]) / numPings # calculate the best positio n estimates d6$foundNorthing[i] = mean(spoty[[i]]$northing) d6$foundEasting[i] = mean(spoty[[i]]$easting) } # end i for loop # that's all the roaming tags, now what about the stationary beacons/sentinel # tagy has the data for these tags. For datiSt art and datiStop I'll use the # full trial time as extracted from d6, that will be almost 2 days. allstart = min(d6$datiStart) allstop = max(d6$datiStop) numBeaconPings = as.numeric(allstop allstart)*24*60*3 numSentinelPings = as.numeric(allstop allstart)* 24*300 # 300pings/hr # now extract data from tagy for beacons/sentinel and chop to right times stillStuff = list() for (i in 1:5){ # 5 stationary tags stillStuff[[i]] = tagy[[9+i]] # chop times stillStuff[[i]]$data = stillStuff[[i]]$data[(stillStu ff[[i]]$data$datiL>allstart)& (stillStuff[[i]]$data$datiL

534 # # Plot 1: The clouds # the raw plot plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", xlim=cmd$plotLimits$easting, ylim=cmd$plotLimit s$northing) # the reef and sdls points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=17, col="black", cex=1.5) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1.5) # the target locations points(d6$targetEasting, d6$targetNorthing, pch=1 9, col="green", cex=0.5) # the best estimate locations points(d6$foundEasting, d6$foundNorthing, pch=21, col="black", cex=0.5) # add the clouds of recorded position solutions for each spot for (i in 1:length(spoty)){ #points(spotn[[i]]$easting, spotn[[i ]]$northing, pch=19, cex=1, col=plotColors[i]) points(spoty[[i]]$easting, spoty[[i]]$northing, pch=19, cex=0.5, col=plotColors[i]) } # add the clouds of stationary tags for (i in 1:length(stillStuff)){ points(stillStuff[[i]]$data$easting, stillStuff[[i ]]$data$northing, pch=19, cex=0.5, col=plotColors[i]) } # Plot 2: the psFrac # the raw plot par(mar=c(4,5,1,1)+0.1) plot(cmd$sdlEN$easting, cmd$sdlEN$northing, type="n", bty="l", las=1, xlab="", ylab="", cex.lab=1.5, cex.axis=1.5, xlim=c(8789,8969 ), ylim=c(300,548) #xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing ) # the sdls points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=17, col="black", cex=1.5) mtext(text="Easting (m)",side=1,line=2.5,cex=1.5) mtext(text="Northing (m)",side=2,line =3.5,cex=1.7) # add points at the best estimate locations with size = fracPS #points(d6$foundEasting, d6$foundNorthing, pch=19, cex=(1+d6$psFracn)) points(d6$foundEasting, d6$foundNorthing, pch=21, col="black", cex=(1+2*d6$psFracy) ) # add points that have data before filtering but not after emptyPts = d6[is.nan(d6$foundNorthing) ,]


points(emptyPts$targetEasting, emptyPts$targetNorthing, pch=19, col="black",
  cex=1)

# add the stationary tags' circles
# add the clouds of stationary tags
for (i in 1:length(stillStuff)){
  points(stillStuff[[i]]$foundEN[1], stillStuff[[i]]$foundEN[2], pch=21,
    cex=(1+2*stillStuff[[i]]$psFracy), col="black")
}

# figure out the legend details
# ...find the smallest circle and the psFracy that goes with it
# ...find the largest circle and the psFracy that goes with it
psmax = max(d6$psFracy)
psmin = min(d6$psFracy[d6$psFracy>0]) # smallest non-zero psFracy
# I see that stillStuff are all in this range
# add legend
legend(8790,550,legend=c(paste("Fraction = ",round(psmax,3)),
  paste("Fraction =",round(psmin,3)),"Fraction = 0", "Hydrophone"),
  pch=c(21,21,19,17), pt.cex=c(1+2*psmax,1+2*psmin,1,1.5), cex=1.2)

# add A and B and C with arrows
text(8830,480,"A",cex=1.5)
arrows(8833,479,8873,455,lwd=2,length=0.15)
text(8910,520,"B",cex=1.5)
arrows(8906,519,8891,505,lwd=2,length=0.15)
text(8820,380,"D",cex=1.5)
arrows(8824,380,8848,390,lwd=2,length=0.15)
text(8850,300,"C",cex=1.5)
arrows(8853,305,8865,324,lwd=2,length=0.15)

# add clouds for two tags
i=4 # south beacon
points(stillStuff[[i]]$data$easting, stillStuff[[i]]$data$northing, pch=4, cex=0.5)
i=15 # a roaming spot
points(spoty[[i]]$easting, spoty[[i]]$northing, pch=2, cex=0.5)

# the target locations
points(d6$targetEasting, d6$targetNorthing, pch=19, col="green", cex=0.5)
# the best estimate locations
points(d6$foundEasting, d6$foundNorthing, pch=19, col="red", cex=0.5)

################################################################################
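# A minimal worked example of the position-solution fraction plotted above,
# with invented numbers: the expected number of transmissions is the soak time
# in minutes times the nominal 30 pings per minute assumed in this script, and
# psFrac is the number of ALPS positions divided by that expectation.
soakMinutes = 55                     # how long a buoy sat at one spot
expectedPings = soakMinutes * 30     # 1650 transmissions expected
positionsObtained = 412              # positions ALPS actually solved
positionsObtained / expectedPings    # psFrac of about 0.25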


536 ################################################ ###################### ####### # below here is the old way tagCount = length(tagn) ### some checks # number of hours over two days (tail(tagn[[1]]$data$utime,1) tagn[[1]]$data$utime[1])/3600 (tail(tagy[[1]]$data$utime, 1) tagy[[1]]$data$utime[1])/3600 # as.POSIXlt(tagn[[1]]$data$utime[1], origin="1970 1 1") as.POSIXlt(tail(tagn[[1]]$data$utime,1), origin="1970 1 1") # as.POSIXlt(tagy[[1]]$data$utime[1], origin="1970 1 1") as.POSIXlt(tail(tagy[[1]]$data$utime,1), origi n="1970 1 1") # our best estimates of reef and sdl locations, in UTM cmd$reefEN cmd$sdlEN # a plot tagIndex = 3 cTag = tagy[[tagIndex]] plot(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="black", cex=1.5, main=paste("Tag",tagy[[tagIndex]]$tagName )) points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, col="red", cex=1.5) oneTag = d6[d6$tagID == substr(cTagNames[tagIndex],2,3),] points(oneTag$targetEasting, oneTag$targetNorthing, pch=19, col="blue", cex=1) ####################################### ############################### ########## # pick only positions during the right time intervals tagIndex = 8 #tag43=1, 44=2, 45=3, 46=4, 47=5, 48=6, 49=7, 50=8, 51=9, 52=10 cTag = tagy[[tagIndex]] oneTag = d6[d6$tagID == substr(cTagNames[tagIndex],2,3),] # gather all cTag lines between each of the oneTag$datiStart and oneTag$datiStop times # ...in other words...remove the points recorded when the buoy was being moved ct2 = vector(length=nrow(oneTag)) loc = vector("list",nrow(oneTag)) for (i in 1:nrow(oneT ag)){ loc[[i]] = cTag$data[ (cTag$data$utime > oneTag$datiStart[i]) & (cTag$data$utime < oneTag$datiStop[i]), ]


537 ct2[[i]] = loc[[i]] } # look at this so far for 1 June 2009 # one tag at a time june02 = as.POSIXlt("2009 06 02") ct3 = ct2[ [1]][ct2[[1]]$utime < june02,] plot(ct3$utime, rep(1,nrow(ct3)), pch=19,, cex=0.5, xlim = c( as.POSIXct("2009 06 01 13:50:00"), as.POSIXct("2009 06 01 19:00:00") )) abline(v=d6$datiStart, col="red") abline(v=d6$datiStop, col="blue") # look at this so f ar for 3 June 2009 # one tag at a time june02 = as.POSIXlt("2009 06 02") ct3 = ct2[ct2$utime > june02,] plot(ct3$utime, rep(1,nrow(ct3)), pch=19,, cex=0.5, xlim = c( as.POSIXct("2009 06 03 09:20:00"), as.POSIXct("2009 06 03 13:50:00") )) abline(v=d7$dati Start, col="red") abline(v=d7$datiStop, col="blue") zzz why do all the tags stop giving positions before the tags were pulled out of the water? Is it possible the watch was different from the computer and SDL s? Don't delete points before startTime and and see if the points are recorded before startTime. Also, look at detections and "position solutions" ###################################################################### ########## # draw the temporal pic ture of how often each tag gave a position resolution allTagsParsed = vector("list",tagCount) for (i in 1:tagCount){ currentTag = allTagsFiltered[[i]] # get ALPS data for tag i d7 = d6[d6$tagID == whichTag[i],] # gather all currentTag lines betw een each of the d7$datiStart and d7$datiStop times # ...in other words...remove the points recorded when the buoy was being moved ct2 = currentTag[ !(currentTag$utime d7$datiStart[j+1]),] }


538 ct2 = ct2[ct2$utime < d7$datiStop[nrow(d7)],] # now save this tag's data and move on the the next tag allTagsParsed[[i]] = ct2 } # now plot all tags on one plot plot(allTagsParsed[[1]]$utime, rep(1,nrow(allTagsParsed[[1]])), type="n", cex=0.5, ylim=c(0,11), xlim=c( as.POSIXct("2009 06 01 13:50:00"), as.POSIXct("2009 06 01 19:00:00"))) for (i in 1:tagCount){ # draw points for tag positions through time po ints(allTagsParsed[[i]]$utime, rep(i, nrow(allTagsParsed[[i]])), pch=19, cex=0.5) # draw lines for datiStart and datiStop d7 = d6[d6$tagID == whichTag[i],] for (j in 1:nrow(d7)){ points(x=rep(d7[j,]$datiStart,2), y=c(i 0.4, i+0.4), type="l" col="red") points(x=rep(d7[j,]$datiStop,2), y=c(i 0.4, i+0.4), type="l", col="blue") } } ###################################################################### ###### #### # find the spatial picture of frequency of position solutions # add columns to d6 to hold the 'number of positions' in each buoy location d9 = cbind(d6, posCount=rep( 1,56), posFrac=rep( 1,56), meanLat=rep( 1,56), meanLong=rep( 1,56)) # count the 'number of positions' recorded at each buoy location for (i in 1:nrow(d9)){ # for each row in d6, pick out the data.frame from the 'allTagsFiltered' list whichrow = d9[i,] # pick a row from d8 whichtag = whichrow$tagID # find out which tag, that row describes tagIndex = which(d1$tagID == whichtag) # find the tagID for the tag on that row currentTag = allTagsFiltered[[tagIndex]] # get all the position solutions for that tag # pick only position solutions during the times listed on 'whichrow' c urrentTag = currentTag[ (currentTag$utime>whichrow$datiStart) & (currentTag$utime

539 # plot the results # a plot plot(sdls$easting, sdls$northing, pch=2, col="black", cex=1.5, main="") points(reef[1], reef[2], pch=19, c ol="red", cex=1.5) points(d9$meanLong, d9$meanLat, pch=19, col="green", cex=5*d9$posFrac) points(d9$targetLong, d9$targetLat, pch=1, col="blue", cex=1) also...make this same plot, one set at a time # This is try two ... "Try it again" setwd("C:/zy/Telem etry/R summary files") fileName = "internal performance locations used 2011Mar25.txt" colNames = c("set", "location", "buoy", "tag", "date", "startTime", "stopTime", "latitude", "longitude") colClasses.z = c("factor", "numeric", "character", "factor", "c haracter", "character", "character", "numeric", "numeric") bob = read.table(fileName, header=TRUE, col.names=colNames) # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # interpolate ALPS data.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # The function created in this file will be moved to 'global functions.r' when # it's complete. # This function takes ALPS output for one/all? fish in a deployment and interpolates # fish positions e very second between recorded positions. library("zoo") ###################################################################### ######### interpolateALPSdata = function(tagName, beaconName, deployment, year, psr, interpolate=1, ...) # 'beaconName' is th e desired beacon, i.e. b1 # 'year' because I do different things for different years, 2007, 2008, 2009 # 'deployment' is the experiment designation, i.e. hb1 for 2009 experiments # 'psr' use the 'psr' or 'no psr' ALPS output # 'interpolate' ...int ended to eventually choose between methods # tagName="t91"; beaconName="b1"; deployment="test"; year="2009"; psr=TRUE; interpolate=1; # a real fish


54 0 # tagName="t38"; beaconName="b1"; deployment="hb2"; year="2009"; psr=TRUE; offset=TRUE; { tagID = substr(tagName,2,100) tagType = substr(tagName,1,1) beaconID = substr(beaconName,2,100) ### record current directory and change to desired SDL directory oDir = getwd() # settings that change for each deployment for (i in 1:length(md)) { # i loops through all deployments if (deployment == md[[i]]$deployment){ tempDir = md[[i]]$homeDir startUtime = md[[i]]$startUtime stopUtime = md[[i]]$stopUtime fishNames = md[[i]]$fishNames } } setwd(paste(tempDir," /ALPS",sep="")) cDir = getwd() # import and filter ALPS output for all fish in 'deployment' fish1 = importALPSdata(tagName=tagName, beaconName=beaconName, deployment=deployment, year=year, psr=psr, ...) fish2 = filterALPSdata(fish1, cnF=3 ) ### later, make cnF pass from 'interpolateALPSdata()' # extract the position elements and # drop any rows before or after startUtime and stopUtime, respectively fish3 = fish2$data[(fish2$data$utime >= startUtime) & (fish2$data$utime <= stopUtim e),] # drop the unwanted columns fish4 = subset(fish3, select = c(CN, RN, DOP, HID, HCount)) # create a list of the complete time of interest fullTime = data.frame("utime"=startUtime:stopUtime ) # fullTime = data.frame("utime"=1230811195:1230812200)# these times are from my made up fish paths # merge position elements with 'fullTime' fish5 = merge(fullTime, fish4, by="utime", all=TRUE) # add columns for 'replaced', and 'status'


  interp = rep(NA, nrow(fish5))  # a marker to tell if this is a
                                 # recorded/interpolated point
  status = rep(NA, nrow(fish5))  # a marker to tell if a point is 'b'efore,
                                 # 'd'uring, or 'a'fter the first/last recorded
                                 # points for this fish
  fish6 = cbind(fish5, interp, status)

  # find the earliest and latest recorded points from fish2
  earliest = fish3$utime[[1]]
  latest = fish3$utime[[nrow(fish3)]]
  # set values for fish6$status, first set all to FALSE then pick some to be TRUE
  fish6$status[ ] = FALSE
  fish6$status[ (fish6$utime >= earliest) & (fish6$utime <= latest) ] = TRUE
  # set values for fish6$interp
  fish6$interp[ ] = TRUE # it is interpolated
  fish6$interp[ !is.na(fish6$easting) ] = FALSE # it isn't interpolated

  # set the leading NA in easting, northing, and depth to equal the first
  # measured position, which is the first element of fish2
  fish6$easting[1] = fish2$data$easting[1]
  fish6$northing[1] = fish2$data$northing[1]
  fish6$depth[1] = fish2$data$depth[1]
  # set the final NA in easting, northing, and depth to equal the last measured
  # position, which is the last element of fish2
  fish6$easting[nrow(fish6)] = fish2$data$easting[nrow(fish2$data)]
  fish6$northing[nrow(fish6)] = fish2$data$northing[nrow(fish2$data)]
  fish6$depth[nrow(fish6)] = fish2$data$depth[nrow(fish2$data)]
  # replace the rest of the NAs in positions with the last known position
  fish6[2:4] = na.approx(fish6[2:4], na.rm=FALSE, method="constant")

  # return the position solution frequencies for 'tag'
  output = list(data=fish6, "tagName"=tagName, "beaconName"=beaconName,
    "deployment"=deployment, "year"=year, "psr"=psr)
} # end 'interpolateALPSdata()'

# some test fish
f1 = interpolateALPSdata(tagName="t91", beaconName="b1", deployment="test",
  year="2009", psr=TRUE, interpolate=1, offset=FALSE)
f2 = interpolateALPSdata(tagName="t92", beaconName="b1", deployment="test",
  year="2009", psr=TRUE, interpolate=1, offset=FALSE)
f3 = interpolateALPSdata(tagName="t93", beaconName="b1", deployment="test",
  year="2009", psr=TRUE, interpolate=1, offset=FALSE)
f4 = interpolateALPSdata(tagName="t94", beaconName="b1", deployment="test",
  year="2009", psr=TRUE, interpolate=1, offset=FALSE)

plot(f2$data$utime,f2$data$northing)
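# A small self-contained demonstration of the na.approx() step used above:
# with method="constant" the gaps are filled by carrying the last recorded
# value forward rather than by linear interpolation. The vector is made up.
library(zoo)
x = c(10, NA, NA, 16, NA, 20)
na.approx(x, method="constant", na.rm=FALSE) # 10 10 10 16 16 20
na.approx(x, na.rm=FALSE)                    # 10 12 14 16 18 20 (linear, for comparison)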


542 # some real fish f1 = interpolateALPSdata(tagName="t38", beaconName="b1", deployment="hb2", year="2009", psr=TRUE, offset=TRUE) plot(f1$data$utime,f1$data$northing,pch=19, cex=0.5, xlim=c(1251600000,1251600500)) # a plot with different colors for the interpolated and non interp points f1interp = f1$data[ f1 $data$interp, ] f1non = f1$data[ !f1$data$interp, ] plot(f1interp$utime,f1interp$northing, pch=19, cex=0.5, col="red", xlim=c(1251609000,1251609100)) points(f1non$utime,f1non$northing, pch=19, cex=0.5, col="blue") # for comparison fish1 = importALPS data(tagName="t38", beaconName="b1", deployment="hb2", year=2009, psr=TRUE, offset=TRUE) fish2 = filterALPSdata(fish1, cnF=3) # I want to find the minimum time between true points nrow(fish2$data) temp1 = vector(length=nrow(fish2$data)) for (i i n 2:nrow(fish2$data)){ temp1[i] = fish2$data$utime[i] fish2$data$utime[i 1] } min(temp1) head(temp1,2000) temp2 = temp1[temp1 == 0] temp2 = temp1[temp1 == 1] temp3 = (temp1 == 1) # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # pick best beacon.r # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ # In this file I want to look at the detections of all beacons by all SDLs # and pick the best and second best beaco n. It needs functions from # 'global functions.r' This has mostly been worked out before in # 'analysis single deployment.r' although there I used databases and here I # don't.


543 # I'm starting this in the middle of an R session, so I'm not sure if ever y # required thing is here. # pick a deployment cmd = md[[10]] setwd(paste(cmd$homeDir,cmd$alpsDir,sep="")) toaData = list() toaStats = list() # for each beacon toa file count the number of detections at each SDL detectionCounts = data.frame("tag"=NA, "d etections"=NA) allBeacons = cmd$beaconNames # or in hb2008 allTags=paste("c",1:130, sep="") allTags = cmd$allTagNames whichTags = allBeacons # importToaData for(i in 1:length(whichTags)){ toaData[[i]] = importToaData(cmd$beaconNames [i], cmd$deployment) } # calculate and summarize toa stats for (i in 1:length(whichTags)){ toaStats[[i]] = toaStats1(toaData[[i]], mLines=FALSE) } # create an empty table and plots of tag receptions at all SDLs summaryTable = matrix(nrow = length(toaSt ats), ncol=5, dimnames=list(whichTags, c("sdl41", "sdl42", "sdl43", "sdl44", "sdl45")) ) # fill the matrix for (i in 1:length(toaStats)){ # i indexes all tags for (j in 1:5){ # j indexes 5 SDLs summaryTable[i,j] = mean(toaSta ts[[i]][[j]]$frequency) } } # a picture of beacons plot(x=41:45, y=summaryTable[2,], ylim=c(0,1.2), type="n", cex=1, main= "Summary of Detection Frequencies", xlab="SDLs", ylab="Frequency", col=1 ) for (i in 1:length(cmd$beaconNames)){ points(x=41:45, y=summaryTable[i,], type="b", col=i)


}
legend("topright", legend=cmd$beaconNames, pch=19, col=1:5 )
# use this plot and the values in summaryTable to decide which beacon to
# use when running the positioning algorithm in ALPS (an automated pick from
# summaryTable is sketched after the fish-tag plot below)
#
# Also, look at the ALPS output: which beacon gives the largest files of
# position solutions?

# picture of the sentinel
plot(x=41:45, y=summaryTable[1,],  # 1 is the sentinel; make these numberings automatic
  ylim=c(0,1), type="b", cex=1, main="Summary of Detection Frequencies",
  xlab="SDLs", ylab="Frequency", col="black" )

# a picture of beacons
plot(x=41:45, y=summaryTable[2,],  # 2:5 are the beacons; make these numberings automatic
  ylim=c(0,1.2), type="n", cex=1, main="Summary of Detection Frequencies",
  xlab="SDLs", ylab="Frequency", col=1 )
for (i in 1:length(cmd$beaconNames)){
  points(x=41:45, y=summaryTable[i,], type="b", col=i)
}
legend("topright", legend=cmd$beaconNames, pch=19, col=1:5 )
# use this plot and the values in summaryTable to decide which beacon to
# use when running the positioning algorithm in ALPS

# a picture of fish tags
plot(x=41:45, y=summaryTable[6,], ylim=c(0,1), type="n", cex=1,
  main="Summary of Detection Frequencies", xlab="SDLs", ylab="Frequency", col=1 )
##################### make these numberings automatic
for (i in 7:length(cmd$allTagNames)){  ### what's 7:22 ### is it generic? ###
  points(x=41:45, y=summaryTable[i,], type="b", col=i-5)
}
legend("topright", legend=cmd$fishNames, pch=1, col=1:17, ncol=2 )
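# A hedged sketch of automating the beacon choice referred to above: rank the
# candidate beacons by their mean detection frequency across the five SDLs.
# Assumes summaryTable rows are named by tag (as built above) and that
# cmd$beaconNames appear among those row names; 'bestBeacon'/'secondBeacon'
# are hypothetical names the other scripts do not rely on.
beaconMeans = rowMeans(summaryTable[cmd$beaconNames, , drop=FALSE], na.rm=TRUE)
beaconRank = sort(beaconMeans, decreasing=TRUE)
bestBeacon = names(beaconRank)[1]
secondBeacon = names(beaconRank)[2]
bestBeacon; secondBeacon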


# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# testing array spacing.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# In this file I want to see how the HR estimate would have been different
# if the array spacing were smaller. I'll use the 2008 fish.

# start with tagfm[[7]] = f60300, tagfm[[8]] = f61100
str(tagfm[[7]])
str(tagfm[[8]])

d1 = subset(tagfm[[4]]$data, select=c(utime,easting,northing))
plot(d1$easting, d1$northing, pch=19, cex=0.5,
  xlim=md[[2]]$plotLimits$easting, ylim=md[[2]]$plotLimits$northing)
points(md[[2]]$sdlEN$easting, md[[2]]$sdlEN$northing, pch=19, col="red")
points(md[[2]]$reefEN$easting, md[[2]]$reefEN$northing, pch=19, col="blue")

rotate = function(fisht, fishx, fishy, spin=45, show=FALSE,
  reefEN=c(md[[2]]$reefEN$easting, md[[2]]$reefEN$northing) ){
  # reef location
  xc = reefEN[1]
  yc = reefEN[2]
  # fish location
  x = fishx
  y = fishy
  # shift center of rotation
  x1 = x - xc
  y1 = y - yc
  # convert to polar coordinates
  r = sqrt(x1^2 + y1^2)
  theta = atan2(y1,x1)*180/pi
  # rotate
  newtheta = theta + spin
  # convert to cartesian coordinates
  x2 = r*cos(newtheta*pi/180)
  y2 = r*sin(newtheta*pi/180)


  # shift the center back
  x3 = x2 + xc
  y3 = y2 + yc
  if(show){
    plot(x, y, pch=19, xlim=md[[2]]$plotLimits$easting,
      ylim=md[[2]]$plotLimits$northing)
    points(x3, y3, pch=19, col="red")
    points(md[[2]]$reefEN$easting, md[[2]]$reefEN$northing, pch=19, col="blue")
    points(md[[2]]$sdlEN$easting, md[[2]]$sdlEN$northing, pch=19, col="green")
    points(md[[2]]$reefEN$easting+c(0,125,0,-125),
      md[[2]]$reefEN$northing+c(125,0,-125,0), pch=17, col="green")
    abline(h=yc, v=xc, lty="dashed")
    abline(h=0, v=0)
  }
  data.frame(fisht=fisht, fishx=x3, fishy=y3)
} # end 'rotate()'; a quick check that rotation preserves distance to the reef
  # is sketched at the end of this block

r1 = rotate(d1$utime, d1$easting, d1$northing)

chop = function(fisht, fishx, fishy, spacing,
  reefEN=c(md[[2]]$reefEN$easting, md[[2]]$reefEN$northing) ){
  en = data.frame(fisht, fishx, fishy)
  en = en[abs(en$fishx - reefEN[1]) < spacing,]
  en = en[abs(en$fishy - reefEN[2]) < spacing,]
}

cmd = md[[2]]
plot(d1$easting, d1$northing, pch=19,
  xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing)
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red")
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=17, col="green")
# rotate, clip to the smaller simulated array, then rotate back
r1 = rotate(d1$utime, d1$easting, d1$northing)
r2 = chop(r1$fisht, r1$fishx, r1$fishy, spacing=70)
r3 = rotate(r2$fisht, r2$fishx, r2$fishy, spin=-45)
points(r3$fishx, r3$fishy, pch=19, cex=0.5, col="red")

# keep utime so rotate()/chop() get all three positional arguments
d1 = subset(tagfm[[7]]$data, select=c(utime,easting,northing))
d1 = subset(tagfm[[8]]$data, select=c(utime,easting,northing))
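# A hedged sanity check on rotate(): rotating about the reef should not change
# any point's distance to the reef. Assumes d1 still holds utime/easting/
# northing as subset above; 'dBefore'/'dAfter' are hypothetical names.
reefE = md[[2]]$reefEN$easting
reefN = md[[2]]$reefEN$northing
rChk = rotate(d1$utime, d1$easting, d1$northing, spin=45)
dBefore = sqrt((d1$easting - reefE)^2 + (d1$northing - reefN)^2)
dAfter  = sqrt((rChk$fishx - reefE)^2 + (rChk$fishy - reefN)^2)
all.equal(dBefore, dAfter)   # should be TRUE (up to floating-point error)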


spacing_hr = data.frame(tagName=NA, spacing=NA, kde50=NA, kde95=NA)
spacings = seq(10, 130, by=10)
for(i in 1:length(spacings)){
  r1 = rotate(d1$utime, d1$easting, d1$northing)
  r2 = chop(r1$fisht, r1$fishx, r1$fishy, spacing=spacings[i])
  r3 = rotate(r2$fisht, r2$fishx, r2$fishy, spin=-45)
  # calculate the home ranges
  # THE LIMITS YOU USE WHEN CALCULATING THE KDE AFFECT THE ANSWER, SO FOR ALL
  # FISH MAKE SURE TO USE THE SAME LIMITS ON EASTING AND NORTHING.
  # There's more in 'chapter 3 part 1.r' on looking at home ranges.
  spacing_hr[i,] = c( "f60300", spacings[i],
    homeRange(easting = r3$fishx, northing = r3$fishy, tagName = "?",
      lims = hrlims, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=0.5,
      drawplot=FALSE ),
    homeRange(easting = r3$fishx, northing = r3$fishy, tagName = "?",
      lims = hrlims, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=0.95,
      drawplot=FALSE )
  )
}

bob = spacing_hr  # 60300
sam = spacing_hr  # 61100
plot(bob$spacing, bob$kde95, type="b", ylim=c(0,4500), col=2,
  main="f60300, f61100", xlab="Simulated Array Spacing",
  ylab="Home Range Estimate")
points(bob$spacing, bob$kde50, type="b", col=3)
abline(v=50, h=c(bob[5,3], bob[13,3]))
points(sam$spacing, sam$kde95, type="b", pch=19, col=2)
points(sam$spacing, sam$kde50, type="b", pch=19, col=3)
abline(v=50)
abline(v=50, h=c(spacing_hr[5,3], spacing_hr[13,3]))

# fractional change in the 95% KDE between 130 m and 50 m simulated spacing
(as.numeric(bob[13,4]) - as.numeric(bob[5,4])) / as.numeric(bob[13,4])
(as.numeric(sam[13,4]) - as.numeric(sam[5,4])) / as.numeric(sam[13,4])
head(z0)
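# A small hedged helper wrapping the fractional-change arithmetic above, so it
# can be reused for other fish or other columns. 'hrChange' is a hypothetical
# name; it assumes a table shaped like spacing_hr with numbers stored as text.
hrChange = function(hrTable, smallRow=5, bigRow=13, column="kde95"){
  small = as.numeric(hrTable[smallRow, column])
  big   = as.numeric(hrTable[bigRow, column])
  (big - small) / big
}
hrChange(bob)                   # f60300, 95% KDE
hrChange(sam, column="kde50")   # f61100, 50% KDE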


# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# analysis single array deployment.r
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
################################################################################
### This file is intended to be a total, sequential analysis of the SDL array /
### fish tagging experiments.

# Required libraries
library(MASS)
library(CircStats)
library(RMySQL)

################################################################################
# # # # # # # # # # # # # # # # # # # #
# Pick a deployment to work with
# # # # # # # # # # # # # # # # # # # #
################################################################################
### Pick a particular 'deploymentNumber'
deploymentOrder
dn = 3
cmd = md[[dn]]     # short hand for md[[dn]]
cmd$deployment     # double check
cmd$allTagNames
################################################################################


# # # # # # # # # # # # # # # # # # # #
# Working with GPS data
# # # # # # # # # # # # # # # # # # # #
################################################################################
### GPS data
################################################################################
# GPS data were read in and best estimates of positions were made in 'metadata.r'.
# Also see 'GPS position estimates.r'.
# Note that OF4.3 is funky and needs special attention.
#
# NOTE that hb2008 is funky and doesn't work yet.

# choose to plot or not
plotThem = TRUE
# plotThem = FALSE

oDir = getwd()                                   # original directory
cDir = paste(cmd$homeDir, "/GPS data", sep="")   # current directory
setwd(cDir)

# if this GPS file comes in two files...
cGpsFileName = cmd$gpsFileName

reef      = importGPSdata(cGpsFileName, cmd$gpsTimes$start[1], cmd$gpsTimes$stop[1],
  filter=100, plotThem=plotThem)
north     = importGPSdata(cGpsFileName, cmd$gpsTimes$start[2], cmd$gpsTimes$stop[2],
  filter=100, plotThem=plotThem)
east      = importGPSdata(cGpsFileName, cmd$gpsTimes$start[3], cmd$gpsTimes$stop[3],
  filter=100, plotThem=plotThem)
south     = importGPSdata(cGpsFileName, cmd$gpsTimes$start[4], cmd$gpsTimes$stop[4],
  filter=100, plotThem=plotThem)
west      = importGPSdata(cGpsFileName, cmd$gpsTimes$start[5], cmd$gpsTimes$stop[5],
  filter=100, plotThem=plotThem)
center    = importGPSdata(cGpsFileName, cmd$gpsTimes$start[6], cmd$gpsTimes$stop[6],
  filter=100, plotThem=plotThem)
wholepath = importGPSdata(cGpsFileName, cmd$gpsTimes$start[7], cmd$gpsTimes$stop[7],
  filter=9999999, plotThem=plotThem)

# a plot of the array and gps track
plot(x=wholepath$data$easting, y=wholepath$data$northing, type="n")
points(x=wholepath$data$easting, y=wholepath$data$northing, pch=19, cex=0.5)


points(reef$data$easting, reef$data$northing, pch=19, col="red", cex=0.5)
points(north$data$easting, north$data$northing, pch=19, col="blue", cex=0.5)
points(east$data$easting, east$data$northing, pch=19, col="orange", cex=0.5)
points(south$data$easting, south$data$northing, pch=19, col="green", cex=0.5)
points(west$data$easting, west$data$northing, pch=19, col="brown", cex=0.5)
points(center$data$easting, center$data$northing, pch=19, col="pink", cex=0.5)

# a plot of the array with the reef at (0,0) to see how close the actual
# SDL positions are to the target distance of 100m
reefO   = c(reef$average$easting - reef$average$easting,
            reef$average$northing - reef$average$northing)
northO  = c(north$average$easting - reef$average$easting,
            north$average$northing - reef$average$northing)
eastO   = c(east$average$easting - reef$average$easting,
            east$average$northing - reef$average$northing)
southO  = c(south$average$easting - reef$average$easting,
            south$average$northing - reef$average$northing)
westO   = c(west$average$easting - reef$average$easting,
            west$average$northing - reef$average$northing)
centerO = c(center$average$easting - reef$average$easting,
            center$average$northing - reef$average$northing)

cCex = 1
plot(x=0, y=0, type="n",
  xlim=c(-cmd$spacing*1.1, cmd$spacing*1.1),
  ylim=c(-cmd$spacing*1.1, cmd$spacing*1.1),
  main="How close is the array to target locations?",
  xlab="Easting (m)", ylab="Northing (m)")
points(c(0, sqrt(50), 0, cmd$spacing, 0, -cmd$spacing),
       c(0, sqrt(50), cmd$spacing, 0, -cmd$spacing, 0), pch=19, cex=cCex+0.5)
abline(h=0); abline(v=0)
points(reefO[1], reefO[2], pch=19, col="red", cex=cCex)
points(northO[1], northO[2], pch=19, col="blue", cex=cCex)
points(eastO[1], eastO[2], pch=19, col="orange", cex=cCex)
points(southO[1], southO[2], pch=19, col="green", cex=cCex)
points(westO[1], westO[2], pch=19, col="brown", cex=cCex)
points(centerO[1], centerO[2], pch=19, col="pink", cex=cCex)

# look at the numbers
reefO
northO
eastO
southO
westO


centerO

setwd(oDir)

# add the ALPS estimates of these positions

################################################################################
# # # # # # # # # # # # # # # # # # # #
# Working with ADCP / tide data
# # # # # # # # # # # # # # # # # # # #
################################################################################
### ADCP / tide data
################################################################################
ad = importADCPdata()

# check the quality of the data...
# ... do the data really come every 10 min?
# (a diff()-based version of this check follows the review plots below)
temp1 = head(ad$utime, -1)
temp2 = tail(ad$utime, -1)
temp3 = c(NA, temp2 - temp1)
plot(ad$utime, temp3, type="b", ylim=c(0,500000))

# some general review of the data
par(mfrow=c(1,1))
plot(ad$datiL, ad$tem, pch=19, cex=0.5)
plot(ad$datiG, ad$dep, pch=19, cex=0.2)
plot(ad$datiG, ad$magL, pch=19, cex=0.2)
plot(ad$datiL, ad$eaaL, pch=19, cex=0.2)
plot(ad$eaaL, ad$eaaM, pch=19, cex=0.2)

par(mfrow=c(2,1))
plot(ad$datiL, ad$rol, pch=19, cex=0.2)
plot(ad$datiL, ad$pit, pch=19, cex=0.2)
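# A hedged, more compact version of the sampling-interval check above, assuming
# ad$utime is a sorted vector of unix times; a 10-minute logging interval should
# show up as gaps of 600 seconds.
gaps = diff(ad$utime)
summary(gaps)
table(gaps[gaps != 600])   # any intervals that are not exactly 10 minutes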


par(mfrow=c(1,1))
plot(ad$tem, ad$eaaL, type="b")   # pch=19, cex=0.2)
plot(ad$dep, ad$eaaL, type="l", xlim=c(11,15), ylim=c(100,230))
plot(ad$magL, ad$magU, pch=19, cex=0.3)
plot(hexbin(ad$magL, ad$magU))
plot(ad$datiL, ad$magU/ad$magL, type="l", xlim=c(1198080060,1199000000),
  ylim=c(0,4))
plot(ad$dep, ad$dirL, type="b")   # pch=19, cex=0.3)
plot(ad$dep, ad$magL, type="l")
plot(ad$dirL, ad$magL, type="l")

################################################################################
# # # # # # # # # # # # # # # # # # # #
# Working with raw SDL (*.txt) files
# # # # # # # # # # # # # # # # # # # #
################################################################################
### Raw *.txt files for individual SDLs
################################################################################
# In WHS Reader II, convert the *.bin files to *.txt. This creates two files
# for each SDL: an SDL output file (named like "SN265041_17Jun09.txt") and a
# battery log file (named like "SN265041_17Jun09_BAT.txt"). Read more about
# this output in 'General steps for analyzing SDL data.docx'.

### battery file ###############################################################
importBatteryData(deployment=cmd$deployment)

### SDL file ###################################################################
# To see how detections of each tag vary among SDLs and over time,
# I want to pick a tag and plot its hourly detection fraction at each SDL.


#
# These SDL files are named like "SN265041_17Jun09.txt". They have 9 columns:
# (date, time, fraction, power, port, tagID, type, rawSensor, gpsSync).
# For data collected in code mode, the 'type' and 'rawSensor' columns are empty.
# (A read.table() sketch for one of these files appears at the end of this section.)
#
# Because these files can be very big I've put them in MySQL databases named
# like 'sdl41'.
#
# Here I'll start by writing code to read info from the MySQL databases.
#
# The deployment currently selected and the tags used:
cmd$deployment
cmd$allTagNames

################################################################################
# # # # # # # # # # # # # # # # # # # #
# Working with decompressed SDL (*.toa) files
# # # # # # # # # # # # # # # # # # # #
################################################################################
### Decompressed files for individual SDLs
################################################################################
# In ALPS, 'decompressing' (FIX THIS TO SAY BUILD SYMBOLS...) the data creates
# files named like "SN265041_17jun09.txt", one for each SDL each day. It also
# creates files named like "TxId61000.toa", one for each tag/beacon for each day.

### SDL file
### TOA file
### See 'toaFileStats.r'
# Use this file to choose which beacon to use when running ALPS. Look at
# receptions over time of each beacon and pick the one that gives the best
# receptions during the whole time.
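# A hedged sketch of reading one raw SDL *.txt file directly, assuming it is
# whitespace-delimited with the nine columns listed above and no header row;
# 'sdlRaw' and the use of this particular file name are illustrative only.
sdlCols = c("date", "time", "fraction", "power", "port", "tagID",
            "type", "rawSensor", "gpsSync")
sdlRaw = read.table("SN265041_17Jun09.txt", header=FALSE, col.names=sdlCols,
  fill=TRUE, stringsAsFactors=FALSE)
head(sdlRaw)
table(sdlRaw$tagID)   # a quick look at how many detections each tag got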


################################################################################
### import the toa data for a particular tag/beacon/sentinel
# The toa files are now in MySQL tables. The tables are filled using
# 'fillToaSQLtable()', which calls 'importToaData()'. The tag name and
# deployment data aren't carried through.
#
# Now you just need to extract data from the db.

### First do a simple count of how many receptions each tag got over the entire
### deployment. (A per-SDL grouped count is sketched after the summary table below.)
# set the database and connect
dbName = paste("db", cmd$deployment, sep="")   # connect to the database
dbcon = dbConnect(MySQL(), user="root", password="zy0014", dbname=dbName)

# for each tag toa file count the number of detections at each SDL
detectionCounts = data.frame("tag"=NA, "detections"=NA)
allTags = cmd$allTagNames   # or in hb2008 allTags=paste("c",1:130, sep="")
# now count...
for (i in 1:length(allTags)){
  dbphrase = paste("select count(*) from toa", allTags[i], ";", sep="")
  res = dbGetQuery(dbcon, dbphrase)
  detectionCounts[i,] = c(allTags[i], res)
}

# look at these codes for db2008
plot(detectionCounts$detections, pch=19)
# 1-43 are the first codes
abline(v=c(1,43), col=c("blue", "red"))
# 44-57, 65-67, 70-72, 77-86 are the second codes
abline(v=c(44:57, 65:67, 70:72, 77:86), col="green")
# 119, 120, 129, 130 are the third codes
abline(v=c(119, 120, 129, 130), col="green")

### now look at the potential beacon tags over time to be able to pick the best
# First, pick a beacon tag and fetch it from the db.
allBeacons = c("c80", "c81", "c85", "c86", "c130")
# or:
allBeacons = cmd$beaconNames
toaData = list()
toaStats = list()


for (i in 1:length(allBeacons)){
  dbphrase = paste("select * from toa", allBeacons[i], ";", sep="")
  res = dbGetQuery(dbcon, dbphrase)
  # now put it in the form toaStats1() expects
  toaData[[i]] = list(
    "sdl41" = res[res$sdlNumber == 41 ,],
    "sdl42" = res[res$sdlNumber == 42 ,],
    "sdl43" = res[res$sdlNumber == 43 ,],
    "sdl44" = res[res$sdlNumber == 44 ,],
    "sdl45" = res[res$sdlNumber == 45 ,],
    "tagName" = allBeacons[i],
    "deployment" = cmd$deployment
  )
}
# toaStats1(toaData[[4]], mLines=TRUE)

################################################################################
# calculate and summarize toa stats
for (i in 1:length(allBeacons)){
  toaStats[[i]] = toaStats1(toaData[[i]], mLines=FALSE)
}
# toaStats1(toaData[[18]], m=FALSE)

# create an empty table and plots of tag receptions at all SDLs
summaryTable = matrix(nrow=length(toaStats), ncol=5,
  dimnames=list(allBeacons, c("sdl41", "sdl42", "sdl43", "sdl44", "sdl45")) )
# fill the matrix
for (i in 1:length(toaStats)){    # i indexes all tags
  for (j in 1:5){                 # j indexes 5 SDLs
    summaryTable[i,j] = mean(toaStats[[i]][[j]]$frequency)
  }
}

# picture of the sentinel
plot(x=41:45, y=summaryTable[1,],  # 1 is the sentinel; make these numberings automatic
  ylim=c(0,1), type="b", cex=1, main="Summary of Detection Frequencies",
  xlab="SDLs", ylab="Frequency", col="black" )
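# A hedged alternative to the simple per-tag count loop above: let MySQL do the
# per-SDL tally in one grouped query per tag. Assumes each 'toa...' table has
# the sdlNumber column used above; 'perSdlCounts' is a hypothetical name.
perSdlCounts = list()
for (i in 1:length(allTags)){
  dbphrase = paste("select sdlNumber, count(*) as detections from toa",
    allTags[i], " group by sdlNumber;", sep="")
  perSdlCounts[[ allTags[i] ]] = dbGetQuery(dbcon, dbphrase)
}
perSdlCounts[[1]]   # detections of the first tag, broken down by SDL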


# a picture of beacons
plot(x=41:45, y=summaryTable[2,],  # 2:5 are the beacons; make these numberings automatic
  ylim=c(0,1.2), type="n", cex=1, main="Summary of Detection Frequencies",
  xlab="SDLs", ylab="Frequency", col=1 )
for (i in 1:length(cmd$beaconNames)){
  points(x=41:45, y=summaryTable[i,], type="b", col=i)
}
legend("topright", legend=cmd$beaconNames, pch=19, col=1:5 )
# use this plot and the values in summaryTable to decide which beacon to
# use when running the positioning algorithm in ALPS

# a picture of fish tags
plot(x=41:45, y=summaryTable[6,], ylim=c(0,1), type="n", cex=1,
  main="Summary of Detection Frequencies", xlab="SDLs", ylab="Frequency", col=1 )
##################### make these numberings automatic
for (i in 7:length(cmd$allTagNames)){  ### what's 7:22 ### is it generic? ###
  points(x=41:45, y=summaryTable[i,], type="b", col=i-5)
}
legend("topright", legend=cmd$fishNames, pch=1, col=1:17, ncol=2 )

################################################################################
# # # # # # # # # # # # # # # # # # # #
# Working with ALPS position output
# # # # # # # # # # # # # # # # # # # #
################################################################################
### Run ALPS positioning algorithm


################################################################################
# In ALPS, choosing > Position > Run ALPS creates xxx ZZZ (how many and what
# kind of files?). The output file has a user-defined name like 'T1B2 no psr.txt'.
# Use 'importALPSdata()' to read these files. This file has 9 unnamed columns:
# unixTime, easting, northing, depth, cn, rn, dop, hid, hcount.

### pick a tag and beacon
cmd$allTagNames
# cTags = tail(cmd$fishNames, 5)   # pick out fish tags only if desired
cTags = cmd$beaconNames            # pick out beacon tags only if desired
cTags = c(cmd$fishNames)
cTags = cTags[1]
cTags = cTags[-7]
cTags = cmd$allTagNames[-7]
cTag = 3                 # which of cTags do you want to look at
cBeacon = cmd$bestBeacon
cPsr = TRUE
cYear = 2007

# read in and filter data for all tags:
tag = list()     # raw tag data
tagf = list()    # filtered tag data
tagfm = list()   # filtered tag data merged with ADCP/tide data
for (i in 1:length(cTags)){
  tag[[i]] = importALPSdata(deployment=cmd$deployment, tagName=cTags[i],
    beaconName=cBeacon, psr=cPsr)
  tagf[[i]] = filterALPSdata(df1=tag[[i]], cnF=1.5, speedF=0.8)
  tagfm[[i]] = mergeAlpsAdcpData(alpsData=tagf[[i]])
}

################################################################################
### Basic Plots
################################################################################
# time / northing
par(mfrow = c(1,2))
cTag = 1   # choose one tag to look at
plot(x=tagfm[[cTag]]$data$datiL, y=tagfm[[cTag]]$data$northing,
  col=plotColors[1], pch=19,  # type="b",
  cex=0.2, cex.lab=1.5, cex.axis=1.5,
  ylim=cmd$plotLimits$northing,


  xlab="Time", ylab="Northing (m)",
  main=paste("All points", tagfm[[cTag]]$tagName) )
abline(h=cmd$sdlEN$northing, col="blue")
abline(h=cmd$reefEN$northing, col="red")

# easting / northing
plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5,
  xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing,
  main=paste("All points", tagfm[[cTag]]$tagName),
  xlab="Easting", ylab="Northing" )
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)

# hexbin
plot(hexbin(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing),
  # xbnds=cmd$plotLimits$easting, ybnds=cmd$plotLimits$northing,
  main=paste("All points ", cTags[[cTag]]), xlab="Easting", ylab="Northing" )
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)

# speed v time
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$speed, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5,
  main=paste("All points", cTags[[cTag]]), xlab="Date", ylab="Speed" )
hist(tagfm[[cTag]]$data$speed, xlab="Speed (m/s)",
  main=paste("Histogram of travel speeds, ", cTags[[cTag]]))

# depth v time
depthTemp = tagfm[[cTag]]$data[ tagfm[[cTag]]$data$depth > 0, ]
plot(depthTemp$datiL, depthTemp$depth, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5,
  main=paste("All points", cTags[[cTag]]), xlab="Date", ylab="Fish Depth" )
hist(depthTemp$depth, n=30, xlab="Fish depth",
  main=paste("Histogram of fish depth, ", cTags[[cTag]]))

# utime v cn
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$cn, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5 )

# easting v cn
plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$cn, pch=19,  # type="b",


  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5 )

# northing v cn
par(mfrow=c(1,2))
plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5,
  xlim=cmd$plotLimits$easting, ylim=cmd$plotLimits$northing,
  main=paste("All points", cTag), xlab="Easting", ylab="Northing" )
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)
plot(tagfm[[cTag]]$data$northing, tagfm[[cTag]]$data$cn, pch=19,  # type="b",
  col="black", cex=0.1, cex.lab=1.5, cex.axis=1.5 )

# cn v 2-D position (taking a subset of points?)
ee = tagfm[[cTag]]$data$easting[seq(1, length(tagfm[[cTag]]$data$easting), by=150)]
nn = tagfm[[cTag]]$data$northing[seq(1, length(tagfm[[cTag]]$data$northing), by=150)]
plot(ee, nn, pch=19, col="black", cex=(tagfm[[cTag]]$data$cn - 0.9)*1)
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, col="blue", cex=1)
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=17, col="red", cex=1)

################################################################################
### basic movement stats, i.e. speed, turning angle, distance to reef, etc. ###
################################################################################
# time intervals for all individuals in the array
# ...starting with 'tag[[ ]]' and 'tagf[[ ]]'

# some lists
nextTime = list()
thisTime = list()
timeInterval = list()

# To calculate the time interval between position (i+1) and (i)...
for (i in 1:length(cTags)){
  # ... take the entire ~utime list except the first one
  nextTime[[i]] = tail(tagfm[[i]]$data$utime, -1)
  # ... take the entire ~utime list except the last one
  thisTime[[i]] = head(tagfm[[i]]$data$utime, -1)


  # ... then
  timeInterval[[i]] = nextTime[[i]] - thisTime[[i]]
}

# to look at the numbers
dfTime = data.frame("thisTime"=thisTime[[cTag]], "nextTime"=nextTime[[cTag]],
  "timeInterval"=timeInterval[[cTag]])
max(timeInterval[[cTag]])
min(timeInterval[[cTag]])
hist(timeInterval[[cTag]])

# how many are greater than 'x' seconds
x = 20
sum( timeInterval[[cTag]] > x )
# fraction of intervals greater than x sec
sum(timeInterval[[cTag]] > x) / length(timeInterval[[cTag]])

# remove all intervals greater than x so the histogram shows more
timeIntCut = list()
for (i in 1:length(cTags)){
  timeIntCut[[i]] = timeInterval[[i]][ timeInterval[[i]] <= x ]
}
# now look at the pictures
hist(timeIntCut[[cTag]], n=60, freq=TRUE,
  xlab="Interval between receptions (s)",
  main=paste("Histogram of reception intervals, ", cTags[[cTag]]) )

################################################################################
# gag travel speed
################################################################################
tagfm[[cTag]]$data$speed
min(tagfm[[cTag]]$data$speed)
max(tagfm[[cTag]]$data$speed)
# how many are 'Inf'
sum( tagf[[cTag]]$data$speed == Inf )

# remove all the Inf values and then the faster speeds to make the histogram
# more meaningful
speedCut1 = list()


speedCut2 = list()
for (i in 1:length(cTags)){
  speedCut1[[i]] = tagf[[i]]$data$speed[ tagf[[i]]$data$speed != Inf ]
  speedCut2[[i]] = tagf[[i]]$data$speed[ tagf[[i]]$data$speed < 5 ]
}
min(speedCut1[[cTag]])
max(speedCut1[[cTag]])

# a plot
speedHist = hist(speedCut2[[cTag]], n=30, main="Speed Distribution",
  xlab="Speed (meters/sec)", col="blue")
abline(v=mean(speedCut2[[cTag]]), col="red", lwd=1)
text(max(speedCut2[[cTag]]/2), max(speedHist$counts)/2,
  paste("Mean speed = ", round(mean(speedCut1[[cTag]]), digits=2)))
text(max(speedCut2[[cTag]]/2),
  max(speedHist$counts)/2 - max(speedHist$counts)*0.1,
  paste("Maximum speed = ", round(max(speedCut1[[cTag]]), digits=2)))
text(max(speedCut2[[cTag]]/2),
  max(speedHist$counts)/2 - max(speedHist$counts)*0.2,
  paste("Total points = ", nrow(tagf[[cTag]]$data)))

################################################################################
# distance to reef
# ... v. hour of day, tide, etc.
################################################################################
# plot it over time
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$dtr, cex=0.5, pch=19)  # type="b"
# abline(h=mean(tagfm[[cTag]]$data$dtr), col="red")
hist(tagfm[[cTag]]$data$dtr, n=30)

####### dtr v. hour of day (hod) ##############################################
par(mfrow=c(1,2))
hrs = 0:23
plot(tagfm[[cTag]]$data$datiL$hour, tagfm[[cTag]]$data$dtr, cex=0.5, pch=19)
# find and plot the average dtr each hour
dtrList = list()
meanDtr = vector(mode="numeric", length=length(hrs))
for (i in hrs){
  dtrList[[i+1]] = tagfm[[cTag]]$data$dtr[tagfm[[cTag]]$data$datiL$hour == i]
  meanDtr[i+1] = mean(dtrList[[i+1]])
}
points(hrs, meanDtr, pch=19, cex=1, col="red")

# a boxplot


bob = boxplot(x=dtrList, xlab="Hour of day (Local time)",
  ylab="Distance to reef (m)")

### find average distance for given time of day
bins = 1:24   # bin by hour, half hour, etc.
# create a holder for hourly mean dtr values
meanDtr = list()
for (i in 1:length(cTags)){
  meanDtr[[i]] = data.frame("mean"=rep(7777777, length(bins)),
    "median"=rep(7777777, length(bins)))
}
# calculate hourly mean dtr values
for (i in 1:length(cTags)){
  # for cTag i, pick out only the distances for hour j and calculate their mean
  for (j in 1:length(bins)){
    meanDtr[[i]]$mean[[j]] = mean(
      tagfm[[i]]$data$dtr[ tagfm[[i]]$data$datiL$hour == j-1 ] )
    meanDtr[[i]]$median[[j]] = median(
      tagfm[[i]]$data$dtr[ tagfm[[i]]$data$datiL$hour == j-1 ] )
  } # end j loop
} # end i loop
# add the means to the plot
points(bins, meanDtr[[cTag]]$mean, pch=19, col="red")
points(bins, meanDtr[[cTag]]$median, pch=19, col="blue")

####### dtr v. magnitude of water flow (magL) #################################
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$magL, pch=19, cex=0.2)
plot(tagfm[[cTag]]$data$magL, tagfm[[cTag]]$data$magM, pch=19, cex=0.2)
plot(tagfm[[cTag]]$data$dirL, tagfm[[cTag]]$data$dirU, pch=19, cex=0.2)
hist(tagfm[[cTag]]$data$magL, n=30)
ad = importADCPdata()
hist(ad$magL, n=30)
par(mfrow=c(1,2))

max(tagfm[[cTag]]$data$magL, na.rm=TRUE)
waterSpeedCategories = seq(0, 390, length.out=31)   # this gives bins of x size
binsize = waterSpeedCategories[2]
# categorize each position solution into a waterSpeed bin
roundedSpeeds = floor(tagfm[[cTag]]$data$magL/binsize)*binsize
dtrList = list()
meanDtr = vector(mode="numeric", length=length(waterSpeedCategories))
for (i in 1:length(waterSpeedCategories)){
  dtrList[[i]] = tagfm[[cTag]]$data$dtr[ roundedSpeeds == waterSpeedCategories[i] ]


  meanDtr[[i]] = mean(dtrList[[i]], na.rm=TRUE)
}
bob = boxplot(dtrList, names=waterSpeedCategories, xlab="Water speed (mm/s)",
  ylab="Distance to reef (m)")
points(meanDtr, pch=19, col="red")

####### dtr v. direction of water flow (dirL) #################################
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$dirL, pch=19, cex=0.2)
plot(tagfm[[cTag]]$data$dirL, tagfm[[cTag]]$data$dirM, pch=19, cex=0.2)
plot(tagfm[[cTag]]$data$dirL, tagfm[[cTag]]$data$dirU, pch=19, cex=0.2)
# compare water direction only at position times v. all times
hist(tagfm[[cTag]]$data$dirL, n=30)
ad = importADCPdata()
hist(ad$dirL, n=30)
par(mfrow=c(1,2))

max(tagfm[[cTag]]$data$dirL, na.rm=TRUE)
min(tagfm[[cTag]]$data$dirL, na.rm=TRUE)
binsize = 360 / 24
waterDirectionCategories = seq(0, 360, by=binsize)
# categorize each position solution into a waterDirection bin
roundedDirections = floor(tagfm[[cTag]]$data$dirL/binsize)*binsize
dtrList = list()
meanDtr = vector(mode="numeric", length=length(waterDirectionCategories))
for (i in 1:length(waterDirectionCategories)){
  dtrList[[i]] = tagfm[[cTag]]$data$dtr[ roundedDirections == waterDirectionCategories[i] ]
  meanDtr[[i]] = mean(dtrList[[i]], na.rm=TRUE)
}
bob = boxplot(dtrList, names=waterDirectionCategories,
  xlab="Water direction (deg)", ylab="Distance to reef (m)")
points(meanDtr, pch=19, col="red")

####### dtr v. ADCP 'waterDepth' and NOAA 'tidalHeight' #######################
par(mfrow=c(2,1))
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$waterDepth, pch=19, cex=0.2)
plot(tagfm[[cTag]]$data$datiL, tagfm[[cTag]]$data$tidalHeight, pch=19, cex=0.2)
par(mfrow=c(1,2))

min(tagfm[[cTag]]$data$waterDepth, na.rm=TRUE)
max(tagfm[[cTag]]$data$waterDepth, na.rm=TRUE)
binsize = 0.2
waterDepthCategories = seq(12.4, 14.4, by=binsize)   # this gives bins of x size


# categorize each position solution into a waterDepth bin
roundedDepths = floor(tagfm[[cTag]]$data$waterDepth/binsize)*binsize
dtrList = list()
meanDtr = vector(mode="numeric", length=length(waterDepthCategories))
for (i in 1:length(waterDepthCategories)){
  dtrList[[i]] = tagfm[[cTag]]$data$dtr[ roundedDepths == waterDepthCategories[i] ]
  meanDtr[[i]] = mean(dtrList[[i]], na.rm=TRUE)
}
bob = boxplot(dtrList, names=waterDepthCategories, xlab="Water depth (m)",
  ylab="Distance to reef (m)")
points(meanDtr, pch=19, col="red")

################################################################################
################################################################################
# turning angle
# ... calculate the turning angle for every set of three consecutive points
# ... starting with 'tagfm[[ ]]'

# some lists
p1 = list()      # all but the last two points
p2 = list()      # all but the first and last points
p3 = list()      # all but the first two points
turns = list()   # the list of all turns; this will be one shorter than p1
difference = list(); uniques = list();   # these are for removing duplicates

# check for duplicate positions next to each other, since 'bearing.ta()'
# ... doesn't allow zero-length moves
# (a shorter, vectorized version of this duplicate check is sketched below)
for (i in 1:length(cTags)){
  # find every row that's the same as the one before it. To do this, look at the
  # ... easting/northing columns, take the whole list but the first row (tail()),
  # ... then take the whole list but the last row (head()), and subtract them;
  # ... this gives the 'difference[[ ]]' list (which has east and north columns),
  difference[[i]] = tail(tagfm[[i]]$data[,4:5], -1) - head(tagfm[[i]]$data[,4:5], -1)
  # ... then any row that == 0 in both the east and north columns is dropped
  uniques[[i]] = tagfm[[i]]$data[ !((difference[[i]][1] == 0) &
    (difference[[i]][2] == 0)), ]
  # now take just the 'easting' and 'northing' columns
  uniques[[i]] = uniques[[i]][,4:5]
}
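# A hedged, vectorized version of the duplicate-position check above; it should
# drop the same consecutive repeats without building the 'difference' list.
# Columns 4:5 are assumed to be easting/northing, as above; 'en', 'sameAsPrev',
# and 'uniquesChk' are hypothetical names.
en = tagfm[[cTag]]$data[, 4:5]
sameAsPrev = c(FALSE, diff(en[,1]) == 0 & diff(en[,2]) == 0)
uniquesChk = en[ !sameAsPrev, ]
nrow(uniquesChk)   # compare with nrow(uniques[[cTag]])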


# to calculate the angle turned between any three points, i, i+1, i+2
for (i in 1:length(cTags)){
  # ... take the entire list (of east/north pairs) but the last two
  p1[[i]] = head(uniques[[i]], -2)
  # ... take the entire list but the first and last
  p2[[i]] = head( tail( uniques[[i]], -1), -1)
  # ... take the entire list but the first two
  p3[[i]] = tail(uniques[[i]], -2)
  # now calculate the bearing for each set of 3 pts
  # ... bearing.ta can accept three 2-column data.frames instead of three
  # ... length-2 vectors
  turns[[i]] = bearing.ta(p1[[i]], p2[[i]], p3[[i]], as.deg=TRUE)$ta
}

# plot them one at a time
cTag = 1
turnHist = hist(turns[[cTag]], main="Turning Angle Distribution",
  xlab="Turning Angle (Degrees)", breaks=seq(-180,180,by=30), freq=FALSE,
  col="blue", xlim=c(-200,200))

# to make a rose diagram of turning angle
turnRadians = turns[[cTag]] * pi / 180
rose.diag(turnRadians, 18, pts=F, prop=2,
  main="Distribution of angles turned from forward travel")
text(0.8, 0.8, "Left Turn")
text(0.8, -0.8, "Right Turn")

################################################################################
################################################################################
### space use calculations and plots
### this code is from 'kde2d.r'
################################################################################
### create 3-D and 2-D KDE plots
par(mfrow=c(1,2))
cTag = 2

# a basic easting/northing plot
plot(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing,
  pch=19, cex=0.2, col="black", main=cTags[[cTag]],
  xlim=cmd$plotLimits$easting,


  ylim=cmd$plotLimits$northing)
points(cmd$sdlEN$easting, cmd$sdlEN$northing, pch=19, cex=1, col="blue")
points(cmd$reefEN$easting, cmd$reefEN$northing, pch=19, cex=1, col="red")

kde = kde2d(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, n=50,
  lims=c(cmd$plotLimits$easting, cmd$plotLimits$northing))
homeRange(easting = tagfm[[cTag]]$data$easting,
  northing = tagfm[[cTag]]$data$northing,
  lims = c(cmd$plotLimits$easting, cmd$plotLimits$northing),
  reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, prob=0.9, h=c(5.4,4.8) )

# one kind of plot
kde2dplot(kde, tagName=cTag, reefEN=cmd$reefEN, sdlEN=cmd$sdlEN)
# another kind of plot
kde2dplot2(tagfm[[cTag]]$data$easting, tagfm[[cTag]]$data$northing, d=kde,
  prob=0.9, pts=FALSE, tagName=cTag)
points(x=cmd$sdlEN$easting, y=cmd$sdlEN$northing, pch=19, col="red", cex=1)
points(x=cmd$reefEN$easting, y=cmd$reefEN$northing, pch=17, col="red")
# abline(h=cmd$reef$northing, v=cmd$reef$easting)

### calculate the home range size
hr = homeRange(easting=tagfm[[cTag]]$data$easting,
  northing=tagfm[[cTag]]$data$northing,
  lims=c(cmd$plotLimits$easting, cmd$plotLimits$northing),
  reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, n=50, prob=0.90, h=c(5.4,4.8) )
hr   # the units are m^2

################################################################################
# calculate the home ranges for all fish
# (a small summary table of these estimates is sketched below)
homeRanges = list()
for(i in 1:length(cTags)){
  homeRanges[[i]] = homeRange(easting=tagfm[[i]]$data$easting,
    northing=tagfm[[i]]$data$northing,
    lims=c(cmd$plotLimits$easting, cmd$plotLimits$northing),
    reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, n=50, prob=0.90, h=c(5.4,4.8) )
}
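# A hedged sketch that collects the 90% home-range estimates above into one
# table, assuming homeRange() returns a single numeric area in m^2 (as the
# 'hr # the units are m^2' line above suggests); 'hrSummary' is a hypothetical name.
hrSummary = data.frame(tagName = cTags, kde90 = unlist(homeRanges))
hrSummary[order(hrSummary$kde90, decreasing=TRUE), ]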


################################################################################
### How does space use change with the number of days of data you use?
### This code works for 2009
# first, divide fish1f into separate days
fishPos = data.frame("utime"=tagfm[[cTag]]$data$utime,
  "dayIndex"=(tagfm[[cTag]]$data$utime %/% 86400) -
    tagfm[[cTag]]$data$utime[1] %/% 86400 + 1,
  "hourOfDay"=as.POSIXlt(tagfm[[cTag]]$data$utime, origin="1970-1-1",
    tz="GMT")$hour,
  "easting"=tagfm[[cTag]]$data$easting,
  "northing"=tagfm[[cTag]]$data$northing,
  "depth"=tagfm[[cTag]]$data$depth )

output = c(0,0)
for (i in 1:length(unique(fishPos$dayIndex))){
  cPos = fishPos[fishPos$dayIndex <= i ,]
  hr = homeRange(easting=cPos$easting, northing=cPos$northing,
    lims=c(cmd$plotLimits$easting, cmd$plotLimits$northing),
    reefEN=cmd$reefEN, sdlEN=cmd$sdlEN, n=50, h=c(5.4,4.8), prob=0.65,
    pts=TRUE )
  output = rbind(output, c(i, hr))
}
holder = output
plot(output, main=paste(tagfm[[cTag]]$tagName, tagfm[[cTag]]$beaconName),
  xlab="Number of Days", ylab="90% Space use Estimate (m^2)")

### This code should work for 2007 and 2008
i2007 = 1:38
i2008 = 1:52
# T60200, T60400, T60500, T60700, T60900
# T60100, T60300, T60600, T60800, T61100, T61200, T61300
h = c(5.4,4.8)
output = c(0,0)
for (i in i2008){
  currentI = head(i2007, i)
  temp1 = importSdlData(60300, currentI, subSample=F)
  hr = homeRange(temp1$easting, temp1$northing, n=50, h=c(5.4,4.8), prob=0.65,
    pts=F)
  output = rbind(output, c(length(currentI), hr))
}
holder = output
plot(output, main="T60300B79500", xlab="Number of Days",
  ylab="65% Space use Estimate (m^2)")
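# A hedged follow-up to the days-of-data curves above: where the estimate levels
# off is easier to see as a relative change per added day. Assumes 'holder' is
# the two-column matrix built above (days, home-range estimate); 'hrByDay' and
# 'relChange' are hypothetical names.
hrByDay = as.data.frame(holder[-1, , drop=FALSE])   # drop the c(0,0) seed row
names(hrByDay) = c("days", "hr")
relChange = c(NA, diff(hrByDay$hr) / head(hrByDay$hr, -1))
plot(hrByDay$days, relChange, type="b", xlab="Number of Days",
  ylab="Relative change in space-use estimate")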


################################################################################
### How does space use change with frequency of subsampling? ###
### This code should work for 2007 and 2009
### This code should work for 2007 and 2008
whichDays = 1:52
ss = c(2, 5, seq(10,140,by=10))
# T60200, T60400, T60500, T60700, T60900
# T60100, T60300, T60600, T60800, T61100, T61200, T61300
output = c(frequency=0, hr=0)
for (i in 1:length(ss)){
  temp1 = importSdlData(60200, whichDays, subSample=ss[i])
  hr = homeRange(temp1$easting, temp1$northing, n=50, prob=0.95)
  output = rbind(output, c(ss[i], hr))
}
holder = output
plot(output, main="T60200B79500", xlab="Subsample step size",
  ylab="Home Range Size (m^2)")

################################################################################
################################################################################
### Home range using LoCoH
### This requires 'NNCH.r'
###
# This takes a long time, so save it.
xys = tagfm[[1]]$data[, c("easting", "northing")]
homerange = NNCH(xys, k=10)
HRtagfm1 = homerange
plot(homerange)

# To decide what the k value should be...
hr = NNCH(xys, k=c(5, 10, 15, 20, 25))


BIOGRAPHICAL SKETCH

Zy Biesinger received his Bachelor of Science degree in biology in 1998 from Utah State University. During this time he participated extensively in a joint project with the National Forest Service to model mountain pine beetle population movement within a forest. He also had the opportunity to assist in physiological studies of free-ranging antelope and flight trajectories of tundra swan. He went on to receive a Master of Science from Utah State University in 2001, where his project examined the search behavior of ladybird beetle predators to evaluate the effects of hunger and prey density. In 2002 he began working as a research assistant at The Ecosystem Center at the Marine Biological Laboratory, where he modeled the onset and evolution of a biologically active soil layer overlying permafrost as part of a larger study of the effects of climate change. He began graduate school at the University of Florida in 2004 to study the effects of landscape on fish behavior and fitness, and received his Doctor of Philosophy from the University of Florida in August 2011.