PROBABILISTIC METHODS FOR IMPROVED CHANGE DETECTION AND PREDICTION ON SANDY BEACHES USING HIGH RESOLUTION AIRBORNE LIDAR

By

MICHAEL JOHN STAREK

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008

2008 Michael John Starek

To my Mother and Father, for a childhood that sparked my imagination and instilled in me the joy of learning.

ACKNOWLEDGMENTS

I express my sincere gratitude to my advisor, Dr. K. Clint Slatton, for his mentorship, patience, continual encouragement, and the freedom to explore my interests. Without his direction, this work would not have come to fruition. Most importantly, he inspires and truly cares for all his students and wants them to succeed. I also express my sincere appreciation to Dr. Ramesh Shrestha for guidance with my studies and for providing me the opportunity to be part of such a great program. Without Dr. Shrestha's dedication and care, the geosensing systems engineering program would not exist. I'm proud to say it is truly great to be a geosensing Gator!

I also would like to thank Dr. Carter for his inspiration and input over the years. His expertise on airborne laser swath mapping (ALSM) and geodesy is invaluable to the geosensing students. Furthermore, I am grateful to Dr. Mike Daniels for serving on the committee and taking the time to work with me outside his department. His exceptional knowledge of statistical methods enabled the research to be extended into an area that was new and exciting to me and unexplored within our research group. I very much enjoyed the opportunity to learn from him. In addition, I would like to thank Michael Sartori for all of his ALSM processing expertise, for answering my innumerable questions, and for the great times shared out in the field.
Special acknowledgements go to Thelma Epperson for all her hard work for the geosensing program and its students; Dr. Andrew Kennedy for his help on coastal analysis; Raghav for being such a great colleague and for his valued help with the St. Augustine research; Abhinav and Pravesh for St. Augustine data collection and processing; Tony Murphy for keeping the computers up and running in Weil Hall; and Nancy and the University of Florida civil engineering department. Additionally, I am extremely thankful to all of my fellow geosensing and ASPL lab mates, both for the enlightening discussions over the years and for the wonderful camaraderie and good times shared in and out of the lab. They are all such unique and good-hearted people, and I feel fortunate to have met each one of them. Finally, I extend my most heartfelt gratitude to my mother and father, Mike and Sandy, and my brother and sisters, John, Noelle, and Mikaela, who made achieving this goal possible. Their enduring love and support kept me pushing forward and believing in myself. To them I am forever grateful.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Motivation
    Background
    Overview

2 AIRBORNE LASER SWATH MAPPING
    Lidar
    Coastal Monitoring

3 STUDY AREA AND DATA PROCESSING
    Study Area
        Beach Nourishments
        Coastal Climate
    Data Processing
        Errors
        Vertical Datum
        Point Filtering
    Shoreline Delineation
        Auto-Shoreline Delineation
        Ground Truth

4 DATA MINING
    Profile Sampling
        Algorithm Details
        Shoreline Change
    Feature Extraction

5 SHORELINE DYNAMICS
    Historical Overview
    Shoreline Change Analysis
        Beach Nourishment
        Calibration of a Spreading Model

6 FEATURE ANALYSIS
    Density Estimation
        Bandwidth Selection
        Plug-in Methods
        Classical Methods
            Least squares cross-validation
            Maximum likelihood
            Implementation for data with ties
            Results
    Divergence Measures
        Jensen-Shannon Divergence
        Normalized Information Divergence
    Median Metric
    Results
        Individual Epochs
            Epoch 1
            Epoch 2
            Epoch 3
            Epoch 4
            Epoch 5
            Epoch 6
        Time Averaged

7 GENERATIVE CLASSIFICATION
    Naive Bayes
    Results

8 DISCRIMINATIVE CLASSIFICATION
    Logistic Regression
        Logistic Model
        Generalized Linear Model
        Model Fitting
            Maximum likelihood estimation
            Properties of ML estimate
            Measures of fit
        Naive Bayes vs. Logistic Regression
        Example of a Logistic Model
        Subset Selection
        Convergence Issues
    Correlated Observations
        Autologistic Models
        Generalized Estimating Equations
            Quasi-likelihood estimation
            Working correlation matrix
            Estimation algorithm
            Properties of GEE estimate
            Results
    Penalized Estimation
        Lasso Regression
        Shooting Method
        Prediction Results

9 CONCLUSION
    Discussion
    Future Work

APPENDIX

A NEWTON-RAPHSON ML BANDWIDTH OPTIMIZATION SCHEME

B IMPLEMENTATION DETAILS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 ALSM collection dates and data epochs covered by sequential acquisitions
5-1 Shoreline change statistics for the 10 km study area
5-2 Mean shoreline change by zones (m)
6-1 Average results for optimal bandwidth selection by ML and LSCV
6-2 Variational distance between true density and Parzen density estimated with the optimal average bandwidth determined by ML and LSCV
6-3 Results for optimal bandwidth determined by ML and LSCV for August 2003 beach width, slope, and volume
6-4 Average rankings of the three separation measures and correlation for each feature f_j across all successive epochs
7-1 Training error from naive Bayes classifier using only the top two ranked features according to each measure
7-2 Prediction error from naive Bayes classifier using only the top two ranked features according to each measure
8-1 Logistic regression model fit to Epoch 1
8-2 Comparison of the estimated coefficients for the full model, AIC-selected model, and BIC-selected model for Epoch 1
8-3 First-order Markov model fit to Epoch 1 using three of the strongest individual predictors
8-4 Logistic regression coefficients for each standardized feature and epoch estimated with spatial autocorrelation using a GEE
8-5 GEE and penalized GEE coefficient averages and their deviations for Epochs 2 and 3
8-6 Mean classification accuracy (MCA) for each epoch using GEE, penalized GEE, and nonparametric naive Bayes

LIST OF FIGURES

2-1 Light aircraft collecting lidar over a beach
2-2 Unfiltered and filtered DEM over a section of beach in Bay County, FL, generated from ALSM data acquired after Hurricane Ivan in 2004
3-1 St. Augustine Beach 10 km study area
3-2 Filtered and unfiltered DEMs color coded by elevation
3-3 Color-filled contour map of a roughly 2 km section of St. Augustine Beach created by differencing August 2003 and March 2005 DEMs
3-4 NOAA tidal gauge with MHHW and NAVD88 offset
3-5 Contoured shoreline in GIS compared to auto-delineated shoreline
3-6 GUI interface for auto-shoreline extraction algorithm showing results after image closing and pixel erase
4-1 Profile sampling
4-2 Computation of shoreline change for each profile between two data sets collected at time T1 and time T2
4-3 Three features extracted from the August 2003 profiles
4-4 Shoreline change plots between January and June 2006 showing the added information in 5 m sampling compared to 300 m
5-1 Zoomed-in view near the pier showing the beach's natural trend and the deviation-from-trend of the pier region
5-2 Shoreline change for each epoch using 1 m profile spacing
5-3 Mean rate of change plots
5-4 Zoomed-in view of nourishment region
5-5 Results of the beach fill simulation
6-1 An example of two likelihoods for a single feature f_j given data from two classes, C_1 and C_2
6-2 Histograms of beach width, slope, and nearshore slope in March 2005
6-3 Example showing sensitivity of Parzen windowing to window length h
6-4 Results from 20 simulations of a standard normal density
6-5 Results from 20 simulations of a standard Cauchy density
6-6 Results of ML and LSCV density estimation for August 2003 beach width and volume
6-7 Ranking of each feature's separation between class erosion and accretion by epoch and each of the four measures
7-1 Classification results alongshore for each epoch using DT and S
7-2 Confusion matrix classification results using DT and S
7-3 Maximum a posteriori class membership probabilities for August 2003 data
8-1 Fitted logistic curves
8-2 Change in orientation from north to south alongshore for October 2005
8-3 Autocorrelation plot of standardized Pearson residuals for a logistic model fit to Epoch 1 data
8-4 Difference in coefficient values for each feature by epoch estimated using a GEE with autocorrelation and ordinary logistic regression
8-5 Estimated probability of erosion vs. orientation for Epoch 3 using a logistic GEE
8-6 Difference in constraint geometry between Lasso with L1 penalty and Ridge with L2 penalty

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

PROBABILISTIC METHODS FOR IMPROVED CHANGE DETECTION AND PREDICTION ON SANDY BEACHES USING HIGH RESOLUTION AIRBORNE LIDAR

By Michael John Starek

December 2008

Chair: K. Clint Slatton
Cochair: Ramesh L. Shrestha
Major: Civil Engineering

Airborne light detection and ranging (lidar) can sample beach topography at orders of magnitude higher spatial resolutions than is practical with standard surveying methods.
Data mining and pattern classification techniques offer great potential for coastal monitoring with lidar but have been relatively unexplored. In the following research, three main contributions are presented: 1) a systematic framework to mine high resolution lidar data over a beach, 2) an information-theoretic approach to detect morphology indicative of erosion, and 3) the first research to explore modern probabilistic classifiers for modeling the effect of morphology on the probability of erosion.

Lidar surveys were conducted over a beach on the east coast of Florida multiple times between 2003 and 2007. Through automated profile sampling, several different features are extracted from the data and segmented into binary erosion or accretion classes. Divergence measures are used to rank class separation between features. The more separation a feature provides, the greater its potential as a morphologic indicator. Morphologic indicators can improve beach monitoring by providing insight into the change dynamics and by identifying high-impact zones. Deviation-from-trend performed best overall, and it is a contributing factor to anomalous erosion in the study area. Over shorter epochs, slope-based features ranked high. A naive Bayes classifier is implemented to test the ability of the features to classify erosion zones. The top features selected by divergence outperformed those selected by correlation and a median metric by approximately 5% and 3%, respectively, supporting the utility of the divergence method.

To evaluate the joint effect of the features on the outcome of erosion, logistic regression is utilized. A generalized estimating equation (GEE) is applied to handle spatial correlation in the binary responses. To reduce model overfitting and address collinearity among the features, Lasso regression is employed. The ability of the classifiers to predict (classify) zones prone to erosion based solely on morphology is evaluated.
The Lasso GEE obtained the highest average success rate, 80%, with a maximum of 86%. The logistic-based classifiers substantially outperformed nonparametric naive Bayes by approximately 7%. The developed classifiers provide a powerful tool for beach characterization with lidar.

CHAPTER 1
INTRODUCTION

Motivation

In the United States, the narrow coastal fringe that makes up only 17 percent of the nation's contiguous land area is home to more than half of its population (Crossett et al., 2008). Similar population concentrations exist in other coastal areas around the world. Within the coastal zone, sandy beaches are a particularly valued natural resource providing vital economic and recreational benefits the world over. They define a geomorphic barrier between the land and sea, protecting vast investments in development and infrastructure in these regions. But sandy beaches are, by nature, highly dynamic landforms, constantly evolving with time and subject to the continual threat of erosion and coastal flooding. Loss of shoreline and damage to beaches can result in the loss of property, tourism, and investment that can devastate local economies (Shrestha et al., 2005). Monitoring beaches and studying the processes that govern their change are critical to the future sustainability of these environments and the economies that depend on them.

A beach is an accumulation of unconsolidated sediment (sand, gravel, cobbles, and boulders) that includes an exposed zone and extends into the water to a depth, typically 10-20 m, at which the sediment is less actively transported by surface waves (Komar, 1998). The familiar exposed zone is referred to as the subaerial beach and is defined as the region that extends from the tidal shoreline to some physiographic change such as a dune field. Of the various types of beaches, sandy beaches are the most prominent and consist of sediment with mean grain sizes in the range of approximately 0.1-2.0 mm (Komar, 1998).
Changes in the topography of a sandy beach are a response to the forcing of waves, winds, and currents and depend on the supply of sediment to the beach. This evolution constitutes a highly variable, three-dimensional process that can be loosely classified into three phenomena: rapid variations due to storm events, seasonal variations due to changes in wave and wind climate, and long-term variations due to sea level rise, longshore sediment transport, anthropogenic influences, extreme storm events, and climate change (Dean and Dalrymple, 2002). Because beaches constantly evolve with time, the challenge in coastal monitoring is to determine the persistence of the observed change. If a region of shoreline is undergoing growth (accretion) or retreat (erosion), it is important to determine whether this change is transitory, such as that caused by seasonal influences, or the onset of a more permanent condition, such as an altering of the long-term sediment supply or sea encroachment. This becomes extremely important when assessing trends in shoreline change, recovery from storm impact, influences from anthropogenic structures such as jetties and inlets, and the success of beach nourishment projects. By measuring the elevation of the beach repeatedly over time, changes in beach volume and shoreline position can be quantified, providing insight into beach stability, storm impact, and sediment transport.

Standard methods for measuring beach change employ classical leveling and static GPS to survey two-dimensional profiles across the beach, oriented perpendicular to the waterline and spaced at intervals of several hundred meters (Gutelius et al., 1998; Shrestha et al., 2005; Robertson et al., 2007). By comparing a time series of profile measurements, changes in elevation and shoreline position with time can be quantified.
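The profile-comparison step can be sketched in a few lines of code. Everything below is illustrative only: the synthetic profiles, the chosen tidal datum elevation, and the helper name `shoreline_position` are invented for the example. The shoreline is taken as the cross-shore position where a profile crosses a fixed tidal datum, and change is the difference in that position between two survey times.

```python
import numpy as np

def shoreline_position(x, z, datum):
    """Cross-shore position (m) where a beach profile crosses a tidal
    datum elevation, by linear interpolation between the two samples
    that bracket the datum. x increases seaward; z is elevation."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    above = z >= datum
    for i in range(len(z) - 1):
        if above[i] and not above[i + 1]:
            # interpolate between samples i and i+1
            t = (datum - z[i]) / (z[i + 1] - z[i])
            return x[i] + t * (x[i + 1] - x[i])
    raise ValueError("profile never crosses the datum")

# Two hypothetical profiles at one alongshore location, times T1 and T2
x = np.arange(0.0, 60.0, 1.0)     # cross-shore distance (m)
z_t1 = 3.0 - 0.05 * x             # planar beach, slope 0.05
z_t2 = 2.5 - 0.05 * x             # surface lowered 0.5 m between surveys

datum = 0.5                       # hypothetical datum elevation (m)
dx = shoreline_position(x, z_t2, datum) - shoreline_position(x, z_t1, datum)
print(f"shoreline change: {dx:+.1f} m")   # negative = landward retreat (erosion)
```

A negative result indicates landward shoreline retreat, i.e. erosion. Choosing and delineating the datum (e.g. MHHW) for real lidar data is a substantial problem in its own right, treated later in this work; this sketch only illustrates the differencing step.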
Because of the labor-intensive nature of these methods, it is not possible to sample at high enough spatial or temporal resolution to resolve the three-dimensional variability in beach change at large scales. The resulting profile data are limited in their utility for process modeling, and only approximate volume and shoreline change estimates can be expected. This undersampling problem is continually confronted by coastal planners, who must make decisions about potential changes affecting a beach based on profiles separated by several hundred meters at temporal intervals of a year or greater. Furthermore, it is very difficult to perform rapid pre- and post-storm mapping with these approaches, resulting in extensive delays before engineers have the information they need to rebuild or take other corrective actions in the aftermath of storm damage. Alternative survey methods include aerial photogrammetry and kinematic GPS (Gutelius et al., 1998). But even with these methods it can be too costly, too time consuming, and simply not feasible to sample beach surfaces densely enough to capture the high resolution variability at large scale. Therefore, high resolution measurements, as well as enhanced methods to extract information from them, are needed to improve our understanding of beach evolution.

Background

With the advent of airborne light detection and ranging (lidar) technology, also referred to as airborne laser swath mapping (ALSM), it is now possible to quantify three-dimensional change in beach topography at the spatial scales needed to advance science and monitor erosion over long segments of coastline quickly, accurately, and economically (Shrestha et al., 2005). ALSM can sample beach topography at orders of magnitude higher spatial frequencies alongshore than is practical with standard coastal surveying methods. With such dense coverage over a large areal extent, ALSM has revolutionized coastal monitoring.
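One basic way such dense coverage is exploited is by gridding each survey into a bare-earth DEM and differencing the grids to map elevation change and estimate volume change. The sketch below is purely illustrative: the two grids and the cell size are fabricated, and real DEMs would of course come from filtered, georeferenced lidar returns.

```python
import numpy as np

# Two hypothetical 1 m resolution bare-earth DEMs of the same beach
# area (elevations in meters), e.g. gridded from successive surveys.
cell = 1.0                          # grid cell size (m)
dem_t1 = np.full((100, 50), 2.0)    # earlier survey
dem_t2 = dem_t1.copy()
dem_t2[:, 25:] -= 0.4               # seaward half lowered 0.4 m

dz = dem_t2 - dem_t1                # per-cell elevation change (m)

# Volume change: elevation change times cell area, summed over cells.
net = dz.sum() * cell**2
eroded = dz[dz < 0].sum() * cell**2      # volume lost (negative)
deposited = dz[dz > 0].sum() * cell**2   # volume gained

print(f"net: {net:.0f} m^3, eroded: {eroded:.0f} m^3, deposited: {deposited:.0f} m^3")
```

In practice both surveys must share a common vertical datum and the DEMs must be co-registered before differencing; otherwise systematic elevation biases masquerade as erosion or deposition.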
From the data, profiles can be extracted at submeter intervals, and surfaces can be computed from lidar-derived bare-earth digital elevation models (DEMs) to produce accurate erosion/deposition estimates for engineers, scientists, and decision makers.

Numerous studies have demonstrated the application of ALSM for quantifying change in shoreline and beach volume by comparing multiple data sets acquired over the same region. Shrestha and Carter (1998), Zhang et al. (2004), Shrestha et al. (2005), Robertson et al. (2005), and Robertson et al. (2007) demonstrate the use of ALSM for measuring hurricane impact to beaches along the Florida coast. Tebbens et al. (2002) perform wavelet analysis to investigate the spatial signal of shoreline change computed from ALSM data over a one-year period on the Outer Banks, North Carolina. White and Wang (2003) utilize DEMs derived from ALSM data to analyze morphologic change to barrier islands from storm-induced flooding. Woolard et al. (2003) collected data from three different commercial lidar systems and conclude that lidar has several advantages compared to stereo photogrammetry for coastal mapping. Sallenger et al. (2003) compare ALSM data to three different ground-based survey methods and conclude that many scales of beach morphology can be resolved with ALSM, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers.

In addition to elevation change, by mining the data many different features can be extracted, where a feature in this context refers to a lidar-measurable property of the sampled surface, such as gradient or curvature (Luzum et al., 2004). Data mining is the process of extracting relevant information from large data sets, and for ALSM data the objective is often to extract relevant features to discriminate between classes of objects in the data.
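To make the notion of a lidar-measurable feature concrete, the fragment below computes two of the properties just mentioned, gradient (slope) and curvature, from a synthetic cross-shore profile using finite differences via `numpy.gradient`. The profile shape and the scalar feature names are invented for illustration.

```python
import numpy as np

# Synthetic cross-shore profile sampled every 1 m: a planar beach with
# a small berm-like bump. A real profile would come from the lidar data.
x = np.arange(0.0, 50.0, 1.0)
z = 3.0 - 0.04 * x + 0.2 * np.exp(-((x - 25.0) / 4.0) ** 2)

slope = np.gradient(z, x)           # first derivative dz/dx at each sample
curvature = np.gradient(slope, x)   # second derivative d2z/dx2

# Condense the profile into scalar features for later analysis
features = {
    "mean_slope": float(slope.mean()),
    "max_abs_curvature": float(np.abs(curvature).max()),
}
```

Per-profile scalars like these, computed for every alongshore profile, are the kind of feature vectors that can then be fed into feature ranking and classification.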
Most research in this area has been directed toward segmentation of the ALSM observations, such as into ground or nonground points, and homogeneous object detection (e.g. Maas (1999); Filin (2004); Luzum et al. (2004); Kampa and Slatton (2004)). For coastal analysis, morphologic feature extraction can be used to characterize changes in beach morphology observed in the ALSM data. Although such methods more fully exploit the spatial information contained in the data, they are heavily underutilized in coastal analysis, mostly because traditional coastal monitoring is based on measurements of shoreline position and volume change. The few exceptions include Brock et al. (2004), who extract lidar metrics that condense barrier island morphology and morphological change into attributed linear features for analyzing trends in coastal evolution. They then perform a simple morphologic classification using the extracted features. Saye et al. (2005) calculate morphometric parameters from lidar data to examine the relationship between beach morphology, dune morphology, and erosion/accretion. Eroding dunes were found to be associated with narrower, steeper beaches, while accreting dunes were associated with wider, low-angle beaches.

Even less explored in coastal lidar research are methods that couple feature extraction with statistical learning to move analysis beyond quantifying change into probabilistic classification and inference. These approaches require enhanced methods to extract information and detect patterns contained within the ALSM measurements. Such data mining and pattern classification techniques offer great potential to move the analysis of lidar data beyond visual interpretation and simple (first order) measurements made from the DEMs. This is particularly true for coastal monitoring with lidar because of the importance of small-scale features in the DEMs.
When acquired with sufficient temporal coverage, the high spatial-resolution information in the lidar data can reveal patterns in beach change and resolve nonstationary processes, such as localized zones of anomalous erosion.

Overview

In the following research, probabilistic methods for improved change detection and prediction of sandy beach evolution with ALSM data are developed. Results are based on data acquired along a beach in Florida seven times between August 2003 and February 2007. The multiple acquisitions through time provided a unique opportunity to capture and study the alongshore change in beach topography at high resolution and at varying temporal scales. The data sets are mined extensively by extracting several different morphologic features. Probabilistic classifiers are then developed to learn the influence of morphology on the observed shoreline change patterns. Those features more indicative of the variation alongshore are systematically revealed, and the probability of shoreline change behavior estimated. The resultant classifiers provide a powerful tool for characterizing patterns in beach change with lidar. Three main research contributions are presented. First, a framework to mine high-resolution lidar data over beaches is developed. Through automated profile sampling, several different lidar-measurable morphologies (features) are extracted, including novel measures of beach topography. In addition, the problem of determining a shoreline reference is discussed in detail, and a semiautomatic algorithm for shoreline delineation with lidar-derived DEMs is presented. The extracted features are then segmented into binary erosion or accretion classes dependent on the shoreline change measured for each profile. The second main contribution detects features that are more indicative of erosion or accretion patterns.
These morphologic indicators provide insight into the change dynamics along a beach and can be incorporated into classifiers to improve prediction of high-impact erosion zones. The problem is approached as one of optimal feature selection. Each feature's class-conditional probability density function (pdf) is estimated nonparametrically, and their pdf separability is ranked using pdf divergence measures. Optimization of the density estimator's bandwidth is investigated, and a normalized information divergence for optimal feature selection is constructed. The more interclass separation provided by a feature, the more information it contains and the greater its potential as a morphologic indicator. Results are compared to rankings based on the Pearson's correlation coefficient and a median-based metric. The highest-ranking features are then used to implement a naive Bayes classifier trained from the data to test the classification power of the features. The final contribution is directed toward improved classification beyond naive Bayes by using logistic regression methods. Logistic regression can be applied to regress binary observations (classes) on predictor variables (covariates), thereby providing an estimate of the probability of belonging to one of the two classes given the measured predictors. In our case, the classes are erosion or accretion, and the covariates are the morphologic features. Furthermore, logistic regression provides a rigorous statistical framework for inference to evaluate the joint effect of the features within the fitted model. To handle correlation in the observations, a generalized estimating equation (GEE) approach is utilized. Penalized regression is then used to reduce model overfitting and provide more robust classification.
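The feature-ranking idea outlined above (estimate each feature's class-conditional pdf nonparametrically, then score the separation between the erosion and accretion densities with a divergence measure) can be sketched in a few lines. The following is an illustrative Python sketch, not the dissertation's implementation: it uses SciPy's Gaussian kernel density estimator with its default bandwidth and a symmetrized Kullback-Leibler divergence on synthetic feature values; all variable names and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(samples_p, samples_q, grid):
    """Numerical KL divergence D(p||q) between KDE-estimated pdfs on a grid."""
    dx = grid[1] - grid[0]
    p = gaussian_kde(samples_p)(grid)
    q = gaussian_kde(samples_q)(grid)
    p /= p.sum() * dx  # renormalize on the finite grid
    q /= q.sum() * dx
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))) * dx)

# Synthetic values of one feature for the two shoreline-change classes
rng = np.random.default_rng(0)
erosion = rng.normal(0.0, 1.0, 500)    # e.g., beach slope on eroding profiles
accretion = rng.normal(2.0, 1.0, 500)  # same feature on accreting profiles
grid = np.linspace(-6.0, 8.0, 1001)

# Symmetrized divergence as a separability score for ranking features
score = (kl_divergence(erosion, accretion, grid)
         + kl_divergence(accretion, erosion, grid))
```

A feature whose two class densities overlap heavily scores near zero; the two well-separated densities here score far above that, so such a feature would rank highly as a morphologic indicator.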
CHAPTER 2
AIRBORNE LASER SWATH MAPPING

Lidar

Airborne Laser Swath Mapping (ALSM) is a light detection and ranging (lidar) method that pulses a laser to measure the range between an airborne platform and the Earth's surface many thousands of times per second. Light travels approximately 30 centimeters in one nanosecond. By accurately timing the round-trip travel time of the light pulses from the laser to a reflecting surface, it is possible to determine the distance from the laser to the target (Carter et al., 2007). Because it is an active sensor, unlike aerial photography, it does not depend on ambient light, which makes it operable during day or night. Using a rotating mirror or other scanning mechanism inside the laser transmitter, the laser pulses can be made to sweep through an angle, tracing out a line or other pattern on the reflecting surface. With the scan line oriented perpendicular to the direction of flight, it produces a sawtooth pattern of ranges within a strip centered directly along the flight path (Figure 2-1). A global positioning system (GPS) receiver onboard the aircraft is used to determine aircraft position and subsequently the position of the rigidly attached laser head. GPS update rates are generally at 1 Hz or 10 Hz epochs. The measurements are referenced to the phase center of the GPS antenna. Precisely surveyed lever arm offsets between the GPS antenna reference point and the laser head reference point are used to translate the positional origin to the sensor head. The aircraft receiver measurements must be differentially corrected to remove inherent errors due to GPS signal propagation delay through the ionosphere, satellite clock bias, and ephemeris inaccuracies, among others. By operating at least one GPS ground station within the survey area, differential corrections can be applied. As the airplane is surveying, its onboard GPS receiver will be making simultaneous observations with the ground receiver.
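In its simplest scalar form, the differential principle amounts to subtracting the base station's observed error from the rover's observation. The sketch below is a deliberately simplified single-difference illustration in Python (real processing, as described next, combines L1/L2 carrier phases in a double-difference solution); all values are made up.

```python
def differential_correction(base_known, base_observed, rover_observed):
    """Remove the common-mode error revealed at the base station from the
    rover (aircraft) observation. All quantities in meters along one axis."""
    common_error = base_observed - base_known  # e.g., atmospheric + clock bias
    return rover_observed - common_error

# A shared +2.4 m bias corrupts both receivers equally; true rover position 450.0
corrected = differential_correction(100.0, 102.4, 452.4)
```

Because the bias is common to both receivers, subtracting it recovers the rover position exactly in this idealized model; in practice the residual depends on how correlated the errors are, which is why baseline length matters.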
Provided the two are within relatively close proximity, the errors and biases between them will be highly correlated. The ground station is static, so its position can be accurately determined. The base station's fluctuation in GPS observables over time relative to the static coordinate provides a differential measure of the positional errors. These differential corrections can be applied to the aircraft GPS receiver measurements. This technique is referred to as differential positioning (El-Rabbany, 2002). The shorter the distance between the aircraft and ground station, the more correlated are the errors. For high-accuracy ALSM data collection (referred to as geodetic), it is standard to utilize two or more ground stations with the airplane at a maximum distance of 20 to 25 km from a ground station during the survey (Slatton et al., 2007). In many cases such stringent requirements are not practical, but the general rule of thumb is to have a ground station within 40 km of the airplane. Signal propagation delay through the ionosphere (the upper atmosphere) can be virtually eliminated by employing geodetic-quality, dual-frequency GPS receivers. The L1 and L2 carrier phases of the observations can be combined within a double-difference solution using kinematic GPS post-processing software to differentially correct the onboard receiver measurements. One such program used by the University of Florida (UF) and the National Center for Airborne Laser Mapping (NCALM) is the Kinematic and Rapid Static (KARS) program developed by Gerald Mader at the National Geodetic Survey (Mader, 1992). For details on GPS theory and practice, the reader is referred to Hofmann-Wellenhof et al. (2001) and El-Rabbany (2002). The largest remaining errors in the GPS observables are generally due to changes in path length between the airplane and GPS satellite as the signals propagate through the troposphere (Slatton et al., 2007).
These effects cannot be removed by differential corrections because of the significant differences between tropospheric conditions at the ground station and at the aircraft. Recent work by Shan et al. (2007) suggests that calibration of the tropospheric effects is possible by employing multiple ground stations with the aircraft flying at a range of altitudes. Aircraft orientation (roll, pitch, yaw) is determined by an inertial measurement unit (IMU) rigidly mounted to the aircraft to sense changes in motion. A typical IMU consists of a set of triaxial, solid-state accelerometers to measure the aircraft's accelerations (axs, ays, azs) plus the gravitational force, and a set of triaxially mounted fiber-optic gyroscopes to measure angular velocity (see Figure 2-1). The acceleration measurements and angular velocity measurements can be integrated to obtain position and orientation measurements. Relative to GPS, the IMU provides very high update rates (e.g., 200 Hz) but suffers from long-term drift due to random walk. As such, an aircraft trajectory obtained solely from the IMU will contain significant errors that will continually grow if not corrected. To the contrary, GPS provides much lower sampling rates but more stability in the long term. The GPS can be used to aid the inertial solution by updating the IMU position estimates within a Kalman filter. The Kalman filter is a recursive estimation algorithm that minimizes the variance to provide a linear estimate of the state of a system. For cases where the true error model is Gaussian, the Kalman filter is an optimal estimator in a minimum mean square error (MMSE) sense. Excellent discussions on recursive state estimation with Kalman filters and GPS-aided inertial navigation are provided in Gelb (1974) and Rogers (2000), respectively. Through post-processing, the IMU measurements and corrected GPS position observations are integrated within a Kalman filter to obtain a blended navigation solution.
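The GPS-aided inertial blending just described can be illustrated with a minimal one-dimensional filter: the IMU's integrated increments drive the prediction at a high rate, and sparse GPS fixes pull the drifting estimate back. This is a toy Python sketch with invented noise values (q, r), not the actual navigation filter used for ALSM processing.

```python
def kalman_blend(imu_increments, gps_fixes, q=0.05, r=0.5):
    """Minimal 1-D position filter: IMU-derived increments drive the
    prediction; sparse GPS fixes (None = no fix) correct the drift."""
    x, P = 0.0, 1.0
    track = []
    for dx, z in zip(imu_increments, gps_fixes):
        # Predict: dead-reckon with the IMU increment, inflate uncertainty
        x, P = x + dx, P + q
        if z is not None:
            # Update: standard scalar Kalman gain and correction
            K = P / (P + r)
            x, P = x + K * (z - x), (1 - K) * P
        track.append(x)
    return track

# IMU reports +1.1 m per step but the true step is 1.0 (a constant bias);
# a GPS fix arrives only every 5th step
imu = [1.1] * 20
gps = [i + 1.0 if i % 5 == 4 else None for i in range(20)]
trk = kalman_blend(imu, gps)
```

Dead reckoning alone would drift by the accumulated IMU bias; each GPS update removes most of the accumulated error, which is the essence of the blended navigation solution.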
This enables accurate determination of platform location and orientation at the instant each laser pulse leaves the aircraft. This information is then combined with the scan angle and round-trip travel time for each pulse to determine the georeferenced location of the sample points on the reflecting surface (Baltsavias, 1999; Wehr and Lohr, 1999). The result is a densely sampled, three-dimensional representation (point cloud) of the ground and land cover. The points can be interpolated to form a digital surface model or filtered to form a bare-earth DEM of the terrain, which is often the desired final product (Figure 2-2). In addition, modern discrete-return systems record multiple returns per transmitted pulse (including first and last). This multi-return capability is particularly useful for creating bare-earth DEMs over forested terrain because the last returns have a higher probability of reflecting from the true ground surface.

Coastal Monitoring

The revolution in coastal monitoring has been propelled both by topographic ALSM systems that operate in the near-infrared portion of the optical spectrum and bathymetric systems that operate in the blue-green range of the spectrum (e.g., Wozencraft and Lillycrop (2003); Carter et al. (2004)). The focus herein is on the application of small-footprint, discrete-return topographic lidar systems. Small-footprint systems enable beach and upland mapping with average spatial resolutions greater than 1 point per m², vertical accuracy (z) of 5 cm, and horizontal accuracy (x,y) of 15-20 cm (Slatton et al., 2007). However, the point density will vary locally depending on flight parameters, scan angle, beam divergence, and pulse repetition rate (Baltsavias, 1999). Over minimally vegetated surfaces, such as beaches, the high data density enables the generation of accurate, bare-earth DEMs at decimeter scale (Neuenschwander et al., 2000).
The resultant DEMs can be used to estimate erosion and deposition through DEM differencing, to determine shoreline position, and to extract many other measurements.

Figure 2-1. Illustration of a light aircraft collecting lidar over a beach. An onboard oscillating mirror distributes IR laser pulses, generating a sawtooth pattern. The round-trip travel time to and from the surface below for each pulse is recorded. GPS receivers onboard the aircraft and at a ground location are used to determine the instantaneous location of the aircraft. The orientation (roll, pitch, yaw) and accelerations of the sensor head (axs, ays, azs) are determined from an inertial measurement unit, which are then used along with the scanner angle and measured ranges to determine the coordinates of the surface points.

Figure 2-2. Example of unfiltered and filtered DEM over a section of beach in Bay County, FL generated from ALSM data acquired after hurricane Ivan in 2004. A) Unfiltered DEM, also called a digital surface model, and B) filtered DEM. The x axis is UTM Easting (m); the y axis is UTM Northing (m).

CHAPTER 3
STUDY AREA AND DATA PROCESSING

Study Area

As part of a coastal monitoring program, the University of Florida (UF) Geosensing and Engineering Mapping (GEM) Center acquired ALSM data along the St. Augustine Beach region of Florida seven times between August 2003 and February 2007. This resulted in six data epochs covered by sequential acquisitions (Table 3-1). St. Augustine Beach is located on the northeast coast of Florida in St. Johns County. A 10 km study area was investigated, as shown in Figure 3-1. This region was selected because it contains a highly erosive pier zone, which is armored with revetments and requires periodic beach nourishment to replace lost beach area. The surveys were conducted at approximately low tide using an Optech ALTM 1233 topographic lidar system operating at a laser pulse rate of 33 kHz and a near-infrared wavelength of 1064 nm.
First and last reflections were recorded for each return laser pulse. Most topographic lidar systems operate at near-infrared wavelengths and thus can only record elevations from the surface above the water because near-infrared wavelengths do not penetrate the water column. As a result, ALSM shoreline surveys are generally acquired during low tide to capture more exposed land surface and diminish the potential effects from wave run-up on shoreline delineation. Nominal values for all flights included a flying height of 600 m above ground level, flight speed of 60 m/s, scan rate of 28 Hz, and maximum scan angle from nadir of 20°. These parameters resulted in a mean ground point density of 1.3 points/m² and a swath width of approximately 440 m for each alongshore flight line.

Beach Nourishments

Effects from two nourishment projects are recorded in the data. The first nourishment was completed in early 2003, but the severe 2004 hurricane season forced an expedited second nourishment project in the summer of 2005 that completed in December 2005 (Albada et al., 2006). Both nourishments focused on the approximately 2-kilometer revetment section around the pier. Note that these regions are excluded from the analysis for data sets affected by the nourishment. For the initial nourishment and subsequent renourishment projects, the St. Augustine Inlet ebb shoal, located north of St. Augustine Beach, served as the borrow area. The nourishment median sediment size of 0.155 mm approximated the native median size of 0.14 mm (Albada et al., 2006). However, the borrow sand generally is poorly sorted relative to the natural sediment, and a fraction of the borrow material will be significantly smaller than the native sand, within the silt and clay range (< 0.0625 mm). When these smaller components are exposed to wave action, they are placed into suspension and lost to the littoral system, carried offshore.
Therefore, the smaller fraction of material may add to the volume of the nourishment project initially, but ultimately it will be ineffective in terms of contributing to the volume in the project area or adjacent beach (Dean, 2003). This appears to be a continual problem with the nourishment material used for St. Augustine Beach, based on the frequency of recurring projects and complaints from locals that the nourishment sediment does not approximate the native sediment size.

Coastal Climate

Tides for this area are semidiurnal with a mean tide range of ~1.4 m and a spring tide range of ~1.6 m. Wave energy is moderate, having a mean significant wave height of 1.2 m at an offshore depth of 18 m (USACE, 1998). During the winter months, the waves have a dominant direction from the northeast with higher wave heights and longer wave periods. These conditions are more conducive to transport of beach material and tend to result in erosion. During the summer months, waves approach mainly from the southeast, having relatively smaller wave heights and a combination of shorter and longer periods. These lower energy conditions are more conducive to sediment deposition and tend to result in shoreline accretion, except during periods of tropical storms. Winter storms are thought to have a greater impact on shoreline change than tropical storms in this region because of their greater frequency of occurrence and the longer duration they tend to have. Observations indicate that severe storms can temporarily disrupt or obscure the longer term erosion pattern, for up to a decade in some cases (Foster et al., 2000). In addition, onshore winds play a substantial role in beach modification for this region. The consistent onshore flow in this area pushes sand inland, resulting in landward dune migration.
Furthermore, civil infrastructure projects, such as jetty construction or inlet modification, can sometimes permanently alter sand supply and subsequently erosion patterns (Dean and Dalrymple, 2002).

Data Processing

The raw ALSM data consist of point clouds of irregularly spaced x,y,z values. Generally, only the last returns from each lidar shot are utilized when measuring surface change, to minimize the probability of patches of vegetation or other land cover biasing the surface elevations. Those points must then undergo a series of preprocessing steps before the shoreline is delineated and morphological features are extracted. The three main preprocessing steps are (1) remove absolute position biases between flights that constitute the time series, (2) project elevations to an appropriate vertical datum, and (3) filter out the lidar returns that are not from the ground surface.

Errors

Even with proper system calibration, a vertical bias in the ALSM observations, due mostly to GPS-induced trajectory errors, is sometimes present. These are handled by comparing the ALSM heights to calibration lines on hard surfaces in the study area, such as roads and runways, determined by ground-based GPS methods. Kinematic GPS profiles acquired on the ground can provide measurements of surface elevations with few-centimeter-scale accuracy (Shrestha et al., 2007). In this study, the offsets were removed by vertically shifting the laser data sets to match GPS profiles acquired along roads within the study area, under the assumption that the road surfaces did not change between the ALSM and ground surveys. For the ALSM system and flight parameters used in this work, along with proper calibration, experience shows that this correction generally yields vertical accuracies of 5-10 cm and horizontal accuracies of 15-20 cm relative to ground-survey measurements over slowly varying surfaces (Slatton et al., 2007).
For our seven lidar data sets, RMS height errors relative to the GPS control points fell within 5-8 cm after adjustment. Assuming a worst-case elevation accuracy of 8 cm deviation, we can thus expect our threshold in change detection to be roughly sqrt(8^2 + 8^2) ≈ 11.3 cm by propagating the error through the differencing of two elevations.

Vertical Datum

For coastal applications, it is desirable to have the elevations referenced to a vertical datum when determining shoreline. The ALSM points available for this study were output with respect to the NAD83 datum projected in Universal Transverse Mercator (UTM). Therefore, the heights are initially referenced to the ellipsoid that defines the NAD83 datum. The ellipsoid heights must be converted using a geoid model into orthometric heights to relate the measured elevations to localized mean sea level (MSL). This allows a tidal elevation contour to be extracted as the shoreline. In this study, the horizontal (x,y) coordinates were projected to UTM zone 17, and the initial ellipsoid heights were transformed with respect to the NAVD88 vertical datum using the GEOID03 geoid model (Meyer et al., 2004).

Point Filtering

Because ALSM observations consist of discrete returns from the ground and land cover, the raw data must be interpolated to represent the continuous ground surface. Prior to DEM generation, the point data are typically filtered to remove non-ground points due to such things as buildings, vegetation, and other occluding objects (Slatton et al., 2007). Many different filters have been proposed to remove non-ground points from lidar data, several of which are reviewed in Sithole and Vosselman (2004). In this work, the last-return points were filtered using the adaptive, multiscale algorithm in Kampa and Slatton (2004) to remove non-ground points without altering the natural beach morphology.
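The change-detection threshold quoted in the Errors section above is simply the root-sum-square of the two epochs' vertical errors. A one-line check in Python (the 8 cm worst case is the value assumed there):

```python
import math

def change_threshold(sigma_epoch1_cm, sigma_epoch2_cm):
    """Propagated uncertainty of an elevation difference between two DEMs,
    assuming independent per-epoch vertical errors (root-sum-square)."""
    return math.sqrt(sigma_epoch1_cm**2 + sigma_epoch2_cm**2)

threshold_cm = change_threshold(8.0, 8.0)  # worst case for both epochs
```

Elevation differences smaller than this propagated value cannot be confidently distinguished from measurement noise.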
There are also commercial software packages that can be used with varying degrees of success to remove non-ground points, a summary of which is given in Fernandez et al. (2007). Once the ground points were obtained through filtering and height biases between flights removed, the points were interpolated to 1 m × 1 m bare-earth elevation grids using ordinary kriging with a linear variogram model (Cressie, 1991). Kriging provides an optimal, in a mean square error sense, weighted linear combination of the observed data, where the weights are the unknown regression parameters. 1 m spacing can be achieved with high accuracy because the ALSM point density is > 1 point/m². A nugget effect of 0.07 m was added to capture measurement uncertainty for the lidar height error after adjustment. Figure 3-2 shows the filtered 1 m DEM for the June 2006 acquisition color-coded by elevation, along with a zoomed-in view near the pier showing the unfiltered result with buildings and vegetation and the filtered result with the majority of buildings and vegetation removed. In general, the filtering process will not eliminate all non-ground points. Vegetation that is both short and dense can be particularly difficult to remove completely since it greatly reduces the number of lidar returns from the ground. Fortunately, over beaches the non-ground objects that one typically must filter, such as lifeguard towers, people, and trash bins, are quite easily removed. The interpolated elevation values underneath the pier in Figure 3-2 are excluded from the surface analysis because it is not possible to obtain lidar returns from underneath a solid object like a pier. Figure 3-3 shows erosion and deposition estimates from DEM differencing.

Shoreline Delineation

There is no universally accepted method for delineating shorelines on beaches. Because of the dynamic nature of these systems, several proxies are used, depending on project needs and available data.
These proxies include physical indicators, such as a debris line or high water line, or reference contours, such as a tidal datum or zero elevation (Stockdon et al., 2002; Boak and Turner, 2005; Moore et al., 2006). Traditionally, the high water line (HWL) was used as a shoreline indicator as depicted on historical maps, and it is still commonly used today. The high water line is defined as the intersection of land with the water surface at high tide and is most often delineated by manual surveying or aerial photogrammetry. Additionally, many ALSM systems record the return pulse intensity, which can potentially be used to segment the high water line in lidar data (e.g., Starek et al. (2007b)). A problem with the HWL measurement is that it is a relative measurement dependent on the date and time surveyed, because the HWL varies as tide, waves, and beach slope change. The advantage of ALSM data for shoreline delineation is that the user can accurately select a georeferenced contour as an indicator for shoreline. Unlike the HWL, this creates a reliable common reference for comparing shoreline change from surveys conducted at different times (Stockdon et al., 2002; Robertson et al., 2004). Because the ALSM data were collected near low tide, the Mean Higher High Water (MHHW) tidal datum was selected as the proxy for shoreline in this study. MHHW is the average of the higher high water height of each tidal day observed over a 19-year tidal datum epoch. To obtain this contour, the offset between the laser heights referenced to NAVD88 and the MHHW line referenced to the nearest tidal gauge's zero benchmark must be determined (Figure 3-4). The nearest NOAA tidal gauge, station 8720587, located near St. Augustine Beach, was used for this work. At this location, tidal datum elevations were derived from the ~19-year tidal epoch 1983-2001, referenced to the station's zero level.
The relevant datum elevations at this station are:

  Mean Higher High Water (MHHW) = 2.485 m
  North American Vertical Datum of 1988 (NAVD88) = 1.870 m
  Mean Lower Low Water (MLLW) = 0.916 m

Because the ALSM elevations are referenced to NAVD88, the 1.870 m offset was subtracted from MHHW to determine the corresponding MHHW elevation relative to the lidar elevations (2.485 - 1.870 ≈ 0.6 m). Thus, the 0.6 m NAVD88 elevation line was used as the MHHW shoreline reference. It is important to mention that the tidal station selected was near enough to the 10 km study region that the difference in tidal levels between the station and study area is expected to vary minimally. Where there are significant fluctuations in tidal levels away from the reference gauge, a hydrodynamic model may be necessary to estimate the datum fluctuation. For several coastal regions of the US, the VDatum software developed by NOAA can be used to compute transformations between vertical datums and tidal datums (Woolard et al., 2003).

Auto-Shoreline Delineation

The most common approach for delineating the shoreline vector is to use geographic information system (GIS) software to contour the bare-earth DEM at the desired elevation. However, the results are often less than desirable, producing many non-shoreline contours that require tedious manual intervention to clean up and generate a contiguous shoreline vector. Although not as common, methods other than contouring have been proposed to delineate the shoreline from ALSM data. Stockdon et al. (2002) apply a moving least squares fit through the raw data points and then find the x,y coordinate of the specific elevation along the line. But this approach requires manual segmentation of the beach points and is computationally intensive because it uses a rectangular window search through the raw points. Liu et al. (2007) develop an automated method to contour the shoreline vector in a DEM using morphological image processing.
Image morphology operations can be used for many tasks, such as to connect disjoint regions within a binary image or break connected regions. Dilation and erosion are the two fundamental morphological operations. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels on object boundaries. Mathematically, dilation and erosion can be written using standard set operations:

  Dilation: A ⊕ B = ∪_{b∈B} (A)_b
  Erosion: A ⊖ B = ∩_{b∈B} (A)_{-b}

where ∪ is union, ∩ is intersection, A is the binary image, B is the structuring element, and (A)_b denotes A translated by b. The number of pixels added or removed from the objects in an image depends on the size and shape of the structuring element (i.e., sliding window) used to process the image (MathWorks, 2005). The structuring element is to mathematical morphology what the convolution kernel is to linear filter theory. Structuring elements can be rectangular, diamond shaped, circular, and other forms. Following the approach in Liu et al. (2007), an algorithm to auto-delineate the shoreline within a DEM using morphologic operations was developed. However, there are two important differences from Liu et al. (2007). First, the algorithm is simplified to delineate shorelines along contiguous stretches of beach only, rather than inlets, canals, and bay areas. Therefore, it does not require a complex region removal approach. Second, the algorithm applies a least squares fit to edge points along the land/sea boundary after morphology operations are applied to estimate shoreline position. This eliminates erroneous shoreline position estimates due to overgrowing of the region boundary, providing a control on image morphology errors. No such control methods are applied in Liu et al. (2007). The developed algorithm is outlined as follows:

1. Create a binary image to segment land from sea.
   a. Set all pixels in the DEM > shoreline datum to 1.
   b. Set all pixels in the DEM <= shoreline datum to 0.
2.
Apply an image dilation followed by erosion using a disk structuring element. This sequence of operations is referred to as image closing. It connects isolated pixel areas to form a contiguous boundary between land and sea.
   a. Specify the size of the disk structuring element.
      i. A 15 m radius with 8 directions is the default.
3. Remove isolated pixel groups below a certain threshold. After image closing is performed, small areas of connected pixels will form that are not part of the contiguous land boundary. These small pixel regions must be removed to ensure there is only one land and sea boundary.
   a. Specify the number of connected pixels to remove.
      i. Output the size of each connected pixel area to enable selection.
4. Sweep the binary image row by row, starting from ocean to land (right to left along St. Augustine Beach).
   a. When the first landward pixel is found for that row, find neighboring land points on the row up to an elevation greater than the shoreline datum.
      i. Use the adjacent land neighbors and first land point to fit a least squares line.
   b. Determine the x,y coordinate of the shoreline elevation along the line.
5. Smooth the horizontal coordinates of the shoreline vector using a locally weighted regression.
6. Output points.

Figure 3-5 compares an auto-delineated shoreline using the above algorithm to a contoured shoreline (after manual cleanup) using a commercial GIS software package. In this case, we consider the contoured elevation from the GIS package to be ground truth. The RMSE was generally less than a meter between the horizontal positions of the auto-delineated shoreline and the contoured shoreline after cleanup for each data set. The only manual intervention required for the auto-shoreline algorithm is in specifying the size of the disk structuring element and the number of connected pixels to remove. This is a major benefit compared to contouring within a GIS, which requires manual cleanup.
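Steps 1-3 of the algorithm (binary land/sea image, disk-based image closing, removal of small connected regions) map directly onto standard morphology routines. The sketch below uses Python with scipy.ndimage rather than the author's Matlab implementation, and runs on a synthetic DEM; the parameter values are illustrative, not the defaults listed above.

```python
import numpy as np
from scipy import ndimage

def land_sea_mask(dem, datum, disk_radius=15, min_region_px=100):
    """Steps 1-3 of the delineation: threshold, close, drop small regions."""
    land = dem > datum                                     # step 1: binary image
    disk = np.hypot(*np.mgrid[-disk_radius:disk_radius + 1,
                              -disk_radius:disk_radius + 1]) <= disk_radius
    closed = ndimage.binary_closing(land, structure=disk)  # step 2: dilation, then erosion
    labels, n = ndimage.label(closed)                      # step 3: remove small islands
    sizes = ndimage.sum(closed, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_region_px))
    return keep

# Synthetic 1 m DEM: land slopes up to the west, plus one isolated offshore
# speck above the datum that region removal should discard
dem = np.tile(np.linspace(-1.0, 3.0, 200), (100, 1))[:, ::-1]
dem[50, 190] = 1.0
mask = land_sea_mask(dem, datum=0.6, disk_radius=5, min_region_px=50)
```

The surviving mask has a single contiguous land/sea boundary, which the row-by-row sweep and least squares fit (steps 4-6) can then use to locate the shoreline datum crossing.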
The algorithm can also be used to delineate any contiguous elevation contour along the beach, such as the dune line (estimated at ~2.2 m elevation). To implement the algorithm, a program was developed in the Matlab scientific computing language. The required input is an ASCII DEM. Figure 3-6 shows an example of the program's graphical user interface (GUI) after the morphological operations for image closing and removal of isolated pixel regions are performed.

Ground Truth

The question arises, "How accurate is our shoreline delineation, and what data do we have to compare?" In the previous section, the shoreline contouring results were based on comparisons between two methods that delineated the vectors solely from the ALSM data. In scientific endeavors, it is always desirable to have a separate source of ground truth measurements for comparison. But, at the spatial resolutions (< 1 m) and scale (several km) we are measuring, no current method for mapping shoreline position and topography can surpass the accuracy and resolution obtainable with geodetic-quality ALSM. The caveat is no current method given practical considerations, as ground-based methods can potentially be more accurate but are not feasible at the resolution and scale ALSM can survey. In that regard, the ALSM measurements are the ground truth, given that proper calibration and vertical bias adjustment were performed. To gauge the uncertainty in ALSM shoreline positions, the mean cross-shore slope in the foreshore (tidal) zone along St. Augustine Beach was estimated to be approximately 8.3 m/m (horizontal distance per unit elevation). Assuming a vertical error of 0.07 m, this results in a horizontal displacement in shoreline position of approximately 0.58 m. This uncertainty in shoreline position is more than acceptable for coastal erosion monitoring, where the concern is with significant changes, generally several to tens of meters, due to the dynamic behavior of the shoreline.

Figure 3-1. St.
Augustine Beach 10 km study area. The star is the location of the pier.

Figure 3-2. Filtered and unfiltered DEMs color coded by elevation. A) 1 m filtered DEM of the study region from June 2006. The x, y axes are in UTM Zone 17 meters, and the dashed line indicates the general direction of the flights. B) Zoomed-in view of the pier vicinity before and after filtering. The double arrow on the right plot spans the beach from dune toe to shoreline (dashed line). The interpolated surface underneath the pier (indicated by the star) was excluded from the analysis since no lidar returns occur there. The filtering serves to remove lidar returns from non-surface objects on the beach, such as lifeguard towers, trash bins, and people. (Panel annotations: dune; buildings, trees, cars, fences, and utility poles; beach; revetment; unfiltered; filtered.)

Figure 3-3. Color-filled contour map of a roughly 2 km section of St. Augustine Beach created by differencing the August 2003 and March 2005 DEMs. The blue areas show locations where sand depletion occurred. The dark line running along the back edge of the beach marks the area where the foredune once existed, and displays the loss of more than 2.5 meters in surface height.

Figure 3-4. Example of a NOAA tidal gauge with MHHW and NAVD88 offset. (Annotations: MHHW; NAVD88; NAVD88 offset; zero level.)

Figure 3-5. Contoured shoreline in GIS compared to autodelineated shoreline. A) GIS contouring showing the need for manual cleanup. B) Autodelineated shoreline (green line).

Figure 3-6. Example of the GUI for the autoshoreline extraction algorithm showing results after image closing and pixel erase.

Table 3-1.
ALSM collection dates and data epochs covered by sequential acquisitions.

Time period   Dates                                  Length (years)
Epoch 1       August 16, 2003 to March 10, 2005      ~1.6
Epoch 2       March 10, 2005 to May 07, 2005         ~0.2
Epoch 3       May 07, 2005 to October 11, 2005       ~0.5
Epoch 4       October 11, 2005 to January 27, 2006   ~0.3
Epoch 5       January 27, 2006 to June 14, 2006      ~0.4
Epoch 6       June 14, 2006 to February 19, 2007     ~0.8

CHAPTER 4
DATA MINING

Profile Sampling

Because our goal is to determine which beach morphologies are more indicative of shoreline change, we must now construct a method to systematically extract several morphological features from the lidar data. The natural coordinate system traditionally used for studying shoreline change consists of a local 2D Cartesian system oriented with alongshore and cross-shore axes (Dean and Dalrymple, 2002). A computer program was developed in Matlab to sample the DEMs by extracting elevation values along cross-shore profile lines oriented roughly orthogonal to the shoreline (Starek et al., 2007a,b; Starek et al., 2008). This provides the x,y coordinates and elevation values along each profile at a user-defined spacing. To orient the profiles orthogonal to the shoreline, the algorithm fits a piecewise trend line through the shoreline contour to determine the angle and direction of a landward baseline from which to extract the cross-shore profiles. Baselines can be directly input to match historical survey lines if desired, but it is important that the same baselines are used when comparing profile measurements across data sets. Once baselines are selected, the algorithm can then extract profiles at various intervals (e.g., 1 m, 10 m, 100 m) along the shore and calculate the intersection of each profile with the shoreline contour and other surface features (such as the foredune).
For this analysis, the profiles were extracted every 5 m in the alongshore direction and extend in the cross-shore from the dune toe line to the MHHW shoreline (Figure 4-1). This provided a total of 2010 profiles for each data set. A 1D moving-average filter of length 5 m is applied to each cross-shore profile to reduce high-frequency artifacts that may sometimes be present due to lidar scan line artifacts or isolated non-ground points that the filter failed to remove.

Algorithm Details

The elevation of each point is calculated by taking a weighted average (based on inverse distance) of the elevations of the four surrounding grid nodes. For example, as shown in Figure 4-1, consider point D. The Euclidean distances from point D to each of its nearest-neighbor grid cells, points 1, 2, 3, and 4 in Figure 4-1, are estimated. Each of the four points is assigned a weight set to be the inverse of its distance from point D. The elevation for point D is then calculated by taking a weighted average of the elevations of the four points.

The traversal along the profile continues until the shoreline is encountered. Consider point R in Figure 4-1. For this point, the routine recognizes that point R falls within a grid cell containing the shoreline. However, the profile does not end at this point. The last point in the profile should be P in Figure 4-1, which lies on the shoreline and whose elevation is set to the shoreline reference elevation (0.6 m for this study). Point P can be estimated as the intersection of the cross-shore profile and the line segment joining the points S1 and S2, which are the nearest shoreline points above and below R. Let m1 be the slope and b1 the intercept of the profile line joining R and T, another point on the profile. Furthermore, let m2 be the slope and b2 the intercept of the line joining S1 and S2. Define the point of intersection, P, as (p, q).
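The inverse-distance weighting for a profile point can be sketched as follows. This is a Python sketch of the interpolation described above (the dissertation's implementation is in Matlab); the function name and input layout are our own.

```python
import math

def idw_elevation(px, py, grid_nodes):
    """Inverse-distance-weighted elevation at profile point (px, py).

    grid_nodes is a list of (x, y, z) tuples for the four DEM nodes
    surrounding the point, following the profile-sampling scheme described
    in the text.
    """
    num, den = 0.0, 0.0
    for x, y, z in grid_nodes:
        d = math.hypot(px - x, py - y)
        if d == 0.0:            # point coincides with a grid node
            return z
        w = 1.0 / d             # weight is the inverse of the distance
        num += w * z
        den += w
    return num / den

# Point centered among four unit-square nodes -> mean of the corner elevations
nodes = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 2.0), (1, 1, 2.0)]
print(idw_elevation(0.5, 0.5, nodes))  # 1.5
```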
Because (p, q) lies on both lines, it satisfies the equations of both. This leads to

q = m1*p + b1 = m2*p + b2

Solving for p and q gives

p = (b1 - b2) / (m2 - m1)  and  q = m1*(b1 - b2)/(m2 - m1) + b1

In some instances, the point R may occur past the shoreline. In those cases, R should not be part of the profile; rather, the profile should end with the point of intersection. In either case, the profile is terminated at the intersection of the profile and the shoreline vector.

Shoreline Change

The temporal change in shoreline position between the data acquisition periods listed in Table 3-1 was quantified using pairs of profiles by computing the distance between the shoreline position x,y coordinates (Figure 4-2). Profiles were then segmented into binary classes, C1 for erosion or C2 for accretion, depending on whether the shoreline had moved landward or seaward in that period. For instance, if a hypothetical Profile 1 in March 2005 experienced negative shoreline change for the March 2005 to May 2005 epoch, it was classified as an erosion profile. Features extracted at a given time can therefore be interpreted as leading indicators. For some epochs, only erosion or only accretion occurred. In those cases, class C1 still corresponded to the most negative shoreline change (high erosion or low accretion), and C2 corresponded to the most positive shoreline change (low erosion or high accretion). In such cases, the median value of shoreline change was used as the decision rule to segment the profiles into C1 or C2.

Feature Extraction

After the profiles are segmented into classes, the cross-shore morphologic features f_j from each profile can be extracted. As an example, we obtain f_m^1 | C1 for feature 1 and class 1, where m indexes the profiles. Results are accumulated over all features, profiles, and both classes for each epoch.
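The closed-form intersection above is straightforward to encode. A minimal Python sketch (function name is our own; the dissertation's routine is in Matlab):

```python
def profile_shoreline_intersection(m1, b1, m2, b2):
    """Intersection (p, q) of the cross-shore profile line (slope m1,
    intercept b1) with the line through the nearest shoreline points
    (slope m2, intercept b2), per the derivation in the text.
    """
    if m1 == m2:
        raise ValueError("profile and shoreline segment are parallel")
    p = (b1 - b2) / (m2 - m1)
    q = m1 * p + b1
    return p, q

# Profile y = x meets shoreline segment y = -x + 4 at (2, 2)
print(profile_shoreline_intersection(1.0, 0.0, -1.0, 4.0))  # (2.0, 2.0)
```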
This approach allows us to mine the data by extracting several different features per profile and then to examine interclass separation and class-conditional probabilities p(f_j | C_i) for i in {1, 2}, where j indexes the feature set. A total of ten features were chosen based on their historical usage as measures of beach change, or created as novel measures to better characterize beach morphology.

Beach slope (m/m) was estimated using a linear fit through the elevations in each profile and represents the subaerial beach slope from dune toe to MHHW shoreline. For more steeply sloping beaches, the slope is sometimes measured only from the berm line seaward rather than the dune line. Near-shoreline slope (m/m) measures beach steepness from the MHHW shoreline to a few meters landward. It was created to approximate the slope near the intertidal wave run-up zone. Beach width (m) is the width of the subaerial beach from dune toe to MHHW shoreline. It should be noted that this is the instantaneous beach width at the time of a given lidar acquisition and should not be confused with the subsequent change in beach width that determines which class a given profile belongs to (C1 or C2). Volume-per-width (m^3/m) is the subaerial volume per unit width above 0 m NAVD88, computed using trapezoidal integration. Mean elevation gradient (m/m) is the mean of the gradient estimated every 3 m along a profile using a central-difference Taylor approximation. Mean elevation curvature (m/m^2) is the mean value of the surface curvature estimated every 3 m along the profile using a central-difference Taylor approximation to the second derivative. Standard deviation of height (m) is the standard deviation of the elevation values for the profile, which provides an estimate of surface roughness for that profile.
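Several of the per-profile features defined above can be computed in a few lines. The sketch below assumes regularly spaced profile samples and uses NumPy's central differences rather than the 3 m stencil in the text; the function and key names are our own.

```python
import numpy as np

def profile_features(d, z):
    """Compute a few of the cross-shore features described in the text from
    one profile's distances d (m, dune toe -> shoreline) and elevations z (m).

    A sketch of the dissertation's feature set, not its Matlab code.
    """
    d = np.asarray(d, float)
    z = np.asarray(z, float)
    slope = np.polyfit(d, z, 1)[0]        # linear-fit beach slope (m/m)
    width = d[-1] - d[0]                  # instantaneous beach width (m)
    zc = np.clip(z, 0.0, None)            # subaerial part above 0 m NAVD88
    volume = float(np.sum((zc[1:] + zc[:-1]) * np.diff(d)) / 2.0)  # trapezoids
    grad = np.gradient(z, d)              # central-difference gradient
    curv = np.gradient(grad, d)           # second derivative (curvature)
    return {
        "slope": slope,
        "width": width,
        "volume_per_width": volume,
        "mean_gradient": grad.mean(),
        "mean_curvature": curv.mean(),
        "std_height": z.std(),
    }
```

For a planar profile, slope and mean gradient coincide and mean curvature is near zero, which is a useful sanity check.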
Deviation-from-trend (m) is the orthogonal distance the shoreline deviates from the natural trend of the beach in the horizontal over the scale of a few kilometers, and is a measure created to capture the deviation of the highly erosive pier region. The beach trend was estimated by a linear fit through the shoreline contour points, excluding the roughly two-kilometer armored section of St. Augustine Beach that includes the pier. Orientation is the angle in degrees that the local shoreline makes relative to azimuth (i.e., clockwise from north). The local shoreline orientation was estimated using piecewise linear fitting. Max gradient (m/m) is the maximum of the gradient measured every 3 m along the profile. It is used as a feature only for the logistic regression model developed in a later chapter.

The features presented are a subset of the many that could potentially be extracted from the ALSM data, which motivates the feature-ranking method presented in Chapter 6. Certain features, such as standard deviation of height, mean gradient, max gradient, and mean curvature, relate closely to the onshore character of beaches, such as dunes or scarps. As such, it was expected that their greatest utility would be for classifying the alongshore variability of beach profiles. For example, large values of mean gradient indicate narrow sections of the beach with a larger berm formation. In contrast, small values indicate wider regions with little to no berm formation. In general, mean curvature captures similar behavior to mean gradient, but the two will vary depending on the non-uniformity of the cross-shore profile. Differences between profile linearity and undulating (wave-like) behavior are expected to be better captured by mean curvature. In comparison, large relative magnitudes of max gradient indicate a berm or scarp formation regardless of whether that section of beach is narrow or wide.
As such, max gradient is an exceptional feature for automatic delineation of these types of formations within a DEM. Other features, such as beach slope, volume-per-width, and width, are commonly used measures to quantify beach dynamics and were expected to relate more directly to the sediment processes governing the region (Dean and Dalrymple, 2002). Beach orientation is also a feature that more directly influences sediment transport. Relative differences in breaking wave angle due to changes in orientation can create gradients in the longshore currents impacting the sediment transport. For example, differences in orientation within a beach nourishment fill can result in substantial differences in performance, including the potential development of erosion hot spots (Benedet et al., 2007).

Figure 4-3 shows the high alongshore variation in beach width, mean gradient, and mean curvature extracted from the August 2003 profiles. The DEM presented on the left provides an understanding of alongshore scale. Referring to the beach width plot, notice the rise, sharp decrease, and then subsequent increase in width over the range of 3000 to 5000 m alongshore. This is an effect of the nourishment project on either side of the pier earlier that year. The reaction of gradient and curvature in the vicinity of the pier is due to the short width imposed by the revetment wall and the large berm created by the beach nourishment. The short width results in fewer sample points, and the large berm results in high relative magnitudes for gradient and curvature. Because these measures are mean values, this combination of morphology results in larger absolute mean gradient and curvature values for the profiles in this region. In contrast, notice the high mean gradients in the range of 1000 to 2000 m alongshore, where the curvature does not react.
From the DEM, it was visually confirmed that this region has a large berm with short width, but has a higher slope and is less uniform in the cross-shore than the nourishment zone. Therefore, the non-uniformity and greater slope in this region are the likely cause of the disparity in behavior between the two metrics.

As explained previously, features are extracted from the values along a profile line as opposed to using a moving 2D window or simple data clustering. This approach was selected for consistency with the standard practices used in coastal erosion studies, which utilize cross-shore and alongshore measurements. The major difference is that lidar provides several orders of magnitude increase in profile sampling density compared to traditional manual surveying methods. Figure 4-4A presents shoreline change for the January 2006 to June 2006 epoch. Shoreline change was calculated based on the lidar DEMs for profiles extracted every 5 m and for profiles extracted every 300 m (~1000 ft), which is the standard spacing of manual transects for the State of Florida shoreline surveys (Foster et al., 2000). As shown in Figure 4-4A, general trends are captured at both survey resolutions, but there is significant loss of information on local features when using 300 m spacing. For example, at 5 m sampling the maximum measured erosion is 15 m; at 300 m sampling it is 5 m. The difference in mean erosion between the 5 m and 300 m sampling is 4 m. In addition, Figure 4-4B shows the residual autocorrelation, which exhibits strong spatial correlation in the residuals between 300 m sample points. This small-scale information is critical when assessing local hot spots of anomalous erosion, particularly near expensive development and civil infrastructure.

Figure 4-1. Profile sampling.
Intra-profile spacing was set at 1 m and inter-profile spacing at 5 m (not drawn to scale).

Figure 4-2. Computation of shoreline change for each profile between two data sets collected at time T1 and time T2.

Figure 4-3. Three features (non-smoothed) extracted from the August 2003 profiles; y axes are the relative alongshore distance (m). The DEM on the left provides an understanding of the alongshore scale relative to the study area, with x, y axes in UTM Zone 17 meters. The reduction in beach width near the alongshore distance of 4000 m coincides with the beginning of the revetment wall in the area of St. Augustine pier. (Panel axes: alongshore distance (m); beach width (m); mean gradient (m/m); mean curvature (m/m^2).)

Figure 4-4. Shoreline change plots between January 2006 and June 2006 showing the added information in 5 m alongshore sampling relative to profiles located at intervals of 300 m, as is common for manual surveys. A) (top) Shoreline change computed from lidar profiles every 5 m; (middle) the same data with profiles taken every 300 m to simulate a manual survey; (bottom) differences resulting from 5 m sampling versus linearly interpolated 300 m sampling. B) Residual autocorrelation plot exhibits non-i.i.d. behavior, showing strong spatial correlation and periodicity.

CHAPTER 5
SHORELINE DYNAMICS

Historical Overview

The 10 km study area is bordered on the north by the St. Augustine Inlet and extends southward past the St. Augustine Beach pier into Crescent Beach at the southern end. Historically, the pier region has been a zone of continual erosion, experiencing long-term erosion rates (Foster et al., 2000). North of the pier near the inlet, and south of the pier into the Crescent Beach area, the beach has experienced long-term accretion. The St.
Augustine Inlet originally had a large, dynamic, and complex shoal and channel system. It also had a southeasterly trending ebb channel, which, together with natural sand bypassing toward the south, formed a succession of moving, shoal-sheltered cape-like features along the adjacent southern shoreline. One of these features is now the heavily armored St. Augustine Beach region near the pier (Foster et al., 2000). After stabilization of the inlet with the building of the north and south jetties, the dynamics of the region changed, becoming increasingly exposed to wave energy and resulting in a strong retreat of the shoreline in this zone (Foster et al., 2000). Because of the resulting change in the surrounding shoreline and the revetments, this section of the island protrudes outward, away from the natural strike and trend of the beach, as shown in Figure 5-1. We parameterize this characteristic as the deviation-from-trend. Therefore, the natural tendency is for the beach to try to cut through this feature in an attempt to regain some form of quasi-equilibrium. As long as this natural process is resisted, this section of beach will undergo long-term erosion.

Shoreline Change Analysis

Figure 5-2 shows shoreline change plots quantified for each ALSM data epoch using 1 m profile sampling. Change statistics for each epoch are presented in Table 5-1. Referring to Figure 5-2, the massive erosion in Epoch 1, approximately two years after the 2003 nourishment, can be observed in the pier region at ~3500 to 5000 m alongshore. The large negative change is due to the spreading of the initial nourishment sediment, most likely enhanced by the severe 2004 hurricane season. The area was indirectly impacted by high wave energy generated by several hurricanes during the period, which was a direct cause of the massive erosion during the epoch.
The short period of Epoch 2 exhibits zones of both erosion and accretion, although near the pier at ~4000 m alongshore substantial erosion still occurred during the period. The second nourishment in 2005, to reclaim the lost beach, can be observed in the Epoch 3 plot, and the subsequent loss of sediment in the pier zone during and shortly after the nourishment is shown in the Epoch 4 plot. There is also a trend of slight shoreline retreat at the southern edge of the study region, > 7000 m alongshore. During Epoch 5, substantial accretion is observed, due to a combination of decreased wave activity and spreading of sand away from the nourishment zone. Substantial erosion in the pier zone is still observed. Epoch 6 covers only the northern portion of the study area because the February 2007 acquisition covered only this region. The area experienced accretion near the inlet and erosion in the pier vicinity (~4000 m alongshore).

Figure 5-3A compares the mean rate of change computed from 5 m profiles and 300 m profiles. The mean rate of change is computed by averaging shoreline change rates between epochs at each profile, excluding the nourishment epochs for profiles where sand was deposited. Figure 5-3A shows the quasi-periodic component in the mean rate of change alongshore and the general trend of erosion patterns. As expected, there are significantly higher erosion rates in the pier region, but unexpectedly, there is a high accretion rate on the southern edge of the region. The likely culprit is the spreading of nourishment sediment into this region. As shown in Figure 5-3A, the 300 m sampling typical of coastal monitoring in the state of Florida captures the general trend, but the resolution is too low to capture the quasi-periodic behavior observed in the 5 m profiles. Figure 5-3B shows the mean erosion rates every 5 m alongshore overlaid on a DEM of the study area.
Beach Nourishment

Effects from two beach nourishments are captured in the data (Figure 5-4A). To analyze change independent of the nourishments, the study region was divided into three zones, with the nourishment area segmented into its own region, labeled Zone 2, as shown in Figure 5-4B. The mean shoreline change was then computed for each epoch and zone (Table 5-2). As expected, the pier region, Zone 2, underwent substantial erosion compared to the other zones due to the loss of nourishment sediment. There are also significant differences in the mean values between the north and south segments of the beach, although we would expect those zones to act similarly on average due to similar wave climate and morphology. These differences are most likely not due to random effects, but rather are influenced by several factors, including subtle variations in morphology and bathymetry, and proximity to the inlet and nourishment zone relative to the mean direction of longshore transport. It is important to mention that in the feature analysis and classification methods discussed in Chapters 6-8, the ~2 km nourishment zone is excluded for epochs in which sand was artificially deposited.

Calibration of a Spreading Model

So-called beach fill models are used by coastal engineers to simulate the spatial spreading over time of sand deposited during beach nourishment. Here, we use the ALSM data to calibrate one such model, the point fill, for the 2003 nourishment of St. Augustine Beach. The point fill approach conceptualizes the nourishment as a hypothetical fill where all initial sand is deposited at a single point. This can be modeled mathematically using a Dirac delta function and solved in relation to the one-dimensional diffusion equation.
The analytical solution, obtained via Fourier transforms, yields a Gaussian model based on the longshore diffusivity (Dean and Dalrymple, 2002):

y(x, t) = [ M / sqrt(4*pi*G*t) ] * exp( -x^2 / (4*G*t) )     (5-1)

where M is the initial planform area (m^2); G is the longshore diffusivity parameter (m^2/s), with values of 0.002 < G < 0.014 being typical for the east coast of Florida; t is time in seconds, set initially to an arbitrary value to fit the curve; x is alongshore distance (m); and y is shoreline displacement (m). This type of model is referred to as a one-line planform model because it describes the time history of the shoreline position relative to an alongshore baseline (Dean and Dalrymple, 2002). Furthermore, Equation 5-1 disperses, rather than erodes, the sediment over time. Although the model is not likely to be representative of a real beach fill, for practical purposes it is convenient because it requires no time-series wave data or bathymetry and can be extended to approximate more realistic beach fills. It requires only initial values for the diffusivity G and planform area M, where M can be estimated from the lidar data.

To calibrate the model, the August 2003 ALSM data set, acquired shortly after completion of the 2003 beach nourishment, was used. A baseline is oriented parallel to the shoreline and extends the entire region of nourishment. This baseline forms our alongshore basis x, where x = 0 is the location of maximum shoreline displacement (Figure 5-5A). The initial shoreline displacement, y, is then calculated as the orthogonal distance from this baseline to the August 2003 shoreline. To fit a Gaussian to the data using Equation 5-1, the model is log-transformed and put in linear form. A least-squares approximation can then be used to estimate the unknown parameters.
This provides a best-fit Gaussian model of the initial shoreline displacement as follows.

Log-transform to linearize:

ln(y) = ln( M / sqrt(4*pi*G*t) ) - x^2 / (4*G*t)

Let C = M / sqrt(4*pi*G*t) and B = 1 / (4*G*t).

At x = 0, C is the maximum displacement of the shoreline:

y(0, t) = [ M / sqrt(4*pi*G*t) ] * exp(0) = C

Now rewrite in least-squares form, because C is known:

ln(y) = ln(C) - B*x^2 + V     (5-2)

where V is the residual. B can then be determined from Equation 5-2 using least squares, and M can be back-calculated to obtain the planform area. Figure 5-5B shows the fitted Gaussian to the August 2003 shoreline using M estimated from the data. As shown in Figure 5-5B, the Gaussian model is a reasonable approximation of the initial shape of the fill.

To estimate the diffusivity parameter G, we could refit the model using the March 2005 or May 2005 data and then use the change in y and the elapsed time to solve for G; however, our goal is to simulate and compare to the measured change. Therefore, G was set to 0.010 m^2/s based on empirical values for the region from Dean (2003). Figure 5-5B shows a simulation run for t = 571 days (the time between the August 2003 and March 2005 surveys) to compare with the actual shoreline displacement measured in March 2005. Notice the smoothness and flattening of the Gaussian fill over time relative to the measured change in Figure 5-5B. Recall that this model is based on the diffusion relationship; therefore, all sediment is evenly spread away (alongshore) from the peak over time with no actual loss, as the total planform area must remain constant. Although the simulation results differ significantly in terms of spreading and smoothness relative to the measured change, the model is reasonable at capturing the general behavior around the focus of deposition at x = 0. This ideal case is intended to provide an example of an analytic process model calibrated using ALSM data.
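The log-linearized calibration above can be sketched in a few lines. This is a Python sketch of the procedure (the dissertation's fit was done in Matlab against measured shoreline displacements); the function names are our own, and the synthetic displacement profile at the end exists only to show that the fit recovers known parameters.

```python
import numpy as np

def point_fill(x, t, M, G):
    """Evaluate Equation 5-1: y(x,t) = M/sqrt(4*pi*G*t) * exp(-x^2/(4*G*t))."""
    return M / np.sqrt(4.0 * np.pi * G * t) * np.exp(-x**2 / (4.0 * G * t))

def fit_point_fill(x, y):
    """Fit the log-linearized model ln(y) = ln(C) - B*x^2 by least squares.

    x: alongshore distance (m); y: shoreline displacement (m), positive.
    Returns (C, B), where C = M/sqrt(4*pi*G*t) and B = 1/(4*G*t).
    """
    A = np.column_stack([np.ones_like(x), -x**2])   # columns for ln(C) and B
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return float(np.exp(coef[0])), float(coef[1])

# Synthetic check: sample the model, refit, and back-calculate M = C*sqrt(pi/B).
G, t = 0.010, 571 * 86400.0            # m^2/s; 571 days in seconds
x = np.linspace(-2000.0, 2000.0, 201)
y0 = point_fill(x, t, M=5.0e4, G=G)
C, B = fit_point_fill(x, y0)
M_est = C * np.sqrt(np.pi / B)         # since sqrt(4*pi*G*t) = sqrt(pi/B)
print(round(M_est))  # 50000
```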
In general, a fill evolution model of this nature is unrealistic for most applications, but its simplicity does prove useful in situations where the nourishment can be modeled as a point source. The benefit of the high-resolution lidar data is that the parameters can be estimated directly using a regression fit with high degrees of freedom.

Figure 5-1. Zoomed-in view near the pier showing the beach's natural trend and the seaward protrusion (i.e., the deviation-from-trend) of the pier region; x, y axes in UTM Zone 17 meters.

Figure 5-2. Shoreline change for each epoch using 1 m profile spacing. The pier region of continual erosion and episodic nourishment is located in the range of ~3500 to 5000 m alongshore. The ALSM data also reveal significant small-scale variability alongshore.

Figure 5-3. Mean rate of change plots. A) Mean rate of change computed from 5 m and 300 m profile sampling. Notice the quasi-periodic behavior captured by the 5 m sampling; the 300 m sampling common with manual surveying captures only the general trend. B) Color-coded rate of change plot from 5 m profiles showing the high rates of erosion in the pier vicinity; x, y axes in UTM Zone 17 meters.

Figure 5-4. Zoomed-in view of the nourishment region. A) Post-nourishment beach observed in the August 2003 DEM; massive loss of shoreline less than two years later in the March 2005 DEM (middle); and the regained shoreline from the 2005 nourishment observed in the October 2005 DEM. The black line is the shoreline. B) Segmentation of the study area into zones to isolate the nourishment region (Zone 2); y axis in UTM Zone 17 meters.

Figure 5-5. Results of the beach fill simulation. A) Top view of the August 2003 DEM in the region of nourishment showing the initial shoreline position.
The alongshore distance (x) and orthogonal shoreline displacement (y) are measured relative to the baseline (dashed line). Left and right axes are in UTM Zone 17 meters. B) The August 2003 shoreline displacement and initial (fitted) Gaussian, and the March 2005 displacement and simulated displacement after t = 571 days (the time from the August 2003 to the March 2005 survey).

Table 5-1. Shoreline change statistics for the 10 km study area.

Time period                              Mean (m)  Min (m)  Max (m)  Std dev (+/- m)
Epoch 1 (August 2003 to March 2005)      60.96     135.01   10.40    26.49
Epoch 2 (March 2005 to May 2005)         0.65      17.60    12.00    5.72
Epoch 3 (May 2005 to October 2005)       33.53     5.40     155.00   42.49
Epoch 4 (October 2005 to January 2006)   0.63      54.60    47.19    17.40
Epoch 5 (January 2006 to June 2006)      13.99     15.60    35.00    10.95
Epoch 6 (June 2006 to February 2007)     4.99      39.20    23.20    16.20

Table 5-2. Mean shoreline change by zone (m).

Time period  Zone 1  Zone 2  Zone 3
Epoch 1      49.37   98.60   53.32
Epoch 2      3.16    1.36    1.59
Epoch 3      24.64   105.58  8.70
Epoch 4      4.75    15.67   4.64
Epoch 5      13.06   0.38    21.02

CHAPTER 6
FEATURE ANALYSIS

As explained in Chapter 4, several cross-shore features were extracted from each profile and then partitioned into an erosion- or accretion-tending class, C_i, depending on the shoreline change experienced during the epoch following a particular ALSM acquisition. Our objective is to characterize the influence of morphology on the observed shoreline change classes, with the long-term objective of incorporating such information to predict erosion patterns with high spatial resolution. Therefore, in this work our focus was on developing a method to extract, identify, and rank lidar-measurable features that exhibit good potential for segmenting erosion and accretion areas.
A robust and systematic approach is needed because the full effects of anthropogenic and storm-induced coastline change are difficult to predict, particularly at the fine temporal and spatial scales at which ALSM can survey. As a result, one cannot rely solely on historical analytical models of the physical processes to intuit which lidar-measurable features will be most important for a given application. To measure a feature's importance, we estimate class-conditional pdfs (likelihoods) for each feature-class pair, p(f_j | C_i), and then rank interclass separation using pdf divergence measures (Starek et al., 2007a,b). Generally, the more separation between class likelihoods for a given feature f_j, the more likely it is that classification will be successful. Consider a two-class decision problem with likelihoods p(f_j | C_1) and p(f_j | C_2). Classification using f_j is likely to be better than for other features if f_j yields the largest Euclidean distance between the class means. The relative performance will be even better if f_j yields the largest Mahalanobis distance, which is the Euclidean distance normalized by the standard deviations (Duda et al., 2001).

Entropy is an information-theoretic measure of uncertainty for a random variable. It is conceptually similar to standard deviation, but encapsulates all the information in the pdf rather than just the second-order statistical moment. The simplest divergence measure, the well-known Kullback-Leibler divergence D_KL(P(x), Q(x)), quantifies the relative entropy between two pdfs P(x) and Q(x) over the same random variable (Cover and Thomas, 2006). A large value of the relative entropy implies that a feature provides good separability between classes when we let P(x) = p(f_j | C_1) and Q(x) = p(f_j | C_2), as seen in Figure 6-1. In our case, features yielding greater divergence are more separable between the binary erosion and accretion classes and, in a sense, more indicative.
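The divergence-based ranking above reduces to evaluating D_KL between the two estimated class-conditional densities on a common grid of feature values. A minimal sketch (function name and the epsilon guard are our own; in practice p and q would come from the Parzen estimates described below):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(P || Q) between two discretized pdfs
    evaluated on the same grid of feature values.

    p and q play the roles of p(f_j|C1) and p(f_j|C2); eps guards log(0).
    """
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()                 # normalize to valid probability masses
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Identical class pdfs diverge by zero; a shifted pdf diverges by more,
# signaling a more separable (more indicative) feature.
grid = np.linspace(-5.0, 5.0, 201)
p = np.exp(-0.5 * grid**2)
q = np.exp(-0.5 * (grid - 1.0)**2)
print(kl_divergence(p, p))  # 0.0
```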
Because divergences are based on functional differences between pdfs, they serve as highly robust measures of class separability even when geometric measures of separation, such as the Euclidean or Mahalanobis distance, fail due to similarity in the means. While it is possible that feature transformation methods, such as principal components (e.g., Draper et al. (2003)), or methods that project the original features into a higher-dimensional space, like support vector machines (e.g., Duda et al. (2001); Hastie et al. (2001)), could lead to features that produce even greater class separation, such methods were not used here because our intent was to assess the value of the features relative to each other and retain clear physical meaning for greater benefit to coastal researchers.

Used in this way, divergence serves as an elegant tool for feature selection with many benefits. It remains a robust measure of the difference between two pdfs even when geometric measures of separability fail due to similarity in the means (Figure 6-1). This is particularly useful in cases of non-Gaussian pdfs that have similar support but complex shapes, as is the case here. By estimating the pdfs nonparametrically from the data, assumptions about the distribution can be avoided. However, with the added rigor comes the added cost of computational complexity, which can be significantly greater than for geometric measures as the feature space and amount of data grow.

Density Estimation

Figure 6-2 shows the class-conditional histograms and 1D separation plot for three features from March 2005. As shown in the plot, the data in feature space can be multimodal and non-Gaussian, as is often the case with ALSM data. Therefore, the nonparametric Parzen windowing method was employed to estimate the pdfs from the data.
The Parzen window method estimates the probability density p(x) within a d-dimensional hypercube as

\hat{p}(x) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{h^d} K_h\!\left(\frac{x - x_i}{h}\right)

where K_h is the window function or kernel, h controls the length of the d-dimensional kernel window (commonly referred to as the kernel bandwidth), N is the number of points in the data set, x_i is the vector of current feature sample values, and x are the points at which the pdf is to be estimated along each feature dimension (Duda et al., 2001). We use d = 1 for this analysis because we estimate univariate pdfs for each extracted feature and class.

The kernel function K_h is itself a pdf, making it easy to guarantee that the estimate \hat{p}(x) satisfies the properties of a pdf. The simplest kernel is a rectangular window that simply takes on a value of 1 if a data point x_i falls within the current window position or 0 if not. Such discontinuous windows, however, can lead to overfitting of the data, yielding pdf estimates that change erratically over small changes in the random variable. For this analysis, K_h was chosen to be a continuous 1D Gaussian kernel because the bias in the Parzen estimate can be asymptotically reduced to zero by selecting a unimodal symmetric kernel function, such as the Gaussian (Erdogmus et al., 2004). By reducing the kernel size monotonically with increasing number of samples, the kernel will approach a Dirac delta function, which is desired as N approaches infinity. Furthermore, the Gaussian kernel provides infinite support and nice analytic properties. From the Gaussian function, we can write K_h as

K_h(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}, \quad u = \frac{x - x_i}{h}

The Parzen density estimate at a point x can then be written as

\hat{p}(x) = \frac{1}{Nh\sqrt{2\pi}} \sum_{i=1}^{N} e^{-u^2/2}    (6-1)

where h in this case is the familiar standard deviation of the normal distribution.

Bandwidth Selection

Crucial to Parzen estimation is the selection of the window length, or bandwidth, h.
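The univariate Gaussian-kernel estimate of Equation 6-1 can be sketched in a few lines (a minimal NumPy illustration; the variable names are ours, not the dissertation's):

```python
import numpy as np

def parzen_pdf(x, samples, h):
    """Univariate Parzen estimate with a Gaussian kernel (Equation 6-1).

    x may be a scalar or an array of evaluation points."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    samples = np.asarray(samples, dtype=float)
    u = (x[:, None] - samples[None, :]) / h          # shape (n_eval, N)
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))
```

Evaluating the estimate on a dense grid and integrating numerically returns approximately 1, confirming that the estimate is itself a valid pdf.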
The choice of the kernel function, K_h, is of essentially negligible concern compared to the choice of h. The size of the window in feature space can strongly affect the accuracy of the estimated pdfs. The window should be small enough to capture the details of a pdf but not so small as to create artifacts due to a limited number of samples in the window. Windows that are too large yield pdf estimates that are overly smooth, providing low variance but high bias, and windows that are too small overfit the samples, yielding pdf estimates that are spiky in appearance, providing low bias but high variance. This is the classic tradeoff between estimation bias and variance. An example of the effect of window size on estimating a pdf for beach slope is shown in Figure 6-3.

Choosing the kernel width a priori is ill-advised, as its optimal value (i.e., the value that minimizes the measure of dissimilarity between the estimated pdf and the true pdf) strongly depends on the type of data, their number, and the amount of noise by which they are corrupted (Archambeau et al., 2006). Unfortunately, there is no one universal method for determining the optimal size. In the general case, one can start with a large bandwidth and progressively decrease it to allow more resolution in the pdf estimates. Although such an approach can be tedious, it provides an empirical determination of the bias vs. variance tradeoff for a given application. The final window size can be selected as that which is just large enough to avoid pdf estimates that exhibit sharp local gaps where the probability is close to zero. For this work, it was empirically determined using this procedure that a window size of 1/40th the range of values for each feature provided an adequate compromise between smoothing and overfitting.
The downside of empirical bandwidth selection, beyond simply being tedious, is that it is a highly subjective process, not based on any global error criterion or optimization of a score function. Therefore, automatic bandwidth selection methods were explored to develop an adaptive approach for determining the kernel bandwidth, h, directly from the sample data. There are two broad categories for automatic selection of the bandwidth (Loader, 1999): plug-in methods and classical methods.

Plug-in Methods

Plug-in based methods target the bias component of the mean integrated squared error (MISE). MISE is widely used as a criterion for bandwidth selection. It is the expected value of the integrated squared error (ISE) (i.e., the L2 norm):

ISE(h) = \int [\hat{p}(x) - p(x)]^2 \, dx    (6-2)

MISE(\hat{p}(x)) = E[ISE(h)]

MISE can be decomposed into a bias and a variance component as follows:

MISE(\hat{p}(x)) = \int \{E[\hat{p}(x)] - p(x)\}^2 \, dx + \int \mathrm{var}[\hat{p}(x)] \, dx

Plug-in approaches write the bias of MISE as a function of the unknown p(x), and usually approximate it through Taylor series expansions. A pilot density for p(x) is then "plugged in" to derive an estimate of the bias and hence an estimate of the mean integrated squared error. The "optimal" h minimizes this estimated measure of fit (Loader, 1999). The most notable plug-in approach is Silverman's plug-in principle, which plugs in a Gaussian distribution to approximate p(x), leading to

h = 1.06 \, \hat{\sigma}_x \, N^{-1/5}

where \hat{\sigma}_x is the empirical standard deviation and N is the number of points (Erdogmus et al., 2004). Although it is easy to implement, results using Silverman's bandwidth are often less than desirable, particularly when the true distribution is highly non-Gaussian, as is the case here. The same can be said for plug-in bandwidths in general when the underlying distribution is not well known.
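Silverman's rule is simple enough to state as a one-line sketch, which also makes clear why it is only a baseline: it depends on nothing but the sample standard deviation and size.

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's Gaussian plug-in rule: h = 1.06 * sigma_hat * N^(-1/5)."""
    samples = np.asarray(samples, dtype=float)
    return 1.06 * samples.std(ddof=1) * len(samples) ** (-1.0 / 5.0)
```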
Loader (1999) concludes through detailed analysis that plug-in approaches to bandwidth selection fare poorly due to their arbitrary specification of pilot bandwidths and the fact that they are biased when this specification is wrong. Therefore, plug-in based methods were not investigated as a viable alternative to empirical bandwidth selection.

Classical Methods

Classical methods are more or less natural extensions of the familiar parametric optimization methods of least squares, maximum likelihood, and other approaches. Classical methods determine the bandwidth directly from the data by applying a numerical optimization approach to minimize or maximize a cost function. In this regard, classical methods are truly adaptive, providing a measure of best bandwidth in an optimal sense based on the data. Such approaches are preferred in our case because we are working with highly non-Gaussian data and want to avoid any parametric assumptions. Of the various proposed methods, least squares cross-validation (LSCV) is arguably the preferred approach because it is asymptotically correct under very weak smoothness assumptions (Marron, 1988; Zychaluk and Patil, 2008). This provides a type of robustness that we desire. As such, LSCV was selected as a method to determine the optimal bandwidth. For comparison, a maximum likelihood (ML) approach to bandwidth selection was implemented. It can be shown that the ML approach is optimal in an information-theoretic sense. Both approaches rely heavily on cross-validation, and specifically leave-one-out cross-validation. Such jackknife approaches provide an estimate of the prediction error solely from the data, yielding effective density estimates in a variety of conditions (Marron, 1988).

Least squares cross-validation

Least squares cross-validation (LSCV) was independently proposed by Rudemo (1982) and Bowman (1984). It is based on minimization of the integrated squared error (ISE) from Equation 6-2 with respect to the bandwidth, h.
ISE can be rewritten as follows:

ISE(h) = \int \hat{p}(x)^2 \, dx - 2E[\hat{p}(x)] + \int p(x)^2 \, dx    (6-3)

The first term can be computed using \hat{p}(x) from Equation 6-1, the Parzen estimate. The integral involves a convolution of a Gaussian kernel with a Gaussian kernel, which is itself a Gaussian. Through elementary operations, and using summations instead of integrals, the first term can be rewritten as

\int \hat{p}(x)^2 \, dx = \frac{1}{N^2 h} \sum_{i=1}^{N} \sum_{j=1}^{N} K_{\sqrt{2}h}\!\left(\frac{x_i - x_j}{h}\right)    (6-4)

where K_{\sqrt{2}h} denotes the Gaussian kernel with standard deviation \sqrt{2} (the convolution of K_h with itself) and x are the sample data. The last term in Equation 6-3 can be ignored as far as minimization is concerned because it does not depend on h. This leaves only the second term. Following Wang and Wang (2008), the second term can be approximated by applying a leave-one-out cross-validation estimate (leaving the ith data point out):

\hat{p}_{-i}(x_i) = \frac{1}{(N-1)h} \sum_{j=1, j \ne i}^{N} K_h\!\left(\frac{x_i - x_j}{h}\right)    (6-5)

Substituting Equation 6-5 and Equation 6-4 into 6-3, we get the following form of the LSCV cost function:

J_{LSCV}(h) = \frac{1}{N^2 h} \sum_{i=1}^{N} \sum_{j=1}^{N} K_{\sqrt{2}h}\!\left(\frac{x_i - x_j}{h}\right) - \frac{2}{N(N-1)h} \sum_{i=1}^{N} \sum_{j=1, j \ne i}^{N} K_h\!\left(\frac{x_i - x_j}{h}\right)    (6-6)

By scanning a certain range of h, the optimal bandwidth can be selected:

h_{LSCV} = \arg\min_h J_{LSCV}(h)

A grid-search approach is necessary because the LSCV cost function (Equation 6-6) has a tendency toward several local minima, often with spurious ones far over on the side of undersmoothing (Marron, 1988). This is a major drawback of the LSCV approach because it can become extremely computationally intensive to search h within a large range at very small step sizes as N grows. Nonetheless, the robustness of the LSCV estimate, particularly to distributions with tails, is a desirable property.

Maximum likelihood

The maximum likelihood (ML) approach to bandwidth selection was initially proposed by Duin (1976). The goal is to maximize the log-likelihood of the observed data with respect to h given the Parzen density estimator, \hat{p}(x). This is analogous to maximizing the expected value of the log-likelihood, E_x[\log \hat{p}(x)].
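The LSCV cost of Equation 6-6 and its grid search can be sketched directly (an O(N^2)-per-evaluation illustration under our own naming, not the dissertation's implementation):

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def lscv_score(h, x):
    """LSCV cost of Equation 6-6 for a Gaussian kernel; lower is better."""
    n = len(x)
    d = x[:, None] - x[None, :]                       # pairwise differences
    # First term: integral of the squared Parzen estimate; the Gaussian-Gaussian
    # convolution is a Gaussian with twice the variance.
    term1 = np.exp(-d ** 2 / (4.0 * h ** 2)).sum() / (n ** 2 * h * 2.0 * np.sqrt(np.pi))
    # Second term: leave-one-out estimate (Equation 6-5) averaged over the sample
    k = np.exp(-d ** 2 / (2.0 * h ** 2)) / SQRT_2PI
    np.fill_diagonal(k, 0.0)                          # leave the i-th point out
    term2 = 2.0 * k.sum() / (n * (n - 1) * h)
    return float(term1 - term2)

def lscv_bandwidth(x, grid):
    """Grid search for the bandwidth minimizing the LSCV score."""
    scores = [lscv_score(h, x) for h in grid]
    return float(grid[int(np.argmin(scores))])
```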
The expectation is approximated by the sample mean, resulting in

J_{ML}(h) = \frac{1}{N} \sum_{i=1}^{N} \log(\hat{p}(x_i))

Substituting in the Parzen density estimate of Equation 6-1, this becomes

J_{ML}(h) = \frac{1}{N} \sum_{i=1}^{N} \log\!\left(\frac{1}{Nh\sqrt{2\pi}} \sum_{j=1}^{N} e^{-(x_i - x_j)^2 / 2h^2}\right)

However, this criterion exhibits an undesirable global maximum at the null kernel size: as h approaches zero, the kernel approaches a Dirac delta function and the criterion approaches infinity. To avoid this behavior, the criterion is modified by replacing \hat{p}(x) with \hat{p}_{-i}(x), the leave-one-out cross-validation estimate of Equation 6-5. This yields

J_{ML}(h) = \frac{1}{N} \sum_{i=1}^{N} \log\!\left(\frac{1}{(N-1)h\sqrt{2\pi}} \sum_{j=1, j \ne i}^{N} e^{-(x_i - x_j)^2 / 2h^2}\right)    (6-7)

The optimal bandwidth based on the maximum likelihood criterion is then

h_{ML} = \arg\max_h J_{ML}(h)

A major benefit of the ML approach compared to LSCV in terms of computational cost is that Equation 6-7 is concave and can be maximized directly with respect to h through the derivative. Because the first derivative is nonlinear in h, a stepwise maximization approach must be used. Erdogmus et al. (2004) apply gradient descent to Equation 6-7 for ML bandwidth selection. But, depending on the step size, gradient descent can oscillate, resulting in very slow convergence. To avoid dependence on a step size, a Newton-Raphson optimization scheme for Equation 6-7 was developed in this research to find the optimal bandwidth. The derivation is detailed in Appendix A.

Interestingly, maximizing the log-likelihood is equivalent to choosing the bandwidth that minimizes the Kullback-Leibler divergence between the unknown true density p(x) and the Parzen estimate \hat{p}(x). This can easily be shown as follows:

D_{kl}(p(x), \hat{p}(x)) = \sum_{i=1}^{N} p(x_i) \log \frac{p(x_i)}{\hat{p}(x_i)}
= \sum_{i=1}^{N} p(x_i) \log p(x_i) - \sum_{i=1}^{N} p(x_i) \log \hat{p}(x_i)
= E_x[\log p(x)] - E_x[\log \hat{p}(x)]

The first term can be ignored as far as minimization is concerned because it does not involve h. This leaves only the second term, -E_x[\log \hat{p}(x)], which is the negative of the expected value of the log-likelihood.
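The leave-one-out ML score of Equation 6-7 can be sketched similarly; for brevity this illustration maximizes over a coarse grid rather than implementing the Newton-Raphson scheme of Appendix A:

```python
import numpy as np

def ml_loo_score(h, x):
    """Leave-one-out log-likelihood of the Parzen estimate (Equation 6-7)."""
    n = len(x)
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2.0 * h ** 2))
    np.fill_diagonal(k, 0.0)                     # leave the i-th point out
    loo = k.sum(axis=1) / ((n - 1) * h * np.sqrt(2.0 * np.pi))
    return float(np.mean(np.log(loo)))

def ml_bandwidth(x, grid):
    """Grid maximization stand-in for the Newton-Raphson scheme."""
    scores = [ml_loo_score(h, x) for h in grid]
    return float(grid[int(np.argmax(scores))])
```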
Therefore, minimizing D_{kl} is equivalent to maximizing E_x[\log \hat{p}(x)], the ML criterion. This verifies that the ML approach turns out to be optimal in an information-theoretic sense as well. This is a very attractive property for our research because we use information-theoretic divergence measures to assess feature separation from the estimated pdfs. To clarify, by optimal we are referring to the bandwidth h_{ML} as being a global optimum with respect to the ML score function, the given data, and the specified Gaussian kernel. However, the actual Parzen density estimate based on h_{ML} is a local maximum in the sense that we do not know the functional form of the true density, and to optimize over all possible functionals to obtain a global maximum is not possible. The downside of the ML approach is that the bandwidth estimate can be severely affected by the tail behavior of the true density p(x) (Marron, 1988).

Implementation for data with ties

Early in the testing of both the LSCV and ML methods for estimating feature pdfs, the methods were continually converging toward a null bandwidth, h = 0. After investigation, it became apparent that the problem stemmed from the leave-one-out cross-validation estimate and the use of data with ties. Zychaluk and Patil (2008) explain that the good asymptotic and finite-sample performance of the cross-validation estimate can only be guaranteed for a continuous sample, and the method fails when the data contain ties. Basically, the leave-one-out estimate requires that when the ith point is left out, there are no other values equal to that value. If this assumption is violated, both the LSCV and ML methods will select an optimal bandwidth equal to zero. Ties in the sample data occur when the random data are discretized to save storage space or speed computation. They can also be generated naturally. In our case, the features were extracted from profile lines and then discretized to save storage in an ASCII file.
The features were stored at two nonzero decimal places. Depending on the feature, this resulted in a range from two decimal places to five decimal places. For example, beach width is measured in meters and was discretized to two decimal places. Thus, it is measured at centimeter-level precision, which is the measurement level of the ALSM system. Because of this discretization approach, the feature samples have several ties in the data.

To enable optimal bandwidth selection using the LSCV and ML cross-validation approaches, the methods had to be modified to account for sample data with ties. Following the approach in Zychaluk and Patil (2008), the feature data were corrupted with random noise prior to automatic bandwidth selection. The corrupting noise must be uniformly distributed and added in such a manner as to fill in the gaps between the samples, to ensure that the contaminated density converges to the true density. This implies that the noise must be added at a resolution that is at least half the smallest stepwise change between sample values. For example, the smallest change in resolution between two beach width samples is 0.01 m, so the corrupting uniform noise is added at half this resolution, 0.005 m. This approach creates continuous data and preserves the original density structure, enabling implementation of the LSCV and ML methods. To reduce the variability created by the noise, several simulations are run and then the average of the bandwidths is selected as the optimal bandwidth. The following outlines the implementation algorithm for automatic bandwidth selection with ties:

1. Add uniform random noise to the feature samples at the resolution of the measurements to remove ties.
2. Normalize the feature to unit variance. The normalization is necessary to reduce computation time by limiting the window search to the interval [0, 1].
3. Perform optimization using the LSCV or ML method:
a. LSCV: apply a grid search starting at h = 1, reducing the window length with a step size of 0.001 to provide reasonable computation time.
   i. Select the bandwidth with the smallest LSCV score.
b. ML: apply the Newton-Raphson optimization scheme (Appendix A) starting at an initial window size of h = 1.
   i. Iterate until convergence.
   ii. If convergence is not obtained, apply a random perturbation based on the sample size N to the initial window length and repeat.
4. Go back to step 1 and repeat the process for n = 20 simulations.
5. Take the average of the 20 simulations as the optimal bandwidth.

Results

The LSCV and ML bandwidth selection methods were initially tested on simulated data. A total of twenty random samples of size N = 50 and N = 100 were generated from a standard normal and a standard Cauchy distribution. Both the ML Newton-Raphson scheme and the LSCV grid-search approach were applied to estimate bandwidths for each random sample. The average of the twenty simulations was then selected as the optimal bandwidth. Note that in this case the random samples are continuous, and therefore the method to handle ties in the data is unnecessary. To assess performance, the variational distance between the true density and the Parzen estimate was computed:

V(\hat{p}(x), p(x)) = \sum_{i=1}^{N} \left| \hat{p}(x_i) - p(x_i) \right|

Table 6-1 presents the results of optimal bandwidth selection using the ML and LSCV methods for each density and sample size. Table 6-2 shows the variational distances. As shown in Table 6-1 and Figure 6-4, similar bandwidths are obtained for the normal distribution, particularly as the sample size increases. In contrast, for the Cauchy distribution, the ML bandwidth is much larger than the LSCV bandwidth. Recall that the ML bandwidth can be severely affected by the tail of the distribution. The tail behavior of the Cauchy distribution resulted in an ML bandwidth that is too large and heavily oversmooths the data. This oversmoothing is shown in Figure 6-5.
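The tie-breaking procedure above (jitter, normalize, select, average) can be sketched generically; here `select` stands for any bandwidth selector, and the usage example plugs in Silverman's rule purely for brevity (the dissertation uses LSCV or ML at this step):

```python
import numpy as np

def jittered_bandwidth(samples, resolution, select, n_sims=20, seed=0):
    """Break ties with uniform noise at half the measurement resolution,
    normalize to unit variance, and average the selected bandwidths."""
    rng = np.random.default_rng(seed)
    chosen = []
    for _ in range(n_sims):
        noisy = samples + rng.uniform(-resolution / 2.0, resolution / 2.0, len(samples))
        noisy = noisy / noisy.std(ddof=1)       # unit variance: search h on [0, 1]
        chosen.append(select(noisy))
    return float(np.mean(chosen))

# Usage: beach-width-like samples tied at 0.01 m resolution
rng = np.random.default_rng(3)
widths = np.round(rng.normal(40.0, 5.0, 500), 2)
h = jittered_bandwidth(widths, 0.01,
                       select=lambda x: 1.06 * x.std(ddof=1) * len(x) ** (-0.2))
```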
Similar results are shown in the variational distances of Table 6-2, where the difference between LSCV and ML becomes much larger for the Cauchy distribution.

The ML and LSCV implementation algorithm for data with ties, outlined in the previous section, was used to compute bandwidths for August 2003 beach width, slope, and volume-per-width. Table 6-3 presents the results. ML provided a slightly larger bandwidth, and hence slightly more smoothing, for two of the features, but overall the bandwidths selected by LSCV and ML were relatively close. This resulted in very similar Parzen density estimates (Figure 6-6). Although the ML bandwidth's potential to oversmooth is a concern, the additional smoothing is actually preferred for the feature pdfs because of inherent sample noise induced by the ALSM system and the potential for outliers. Furthermore, the decrease in bias (less smoothing) that might be gained by an LSCV bandwidth is far outweighed in this analysis by the computational cost of performing a full grid search at high enough resolution. This is especially true considering there is no guarantee the optimal bandwidth lies within the search range. If computational concerns are not an issue, then the LSCV method might be preferred because of its robustness. However, based on the results in this research, the differences between the two methods were minimal and do not justify the need for LSCV. Therefore, the ML method using the developed Newton-Raphson implementation scheme was selected as the preferred method for adaptive bandwidth selection.

Several ML bandwidths were computed for other features and epochs to compare window length vs. range of data. Bandwidths fell within approximately 1/50th to 1/120th of the range of values. These results indicate that the 1/40th-range bandwidth determined by empirical selection actually yields a more smoothed density estimate than that determined by maximum likelihood, and potentially an overly biased estimate.
Fortunately, the feature ranking approach based on divergence measures discussed in the following section is fairly robust to small differences in bandwidths. In this research we applied only the leave-one-out cross-validation estimator of Equation 6-5 for determining the optimal bandwidth. In our case, the feature data exhibit spatial dependence, and a leave-one-block-out cross-validation estimator may be more appropriate. Such an approach was not investigated in this research because the leave-one-out estimator provided bandwidths that appeared not to oversmooth or overfit the data. Both the LSCV and ML cost functions can be modified to include a leave-one-block-out estimator for dependent data.

Divergence Measures

Generally, the more separation between classes for a given feature, the more probable it is that the feature will lead to successful classification. To assess interclass separation across the estimated class-conditional pdfs, two performance metrics based on Kullback-Leibler divergence (D_{kl}) are used. D_{kl} between two pdfs P and Q is

D_{kl}(P, Q) = \sum_x P(x) \log_2 \frac{P(x)}{Q(x)} \Delta x    (6-8)

where \Delta x is the incremental length between samples of x. In this work, \Delta x was set to 1/1000th of the range of values for each feature to ensure that inconsistencies in bin size did not influence the resultant divergence calculations (discussed below). D_{kl} can be conceptualized as measuring the expected amount of additional information required by Q to describe distribution P. As such, it is measured either in bits of information if \log_2 is used, as in Equation 6-8, or in nats of information if \ln is used. D_{kl} = 0 if and only if P = Q. This is the notion behind divergence acting as a distance measure between pdfs. However, D_{kl} is not a true metric because it is not symmetric and need not satisfy the triangle inequality (Duda et al., 2001). For more details on information theory and Kullback-Leibler distance, the reader is referred to the seminal treatment by Cover and Thomas (2006).
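On discretized pdf estimates, Equation 6-8 reduces to an elementwise sum (a sketch; the small epsilon guarding against division by zero is our addition, not part of the dissertation's formulation):

```python
import numpy as np

def kl_divergence(p, q, dx, eps=1e-12):
    """D_kl(P, Q) in bits over a common grid with spacing dx (Equation 6-8)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log2(p / q) * dx))
```

For two unit-variance Gaussians one standard deviation apart, the analytic divergence is 0.5 nat = 0.5 / ln 2 bits, which a fine grid reproduces closely.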
Jensen-Shannon Divergence

The generalized Jensen-Shannon divergence (JSD) was first formulated by Lin (1991) and can be expressed as

JSD(P, Q) = \pi_1 D_{kl}(P, M) + \pi_2 D_{kl}(Q, M)    (6-9)

where P = p(f_j | C_1) and Q = p(f_j | C_2) are the likelihoods of feature f_j for the classes erosion and accretion, \pi_1 and \pi_2 are the class priors such that \pi_1 + \pi_2 = 1, and M = \pi_1 P + \pi_2 Q. JSD is a nonnegative, symmetric, bounded form of D_{kl} between two pdfs and their weighted average, M (Lin, 1991). In our case, where P and Q are class-conditional pdfs, JSD is bounded by the maximum entropy of the binary class variable, which is 1 bit and occurs for the case of equal priors \pi_1 = \pi_2 = 0.5. Thus, JSD is less than or equal to 1 bit and equal to zero if and only if P = Q. The closer the divergence for a feature is to 1 bit, the more separation between the class-conditional pdfs, and hence the more discriminative the feature in a prediction sense for the given class.

The form of JSD in Equation 6-9 fits naturally to two-class feature selection problems, where P and Q are the conditional pdfs of a feature under two different classes with varying priors. Interestingly, by applying the JSD measure within a decision-theoretic framework, it is equivalent to the well-known mutual information measure, I(f_j; C_i), between the random feature variable f_j and the binary erosion/accretion random class variable C_i. This can be shown as follows: let P_1 = P(X | C_1) and P_2 = P(X | C_2) be the class-conditional likelihoods for feature X, and let \pi_1 and \pi_2 be the prior probabilities for class 1 and class 2, respectively.
The generalized JSD can then be written as

JSD = \pi_1 D_{kl}(P_1, \pi_1 P_1 + \pi_2 P_2) + \pi_2 D_{kl}(P_2, \pi_1 P_1 + \pi_2 P_2)
= \pi_1 \sum_x P_1 \log_2 P_1 + \pi_2 \sum_x P_2 \log_2 P_2 - \sum_x (\pi_1 P_1 + \pi_2 P_2) \log_2 (\pi_1 P_1 + \pi_2 P_2)
= H(\pi_1 P_1 + \pi_2 P_2) - \pi_1 H(P_1) - \pi_2 H(P_2)    (6-10)

where H(P) is the entropy of distribution P. Now, from the law of total probability,

H(\pi_1 P_1 + \pi_2 P_2) = H(\pi_1 P(X|C_1) + \pi_2 P(X|C_2)) = H(X)    (6-11)

Furthermore, we get the following result for the second terms in Equation 6-10:

\pi_1 H(P_1) + \pi_2 H(P_2) = \pi_1 H(P(X|C_1)) + \pi_2 H(P(X|C_2)) = H(X|C)    (6-12)

Replacing the terms in Equation 6-10 with 6-11 and 6-12, we get

JSD = H(X) - H(X|C) = I(X; C)

the mutual information equivalency. In our framework we are explicitly quantifying the amount of information gained from the feature on knowing the binary class label through the use of the divergence measure JSD. It is interesting to note that although the two measures are equivalent, using JSD satisfies more requirements of a metric (in particular, for X = Y it equals 0) and the interpretation is different: JSD = 0 implies that the class likelihoods are equivalent, whereas I(X; C) = 0 implies that feature X and class C are independent. By taking the square root of the JSD in Equation 6-9, it satisfies the triangle inequality and all other properties of a metric, which mutual information does not (Endres and Schindelin, 2003). Therefore, \sqrt{JSD}, referred to hereafter simply as JSD, was selected as the first performance metric to assess feature separation.

Normalized Information Divergence

Although JSD measures the information gained by feature f_j on class C_i, it does not take into account the inherent entropy within each feature f_j. The standard D_{kl} definition of divergence, and hence JSD in our framework, can be biased toward higher-entropy random variables when comparing across the feature space (Guha and Sarkar, 2006). Therefore, we create a novel information divergence that is normalized and based on JSD.
Coeurjolly et al. (2007) list several desirable properties for an information divergence \delta_{x,y}:

1. Symmetry: \delta_{x,y} = \delta_{y,x}
2. Nonnegativity: \delta_{x,y} \ge 0
3. \delta_{x,y} = 0 if and only if X and Y share the same information
4. \delta_{x,y} is maximal if and only if X and Y are independent
5. Normalization: \delta_{x,y} \in [0, 1], and \delta_{x,y} = 1 when X and Y are independent
6. Satisfies the triangle inequality

For prediction of a random variable Y given a random variable X, the goal of an information divergence is to find a minimization of a nonnegative complexity term, C_{x,y}, and a maximization of a nonnegative information term, I_{x,y}, which is bounded by C_{x,y}. Therefore, the following additional properties are desired for prediction (Coeurjolly et al., 2007):

1. When X_1 and X_2 have the same complexity (C_{x_1,y} = C_{x_2,y}), we have \delta_{x_1,y} \le \delta_{x_2,y} whenever X_1 has better knowledge of Y (i.e., I_{x_1,y} \ge I_{x_2,y}).
2. When X_1 and X_2 have the same knowledge of Y (i.e., I_{x_1,y} = I_{x_2,y}), we have \delta_{x_1,y} \le \delta_{x_2,y} whenever X_1 is simpler than X_2 in the sense that C_{x_1,y} \le C_{x_2,y}.
3. When X_1 and X_2 share almost exactly the same information (i.e., I_{x_1,x_2} is almost maximal), the difference between the information divergences \delta_{x_1,y} and \delta_{x_2,y} is almost zero (i.e., \delta_{x_1,y} \approx \delta_{x_2,y}).

Based on the above properties, a normalized information divergence was created to select optimal features for prediction of the class erosion or accretion. The measure is based on the generalized Jensen-Shannon divergence (JSD) of Equation 6-9 and uses a complexity measure, C_{P,Q}, that bounds JSD(P, Q) and compensates for complexity in the features. Recall that for JSD, P and Q are the class-conditional likelihoods of feature f_j. Because of the mutual information equivalency, it can be shown that JSD(P, Q) \le H(f_j), where H is the entropy.
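Both the generalized JSD of Equation 6-9 and its mutual-information form can be checked numerically on discretized class likelihoods (an illustrative sketch assuming strictly positive densities on a common grid):

```python
import numpy as np

def entropy_bits(p, dx):
    """Entropy in bits of a discretized density p on a grid with spacing dx."""
    return float(-np.sum(p * np.log2(p) * dx))

def jsd(p, q, pi1, pi2, dx):
    """Generalized Jensen-Shannon divergence (Equation 6-9) in bits."""
    m = pi1 * p + pi2 * q
    d_pm = np.sum(p * np.log2(p / m) * dx)
    d_qm = np.sum(q * np.log2(q / m) * dx)
    return float(pi1 * d_pm + pi2 * d_qm)
```

Computing H(M) - pi1*H(P) - pi2*H(Q) from `entropy_bits` yields the same number, which is the mutual information I(X; C) of Equations 6-10 through 6-12.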
Therefore, by selecting H(f_j) as the complexity measure,

C_{P,Q} = H(f_j) = H(\pi_1 p(f_j|C_1) + \pi_2 p(f_j|C_2)) = H(\pi_1 P + \pi_2 Q)

the following normalized JSD information divergence was created:

NJSD(P, Q) = \frac{C_{P,Q} - JSD(P, Q)}{C_{P,Q}} = 1 - \frac{JSD(P, Q)}{H(\pi_1 P + \pi_2 Q)}    (6-13)

This metric is referred to as normalized JSD (NJSD) and is used as a second divergence metric to determine the impact of feature entropy on the JSD separation rankings. From Equation 6-13 we obtain the following bounds on NJSD:

Case 1: P_1 = P_2, so the mixture equals either likelihood and

JSD = H(\pi_1 P_1 + \pi_2 P_2) - \pi_1 H(P_1) - \pi_2 H(P_2) = H(P_1) - H(P_1) = 0 \Rightarrow NJSD = 1

Case 2: X and class C share exactly the same information (i.e., complete redundancy), so H(X|C) = \pi_1 H(P_1) + \pi_2 H(P_2) = 0 and

JSD = H(\pi_1 P_1 + \pi_2 P_2) \Rightarrow NJSD = 0

We see that if P = Q then NJSD = 1, and if JSD(P, Q) = H(\pi_1 P + \pi_2 Q) = H(f_j) then NJSD = 0. Therefore, values close to one imply poor separation between the likelihoods and values close to zero imply good separation, with NJSD = 0 implying perfect discrimination between classes for a given feature. This is the notion of an information divergence: as the value increases, the information content on the class decreases (diverges). Note that by simply not subtracting from one, we can reverse the behavior so that larger values imply better separation, to stay consistent with JSD as above. Coeurjolly et al. (2007) show that if the complexity term is greater than \max(H(f_j), H(C)), a normalized information divergence satisfies all properties of a metric. In our case, the entropy of the features will naturally exceed the maximum entropy of 1 bit for the binary class variable, given the much larger range of values, and hence uncertainty, for each feature. Thus, Equation 6-13 satisfies all properties of a metric and does not require taking the square root as with JSD.
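Equation 6-13 can be sketched for discretized class likelihoods (an illustration assuming strictly positive densities on a common grid; variable names are ours):

```python
import numpy as np

def njsd(p, q, pi1, pi2, dx):
    """Normalized JSD (Equation 6-13); near 0 = good separation, near 1 = poor."""
    m = pi1 * p + pi2 * q
    h_m = -np.sum(m * np.log2(m) * dx)                 # complexity term H(f_j)
    jsd = (pi1 * np.sum(p * np.log2(p / m) * dx)
           + pi2 * np.sum(q * np.log2(q / m) * dx))
    return float(1.0 - jsd / h_m)
```

Identical class likelihoods give NJSD = 1 (no separation), while well-separated likelihoods drive the value toward 0.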
To better understand the behavior of Equation 6-13, take for example the following two cases given two features, f_1 and f_2:

Case 1: f_1 and f_2 have equal entropy (i.e., H(f_1) = H(f_2)), but JSD(f_1) < JSD(f_2). Then f_2 is selected as the more optimal feature because f_2 provides more separation between the classes based on the divergence.

Case 2: f_1 and f_2 have equal divergence (i.e., JSD(f_1) = JSD(f_2)), but H(f_1) < H(f_2). NJSD will select f_1 as the more optimal feature because less uncertainty is required to describe its behavior.

Therefore, NJSD selects the feature that provides a compromise between least uncertainty and most information gained on the class variable. This behavior is desirable in feature selection problems because features with lower relative entropies (uncertainty) tend to enable more accurate pdf estimation and, consequently, prediction. The concept is similar to parsimony in regression models, where the objective is to find a compromise between the number of parameters and the quality of the model fit.

Median Metric

Finally, as a comparison to the divergence measures, a median-based measure derived for non-Gaussian ALSM data was selected as an additional measure to assess 1D interclass separation. The sample mean and standard deviation are sensitive to multimodality and outliers. Thus, they tend to be poor measures of the separation between two clusters that are non-Gaussian. The median-based measure has the following form (Luzum et al., 2004):

d_j = \frac{\left| \mathrm{median}(f_j^{erosion}) - \mathrm{median}(f_j^{accretion}) \right|}{\sqrt{\mathrm{mad}(f_j^{erosion})^2 + \mathrm{mad}(f_j^{accretion})^2}}    (6-14)

where f_j^{erosion} represents feature j for all points in class erosion, and so on. The index j runs from 1 to 9 for all nine features, and mad is the median absolute deviation about the cluster median value. Equation 6-14 is a measure of the separation of the cluster centers in feature space relative to their spread that is robust to outliers and computationally fast.
It is a unitless measure, and there is no theoretical maximum value for Equation 6-14 in the limit as the cluster spreads vanish. In general, the larger the value of Equation 6-14, the more separable the classes for a given feature (Luzum et al., 2004). Note, however, that it makes no calculation of the amount of overlap between two densities, as divergence does.

To summarize, three performance metrics were selected: two based on divergence, JSD and NJSD, and a median-based measure. Using these metrics, each feature's interclass separation was ranked for the six ALSM epochs. Those features that rank highest yield more separability and therefore indicate stronger potential relationships between the feature's variation alongshore and the observed erosion and accretion patterns. In addition, Pearson's squared correlation coefficient (R^2) was computed to evaluate linear relationships between the features and shoreline change (Duda et al., 2001). In general, a feature that is well correlated with measured shoreline change (i.e., change viewed as a random variable rather than a binary class label) would be expected to also exhibit a significant divergence in the class-conditional pdfs. As explained by Herzel and Grobe (1995), the major difference is that the correlation coefficient measures only linear dependences, whereas divergence is more general in the sense that it detects all kinds of statistical dependences. Instances in which the feature ranking from the divergences differs from that suggested by correlation indicate cases where using the full pdf, instead of merely second-order descriptions, is important.
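The median metric of Equation 6-14 is straightforward to sketch (illustrative naming, not the dissertation's code):

```python
import numpy as np

def mad(x):
    """Median absolute deviation about the median."""
    return float(np.median(np.abs(x - np.median(x))))

def median_separation(f_erosion, f_accretion):
    """Robust 1-D class separation of Equation 6-14 (after Luzum et al., 2004)."""
    num = abs(float(np.median(f_erosion)) - float(np.median(f_accretion)))
    den = np.sqrt(mad(f_erosion) ** 2 + mad(f_accretion) ** 2)
    return num / den
```

Unlike JSD and NJSD, this value is computed from order statistics alone and never touches the estimated pdfs, which is why it is fast and outlier-resistant but blind to density overlap.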
Results

Individual Epochs

Figure 6-7 summarizes the rankings of each feature's separation between the binary erosion and accretion classes by epoch and for each of the four measures: JSD, NJSD, the median metric, and the squared Pearson correlation coefficient, R^2. Within Figure 6-7, 1 indicates most separable and 9 indicates least separable; for correlation, 1 indicates the strongest linear relationship and 9 the weakest. Throughout the discussion, features are abbreviated as follows: S = slope (m/m), W = width (m), V = volume-per-width (m^3/m), NS = nearshore slope (m/m), G = mean gradient (m/m), C = mean curvature (m/m^2), SD = standard deviation of height (m), DT = deviation-from-trend (m), and O = orientation (degrees).

Epoch 1

Epoch 1 (August 2003 to March 2005) is the longest epoch, during which the entire shoreline eroded, so there was no accretion class (Figure 5-2). Therefore, the median value for shoreline change was used to partition the profiles into the following two classes: erodes less for change less than the median value and erodes more for change greater than the median. DT and W ranked highest (most separable) for all metrics. The top ranking of DT is noteworthy in that the pier region's deviation from the natural trend is a strong contributing factor to it being an erosion hot spot over longer time periods (Foster et al., 2000). Another interesting result is the subtle differences in feature rankings between NJSD (normalized) and JSD (non-normalized). These disagreements indicate cases where the feature's entropy, or uncertainty, impacted its ranking as a feature. For example, the differences in S arise from its relatively higher entropy, resulting in NJSD penalizing slope's performance as a feature and hence ranking it lower than JSD.

Epoch 2

Epoch 2 (March 2005 to May 2005) is the shortest epoch.
The overall high ranking and correlation of S and NS in the short term, compared to their lower rankings in the long term (Epoch 1), indicates that slope is potentially more influential to short-term, high-frequency fluctuations in erosion and accretion along a beach. Correlations between S, W, V and shoreline change are expected because the beach profile tends to flatten or steepen due to short-term and seasonal variations in wave climate. Interestingly, divergence ranked DT high, but both correlation and the median metric ranked DT low. However, they all ranked W high, which indicates that DT is performing quite differently as a feature than W, and that pdf shape is important in capturing the DT relationship in this epoch.

Epoch 3

Epoch 3 (May 2005 to October 2005) was impacted by the 2005 beach nourishment. To reduce its effects, a 2.5 km zone surrounding the pier region was omitted from the analysis in this epoch. This will not eliminate all effects, because of nourishment spreading, but it does substantially reduce them and allows us to better measure the broader impact over the entire area. Because almost all change was accretion for this epoch (Figure 5-2), the median value was used to segment the profiles into two classes: accretes more or accretes less. NS ranked high across all metrics, as did DT for divergence and correlation. However, the median metric ranked DT lowest. This occurred because DT's median absolute deviation (mad) for each class is large, which drives the median measure lower. Furthermore, W performed much more poorly across all metrics and had the weakest correlation overall. This may seem surprising at first, given that width is generally expected to be highly correlated with shoreline change. But that implicitly assumes that shoreline change exhibits a consistent trend during the time period spanned by the observations.
Seasonal variations, and particularly beach nourishments, can disrupt that relationship, especially over time periods of less than one year.

Epoch 4

During Epoch 4 (October 2005 to January 2006), the pier region was omitted, as done for Epoch 3, so the dramatic impact of nourishment in that small region would not overwhelm the trends for the study area as a whole. Once again, W performed poorly across the three separability metrics, but with slightly stronger correlation. S ranked high for the three separability metrics but not for correlation. It is interesting to note that DT still performed well as a feature even though the pier region, where the highest deviations occur, was omitted.

Epoch 5

For Epoch 5 (January 2006 to June 2006), the majority of shoreline change experienced during the period was accretion (Figure 5-2). Therefore, the median value was used to segment the profiles into two classes: accretes more or accretes less. W performed better this period, which is likely due to a combination of nourishment sediment spreading and a decreased-energy wave climate conducive to accretion transgressing from winter to summer.

Epoch 6

For Epoch 6 (June 2006 to February 2007), the February 2007 survey only covered the northern section of the study region and the beginning of the pier zone. The high overall ranking of G is unexpected; however, S also ranked high. This northern portion of the beach is the least influenced by anthropogenic development, excluding impacts from the jetty, since it is protected as a state park. Therefore, it is possible that the area modulates more naturally, resulting in a more uniform change in slope from shoreline to dune toe. Because G is a mean value, it would more closely match S in this case and capture berm formation. In addition, there is a pattern between decreased performance for W and exclusion of the nourishment region. The three data sets that excluded the region had lower rankings for W compared to those that included the region.
Movement of sediment away from the nourishment region causes a strong retreat in the shoreline, which results in a stronger relationship between W and shoreline change. Thus, inclusion of the region saturates the relationship.

Time Averaged

Table 6-4 presents the overall rankings for each measure, computed by averaging the values across all epochs. The class means are also presented for each feature. The divergence metrics, JSD and NJSD, closely agreed and rank DT highest, followed by S and W. The high ranking of S, W, and V is expected, given that they are more directly influenced by the net sediment transport governing the region, both current and wave induced, as is shoreline change. The unexpected result is that DT outperforms all other features. The subtle differences between the two measures are due to relative differences in feature entropy. For example, the higher ranking of G relative to V for NJSD indicates that G provided a better balance between feature entropy and class divergence. Because JSD only measures divergence, it did not account for entropy, resulting in different rankings for these features.

The median metric ranks S highest, followed by G and DT. Analytically, the relatively high ranking for mean gradient, G, indicates that the median absolute deviations (mad) were small on average for G relative to the other features. Because these values appear in the denominator of Equation 6-14, the smaller values resulted in a higher apparent separability for G according to the median metric. NJSD also ranked G within the top four features. These results suggest that sections of the beach with larger berms and shorter widths influence the tendency for more or less erosion relative to wider, smaller-berm sections of the beach.

Correlation, R^2, ranks DT highest, followed by V, W, then C. These results suggest that the inclusion of the entire pdf is important for determining the relative importance of S, G, and C.
Overall, O ranked lowest and performed poorly even when the nourishment region was excluded. As explained in the feature extraction discussion of Chapter 4, orientation is a parameter that appears in analytical models because it impacts the relative breaking wave angle. In that regard, its poor performance as a feature is unexpected. A likely explanation is that orientation's influence on a shoreline is highly scale dependent, manifesting only at very large scales compared to the localized scales that ALSM allows us to measure.

Figure 6-1. An example of two likelihoods for a single feature f_j given data from two classes, C_1 and C_2. Divergence measures quantify the degree to which the two pdfs differ, and thus serve as indicators of the jth feature's potential value for classification. When the pdf mean values are similar, as depicted here, geometric measures of pdf separability fail to capture important differences.

Figure 6-2. Histograms of beach width, slope, and nearshore slope in March 2005, showing (y-axis) raw counts out of 2010 profiles and (x-axis) support. Classes are labeled according to which profiles exhibited erosion or accretion during the March 05 to May 05 epoch. The non-Gaussian, multimodal nature of the features is apparent. Vertical lines in the lower plot indicate class means.

Figure 6-3. Example showing the sensitivity of Parzen windowing to window length h. This example is for August 2003 beach slope (class erosion).

Figure 6-4. Results from 20 simulations of a standard normal density (N = 100 samples). A) Average variation in maximum likelihood (ML) and least-squares cross-validation (LSCV) score criteria by window length (h). B) Comparison between the true density and the Parzen density using the average optimal window length. Notice that ML and LSCV provide similar estimates due to non-heavy-tail behavior.
Figure 6-5. Results from 20 simulations of a standard Cauchy density (N = 100 samples). A) Average variation in maximum likelihood (ML) and least-squares cross-validation (LSCV) score criteria by window length (h). B) Comparison between the true density and the Parzen density using the average optimal window length. Notice that the ML estimate heavily oversmoothes the density estimate. This is due to the heavy-tail behavior of the Cauchy distribution.

Figure 6-6. Results of ML and LSCV density estimation for August 2003 beach width and volume. A) Beach width, B) volume-per-width.

Figure 6-7. Ranking of each feature's separation between class erosion and class accretion by epoch and by each of the four measures. 1 is most separable (highest ranked) and 9 is least separable (lowest ranked). Notice that the divergence measures, JSD and NJSD, consistently ranked deviation-from-trend high for each epoch.
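The Parzen estimation and LSCV bandwidth selection behind Figures 6-3 through 6-6 can be sketched as follows. This is a simplified Gaussian-kernel version with a grid-based integral and a coarse bandwidth search; the sample data, grid choices, and function names are ours, not the dissertation's processing code.

```python
import numpy as np

def parzen(x_eval, data, h):
    """Gaussian-kernel Parzen density estimate evaluated at x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (len(data) * h)

def lscv_score(h, data):
    """Least-squares cross-validation score for window length h:
    the integral of fhat^2 (trapezoid rule on a grid) minus twice the
    mean leave-one-out density evaluated at the sample points."""
    n = len(data)
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 512)
    fhat = parzen(grid, data, h)
    integral = np.sum(0.5 * (fhat[1:] + fhat[:-1]) * np.diff(grid))
    # leave-one-out: remove each point's own kernel contribution K(0)/h
    loo = (n * parzen(data, data, h) - 1.0 / (h * np.sqrt(2.0 * np.pi))) / (n - 1)
    return integral - 2.0 * loo.mean()

rng = np.random.default_rng(1)
data = rng.standard_normal(100)            # N = 100, as in Figure 6-4
hs = np.linspace(0.05, 1.5, 30)            # candidate window lengths
h_lscv = hs[np.argmin([lscv_score(h, data) for h in hs])]
print(h_lscv)
```

For heavy-tailed samples such as the Cauchy draws of Figure 6-5, the same search tends to return a much smaller h than a likelihood-based criterion, which is the oversmoothing contrast the figures illustrate.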
Table 6-1. Average results for optimal bandwidth selection by ML and LSCV. Results are based on twenty simulations from a standard normal and a standard Cauchy distribution with sample sizes of N = 50 and N = 100. Values in parentheses are standard deviations.

Distribution       Method   Optimal bandwidth (N = 50)   Optimal bandwidth (N = 100)
Standard Normal    ML       0.46 (0.12)                  0.42 (0.10)
Standard Normal    LSCV     0.52 (0.19)                  0.43 (0.15)
Standard Cauchy    ML       0.69 (0.24)                  0.61 (0.22)
Standard Cauchy    LSCV     0.12 (0.11)                  0.10 (0.09)

Table 6-2. Variational distance between the true density and the Parzen density estimated with the optimal average bandwidth determined by ML and LSCV.

Distribution       Method   Variational distance (N = 50)   Variational distance (N = 100)
Standard Normal    ML       0.98                            1.44
Standard Normal    LSCV     0.70                            1.43
Standard Cauchy    ML       2.17                            2.63
Standard Cauchy    LSCV     1.23                            1.48

Table 6-3. Results for the optimal bandwidth determined by ML and LSCV for August 2003 beach width, slope, and volume.

Feature            ML bandwidth   LSCV bandwidth
Width              0.033          0.040
Slope              0.019          0.010
Volume-per-width   0.030          0.029

Table 6-4. Average rankings of the three separation measures and correlation for each feature f_j across all successive epochs. The mean numerical values for each feature are also shown for the two classes. Here, C_1 represents the class with smaller signed shoreline change in the seaward direction (i.e., erosion tending) of the two possible classes. Feature uncertainty resulted in subtle differences between the NJSD and JSD rankings, but overall the two strongly agreed. Deviation-from-trend (DT) ranked highest and orientation (O) lowest on average.
f_j   JSD   NJSD   Median   R^2   Mean (C_1)   Mean (C_2)
S     2     2      1        6     0.0151       0.0194
W     3     3      4        3     62.719       49.147
V     4     5      5        2     1.5902       1.4185
NS    6     6      6        7     0.1279       0.1161
G     5     4      2        5     0.0199       0.0234
C     8     8      9        4     0.0002       0.0008
SD    7     7      7        8     0.4550       0.4868
DT    1     1      3        1     11.275       35.63
O     9     9      8        9     13.177       12.742

CHAPTER 7
GENERATIVE CLASSIFICATION

Naive Bayes

Based on the overall results from Chapter 6, we expect the highest-ranking features under the divergence methods, JSD and NJSD, to provide the strongest relationships (i.e., the most information) and therefore the best potential as indicators of shoreline change behavior. This is because the divergence methods operate on the full probability space and measure separation between pdfs with no implied linearity or Gaussian assumptions. However, to empirically assess which measure provides the best performance, and to test the potential predictive power of these morphologic features for detecting segments of beach more prone to erosion or accretion, we implement a two-class naive Bayes classifier.

The Bayes classifier determines the posterior probability of a sample belonging to a class C_i given feature values f, P(C_i | f_1, ..., f_n). Such a classifier is referred to as generative because the posterior probability, P(C_i | f_1, ..., f_n), is generated indirectly from the class-conditional likelihoods and prior probabilities through Bayes' rule (Hastie et al., 2001):

P(C_i \mid f_1, \ldots, f_n) = \frac{P(f_1, \ldots, f_n \mid C_i)\, P(C_i)}{\sum_i P(f_1, \ldots, f_n \mid C_i)\, P(C_i)}

The class with the higher maximum a posteriori (MAP) probability is then selected for the given inputs (e.g., if P(C_1 | f_1, ..., f_n) > P(C_2 | f_1, ..., f_n), then choose C_1). In this case, C_i is either class erosion or class accretion, as defined previously. Recall that certain epochs, such as Epoch 1, only lost or gained shoreline. Therefore, class erosion represents the class with smaller signed shoreline change in the seaward direction (i.e., less accretion or more erosion) of the two possible classes.
The set f_1, ..., f_n are the values of the cross-shore features for the specific profile we are attempting to classify. For a naive Bayes classifier, one assumes conditional independence among the feature likelihoods: P(f_1, ..., f_n | C_i) = P(f_1 | C_i) ... P(f_n | C_i) (Duda et al., 2001). This greatly simplifies the problem of estimating the posterior probability P(C_i | f_1, ..., f_n), because we need only estimate the likelihoods for one feature at a time rather than the joint (nine-dimensional, in our case) pdf. As the dimension increases, the amount of training data required to obtain an unbiased estimate increases exponentially as a function of the number of dimensions (Mitchell, 2005). This is referred to as the curse of dimensionality, and naive Bayes is one approach for handling this complexity. Furthermore, the assumption of independence will suffice in this case because we are concerned with each feature's individual relationship to shoreline variation and its overall ranking relative to other features.

Each classifier is trained by estimating the class-conditional pdfs, P(f_n | C_i), using the extracted feature values. Recall that these pdfs are estimated directly from the data using a nonparametric Parzen estimate (Chapter 6). The prior probabilities, P(C_i), are simply the ratio of the number of class erosion or class accretion events to the total number of training data. In this example we test the training error and prediction error of the classifier on each epoch. For training error, we train the classifier using the data from all the epochs and then test on each individual epoch. This approach is also referred to as internal validation. For prediction error, we train the classifier using all other epochs plus a random uniform sampling of 1/20 of the profiles of the current test epoch.
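The scheme described above can be sketched compactly: Parzen-estimated one-dimensional likelihoods multiplied under the independence assumption, scaled by empirical priors. In this sketch a Silverman rule-of-thumb bandwidth stands in for the ML/LSCV selection used in the dissertation, and the two-feature synthetic training data and function names are ours.

```python
import numpy as np

def parzen_pdf(x, data):
    """Gaussian-kernel Parzen likelihood P(f | C) at scalar x.
    Bandwidth: Silverman rule of thumb (a stand-in for ML/LSCV)."""
    h = 1.06 * data.std() * len(data) ** -0.2
    u = (x - data) / h
    return np.exp(-0.5 * u**2).sum() / (len(data) * h * np.sqrt(2.0 * np.pi))

def naive_bayes_posterior(f, train):
    """Posterior P(C_i | f_1, ..., f_n) under conditional independence:
    product of 1-D Parzen likelihoods times the empirical class prior."""
    n_total = sum(len(s) for s in train.values())
    scores = {}
    for c, samples in train.items():
        prior = len(samples) / n_total
        lik = np.prod([parzen_pdf(f[j], samples[:, j]) for j in range(len(f))])
        scores[c] = lik * prior
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# Synthetic two-feature training data (loosely: a slope-like feature and
# a deviation-from-trend-like feature), 200 profiles per class.
rng = np.random.default_rng(0)
train = {
    "erosion":   rng.normal([0.02, -10.0], [0.005, 5.0], size=(200, 2)),
    "accretion": rng.normal([0.03,  30.0], [0.005, 5.0], size=(200, 2)),
}
post = naive_bayes_posterior(np.array([0.021, -8.0]), train)
print(max(post, key=post.get))   # MAP class
```

The MAP decision is simply the class with the larger normalized score; the normalization step mirrors the denominator in Bayes' rule.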
For instance, if we are testing on Epoch 1, we train the classifiers using all other epochs plus the random subset of Epoch 1 and then classify each profile in Epoch 1, trained and untrained. The results are compared to the actual shoreline change measured for each profile in Epoch 1. This approach is a type of external validation and results in 95% of the tested profiles being untrained. In general, for evaluating prediction we would not use the current epoch's data. But in this case it is imperative to train using a subsample of the test epoch because of the temporal span between epochs and the large differences in the natural and anthropogenic (nourishment) effects experienced between epochs. Therefore, our prediction is testing the utility of the morphologic features to model the variation in erosion and accretion patterns alongshore. Only the top two ranked features for each measure from Table 6-4 are used to classify. Our two-class naive Bayes classifier can then be written as

P(C_i \mid f_1, f_2) = \frac{P(f_1 \mid C_i)\, P(f_2 \mid C_i)\, P(C_i)}{\sum_i P(f_1 \mid C_i)\, P(f_2 \mid C_i)\, P(C_i)}

There are 2010 profiles to test for each period, except for Epoch 6, which has 852 profiles, and Epoch 3, which has approximately 1600 profiles because of the exclusion of the nourishment region. A cumulative total of 10,504 profiles were classified.

Results

Results for the training error of the classifiers are shown in Table 7-1, where the numerical values are the success rate (# correct / total). The top two features (deviation-from-trend, DT, and slope, S) selected by the divergence measures, JSD/NJSD, provided substantially better training rates, with an average success rate of 73% compared to 66% for correlation and 63% for the median method. For evaluating the utility of a classifier, prediction performance is a much better indicator of performance potential. Results for prediction are listed in Table 7-2.
DT and S, selected by the divergence metrics, outperformed the other feature combinations with an average success rate of 63%, supporting the utility of the divergence method for feature selection. Success rates of 70% or greater were achieved for some periods, with all but one epoch providing a success rate of 60% or greater. It is important to understand that the success rates directly evaluate the ability to predict based solely on the morphologic features, without the inclusion of spatial dependence in our binary class variables.

Figure 7-1 shows the variation in prediction results alongshore for each epoch using the features DT and S selected by divergence. The plots are color-coded by the four possible binary classification outcomes as found in a confusion matrix: true positive (correctly classified as erosion; more erosion tending), true negative (correctly classified as accretion; less erosion tending), false positive (Type I error), and false negative (Type II error). As shown in Figure 7-1, the classifier performs well within the highly erosive pier zone, correctly classifying the majority of this region as erosion tending for each epoch. However, the classifier did not perform as well for other regions of erosion outside the pier zone, resulting in several Type II errors. This indicates that DT and S are potentially best for measuring the pier zone's behavior but are not providing enough information to capture the subtle erosion patterns in other zones. Nonetheless, in the northern zone and the adjacent pier zone, where accretion behavior is more prevalent, the classifier provided decent performance, correctly classifying large segments in those zones as less erosion tending.

To investigate the effects of sampling resolution on classifier performance, the classifier was trained on 300 m profiles rather than 5 m profiles.
Classification was then performed every 300 m and upsampled to 5 m by holding the classification results to 150 m on either side of the profile. Figure 7-2 compares results from the 5 m classification, as obtained with lidar, and the upsampled 300 m classification, as might be obtained from manual surveying. The main thing to notice in Figure 7-2 is the overall poor performance of the 300 m sampling for capturing the true negatives, and the large number of Type I errors (false positives). The average success rate for 300 m sampling was 58% compared to 63% for 5 m, a difference of 5 percentage points, which is significant but not as large as we might expect. However, the binary class variable has high spatial autocorrelation. Therefore, by making a prediction every 300 m and then holding that value constant to mimic predictions at 5 m profile locations, we are in effect capturing this autocorrelation unintentionally. For some epochs, spatial variability in the erosion patterns increases, and as such the 300 m sampling cannot capture this high-resolution variability. This is observed in cases such as Epoch 6, where the 5 m information becomes critical in capturing the variability in erosion patterns alongshore. The high-resolution spatial information can become useful for better characterizing the sediment budget for a beach and for making predictions; e.g., localized storm surge mapping, sediment transport, sand bar formation, and rip current generation could be linked to these localized changes.

Overall, the results are encouraging when considering that these predictions are based solely on probabilities of class occurrence for a given morphology learned from the laser data. Therefore, these classification results are data dependent and not intended as a standalone tool for predicting shoreline change.
Rather, the success of the results demonstrates that certain morphologies can be systematically extracted from ALSM data sets and ranked to provide useful information about shoreline change dynamics, supporting the notion of morphologic change indicators and their potential power for beach characterization. As an example of the potential utility of such methods, Figure 7-3 shows the maximum a posteriori (MAP) probabilities of class membership for the August 2003 data set. Notice the high probability of erosion in the pier zone. Coastal managers can use plots of this nature to identify high-impact segments of the beach based on the current state of beach morphology. It is easy to imagine that as lidar technology becomes more prevalent, such probability plots can be estimated for long stretches of coastline from ALSM surveys conducted before hurricane impact to assess potential areas of high erosion or coastal flooding. From the plots, pre-storm mitigation efforts can be directed towards specific zones of the beach.

Figure 7-1. Classification results alongshore for each epoch using DT and S. TP = true positive (more erosion tending); TN = true negative (less erosion tending); FP = false positive; FN = false negative. Notice the strong performance in the highly erosive pier zone (star).

Figure 7-2. Confusion matrix classification results using DT and S. A) 5 m results, B) 300 m upsampled results.

Figure 7-3. Maximum a posteriori class membership probabilities for the August 2003 data. The zoomed-in view is of the pier region, showing the high probability of erosion.
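The confusion-matrix outcomes tabulated in Figures 7-1 and 7-2 reduce to simple counting, and the 300 m-to-5 m upsampling is a label hold. A sketch of both follows; the toy label vectors are ours, chosen only to make the arithmetic visible.

```python
import numpy as np

def confusion_rates(pred, truth):
    """Fractions of TP/TN/FP/FN outcomes and the overall success rate
    for binary labels (1 = erosion tending, 0 = accretion tending)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    n = len(truth)
    rates = {
        "TP": np.sum((pred == 1) & (truth == 1)) / n,
        "TN": np.sum((pred == 0) & (truth == 0)) / n,
        "FP": np.sum((pred == 1) & (truth == 0)) / n,   # Type I error
        "FN": np.sum((pred == 0) & (truth == 1)) / n,   # Type II error
    }
    rates["success"] = rates["TP"] + rates["TN"]
    return rates

truth = np.array([1, 1, 0, 0, 1, 0, 1, 0])
pred  = np.array([1, 0, 0, 1, 1, 0, 1, 0])
print(confusion_rates(pred, truth))   # success = 0.75

# Upsampling coarse predictions: one label per 300 m profile, held
# constant for 150 m on either side, i.e. repeated over 60 of the 5 m profiles.
coarse = np.array([1, 0, 0, 1])
fine = np.repeat(coarse, 300 // 5)
```

Holding each coarse label constant over sixty 5 m profiles is what lets the 300 m classifier borrow the spatial autocorrelation of the class variable, as discussed above.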
Table 7-1. Training error from the naive Bayes classifier using only the top two ranked features according to each measure. The classification that used the two features selected by divergence significantly outperformed the other two measures in terms of average accuracy. DT = deviation-from-trend, S = slope, G = mean gradient, V = volume-per-width. Values are success rates (# correct / total).

Data set               Divergence (DT, S)   Median (S, G)   R^2 (DT, V)
Epoch 1                0.83                 0.64            0.80
Epoch 2                0.64                 0.39            0.63
Epoch 3                0.62                 0.75            0.47
Epoch 4                0.80                 0.60            0.79
Epoch 5                0.66                 0.62            0.62
Epoch 6                0.81                 0.81            0.67
Average success rate   0.73                 0.63            0.66

Table 7-2. Prediction error from the naive Bayes classifier using only the top two ranked features according to each measure. Divergence outperformed the other two measures in terms of average accuracy. Values are success rates (# correct / total).

Data set               Divergence (DT, S)   Median (S, G)   R^2 (DT, V)
Epoch 1                0.68                 0.63            0.68
Epoch 2                0.60                 0.32            0.60
Epoch 3                0.51                 0.69            0.45
Epoch 4                0.62                 0.60            0.64
Epoch 5                0.64                 0.35            0.60
Epoch 6                0.72                 0.77            0.65
Average success rate   0.63                 0.56            0.60

CHAPTER 8
DISCRIMINATIVE CLASSIFICATION

In the previous chapter, a nonparametric naive Bayes classifier was developed to segment the beach into zones more or less prone to erosion based on variation in cross-shore morphology. This process is to some degree a form of spatial prediction, because the classifier estimates the probability of experiencing erosion or accretion as it transgresses along the shore. In a similar regard, the classifier is modeling (i.e., characterizing) beach behavior based on what it has learned from the data. Modeling and prediction are two powerful tools that a learning-based classifier can provide our analysis, but there are several limitations of the naive Bayes approach that restrict classifier performance and utility.
First, because naive Bayes is rigidly tied to the assumption of conditional independence, there is no simple mechanism for assessing the joint effect of the features on classification. Second, unless Gaussian assumptions are made, it is very difficult to directly evaluate the change in probability for class erosion given a change in an individual feature or combination of features. This limits our ability to understand the influence different morphologies have on explaining the observed erosion patterns. Third, shoreline change is a highly correlated spatial process, but the naive Bayes approach disregards both the correlation in the binary class variable and the correlation in the features themselves.

In this chapter, a discriminative classification approach, logistic regression, is developed to address the above limitations of naive Bayes. Recall that naive Bayes is considered a generative classifier because the training data is used to estimate class-conditional pdfs, P(f_n | C_i), which are subsequently used to generate an estimate of the posterior probability P(C_i | f_1, ..., f_n). In contrast, logistic regression is considered a discriminative classifier because the training data is used to directly estimate (i.e., discriminate) the posterior probability (Miller, 2005). Note that herein our notation will change to stay consistent with the literature. Instead of C and f for class and feature, we use y and x.

Logistic Regression

Logistic regression is a member of the family of generalized linear models (GLMs). Generalized linear models are a framework that extends ordinary regression to the case of non-Gaussian response (dependent) variables, which have a distribution function from the exponential family (Myers et al., 2002). The most prominent member of the exponential family is the normal distribution.
The logistic model is one type of GLM that is used to regress a binary response variable (0, 1) on a set of explanatory variables, also referred to as predictors or covariates. Ordinary least-squares regression is inappropriate for this case because of the non-continuous, discrete nature of the response variable and the possibility of obtaining values outside [0, 1]. Logistic regression first found application as far back as the early 1950s for the development of dose-response curves in the field of biostatistics, although its connection to GLMs had not yet been discovered (Myers et al., 2002). Since that time, logistic regression has evolved tremendously and enjoyed widespread use in a variety of statistical fields, mostly as a data analysis and inference tool, where the goal is to understand the role of input variables in explaining the outcome. Typically, many models are fit in an attempt to find a compromise between model fit and model parsimony. Our focus will be on the use of logistic regression to understand the joint effect of the features on the probability of erosion and to find the optimal model for classification.

Logistic Model

The logistic regression model arises from the desire to model the posterior probabilities for a binary class, y, via linear functions in the predictors, x, while ensuring that the probabilities sum to one and remain in the interval [0, 1] (Hastie et al., 2001). Consider our case, where we have binary observations y_i for i = 1 to N profiles, indicating whether profile i belongs to class erosion tending or class accretion tending, as defined previously in Chapter 6, for each data epoch. We arbitrarily set y_i = 1 to indicate that a profile belongs to class erosion and y_i = 0 to indicate that a profile belongs to class accretion.
Our objective then is to estimate the posterior probability of class erosion, y_i = 1, for a specific profile given an input vector of morphologic features: P_i \equiv P(y_i = 1 \mid \mathbf{x}_i). The logistic model assumes the following parametric form for P_i (Myers et al., 2002; Hastie et al., 2001):

P_i = P(y_i = 1 \mid \mathbf{x}_i) = \frac{e^{\mathbf{x}_i' \boldsymbol{\beta}}}{1 + e^{\mathbf{x}_i' \boldsymbol{\beta}}} = \frac{1}{1 + e^{-\mathbf{x}_i' \boldsymbol{\beta}}}    (8-1)

where \boldsymbol{\beta} is a k + 1 column vector of coefficients and the term \mathbf{x}_i' \boldsymbol{\beta} = \beta_0 + \beta_1 x_{i1} + \ldots + \beta_k x_{ik} is called the linear predictor. Notice that the form of Equation 8-1 assumes a one has been included in the input vector \mathbf{x}_i to handle the intercept. Equation 8-1 is the logistic distribution function, and it ensures that 0 \le P_i \le 1. The logistic function forms an S-shaped curve as a function of x (Figure 8-1). Although there is a nonlinear relationship between \mathbf{x}_i and P_i, the logistic model leads to a very convenient linear decision rule (Miller, 2005). We assign the label y_i = 1 if the following condition holds:

\frac{P(y_i = 1 \mid \mathbf{x}_i)}{P(y_i = 0 \mid \mathbf{x}_i)} = \frac{P_i}{1 - P_i} \ge 1

Substituting Equation 8-1 for P_i, we get

e^{\mathbf{x}_i' \boldsymbol{\beta}} \ge 1

By taking the natural logarithm, we assign y_i = 1 if \mathbf{x}_i' \boldsymbol{\beta} \ge 0, and otherwise y_i = 0. Hence, logistic regression is a linear classification method.

Generalized Linear Model

Under the generalized linear model framework, the expected value of the response variable is related to the linear predictor through a link function g (Myers et al., 2002):

g(u_i) = \mathbf{x}_i' \boldsymbol{\beta} \quad \text{and} \quad E[y_i] = u_i = g^{-1}(\mathbf{x}_i' \boldsymbol{\beta})

Following the discussion in Myers et al. (2002), it is reasonable to assume that at the ith data point the response y_i is a Bernoulli random variable, where

E[y_i \mid \mathbf{x}_i] = P(y_i = 1 \mid \mathbf{x}_i) = P_i(\mathbf{x}_i) \quad \text{for } i = 1, \ldots, n

P_i is referred to as the probability of success in a binomial process, and it is also the mean value. Furthermore, \mathrm{var}(y_i) = P_i (1 - P_i). Consequently, the variance is a function of the mean, and both are functions of the inputs \mathbf{x}_i, which is a characteristic of GLMs.
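Equation 8-1 and the linear decision rule can be checked numerically. In the sketch below, the coefficient values and feature vectors are arbitrary illustrations, not fitted values from the dissertation.

```python
import numpy as np

def logistic(z):
    """Equation 8-1: P(y = 1 | x) = 1 / (1 + exp(-z)), with z = x'beta."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients (intercept first) and two feature vectors,
# each with a leading one to carry the intercept.
beta = np.array([-1.0, 2.0, 0.5])
x_a = np.array([1.0, 1.0, 1.0])    # x'beta = 1.5  -> P > 0.5 -> assign y = 1
x_b = np.array([1.0, 0.2, 0.4])    # x'beta = -0.4 -> P < 0.5 -> assign y = 0

for x in (x_a, x_b):
    z = x @ beta
    # linear decision rule: assign y = 1 exactly when x'beta >= 0
    assert (logistic(z) >= 0.5) == (z >= 0)
    print(z, logistic(z))
```

The assertion inside the loop is the point of the exercise: thresholding the probability at 0.5 is identical to thresholding the linear predictor at zero.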
The link function g for logistic regression is called the logit transform:

\mathrm{logit}(P_i) = \ln\!\left(\frac{P_i}{1 - P_i}\right) = \mathbf{x}_i' \boldsymbol{\beta}    (8-2)

where P_i = E[y_i]. The logit is the log of the odds between the probability of success and the probability of failure. It transforms numbers between 0 and 1 to the real line and appears often in neural network applications (Miller, 2005). The logistic model, Equation 8-1, is easily derived from the logit as follows:

\exp\!\left(\ln \frac{P_i}{1 - P_i}\right) = \exp(\mathbf{x}_i' \boldsymbol{\beta}) \;\Rightarrow\; \frac{P_i}{1 - P_i} = e^{\mathbf{x}_i' \boldsymbol{\beta}} \;\Rightarrow\; P_i = \frac{e^{\mathbf{x}_i' \boldsymbol{\beta}}}{1 + e^{\mathbf{x}_i' \boldsymbol{\beta}}}

We now have a better understanding of the origins of logistic regression. Logistic regression assumes a linear relationship between the log of the odds and the predictors x. This linking between the linear predictor and the expected value of the response is the functionality provided by the GLM framework. For a more in-depth discussion on generalized linear models and their applications to engineering-based problems, the reader is referred to Myers et al. (2002).

Model Fitting

Referring back to the logistic model for the conditional likelihood, Equation 8-1, the only unknowns in the equation are the parameters, the betas. For example, in our case we have ten features and an intercept term that comprise the linear predictor:

\mathbf{x}_i' \boldsymbol{\beta} = \beta_0 + \beta_1 (W_i) + \beta_2 (V_i) + \beta_3 (DT_i) + \beta_4 (MG_i) + \beta_5 (NS_i) + \beta_6 (O_i) + \beta_7 (G_i) + \beta_8 (C_i) + \beta_9 (SD_i) + \beta_{10} (S_i)

We desire to estimate the betas directly from the training data using Equation 8-1, but the question arises as to how this estimation will be accomplished. In theory, we could do ordinary regression with the logits as our dependent variable; but we do not actually have the logits. Rather, we only have binary observations of 0s and 1s specifying the class membership for each profile, which would make the logit either negative infinity or positive infinity. Obviously, this approach is not applicable, and an alternative method must be utilized.

Maximum likelihood estimation

Logistic regression models are generally fit by maximum likelihood (ML), using the conditional likelihood P(y_i = 1 \mid \mathbf{x}_i; \boldsymbol{\beta}) of Equation 8-1.
Because P_i and (1 - P_i) completely specify the conditional distribution for y_i, the binomial distribution is appropriate (Hastie et al., 2001). For n = 1 trial per observation and assuming N independent binomial observations (the number of profiles in our case), the log-likelihood can be written as

\ell(\beta) = \sum_{i=1}^{N} \left\{ y_i \log P(x_i;\beta) + (1-y_i)\log\left(1-P(x_i;\beta)\right) \right\} = \sum_{i=1}^{N} \left\{ y_i x_i'\beta - \log\left(1+e^{x_i'\beta}\right) \right\}

To maximize the log-likelihood we set its derivative to zero and obtain the following score equations,

\frac{\partial \ell(\beta)}{\partial \beta} = \sum_{i=1}^{N} x_i \left( y_i - P(x_i;\beta) \right) = 0,

which are k + 1 equations nonlinear in \beta. To solve these equations we use the Newton-Raphson approach (e.g., Appendix A), which requires the second derivative or Hessian (Hastie et al., 2001),

\frac{\partial^2 \ell(\beta)}{\partial \beta\,\partial \beta'} = -\sum_{i=1}^{N} x_i x_i'\, P(x_i;\beta)\left(1 - P(x_i;\beta)\right)

Given an initial value \beta^{old}, a single Newton-Raphson update is

\beta^{new} = \beta^{old} - \left( \frac{\partial^2 \ell(\beta)}{\partial \beta\,\partial \beta'} \right)^{-1} \frac{\partial \ell(\beta)}{\partial \beta}

It is convenient to write the score and Hessian in matrix notation. Let y denote the vector of N x 1 observations y_i, X denote the N x (k + 1) matrix of feature values x_i including a column of ones for the intercept, p the N x 1 vector of fitted probabilities with ith element P(x_i; \beta^{old}) evaluated at \beta^{old}, and W a N x N diagonal matrix of weights P(x_i;\beta^{old})(1 - P(x_i;\beta^{old})), which are the binomial variances for each observation y_i. Then the score and Hessian can be written in matrix notation,

\frac{\partial \ell(\beta)}{\partial \beta} = X'(y - p) \qquad (8-3)

\frac{\partial^2 \ell(\beta)}{\partial \beta\,\partial \beta'} = -X'WX \qquad (8-4)

The Newton-Raphson update in matrix form is thus

\beta^{new} = \beta^{old} + (X'WX)^{-1}X'(y - p) = (X'WX)^{-1}X'W\left( X\beta^{old} + W^{-1}(y - p) \right) = (X'WX)^{-1}X'Wz \qquad (8-5)

The second and third expressions rewrite the Newton-Raphson update as a weighted least-squares step, using the adjusted response z = X\beta^{old} + W^{-1}(y - p). At each step of Newton-Raphson p changes and, subsequently, so do W and z.
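As an illustrative sketch (my own minimal implementation on synthetic data, not the dissertation's code), the weighted least-squares form of the Newton-Raphson update can be written as:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Fit a logistic regression by iteratively reweighted least squares.

    X : (N, k+1) design matrix including a column of ones for the intercept.
    y : (N,) vector of 0/1 class labels.
    Returns the estimated coefficient vector beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))      # fitted probabilities
        w = p * (1.0 - p)                   # binomial variances (diagonal of W)
        z = eta + (y - p) / w               # adjusted response
        # Weighted least-squares step: beta = (X'WX)^-1 X'W z
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy demonstration: data simulated from a known logistic model
rng = np.random.default_rng(0)
x = rng.normal(size=200)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))
y = (rng.random(200) < p_true).astype(float)
X = np.column_stack([np.ones(200), x])
beta_hat = irls_logistic(X, y)   # should land near the true (0.5, 2.0)
```

Because each pass is an ordinary weighted least-squares solve, constraints or penalties can be slipped into that step, which is the flexibility the text alludes to.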
This algorithm is referred to as iteratively reweighted least squares (IRLS), because at each iteration it solves the weighted least-squares problem (Hastie et al., 2001):

\beta^{new} = \arg\min_{\beta}\; (z - X\beta)'W(z - X\beta)

This re-expression is very handy because it allows us to apply weighted least-squares solution methods, providing great flexibility for modifying the Newton-Raphson approach to include additional constraints. This utility will become more apparent later in the analysis. Excellent references that discuss ML estimation for logistic regression include Hastie et al. (2001) and Myers et al. (2002). Although not within the scope of this research, logistic regression can be extended to handle more than two classes, referred to as multinomial logistic regression (see Hastie et al. (2001)). For details on implementation refer to Appendix B.

Properties of ML estimate

The asymptotic variance-covariance matrix for the maximum likelihood estimate of \beta can be obtained from the inverse of the information matrix I, which is the negative of the Hessian in Equation 8-4,

\mathrm{var}(\hat\beta) = I(\hat\beta)^{-1} = (X'WX)^{-1} \qquad (8-6)

The standard errors for each coefficient can then be obtained from the diagonal elements. In addition, GLMs lend themselves to both Wald inference and likelihood inference. These techniques can be used to perform tests of statistical significance on individual parameters or groups of parameters, as well as to estimate confidence intervals and assess quality of fit. Refer to Myers et al. (2002) for more details on inference methods for logistic regression.

Measures of fit

Two important measures of fit in logistic regression are the model deviance and the residual deviance,

\mathrm{dev}_i = -2\left\{ y_i \ln(\hat P_i) + (1 - y_i)\ln(1 - \hat P_i) \right\} \qquad (8-7)

\mathrm{residual\ dev}_i = \mathrm{sign}(y_i - \hat P_i)\sqrt{\mathrm{dev}_i} \qquad (8-8)

where \hat P_i = \hat P(y_i = 1 | x_i) is the estimated probability of success from the logistic model. The model deviance is the sum of Equation 8-7 over all i observations and is equivalent to the residual sum of squares in ordinary regression.
It is twice the negative of the log-likelihood ratio between the reduced model and the fully saturated model (Myers et al., 2002). The residual deviance of Equation 8-8 is equivalent to the residual of ordinary regression. Another important residual measure is the squared Pearson residual, which is the squared difference between the observation and the estimate normalized by the binomial variance (Myers et al., 2002),

r_i^2 = \frac{(y_i - \hat P_i)^2}{\hat P_i (1 - \hat P_i)} \qquad (8-9)

The sum of the Pearson residuals can be used to form a \chi^2 statistic for evaluating model fit, similar to deviance. The Pearson residual will resurface later when estimating correlation.

Naïve Bayes vs. Logistic Regression

At the beginning of this chapter it was explained that logistic regression is considered a discriminative classifier and naïve Bayes a generative classifier. This is because logistic regression directly estimates the parameters for the posterior probability P(y|x), whereas naïve Bayes estimates parameters for the prior class probability P(y) and the individual class-conditional pdfs P(x|y) to obtain an indirect estimate for P(y|x) using Bayes' rule. Interestingly, a Gaussian naïve Bayes (GNB) classifier implies the same parametric form for P(y|x) as logistic regression. Mitchell (2005) shows that by assuming a Gaussian form for the class-conditional pdfs P(x|y), the resultant naïve Bayes posterior has the same form as Equation 8-1. At this point one may wonder why we would want to use logistic regression if the two forms are equivalent. The difference arises in how the k + 1 parameters \beta are estimated. For GNB, the parameters are determined by the per-feature class means \mu_{k0}, \mu_{k1}, standard deviations \sigma_k, and the prior class probability \pi:

\beta_0 = \ln\frac{\pi}{1-\pi} + \sum_{k}\frac{\mu_{k0}^2 - \mu_{k1}^2}{2\sigma_k^2} \qquad \text{and} \qquad \beta_k = \frac{\mu_{k1} - \mu_{k0}}{\sigma_k^2}

whereas for logistic regression the \beta's are estimated directly from the data using maximum likelihood. This distinction is very important.
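To make the mapping concrete, here is a hedged single-feature toy sketch (my own code; it assumes a shared, class-independent variance as in Mitchell's derivation) of the logistic coefficients implied by a GNB fit:

```python
import numpy as np

def gnb_logistic_params(x1, x0, pi1):
    """Single-feature logistic coefficients implied by Gaussian naive Bayes
    with a shared (class-independent) variance, following the mapping above."""
    mu1, mu0 = x1.mean(), x0.mean()
    var = np.concatenate([x1 - mu1, x0 - mu0]).var()   # pooled variance
    beta1 = (mu1 - mu0) / var
    beta0 = np.log(pi1 / (1.0 - pi1)) + (mu0**2 - mu1**2) / (2.0 * var)
    return beta0, beta1

# Gaussians at -1 and +1 with unit variance and equal priors imply a
# decision boundary at x = 0, i.e. beta0 near 0 and beta1 near 2.
rng = np.random.default_rng(4)
x1 = rng.normal(1.0, 1.0, 5000)
x0 = rng.normal(-1.0, 1.0, 5000)
b0, b1 = gnb_logistic_params(x1, x0, 0.5)
```

When the Gaussian assumption actually holds, these plug-in coefficients and the ML-fitted ones converge to the same values; when it does not, the two estimation routes diverge, which is the point of the discussion that follows.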
Mitchell (2005) explains that the form of P(y|x) assumed by logistic regression holds in many problem settings beyond GNB. This is particularly true in our case because the feature pdfs are highly non-Gaussian in structure. Logistic regression provides a general method for estimating the parameters when the stringent modeling assumptions imposed by GNB are not reasonable. In cases where the GNB assumptions do not hold, logistic regression and GNB typically learn different classifiers. In such cases, Mitchell (2005) explains that the asymptotic (as the number of training samples goes to infinity) classification accuracy for logistic regression is often better than that of GNB. Although logistic regression is consistent with the naïve Bayes assumption that the input features x_k are conditionally independent given the class label y, it is not rigidly tied to this assumption. Given data that disobey this assumption, logistic regression will adjust its parameters to maximize the fit to the conditional likelihood of the data, even if the resulting parameters are inconsistent with the naïve Bayes assumptions. This flexibility is one of the main reasons for selecting logistic regression as an alternative to the naïve Bayes approach. In the previous chapter we developed a nonparametric naïve Bayes classifier using a Parzen estimate for the class-conditional pdfs P(x|y). We can compare this approach to logistic regression by applying the logit of Equation 8-2. Let f_{i1}(x_i) = P(x_i | y = 1) and f_{i0}(x_i) = P(x_i | y = 0) be the nonparametric class-conditional pdfs for each of our k = 10 features, and let \pi be the prior probability for class y = 1.
Applying the logit transform using Bayes' rule, we get the following:

\ln\frac{P(y=1|x)}{P(y=0|x)} = \ln\frac{\pi \prod_{i=1}^{k} f_{i1}(x_i)}{(1-\pi)\prod_{i=1}^{k} f_{i0}(x_i)} = \ln\frac{\pi}{1-\pi} + \sum_{i=1}^{k} \ln\frac{f_{i1}(x_i)}{f_{i0}(x_i)} = \alpha + \sum_{i=1}^{k} g_i(x_i) \qquad (8-10)

Equation 8-10 has the form of a generalized additive model (GAM) for logistic regression, where \alpha is an intercept term and each g_i(x_i) is a nonparametric, smooth function of the feature x_i (Hastie et al., 2001). Essentially, a GAM allows the linear predictor of the GLM to be composed of smooth, additive functions. Although similar in form, nonparametric naïve Bayes and generalized additive models are fit quite differently. The differences in approach are analogous to the differences above between logistic regression and Gaussian naïve Bayes. GAM model fitting and application are beyond the scope of this discussion. The reader can refer to the seminal paper by Hastie and Tibshirani (1986) for more details. They present a generalized backfitting algorithm that is a modified form of the IRLS algorithm of Equation 8-5. Additional references include Hastie et al. (2001) and Ruppert et al. (2003). For this research a GAM approach was not utilized, although the developed framework can be extended to incorporate additive nonlinear functions into the linear predictor. In addition, from Equation 8-10 we observe that our nonparametric naïve Bayes approach results in a nonlinear decision rule for classification, as opposed to the linear decision rule of logistic regression. Therefore, we can directly compare the results obtained using a nonlinear and a linear classification approach.

Example of a Logistic Model

Table 8-1 presents results of an ordinary logistic model fit to the Epoch 1 data set with a training accuracy of 78%. Training accuracy is the percentage of correctly classified profiles in the training data. Listed in the table are each feature's estimated coefficient value, standard error, and significance value (p-value).
The intercept term is left out of Table 8-1 because we are only concerned with the importance of the features. Furthermore, the model is fit using standardized values for each feature x (normalized by their standard deviation) as opposed to the actual measured values. By standardizing the features we can compare the relative effects of each feature on the outcome probability. This step is necessary because of the varying units and ranges of values for each feature. Throughout the discussion, features are abbreviated as follows: W = width, V = volume-per-width, DT = deviation-from-trend, MG = max gradient, NS = nearshore slope, O = orientation, G = mean gradient, C = mean curvature, SD = standard deviation of height, and S = slope. From the coefficients listed in Table 8-1 we get the following linear predictor,

x_i'\beta = \hat\beta_0 + 6.59(W_i) + 1.98(V_i) + 4.15(DT_i) + 4.44(MG_i) + 0.2(NS_i) + 4.75(O_i) + 3.85(G_i) + 1.24(C_i) + 0.89(SD_i) - 4.48(S_i) \qquad (8-11)

We can interpret the results as follows: a 1 standard deviation increase in W produces, on average, a 6.59 increase in the log odds of class erosion. In contrast, a 1 standard deviation increase in S produces, on average, a 4.48 decrease in the log odds of class erosion. The model enables us to quantify the effect of different features on the outcome probability while holding the other features constant. From Table 8-1, we see that O and S have p-values that are statistically non-significant at the 0.05 level, indicating that these features could potentially be dropped from the model. However, each p-value is conditional on the inclusion of all the other features in the model (Myers et al., 2002). On their own, or in the presence of other correlated predictors, the behavior of O and S and their importance in the model can change.

Subset Selection

Given the full set of features or predictors, one generally desires to find a subset of variables that is, in some best sense, sufficient for explaining their joint effect on the outcome.
Parsimonious models tend to have more bias but less variance, thereby enabling better prediction. Furthermore, a smaller number of features in the model eases interpretation of their effects on the outcome. Popular subset selection methods include forward and backward stepwise regression, where individual coefficients are consecutively added to or removed from the model dependent on some criterion; usually the least significant coefficient is dropped (see Hastie et al. (2001); Ruppert et al. (2003)). Shtatland et al. (2001) discuss several issues with the use of stepwise logistic regression due to subjectivity in the significance level and instead suggest the use of information-based criteria to perform subset selection. The basic idea behind information criteria is to penalize the likelihood for model complexity. Shtatland et al. (2001) recommend using the Akaike information criterion (AIC) if the goal is strictly prediction, and the Bayesian information criterion (BIC), also called the Schwarz information criterion, if the focus is model interpretation:

\mathrm{AIC} = -2\ell(\hat M) + 2k

\mathrm{BIC} = -2\ell(\hat M) + k\log(N)

where \ell(\hat M) is the maximized log-likelihood using the estimated coefficient values, k is the number of parameters including the intercept, and N is the sample size (number of profiles). AIC and BIC have important optimality properties. AIC is asymptotically equivalent to the cross-validation criterion and is considered a cornerstone of the modern prediction approach. Its implied significance level varies from 30% down to approximately 15-16% as the sample size increases (Shtatland et al., 2001). Therefore, AIC tends to select more covariates than seems necessary, striving predominantly for prediction. In comparison, BIC implies a much lower significance level, often much less than 0.05. Therefore, it is much stricter in how it allows features to enter the model.
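The two criteria can be sketched as follows (a minimal illustration with my own function names; `loglik` stands for the maximized log-likelihood of a fitted candidate model):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: -2*loglik + 2k (prediction-oriented)."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian (Schwarz) criterion: -2*loglik + k*log(n); the penalty
    grows with the sample size, so it is stricter than AIC for large n."""
    return -2.0 * loglik + k * np.log(n)

# With ten candidate features there are 2**10 = 1024 subsets; a brute-force
# search scores every subset under each criterion and keeps the minimum.
n_models = 2 ** 10
```

For any model with more than one parameter, BIC exceeds AIC once log(n) > 2, i.e. for n greater than about 7 observations, which is why BIC admits fewer features.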
Unlike AIC, BIC is consistent in that the probability of choosing a bigger model converges to zero as the sample size increases (Shtatland et al., 2001). AIC and BIC were used to determine optimal subset models for Epoch 1. Because we have ten features excluding the intercept, there are 2^10 = 1024 possible models. We use a brute-force algorithm to compute AIC and BIC scores for all possible models, selecting the model with the lowest score under each criterion. Table 8-2 compares the coefficients estimated for the full model, the AIC-selected model, and the BIC-selected model. The AIC model includes all features except NS, and its coefficient values are very similar to the full model. The BIC model excludes NS and SD, which results in both S and C switching the sign of their coefficient values. This is an effect of relative correlation between the features. Width remained the most influential feature (i.e., largest magnitude coefficient) for each of the models.

Convergence Issues

An occasional problem encountered when fitting a logistic regression model is failure of the likelihood maximization algorithm to converge. Determining the cause of non-convergence can be troublesome. In general, the presence of local maxima is the most often encountered problem in function maximization. Fortunately, such problems cannot occur for logistic regression because the binomial log-likelihood is globally concave and therefore can have only one maximum (Allison, 2008). Unfortunately, there are many situations in which the log-likelihood function does not have a maximum. In these cases, as the parameter estimates grow the log-likelihood never attains a maximum, instead approaching zero or some number less than zero asymptotically. The end result is that the parameter estimates continually grow and become over-inflated, or the log-likelihood causes a numerical instability and abruptly fails due to the log of zero and the presence of NaNs (not a number).
There are two data patterns that, if present, can result in nonexistence of the maximum likelihood solution: complete separation and quasi-complete separation. Complete separation occurs when there exists some coefficient vector \beta such that y_i = 1 whenever x_i'\beta > 0 and y_i = 0 whenever x_i'\beta < 0 (Allison, 2008). In other words, complete separation occurs when a linear function of x can generate perfect predictions of y. In these cases, the log-likelihood approaches zero asymptotically. Quasi-complete separation occurs when there exists some coefficient vector \beta such that x_i'\beta \geq 0 whenever y_i = 1 and x_i'\beta \leq 0 whenever y_i = 0, and equality holds for at least one case in each category of the dependent variable (Allison, 1998). In these cases, the log-likelihood function tends to approach some value less than zero asymptotically. Quasi-complete separation is the most frequent cause of non-convergence, and there are different approaches for handling this problem. The most widely used method is to simply delete from the model any variables whose coefficients do not converge. Other methods include Bayesian estimation using a prior distribution on the coefficients, and penalized likelihood estimation (Allison, 2008), an approach that will be used later for a different purpose. Complete separation is less common and considerably more difficult to deal with. In general, the only approach is to delete the problematic variable or data from the model (Allison, 2008). Non-convergence occurred when fitting an ordinary logistic model to the Epoch 4 data. The problem stemmed from the orientation feature, O. Once that feature was removed, convergence was obtained. However, the feature still needed to be retained for analysis. By examining a plot of orientation it was evident that the artificial nourishment zone resulted in a drastic shift in localized orientation around the pier region (Figure 8-2).
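A crude single-feature diagnostic for these two patterns can be sketched as follows (my own toy code; it is a simplification, since true separation involves a linear combination of all the features rather than one feature in isolation):

```python
import numpy as np

def separation_status(x, y):
    """Classify the overlap pattern of one feature across the two classes.

    Returns 'complete' if the classes occupy disjoint ranges of x,
    'quasi-complete' if the ranges touch only at a boundary value,
    and 'overlapping' otherwise (ML estimation is safe in that case).
    """
    x0, x1 = x[y == 0], x[y == 1]
    if x0.max() < x1.min() or x1.max() < x0.min():
        return "complete"
    if x0.max() == x1.min() or x1.max() == x0.min():
        return "quasi-complete"
    return "overlapping"

# A threshold at x = 3.5 predicts y perfectly, so the MLE does not exist
x_demo = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_demo = np.array([0, 0, 0, 1, 1, 1])
status = separation_status(x_demo, y_demo)
```

Running a check like this on each candidate feature before fitting can save a confusing debugging session when the optimizer's coefficients start marching toward infinity.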
This drastic shift also correlates almost perfectly with the binary class patterns, resulting in quasi-complete separation. After removing this section of data, the resulting model was able to converge.

Correlated Observations

Spatial dependence is defined as the property of random variables taking values, at pairs of locations a certain distance apart, that are more similar (positive autocorrelation) or less similar (negative autocorrelation) than expected for randomly associated pairs of observations (Legendre and Legendre, 1998). Shoreline change is a process that exhibits strong spatial dependence, resulting in high positive autocorrelation in our binary class variables, or observations y_i. Given a profile's class membership (erosion or accretion), we strongly expect its neighboring profiles to belong to that same class. When untreated, this spatial dependence violates the logistic regression assumption of independence in the observations (Miller et al., 2007). The non-independence of outcomes represents a form of pseudo-replication and overestimates the effective available degrees of freedom (Carl and Kuhn, 2007). Untreated autocorrelation can result in severe problems, including underestimation of coefficient variances, increased Type I errors, incorrect parameter estimates, erroneous assessment of the significance of relationships between predictor and response variables, and biased variable selection (Carl and Kuhn, 2007; Miller et al., 2007). Figure 8-3 shows an autocorrelation plot of standardized Pearson residuals estimated from the previous logistic model fit to Epoch 1 (Equation 8-11). It is evident that there is strong residual correlation, violating the assumption of independence. To address spatial autocorrelation in our binary observations, two different extensions of logistic regression are investigated: autologistic
In so doing we hope to obtain better estimation of our parameters to improve our understanding and prediction of erosion patterns. Autologistic Models Autologistic models incorporat e spatial dependence to increase the predictive ability of a logistic model. The general idea is to calcu late an extra explanat ory variable, called an autocovariate, that captures th e effect of other response va lues in a spatial neighborhood (Dormann, 2007). The basic autologistic model was first proposed by Besag (1974), j ijji i iy P P01 ln where iP is the binomial probability of occu rrence (class erosion in our case), 0 is the intercept, ji measures the influence of the cla ss occurrence at a neighboring location j to the predicted probability in i and i is the binomially distributed error. Often, isotropy is assumed and jiis a coefficient based on a weighted neighborhood or autocovariate term. Augustin et al. (1996) modified the basic autologistic model to incorporate predictor variables x i kk i ixx P P i 110Autocov ... 1 ln where iAutocovat site i is defined as a weighted sum of ob servations in a neighborhood defined by iN (Miller et al., 2007): iNj jijyw Autocovi where ijw are weights for neighboring observations If the autocovariate coefficient = 0, we obtain the ordinary logistic model of Equation 81. Typi cally, autologistic models are applied to PAGE 122 122 grid or areal data structures. In our case, we are working with profile lines as opposed to a grid structure; therefore, our neighborhood is a serial structure and can consist of adjacent profiles. For our analysis, we chose a Markov transitiona l model that is a modification of the above form and only uses previous profile class memberships as autocovariat es (Vries et al., 1998), 1 1 1 1 110... 
1 lni j jij i i j jij kk i iy yxx P P x (812) where ] ,[1iii iyyEPx is the probability of occurrence for class erosion at profile i ixare the morphologic features incl uding intercept term, and j are the autocovariate terms measuring the influence of the previ ous profiles classesjiy on the current profiles class iy For j = 1, we get the familiar firstorder Markov transition model, 1 1 110... 1 ln i iikk i iy yxx P Px (813) where measures the influence of the previous profiles class on the current class. Other extensions of the Markov transitional model incl ude marginalized transitional models (e.g. Lee and Daniels (2007)). A firstorder Markov model of the form of Equation 813 was fit to Epoch 1 data using only the W V and DT features, which are three of the strongest individual predictors (see Chapter 6). Results of the coefficients and thei r significance are shown in Table 83. The most important observation we can make from the results is that the autocovariate term completely saturates the model rendering the other features nonsignificant as predictors. Similar behavior was observed when fitting a full model with all feat ures and for other epochs. This is because the correlation between adjacent pr ofiles is in excess of 0.9 for each epoch. Because of such strong correlation, the previous profiles class essentially pred etermines the current class. PAGE 123 123 Although the autologistic model provided a very high training accuracy of 0.98, we lose the ability to predict class erosion based on our morp hologic features as they become irrelevant in the presence of the autocovariate term. Another concern with the autologistic mode l is that in practice we will not know the previous profiles class as that must be predicted. When the autocovariate term is based on response values that are predicted rather than obs erved, fitting autologistic models requires some form of iteration. 
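Setting the iteration issue aside for a moment, assembling the lagged design matrix for a transitional model of the form of Equation 8-13 is itself straightforward; a hypothetical sketch (my own helper name), where the first profile is dropped because it has no predecessor:

```python
import numpy as np

def markov_design(X, y):
    """Augment a logistic design matrix with the lagged class label.

    X : (N, k+1) matrix of features (intercept column included), with the
        rows ordered alongshore so that row i-1 is the neighboring profile.
    y : (N,) binary class labels (1 = erosion).
    Returns (X_aug, y_trim): profile 1 is dropped since it has no
    predecessor, and y_{i-1} is appended as the autocovariate column.
    """
    X_aug = np.column_stack([X[1:], y[:-1]])
    return X_aug, y[1:]

# Tiny demonstration with 5 profiles and one feature plus an intercept
X = np.column_stack([np.ones(5), np.arange(5, dtype=float)])
y = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
X_aug, y_trim = markov_design(X, y)
# X_aug is (4, 3); its last column holds the previous profile's class
```

Once built, this augmented matrix can be fed to any ordinary logistic fitting routine, with the coefficient on the final column playing the role of gamma.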
Mostly this has been accomplished using the Gibbs sampler and Markov chain Monte Carlo based methods, which can become tedious and complex (Miller et al., 2007). Furthermore, Dormann (2007) compares autologistic parameter estimates to other methods using simulated data with correlation and concludes that the autologistic model consistently underestimates the effect of the parameters in the model, giving biased estimates. This is another concern with using an autologistic approach. We desire to analyze the joint effect of the features on the outcome of class erosion, as well as to test the ability of these features to predict the occurrence of class erosion. In the presence of correlation as strong as in our data, autologistic-based models will not allow us to achieve this goal, and another approach is needed.

Generalized Estimating Equations

Generalized estimating equations (GEEs) are an extension of GLMs for correlated observations that is specially suited to parameter estimation. GEEs were initially proposed by Liang and Zeger (1986) to handle longitudinal (i.e., time series) binary data collected on different subjects or clusters over time, but have more recently been applied to spatially correlated data. GEEs are referred to as marginal models because they model the mean (population-averaged) response at a site, E[y_i], as a function of the predictor variables, rather than the joint response of all sites simultaneously (Miller et al., 2007; Zorn, 2001; Gumpertz et al., 2000). In this regard, they allow explicit incorporation of spatial and temporal correlation without the need to specify the joint distribution for all observations or sites i.
By applying the GEE approach and incorporating spatial dependence, we hope to gain better estimates of our parameters than obtained with ordinary logistic regression. This will improve our unders tanding of the joint effect of the features on explaining the outcome of class erosion. Furthermore, we can then use the fitted GEE model to predict the mean probability of erosion at a profile given the feature values. Quasilikelihood estimation For ordinary logistic regre ssion we used the IRLS scheme of Equation 85 based on a maximum likelihood framework by assuming indepe ndent binomial observations. However, the assumption of independence is inappropriate due to the presence of correlation in our observations. Therefore, we can no longer dir ectly specify our joint distribution to apply maximum likelihood estimation. To get around this problem, GEEs use quasilikelihood estimation (Carl and Kuhn, 2007; Myers et al., 2002; Zorn, 2001). Quasilikelihood estimation stems from generalized least squares for the case wh ere responses are correlat ed. It exploits the fact that the score function i nvolves the distribution of the re sponse only through the first two moments. In addition, quasilikelihood produces asymptotic properties that are quite similar to those of maximum likelihood estimators. As a result, good efficiency can be obtained even when the likelihood is not known (Myers et al ., 2002). 
The quasi-likelihood method involves iteratively solving the following system of score equations for \beta,

D'V^{-1}(y - u) = 0 \qquad (8-14)

where y is our vector of N observations y_i for a specific data epoch, u is our vector of mean responses E[y_i] = P(x_i;\beta), which is the logistic model of Equation 8-1, and V is an N x N covariance matrix whose ith diagonal element is \mathrm{var}(y_i) = P_i(1 - P_i). D is an N x (k + 1) matrix of partial derivatives,

D = \frac{\partial u}{\partial \beta} = AX \qquad (8-15)

where the (i, j) element of D is \partial P_i / \partial \beta_j for i = 1 to N and j = 1 to k + 1 features, A is an N x N diagonal matrix of the binomial variances \mathrm{var}(y_i) = P_i(1 - P_i), and X denotes the N x (k + 1) matrix of feature values x_i including a column of ones for the intercept.

Working correlation matrix

The covariance matrix V is decomposed into the following form,

V = \phi\, A^{1/2} R A^{1/2} \qquad (8-16)

where \phi is a scale parameter, A is the same as above and split into two square roots for formal reasons, and R is referred to as the working correlation matrix. R allows the user to impart a correlation structure. One can either directly specify this matrix, such as via an exponential variogram for spatial data, or one can allow the GEE framework to estimate it directly from the data. Common correlation structures for R include (Myers et al., 2002; Ratcliffe and Shults, 2008):

1. Independent. R is set to the identity matrix. All correlations are assumed zero and we get back the ordinary generalized linear model.

2. Unspecified. All correlations are independently estimated from the data.

3. Exchangeable. All correlations within subjects are equal.

4. First-order autoregressive, AR(1). The correlation among repeated measurements on a subject depends on their separation in the order of measurement, \mathrm{Corr}(y_j, y_k) = \rho^{|j-k|}.

5. Markov. The correlation among repeated measurements on a subject depends on the timing of the measurements, \mathrm{Corr}(y_j, y_k) = \rho^{|t_j - t_k|}.
This structure generalizes the AR(1) structure to allow for unequal spacing of the measurements.

The chosen correlation structure should be appropriate for the data type, as this leads to more efficiency. However, the GEE method provides a consistent estimator of the parameters \beta even if the specified correlation structure is incorrect (Myers et al., 2002; Zorn, 2001; Liang and Zeger, 1986). This robustness is a very powerful property, because we do not have to worry too much about the detailed structure of R. In most practical cases we will not know the true form and must rely on the consistency of the GEE method. In our case, we are dealing with profiles that are spatially separated by 5 m but are sequential in their relative orientation to each other. Therefore, a correlation structure suitable for longitudinal data can be applied. The Markov structure is employed because for certain epochs the nourishment zone around the pier is excluded, which results in unequal profile spacing. In cases where the pier zone is not excluded, all profiles are equally spaced and the Markov structure is equivalent to AR(1).

Estimation algorithm

The score equations of Equation 8-14 are nonlinear in \beta. As before with the maximum likelihood score equations for logistic regression, we linearize Equation 8-14 and apply a Newton-Raphson scheme (Myers et al., 2002),

\beta^{new} = \beta^{old} + \left(D_0' V_0^{-1} D_0\right)^{-1} D_0' V_0^{-1} (y - u_0) \qquad (8-17)

where \beta^{old} is the current estimate of \beta, and D_0, V_0, u_0 are as defined above but evaluated at the current parameter estimate \beta^{old}. Substituting Equation 8-15 and Equation 8-16 into Equation 8-17 we get

\beta^{new} = \beta^{old} + \left(X'A_0^{1/2}R_0^{-1}A_0^{1/2}X\right)^{-1} X'A_0^{1/2}R_0^{-1}A_0^{-1/2}(y - u_0) = \left(X'A_0^{1/2}R_0^{-1}A_0^{1/2}X\right)^{-1} X'A_0^{1/2}R_0^{-1} z \qquad (8-18)

where z = A_0^{1/2}X\beta^{old} + A_0^{-1/2}(y - u_0) is the adjusted response. This form is the logistic GEE equivalent of the iteratively reweighted least-squares method of Equation 8-5 for ordinary logistic regression.
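As an illustrative sketch (my own toy implementation, not the dissertation's code in Appendix B), the Equation 8-18 update for a single serially ordered cluster, with an AR(1) working correlation whose parameter is re-estimated from the Pearson residuals at every pass, might look like:

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def fit_gee_logistic(X, y, n_iter=15):
    """IRLS-GEE sketch for one serially ordered cluster of binary responses."""
    n = len(y)
    beta = np.zeros(X.shape[1])
    # Initial estimate from an ordinary logistic (Newton-Raphson) fit
    for _ in range(8):
        p = sigmoid(X @ beta)
        w = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    rho = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ beta)                  # means u and variances A
        s = np.sqrt(p * (1.0 - p))             # diagonal of A^(1/2)
        r = (y - p) / s                        # Pearson residuals (Eq. 8-9)
        # Moment estimate of the lag-1 correlation parameter, clipped for safety
        rho = float(np.clip(np.sum(r[1:] * r[:-1]) / np.sum(r * r), -0.95, 0.95))
        R = rho ** lag                         # AR(1) working correlation matrix
        M = s[:, None] * X                     # A^(1/2) X
        H = M.T @ np.linalg.solve(R, M)        # X' A^(1/2) R^-1 A^(1/2) X
        g = M.T @ np.linalg.solve(R, (y - p) / s)
        beta = beta + np.linalg.solve(H, g)    # the Equation 8-18 update
    return beta, rho

# Toy data: independent responses, so rho-hat should land near zero and the
# GEE fit should agree closely with ordinary logistic regression.
rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = (rng.random(300) < sigmoid(0.5 + 2.0 * x)).astype(float)
X = np.column_stack([np.ones(300), x])
beta_hat, rho_hat = fit_gee_logistic(X, y)
```

Note how the scale parameter never appears: as the text observes, it cancels out of the update, so only the correlation parameter needs to be estimated alongside the coefficients.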
We will refer to it as IRLS-GEE. Note that the scale parameter \phi cancels out in the formulation. The GEE estimation algorithm is outlined as follows:

1. Obtain an initial estimate \beta^{old}. Generally the parameters estimated from an ordinary logistic model are used.

2. If R is not directly specified, use \beta^{old} to obtain residuals and estimate R_0 based on the chosen correlation structure.

3. Use \beta^{old} to estimate A_0 and u_0.

4. Use Equation 8-18 to obtain the new estimate \beta^{new}.

5. Substitute \beta^{new} for \beta^{old} and go back to Step 2.

6. Repeat until convergence.

R is estimated using a consistent estimator of the variance-covariance structure for the specified form based on the residuals, usually either deviance or Pearson residuals. In our case, Pearson residuals (Equation 8-9) were computed and used to estimate a correlation parameter, which in turn was used to estimate the Markov correlation matrix R at each step of the algorithm. In the general case, the above GEE estimation equations can be extended to handle correlation across multiple subjects or clusters. For more details on GEE estimation and the correlation matrix refer to Liang and Zeger (1986) and Myers et al. (2002). For details on implementation refer to Appendix B.

Properties of GEE estimate

The properties of the quasi-likelihood estimator resemble those of the maximum likelihood estimator for logistic regression. If the correlation structure is correct, the estimator is asymptotically normal with variance-covariance matrix (Myers et al., 2002; Zorn, 2001)

\mathrm{var}(\hat\beta) = \left(D'V^{-1}D\right)^{-1}

Unlike the GEE estimator of \beta itself, however, this \mathrm{var}(\hat\beta) is no longer a consistent estimator if the correlation structure is incorrect. Therefore, one generally uses a robust empirical estimator of the variance-covariance matrix; such estimators are often referred to as sandwich estimators. The following estimator is from Liang and Zeger (1986):

\mathrm{var}(\hat\beta) = \left(D'V^{-1}D\right)^{-1} \left[ D'V^{-1}(y-u)(y-u)'V^{-1}D \right] \left(D'V^{-1}D\right)^{-1} \qquad (8-19)

The empirical estimator is appropriate for most cases and is used most often in practice.
From the diagonals of Equation 8-19, the standard errors can be used for inference and to compute confidence bounds for the parameters,

\hat\beta_k \pm z_{\alpha/2}\,\mathrm{s.e.}(\hat\beta_k)

Results

GEE models were fit to each epoch individually to evaluate the joint effect of the features on the probability of experiencing erosion. Figure 8-4 shows the large difference between the coefficients estimated using ordinary logistic regression and those from the GEE approach accounting for correlation. Table 8-4 lists the estimated coefficients for all features by epoch, excluding the intercept. The features were standardized to allow relative comparisons. The asterisks indicate features whose confidence bounds include zero, indicating potential non-significance given the other features in the model. For Epochs 3, 4, and 6, several features were found non-significant, whereas for Epochs 1 and 5 all features were significant. Interpretation of these results is difficult due to high collinearity among the features themselves. The fit of the model was measured using the Brier score,

B(P) = E\left[(y - P(x))^2\right] = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat P(x_i)\right)^2

The Brier score ranges between 0 and 1, with 0 being a perfect fit and 1 the worst possible fit. The values in Table 8-4 range between 0.13 and 0.18, indicating the models fit the training data fairly well. For example, the Brier score of 0.14 for Epoch 5 corresponded to a training accuracy (percent of training data correctly classified) of approximately 84%. Although the relation between the Brier score and classification accuracy is nonlinear, the average training accuracy across all epochs was around 80%. The most positive and most negative coefficients for each epoch are shown in bold in Table 8-4. For Epoch 1, W is most influential on the outcome of class erosion, where a 1\sigma increase in W produces, on average, a 1.70 increase in the log odds of class erosion.
Put differently, given a 1σ increase in W at some profile i, with all other features held constant, the odds P_i/(1 − P_i) of that profile experiencing erosion become e^{1.70} ≈ 5.4739 times greater. In contrast, S is the most negative coefficient, indicating that a large increase in slope reduces the odds of erosion. These results make sense considering that the shoreline reference is based on a tidal datum elevation. A very steep beach would be less likely to experience large landward movement of this elevation contour than a gently sloping beach. Furthermore, beaches with large widths tend to experience more erosion relative to less erosive regions because the natural tendency is for the shoreline to erode back and retain some form of quasi-equilibrium.

The majority of the top influential features in Table 8-4 agree well with the individual rankings presented in Table 6-4 of Chapter 6. However, two major differences are the negative coefficients of DT and the strong influence of O for Epoch 3 and Epoch 4. Individually, DT ranked very high as a feature based on correlation and information-theoretic measures (see Table 6-4). DT is expected to have a positive influence, but in this case DT exerts a negative influence for most epochs. Recall that for a GEE the coefficient estimates are based on the joint effect estimated from the data, taking correlation of the responses into account. On its own, DT remains strongly influential and with positive sign, but in the presence of other correlated features, DT's influence on the observations and its importance change. In this case, W is capturing most of the behavior generally captured by DT, as is evident from their high correlation coefficient (ρ > 0.80). For Epoch 6, which only covers the northern, natural portion of the beach, DT is the most influential feature. This change in behavior is due to the large difference between W and DT in this region.
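The two scalar summaries used in this section, the Brier score for training fit and the odds-ratio reading of a standardized coefficient, reduce to a few lines. A minimal sketch (the function names are our own):

```python
import numpy as np

def brier_score(y, p):
    """Brier score B(P): mean squared difference between the binary outcome
    y (0/1) and the predicted probability p; 0 is a perfect fit, 1 the worst."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.mean((y - p) ** 2)

def odds_multiplier(beta, delta=1.0):
    """Multiplicative change in the odds P/(1-P) for a `delta` standard
    deviation increase in a standardized feature with coefficient beta."""
    return np.exp(beta * delta)

# The Epoch 1 width coefficient of 1.70 multiplies the odds by about 5.47.
ratio = odds_multiplier(1.70)
```

A classifier that always outputs 0.5 scores a Brier value of 0.25 regardless of the outcomes, which gives a feel for the 0.13 to 0.18 range reported above.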
The other important observation in Table 8-4 is the strong, positive influence of O in Epochs 3 and 4. Individually, O performed worst overall as a feature (see Table 6-4), but in the presence of other features its influence changes. Originally, it was expected that orientation would have some influence on the erosion process because it impacts the relative breaking wave angle. The poor individual performance was hypothesized to be an effect of scale, in that the differences in orientation were not prominent enough at the localized scale we are exploring with lidar. However, the results for O in Table 8-4 support the notion that orientation can be an influential feature at local scales when coupled with other features. To speculate, this behavior may be more a mathematical byproduct than a physical effect. Individually, the variation in orientation values over the entire stretch of beach is in general smaller than that of the other features, except in a few regions near the pier. Therefore, its ability to separate regions of beach on its own is not as strong compared to other features. However, in the presence of all the features its influence may increase due to collinearity among the features. Because some of the measures, such as W and DT, are highly collinear, the amount of information they carry on the class may decrease in the presence of each other. In these cases, it is possible that subtle variations in O start to become more important in determining the probability of erosion. This may also have a physical explanation, in that the impacts of local orientation on the relative breaking wave angle start to manifest when evaluating the joint effect of the features.

To provide a better example of the utility of the GEE logistic model for beach characterization, Figure 8-5 shows estimated probability of erosion as a function of orientation (degrees) for Epoch 3. Four plots are displayed: change in probability vs.
O given the mean values of all other features; change in probability vs. O given a 2 standard deviation increase in S (σ_S = 0.01 m/m); change in probability vs. O given a 2 standard deviation decrease in S; and change in probability vs. O given a 2 standard deviation increase in both S and DT (σ_DT = 82.21 m). Recall that orientation is measured as an azimuth, clockwise relative to north. From the plot, we observe that as the beach approaches a more east to southeasterly orientation, its probability to erode increases. Furthermore, when both slope and deviation from trend increase, the probability to erode decreases for the same orientation value. For example, an orientation of 176 degrees results in an estimated probability of class erosion of ~0.80 given mean values for each feature. But if slope and deviation from trend increase by 2 standard deviations, an orientation of ~184 degrees is required to obtain an equivalent probability of erosion of 0.80.

Plots such as Figure 8-5 can be very useful for characterizing the behavior of the beach. One potential application is in beach nourishment design. A logistic model trained on beach nourishment data can be used to simulate nourishment design and provide expected probabilities of erosion given different design characteristics. Additionally, recall that Figure 8-5 is based on a linear predictor of the log-odds. By applying a GAM approach, the nonlinear effect of x on the outcome probability can be examined using similar plots.

Penalized Estimation

To address spatial autocorrelation in our logistic regression, we used a GEE approach. This provided a better model for evaluating the joint effect of the features on the outcome of class erosion. Now, our objective is to test the predictive power of the GEE model for classifying a profile as an erosion or accretion zone given the feature values. But there are two potential issues that could lead to reduced prediction performance.
First, although the GEE estimator is asymptotically consistent, the high correlation (collinearity) among the features raises concerns about the prediction accuracy of models based on these estimates. Collinearity in our design matrix X can lead to ill-conditioning in the quasi-likelihood solution, increasing the variances of our parameter estimates (Barker and Brown, 2001). In return, this can result in an unstable model with reduced prediction ability.

The second main concern is model overfitting. Overfitting occurs when models include more predictors than necessary or the estimated coefficients fit too strongly, resulting in a model that fits the training data well but cannot generalize. This is a common occurrence in regression-based models when they are used for prediction. Overfit models tend to perform poorly in prediction because the pattern in the data is misidentified due to interpolation at given data points with many unnecessary predictors in the model (Fu, 1998a).

Penalization is a technique that was originally used to provide solutions to ill-conditioned inverse problems. For our purposes, penalization methods can be used to address collinearity and overfitting in our GEE model. The basic idea is to place a constraint (penalty) on the coefficients during estimation. Under collinearity, a large positive coefficient on one variable can be cancelled by a similarly large negative coefficient on its correlated cousin. By imposing a size constraint on the parameters, this phenomenon can be prevented (Hastie et al., 2001). Because mean square error is composed of variance and squared bias, penalization methods strive to reduce the variance at the cost of additional bias. The reduction in variance should enable better prediction. Through generalized cross-validation, the degree of penalization can be determined to reduce model overfitting and improve prediction performance.
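The collinearity phenomenon described above is easy to reproduce. The sketch below uses the ridge (L2) case for simplicity, with invented toy data: two nearly identical predictors let ordinary least squares pick a large positive/negative coefficient pair, while a size constraint pulls the estimates back toward a stable solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # nearly collinear copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)        # truth depends on x1 only

# Unpenalized OLS: the ill-conditioned X'X lets a huge +/- coefficient
# pair cancel, inflating the estimates.
ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: the same normal equations with a size constraint (lambda * I)
# added, shrinking the pair toward the stable ~0.5/0.5 split.
lam = 1.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
```

The Lasso constraint used in the next subsection behaves similarly but can also set coefficients exactly to zero.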
Lasso Regression

There are several penalization methods to select from, including the family of bridge penalty estimators, which constrain

Σ_{j=1}^{k} |β_j|^γ ≤ t,  with t ≥ 0 and γ > 0 (Fu, 2003).

The most popular member of this family is ridge regression, which places an L2 constraint, γ = 2, on the parameters (see Hastie et al. (2001)). Tibshirani (1996) introduced the Least Absolute Shrinkage and Selection Operator (Lasso), which places an L1 constraint, γ = 1, on the parameters. The Lasso penalty distinguishes itself by shrinking the estimator to obtain better estimation and prediction performance, and by setting certain coefficients to zero depending on the level of shrinkage and the nature of the data (Fu, 1998a,b; Fu, 2003). This last property is very attractive because the Lasso provides an automatic subset selection method through constrained optimization. Figure 8-6 shows the difference in constraint geometry between Lasso and ridge regression. The Lasso solution is the first place that the contours touch the square, and this will sometimes occur at a corner, corresponding to a zero coefficient (Figure 8-6A). For ridge regression (Figure 8-6B), there are no corners for the contours to hit, and hence zero solutions will rarely occur (Tibshirani, 1996).

For logistic regression, the Lasso can be applied to the model deviance,

β̂_lasso = argmin_β { −2[y' ln(u) + (1 − y)' ln(1 − u)] }  subject to  Σ_j |β_j| ≤ t

or, equivalently in Lagrangian form, with the penalty λ Σ_j |β_j| added to the deviance, where λ is referred to as the tuning parameter or shrinkage factor (Fu, 1998a,b). But in our case we are dealing with a GEE, which is based on the quasi-likelihood score functions. Fu (2003) proves that the Jacobian of the GEE estimator is positive semidefinite and therefore the penalized quasi-likelihood score provides a unique estimator.
Hence, we can penalize the quasi-likelihood score equations as follows:

β̂_lasso = {β : D'V^{-1}(y − u) − λ sign(β) = 0}    (8-20)

with the penalty term λ sign(β_j) applied to each of the j = 1, ..., k score equations. Because of the absolute value and the requirement of tracking the sign of the coefficients, the resulting estimates are nonlinear in y and cannot be obtained by direct minimization. Tibshirani (1996) used a quadratic programming technique to obtain estimates of β for ordinary least-squares regression using 2^k inequality constraints, corresponding to the 2^k different possible signs of the β_j's. We employ a simpler method developed by Fu (1998b; 2003), called the shooting method, to apply Lasso penalization to the GEE estimator.

Shooting Method

The shooting method is a cyclic coordinate descent algorithm developed for Lasso regression in the ordinary least-squares (OLS) case. Cyclic coordinate descent algorithms cycle through all parameters and update each parameter in turn (Wu and Lange, 2008). The shooting method is applied directly to the OLS score equations,

S_j(β_j, β_{−j}, x, y) = 2 x_j'x_j β_j − 2 x_j'(y − X_{−j} β_{−j})    (8-21)

where β_{−j} denotes all parameters except β_j. In the Lasso case, for each of the parameters there are separate solutions to the left and right of 0. The basic idea behind the algorithm is to use the directional derivatives to "shoot" in the direction of the slope. Theoretically, it stems from taking the limit of the bridge estimator analytically as γ approaches 1. The algorithm is as follows (Fu, 1998b):

1. Start with β^0 = (β̂_1, ..., β̂_k), the initial estimate from ordinary least squares.
2. At step m, for each j = 1, ..., k, let S_0 = S_j(0, β_{−j}, x, y) and set

   β_j = (λ − S_0) / (2 x_j'x_j)     if S_0 > λ
   β_j = (−λ − S_0) / (2 x_j'x_j)    if S_0 < −λ
   β_j = 0                           if |S_0| ≤ λ

   where x_j is the j-th column vector of the design matrix X. Form a new estimator β^m = (β_1, ..., β_k).
3. Repeat Step 2 until β^m converges.

Note that penalization is placed only on the k predictor variables, not the intercept. Fu (1998b) suggests mean-centering and standardizing the predictors prior to estimation to avoid shrinkage of the intercept.
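The shooting algorithm above translates almost line for line into code. This is a hedged sketch for the OLS case (the function name is ours; it assumes centered, standardized predictors and a centered response so the intercept can be omitted):

```python
import numpy as np

def shooting_lasso(X, y, lam, n_iter=100, tol=1e-8):
    """Fu's shooting method: cyclic coordinate descent on the OLS score
    equations with an L1 penalty `lam`.  Columns of X are assumed
    mean-centered and standardized, and y centered."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # Step 1: OLS start
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(k):                        # Step 2: update each beta_j
            xj = X[:, j]
            r_j = y - X @ beta + xj * beta[j]     # partial residual excluding j
            S0 = -2.0 * xj @ r_j                  # S_j evaluated at beta_j = 0
            if S0 > lam:
                beta[j] = (lam - S0) / (2.0 * xj @ xj)
            elif S0 < -lam:
                beta[j] = (-lam - S0) / (2.0 * xj @ xj)
            else:
                beta[j] = 0.0                     # soft-thresholded to zero
        if np.max(np.abs(beta - beta_old)) < tol: # Step 3: convergence
            break
    return beta
```

Each coordinate update is exactly the three-branch rule in Step 2; a large enough `lam` drives every coefficient to zero, which is the subset-selection behavior noted above.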
To apply the shooting method to our case, we can rearrange Equation 8-18 of the IRLS GEE method to resemble the OLS score equations of Equation 8-21. Applying a Cholesky factorization to R_0^{-1},

R_0^{-1} = S_0'S_0,  S_0 = cholesky(R_0^{-1})

we obtain the following form for Equation 8-18:

β_new = (X'A_0^{1/2} S_0'S_0 A_0^{1/2} X)^{-1} X'A_0^{1/2} S_0'S_0 A_0^{1/2} (X β_old + A_0^{-1}(y − u_0))
      = (X'A_0^{1/2} S_0'S_0 A_0^{1/2} X)^{-1} X'A_0^{1/2} S_0'S_0 z    (8-22)

where z = A_0^{1/2} X β_old + A_0^{-1/2}(y − u_0). Equation 8-22 can be reformulated as a score equation,

2 p'p β_new − 2 p'w = 0    (8-23)

where p = S_0 A_0^{1/2} X and w = S_0 z. Equation 8-23 can then be split into separate functions for each of the k parameters β_j to resemble Equation 8-21:

S_j(β_j, β_{−j}, p, w) = 2 p_j'p_j β_j − 2 p_j'(w − P_{−j} β_{−j})    (8-24)

The shooting method can then be applied directly to Equation 8-24. The Lasso GEE penalization algorithm is outlined as follows:

1. Start with β^0 = (β̂_1, ..., β̂_k), the initial estimate from the GEE estimator.
2. Use β^0 to estimate A_0, u_0, and R_0.
3. Apply the shooting method using the score equations of Equation 8-24.
4. Repeat Steps 2 and 3 until convergence.

For details on implementation, refer to Appendix B.

Prediction Results

In this section we evaluate the ability of the models to classify zones more or less prone to erosion based solely on variation in the beach morphology. This provides a better understanding of how well the classifiers are able to describe the patterns in erosion alongshore. Because of the differences in temporal span, seasonal periods, and wave climate covered by each epoch, we restrict our classification to each individual epoch without training across epochs. Therefore, our classification problem can be stated as follows: given some training data taken from part of the beach, we want to predict (classify) locations more likely to erode at other parts of the beach. An example of this type of problem in a practical situation is incomplete coverage.
We might have an initial lidar survey that covers an entire stretch of beach, but during the return survey only a portion of the region is mapped. A classifier could be trained on the covered portion and used to predict zones of high erosion in the non-surveyed region. Provided both regions experience similar dynamics, potentially accurate classification can be obtained. Another scenario might be to use manually surveyed profiles at low resolution, coupled with an initial lidar data set, to train a classifier for estimating changes between the manual profiles. Techniques such as these could enable more accurate estimation of beach change.

To test the logistic GEE and Lasso penalized GEE for classification, the following approach was implemented:

1. For each epoch, take a random training sample of 30% of the profiles. Use the other 70% of profiles as a test set.
2. Fit a GEE model using the training data.
   a. Apply a Markov correlation structure to ensure that unequal spacing between profiles is accounted for when estimating the working correlation matrix R.
3. Use the estimated GEE coefficients and apply Lasso penalization.
   a. Select the optimal size for the tuning parameter λ using Fu's (2003) quasi-generalized cross-validation (QGCV) criterion for penalized GEEs.
      i. Run a grid search for λ = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 5, 7, 9, 15, 30]. Select the λ with the lowest QGCV score.
4. Classify the i = 1, ..., N profiles in each test set for each epoch using the GEE and penalized GEE models, and compute the mean classification accuracy (MCA),

   MCA = (1/N) Σ_{i=1}^{N} I(ŷ_i = y_i),  where I = 1 if ŷ_i = y_i and 0 otherwise

   y_i is the actual class, and ŷ_i is the class with maximum a posteriori probability.
5. Repeat the simulation 20 times.

Table 8-5 shows the average GEE and Lasso penalized coefficient values for Epochs 2 and 3 to provide a comparison between large and small shrinkage.
Epoch 2 underwent an average penalization of λ_avg ≈ 0.5, compared to Epoch 3, which underwent an average penalization of λ_avg ≈ 13.2. As a result, Epoch 3 experienced much greater shrinkage in its coefficient values relative to the GEE estimates, and some were set to zero. For example, S was the strongest negative coefficient for the GEE model, but under penalization its coefficient was shrunk to negligible influence, whereas O remained a strong predictor. Furthermore, the efficiency (standard deviation) of the Lasso parameter estimates for Epoch 3 was much better compared to the GEE model. However, for Epoch 2 the efficiency of the Lasso estimator was slightly worse than the GEE estimates. This could be due to the cross-validation method for selection of the tuning parameter. Potentially, a higher-resolution grid search could yield more efficiency for Epoch 2, but the computational cost increases polynomially.

Table 8-6 presents the mean classification accuracy (MCA) for each epoch and the overall average MCA. The nonparametric naive Bayes classifier (see Chapter 7) based on all features was implemented for comparison, and its results are presented. Comparing the GEE and penalized GEE results, the Lasso penalization added a slight gain in classification accuracy for most epochs, with an average gain of 1%. This indicates that the GEE model was not overfitting the data as much as expected; however, this could be due to the small training set of only 30% of the data. Epoch 2 saw the greatest increase in performance between the two, of ~2.5%. As the training size increases, the improvement from Lasso penalization is expected to increase. Furthermore, a higher-resolution grid search for the tuning parameter could result in increased performance. In addition, confusion matrices were computed for the Lasso GEE to examine potential bias in the misclassification error.
The classifier exhibited minimal differences between average false positives and average false negatives for each epoch.

Comparing all classifiers, the logistic GEE and Lasso penalized GEE significantly outperformed the nonlinear naive Bayes classifier on average. Naive Bayes did outperform them for Epochs 3 and 5 but performed very poorly for Epochs 2 and 6. The poor performance for Epoch 6 could be due to the reduced training size and the inability to effectively estimate the pdfs given the small data size. The results indicate that a linear classification approach was sufficient for this data compared to the nonlinear approach of naive Bayes. The lower performance of naive Bayes is likely a product of the generative framework and the assumption of conditional independence between features, which disregards the correlation in the feature data. In addition, difficulties in nonparametric estimation of the class-conditional pdfs and in bandwidth selection with limited training data can reduce performance.

Overall, the results are impressive considering that classification is based solely on change in morphology, with no direct inclusion of bathymetry, waves, currents, or other forcing. The highest classification rate is 86%, obtained for Epoch 4 using the Lasso GEE. In addition, it is expected that a GEE with a nonlinear predictor in x, such as a logistic GAM, can obtain even better classification performance and further dominate the naive Bayes classifier. The relatively high classification rates suggest that subtle variations in morphology for this beach are influential in the observed outcome of erosion alongshore. Furthermore, they demonstrate the ability of the logistic GEE approach to effectively model patterns in beach morphology and probability of erosion for each epoch.

Figure 8-1. Fitted logistic curves. A) Epoch 1 model fit. B) Epoch 3 model fit. x is the estimated linear predictor x_i'β for each model.

Figure 8-2.
Change in orientation from north to south alongshore for October 2005. The region between the dashed lines is the nourishment exclusion zone, which was extended to exclude the data causing convergence issues. [Figure 8-1 axes: linear predictor x vs. P(Y = Erode); Figure 8-2 axes: Northing (m) vs. Orientation (degrees), with the zone causing convergence problems marked.]

Figure 8-3. Autocorrelation plot of standardized Pearson residuals for a logistic model fit to Epoch 1 data. A lag of 1 is equivalent to the profile spacing of 5 m. The dashed lines represent 0.05 confidence bounds to test for randomness, meaning we expect only 5% of the autocorrelations to fall outside these lines if the residuals are independent. It is evident that there is high autocorrelation in the residuals, violating the assumptions of logistic regression.

Figure 8-4. Difference in coefficient values for each feature by epoch, estimated using a GEE with autocorrelation and using ordinary logistic regression. The coefficients are standardized. Overall, ordinary logistic regression substantially overestimates coefficient values and fluctuates in estimating the sign of their influence (positive or negative contribution). Results based on ordinary logistic regression would lead to incorrect conclusions about feature influence on the outcome of erosion because it does not account for autocorrelation in the observations. Note that standard deviation (SD) was left out of the plot.
[Figure 8-4 panels: standardized coefficient value vs. epoch for GEE and LOGIT, for Width, Volume, Deviation from Trend, Max Gradient, Nearshore Slope, Orientation, Mean Gradient, Mean Curvature, and Slope.]

Figure 8-5. Estimated probability of erosion vs. orientation (degrees) for Epoch 3 using a logistic GEE. Four separate lines are displayed for different predictor values: the mean value for each feature, a 2 standard deviation increase in S, a 2 standard deviation decrease in S, and a 2 standard deviation increase in S and DT. Overall, as beach orientation approaches a more easterly to southeasterly direction, the probability of erosion increases.

Figure 8-6. Difference in constraint geometry between Lasso with the L1 penalty and ridge with the L2 penalty. A) Lasso. B) Ridge.

Table 8-1. Logistic regression model fit to Epoch 1. Coefficients are for standardized features. β = coefficients, √var(β̂) = standard errors, p = p-value.

Table 8-2. Comparison of the estimated coefficients for the full model, the AIC-selected model, and the BIC-selected model for Epoch 1.

Table 8-3. First-order Markov model fit to Epoch 1 using three of the strongest individual predictors. The autocovariate term saturates the model, rendering the other features nonsignificant.
Table 8-1 data:

Feature   β      √var(β̂)   p         feature
W         6.58   0.64      0.00000    32.54
V         1.98   0.34      0.00000     0.52
DT        4.15   0.33      0.00000   137.52
MG        4.44   0.62      0.00000     0.13
NS        0.21   0.63      0.00000     0.09
O         4.75   0.86      0.72978    25.20
G         3.85   0.46      0.00000     0.01
C         1.24   0.30      0.00000     0.001
SD        0.88   0.46      0.00003     0.22
S         4.48   0.70      0.05583     0.01

Table 8-2 data:

Feature   Full Model   AIC    BIC
W         6.58         6.52   7.12
V         1.98         1.98   1.80
DT        4.15         4.17   4.08
MG        4.44         4.34   4.37
NS        0.21         -      -
O         4.75         4.86   4.97
G         3.85         3.80   3.45
C         1.24         1.29   1.10
SD        0.88         0.95   -
S         4.48         4.45   3.12

Table 8-3 data:

Feature         β       p-value
W               0.025   0.106
V               0.343   0.715
DT              0.001   0.722
Autocovariate   9.229   0.000

Table 8-4. Logistic regression coefficients for each standardized feature and epoch, estimated with spatial autocorrelation using a GEE. ρ is the estimated correlation, and training fit is based on the Brier score. Asterisks (*) indicate potentially nonsignificant features that have zero within their upper and lower confidence bounds. The bounds were estimated using the empirical variance estimator but are not shown. Numbers in bold are the largest-magnitude positive and negative coefficients.

Table 8-5. GEE and penalized GEE coefficient averages and their deviations for Epochs 2 and 3. Results based on 20 simulations. Epoch 2 had a much smaller average shrinkage factor (λ_avg = 0.50) compared to Epoch 3 (λ_avg = 13.2).

Table 8-6. Mean classification accuracy (MCA) for each epoch using GEE, penalized GEE, and nonparametric naive Bayes. Results based on 20 simulations.
Table 8-6 data (MCA = # correct / total):

Data set   GEE    Lasso GEE   Bayes
Epoch 1    0.75   0.77        0.73
Epoch 2    0.80   0.80        0.66
Epoch 3    0.75   0.76        0.81
Epoch 4    0.85   0.86        0.74
Epoch 5    0.79   0.80        0.82
Epoch 6    0.79   0.80        0.60
Average    0.79   0.80        0.73

Table 8-4 data:

Feature        Epoch 1   Epoch 2   Epoch 3   Epoch 4   Epoch 5   Epoch 6
W              1.70      2.76      0.47      0.01      1.38      0.60*
V              0.95      1.80      0.21      0.07*     0.02      0.63
DT             0.74      0.93      0.49      1.12      0.90      1.55
MG             0.15      0.18      0.08*     0.12      0.82      0.30*
NS             0.09      0.13      0.18*     0.01      1.98      0.06*
O              0.29      0.27      1.38      1.18      0.17      0.93*
G              0.71      0.26*     0.09      0.19*     0.66      0.41
C              0.12      0.03*     0.05      0.03      0.10      0.74*
SD             0.14      0.07*     0.01      0.01*     0.01      0.03
S              1.47      0.12*     0.49*     0.40      0.87      1.19
ρ              0.97      0.91      0.95      0.96      0.95      0.91
Training fit   0.16      0.14      0.18      0.14      0.16      0.13

Table 8-5 data (for each epoch: mean GEE coefficient, mean Lasso coefficient, GEE σ², Lasso σ²):

           Epoch 2                              Epoch 3
Feature    GEE     Lasso   GEE σ²  Lasso σ²     GEE     Lasso   GEE σ²  Lasso σ²
W          4.900   4.458   0.159   0.545        0.552   0.002   0.051   0.000
V          4.045   3.603   0.135   0.432        0.786   0.001   0.055   0.000
DT         1.062   1.012   0.002   0.010        0.265   0.001   0.007   0.000
MG         0.240   0.199   0.007   0.010        0.426   0.015   0.013   0.003
NS         0.552   0.489   0.008   0.013        0.186   0.035   0.004   0.001
O          0.109   0.072   0.003   0.003        1.227   0.747   0.002   0.037
G          0.098   0.087   0.038   0.020        0.118   0.000   0.036   0.000
C          0.073   0.062   0.003   0.001        0.163   0.000   0.016   0.000
SD         0.128   0.122   0.027   0.022        0.090   0.011   0.008   0.000
S          0.594   0.429   0.100   0.116        1.137   0.048   0.097   0.009

CHAPTER 9
CONCLUSION

Discussion

Airborne Laser Swath Mapping (ALSM) has revolutionized coastal monitoring, enabling submeter sampling of nearshore topography over long segments of coastline. The high-resolution spatial information contained in the data can reveal patterns in beach change, but coastal monitoring with lidar has mainly been restricted to measures of shoreline and volume change extracted from DEMs. To improve our ability to understand and monitor beach change with lidar, enhanced methods are needed to extract information and reveal patterns in the data. Data mining and statistical learning based approaches offer great potential.
The focus of this research was to explore ideas from these fields and develop methods that more fully exploit the high-resolution information in repeated lidar measurements for improved beach change monitoring. Results were based on ALSM data acquired along a beach in Florida seven times between August 2003 and February 2007. The multiple acquisitions through time provided a unique opportunity to capture and study the alongshore variation in beach topography at high resolution and at different temporal scales. Through automated profile sampling, several different cross-shore features (morphologies) were extracted from the data, many of which are novel measures created to characterize changes in profile morphology alongshore. Deviation-from-trend is the orthogonal distance the shoreline deviates from the natural trend of the beach and was created to capture the deviation of the highly erosive pier region. Max gradient was created to capture high berm regions. Mean gradient is a measure that captures the variation in width and berm formation; relatively high magnitudes indicate narrow sections of beach with large berm formation. Mean curvature is similar in behavior to mean gradient, but it captures differences in profile uniformity. Nearshore slope measures the slope in the foreshore zone. Orientation is the angle in degrees that the localized shoreline makes relative to azimuth. Other features, such as beach slope, volume-per-width, and width, are commonly used measures to quantify beach dynamics and were expected to relate more directly to the sediment processes governing the region.

After the features were extracted, the profiles were segmented into binary erosion or accretion classes based on measured shoreline change following the ALSM acquisition. This enabled estimation of class-conditional pdfs for each feature and epoch. An information-theoretic approach using measures of pdf divergence was applied to rank each feature's class separability.
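To make the first of these features concrete, a deviation-from-trend style measure can be sketched as follows. This is an illustration under our own simplifying assumptions (a straight-line trend fit by least squares; the dissertation's actual definition may differ in detail):

```python
import numpy as np

def deviation_from_trend(easting, northing):
    """Illustrative deviation-from-trend feature: fit a straight line
    (the beach's natural trend) to shoreline positions and return each
    point's signed orthogonal distance from that line."""
    easting = np.asarray(easting, float)
    northing = np.asarray(northing, float)
    # least-squares trend line: northing ~ slope * easting + intercept
    A = np.column_stack([easting, np.ones_like(easting)])
    slope, intercept = np.linalg.lstsq(A, northing, rcond=None)[0]
    # signed orthogonal distance from each shoreline point to the trend
    return (northing - (slope * easting + intercept)) / np.sqrt(1.0 + slope**2)
```

Points lying on the trend get a value of zero; a persistent landward or seaward excursion, like the pier region, gets a large signed distance.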
The nonparametric Parzen window method was used to estimate the class-conditional pdfs due to the multimodal nature of the lidar-derived features. For determination of the kernel bandwidth, a Newton-Raphson numerical scheme was developed to find an optimal bandwidth based on the maximum likelihood criterion. Two measures of divergence, Jensen-Shannon divergence (JSD) and a normalized form of Jensen-Shannon divergence (NJSD), were employed to rank separation. NJSD is a novel information divergence that was created to provide a tradeoff between feature uncertainty and the amount of information gained on the erosion class. The more interclass separation provided by a feature, the greater its potential as a morphologic indicator. Results were compared to a median-based metric and correlation.

Overall, deviation-from-trend performed best and was ranked high by all measures over longer time periods, indicating this feature is influential to the long-term shoreline trend for the beach. The top ranking is noteworthy in that the pier region's deviation from the natural trend is a strong contributing factor to its being an erosion hot spot. Over shorter epochs, slope-based features improved in performance, indicating these measures might be more influential to high-frequency fluctuations in shoreline change. Orientation was the lowest individual performer overall. The low influence of orientation on shoreline response may be due to scale dependence. It is plausible that orientation effects along the beach only manifest at very large scales compared to the localized scales that were investigated.

Of the four measures examined, NJSD is the single most meaningful measure for assessing the relative importance of features because it uses the entire pdf and normalizes on the basis of entropy. JSD can be considered equally meaningful in cases where the entropy of the features is similar across the feature space.
The subtle differences between the two measures are due to relative differences in feature entropy. Correlation and the median-based measure sometimes perform poorly when geometric measures of shape and spread do not adequately describe the distribution. However, these measures can also provide insights. For example, the median-based measure can serve as a reasonable tool for ranking features when computational limitations preclude the estimation of the pdfs and the underlying pdfs are expected to be highly non-Gaussian. Furthermore, high rankings from the correlation measure can suggest features that are amenable to simple binary decision rules, which divergence may not reveal.

To empirically assess which measure provides the best performance, and to test the ability of the morphologic features to segment the beach into zones more or less prone to erosion, a naive Bayes classifier was implemented. The top two features (deviation-from-trend and slope) selected by divergence outperformed the feature combinations selected by correlation and the median measure by 3% and 7%, respectively, supporting the utility of the divergence method for feature selection. Success rates of 70% or greater were achieved for some periods, with all but one epoch providing a success rate of 60% or greater. Posterior probabilities for erosion in the pier region were very strong, suggesting that DT and S are capturing the pier zone's behavior. Because training is based on all epochs except the current test epoch, these classification results demonstrate the individual utility of the features based on information learned across data epochs. The relative success of the results demonstrates that certain morphologies can be systematically extracted from ALSM time series and ranked to provide useful information about shoreline change dynamics for a given region.
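The divergence ranking machinery summarized above can be sketched in a few lines: a Gaussian-kernel Parzen estimate of each class-conditional pdf on a shared grid, and the Jensen-Shannon divergence between them. This is an illustrative reimplementation (our own function names; a fixed bandwidth rather than the maximum-likelihood Newton-Raphson scheme used in the dissertation):

```python
import numpy as np

def parzen_pdf(samples, grid, h):
    """Parzen-window density estimate with a Gaussian kernel of bandwidth h,
    renormalized on a uniform evaluation grid."""
    d = (grid[:, None] - samples[None, :]) / h
    pdf = np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))
    dx = grid[1] - grid[0]
    return pdf / (pdf.sum() * dx)

def jensen_shannon(p, q, dx):
    """Jensen-Shannon divergence between two densities on a common uniform
    grid with spacing dx.  Bounded by ln(2); 0 iff p == q."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) * dx
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Well-separated class-conditional pdfs push the JSD toward its ln(2) ceiling, which is what makes a feature a promising morphologic indicator; NJSD would further normalize this value by an entropy term.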
To evaluate the joint effect of the features on the probability of erosion, a logistic regression based classifier was developed. Because of strong spatial autocorrelation in the binary responses, a generalized estimating equation (GEE) approach was applied. The GEE includes correlation to obtain better estimates of the model parameters, rather than including correlation for improved prediction. By standardizing the features, the GEE model can be used to evaluate the relative influence of the features on the outcome of erosion. In general, features that ranked high individually for a given epoch also tended to be influential features in the fitted GEE models. However, deviation-from-trend, which was the highest ranked individual feature, became less important in the presence of other features, most likely due to high collinearity among the feature values. In contrast, orientation became a more influential feature although it ranked very low individually. The fitted models indicate that as localized orientation approaches a more easterly to southeasterly direction, the probability of erosion increases. In this regard, the logistic GEE approach provides a powerful tool for characterizing patterns in beach change with lidar compared to simply differencing DEMs to obtain volume and shoreline change estimates.

To avoid model overfitting and address collinearity among the feature values, a Lasso penalization was placed on the GEE coefficient estimates. The penalization aims to reduce variance in the estimation of the feature coefficients to obtain better prediction performance and an overall better model for explaining the joint effect of the features on the outcome. Each of the three classifiers, naive Bayes, nonpenalized GEE, and Lasso penalized GEE, was then evaluated to test its ability to predict (classify) zones more or less prone to erosion based solely on variation in profile morphology.
Because of the differences in temporal span, seasonal periods, weather, and wave climate covered by each epoch, the classification was restricted to each individual epoch, without training across epochs, by taking a subset of random samples. Lasso GEE outperformed non-penalized GEE by 1%, obtaining an average classification success rate of 80% with a maximum success rate of 86%. Both Lasso GEE and non-penalized GEE substantially outperformed the naive Bayes classification approach, by approximately 7%. Overall, the classification results are impressive considering that classification is based solely on change in morphology, with no direct inclusion of spatial autocorrelation, bathymetry, waves, currents, or other forcings. In this regard, the classification results are data dependent and not intended as a standalone tool for predicting shoreline change. Rather, the results demonstrate the ability of the classifiers to learn relationships between variation in cross-shore morphology and probability of erosion from the lidar data. Furthermore, the results show that subtle variations in morphology for this beach are influential in the observed patterns of erosion and accretion alongshore. In the past, engineers have generally viewed high-resolution lidar data over a beach as useful only for volume change and shoreline estimates. But with such high classification accuracies, greater than 80% in some cases, the results demonstrate that one can begin to model erosion and nourishment diffusion strictly from topographic lidar, without using waves, bathymetry, or other forcings, through statistical data-driven approaches. Such methods have huge potential and open the door to new ways to monitor and study beaches. In addition, by coupling the statistical methods with physical models, more accurate prediction of erosion and storm impact can be obtained.
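The shrinkage behavior behind the Lasso GEE can be illustrated with a simple L1-penalized logistic fit via proximal gradient descent (soft-thresholding). This is only a sketch of the penalization concept, not the quasi-least-squares/GEE algorithm used in the dissertation, and the penalty weight lam is an arbitrary illustrative value:

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, n_iter=3000):
    """L1-penalized logistic regression by proximal gradient descent.
    The penalty shrinks weak or collinear coefficients, some exactly to zero."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        beta -= lr * (A.T @ (p - y) / n)          # gradient step on the logistic loss
        # soft-threshold the feature coefficients; the intercept is unpenalized
        beta[1:] = np.sign(beta[1:]) * np.maximum(np.abs(beta[1:]) - lr * lam, 0.0)
    return beta
```

Run on data in which one feature is informative and a second is pure noise, the noise coefficient is driven to (near) zero while the informative coefficient survives in shrunken form, which is exactly the variance-reduction effect sought above.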
Although multitemporal ALSM surveys of other sandy beaches were not available for this work, the developed framework can be applied to model any sandy beach area if lidar time series of the region are available. Numerous other features can be input into the model to better characterize a given beach based on the region's dynamics. For example, measures such as distance from an inlet, mean grain size, relative location to a jetty, and other features beyond profile morphology can be input into the GEE model. Given sufficient data coverage, the developed framework can be applied to characterize seasonal or storm-induced patterns in beach change, such as hurricane impact. Furthermore, bathymetric data, if available, can be combined with the topographic lidar data to evaluate the effects of bathymetry on erosion. In addition, the ideas can easily be extended to examine other patterns beyond shoreline change, such as volumetric trends, berm formation, and dune migration. In summary, this investigation focused on exploring statistical-learning-based approaches for improved beach monitoring with lidar. The developed methods provide a systematic framework to mine high-resolution lidar data over beaches and characterize the alongshore variation in beach profile morphology. Furthermore, this research is the first to explore and apply modern probabilistic classifiers to model the influence of morphology on patterns in erosion alongshore using high-resolution lidar data. With longer time series of lidar observations, the proposed methods should enable coastal researchers to better quantify and characterize coastal change and infer the dynamics of underlying physical processes.

Future Work

The next logical extension to the research is to move towards a nonlinear predictor in the logistic model. A GAM-based approach can be implemented to fit a semiparametric model as opposed to a linear model.
Penalized spline regression or other basis functions can be used. The resultant model will allow us to evaluate the nonlinear effect of the features on the outcome probability of erosion. In regards to spatial prediction, the GEE approach only uses correlation to obtain better parameter estimates. For our work this was useful because we desired to explain the joint effect of the features on the outcome. To bring spatial correlation directly into the prediction process, we could use a form of best linear unbiased prediction (BLUP). From the GEE model, we can treat the profiles with the highest posterior probabilities for the two classes as known (measured) values rather than estimated ones. BLUP can then use the estimated correlation matrix from the GEE model, along with the GEE estimates at the unknown and known locations, to obtain a smoothed estimate of the class probabilities that includes correlation. This essentially forms a type of kriging estimator. We could then repeat this several times to obtain a form of spatial boosting. Other techniques such as indicator kriging can be investigated, and the use of autologistic approaches can be revisited. Ultimately, we desire temporal prediction of shoreline change. To move towards true shoreline change prediction, beyond general trend analysis and classification, some type of physical forcing needs to be included in the model. Incoming wave energy is the main force that drives shoreline change. Because of the complexity required to model nearshore wave and tidal dynamics, simpler techniques for including wave forcing in the models need investigation. As a first try, the directed component of wave energy flux measured at the nearest offshore buoy could be included as an additional predictor in the model. This avoids using wave models, and the total energy flux can be calculated for the entire time spanned between consecutive lidar surveys.
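One way such a predictor could be computed, sketched here under standard assumptions not taken from this work: the deep-water approximation of wave energy flux per unit crest length, P = rho g^2 Hs^2 Te / (64 pi), projected onto an assumed shore-normal direction, evaluated from buoy records of significant wave height, energy period, and wave direction:

```python
import numpy as np

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3) and gravity (m/s^2)

def directed_wave_energy_flux(hs, te, wave_dir_deg, shore_normal_deg=90.0):
    """Deep-water wave energy flux per unit crest length (W/m), projected onto an
    assumed shore-normal azimuth. hs: significant wave height (m), te: energy
    period (s), wave_dir_deg: wave direction (deg). All values illustrative."""
    p = RHO * G**2 * np.asarray(hs) ** 2 * np.asarray(te) / (64.0 * np.pi)
    return p * np.cos(np.radians(np.asarray(wave_dir_deg) - shore_normal_deg))

# Summing hourly buoy records between two lidar surveys yields one cumulative
# forcing value per epoch, e.g.:
# epoch_flux = directed_wave_energy_flux(hs_series, te_series, dir_series).sum()
```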
The directed component of wave energy flux, or an input from a physical model, can be used as an additional predictor to enable spatiotemporal prediction. Given a long enough time series of lidar data, the models could potentially learn the response of the beach to different wave events and energy flux. Such models, if trained sufficiently, could be used to predict beach response from storms. In addition, semiparametric regression methods, such as transitional autoregressive models, can be investigated for their potential to predict shoreline change alongshore based strictly on time series of lidar data. These approaches can capture nonlinear behavior and might prove useful for high-resolution shoreline trend prediction.

APPENDIX A
NEWTON-RAPHSON ML BANDWIDTH OPTIMIZATION SCHEME

The following is the derivation of an iterative Newton-Raphson optimization scheme for numerically selecting the optimal bandwidth, h, for a Gaussian kernel using a Parzen density estimate. Optimization is based on the maximum likelihood cost function from Equation 6-7:

J_{ML}(h) = \frac{1}{N}\sum_{i=1}^{N}\log\left[\frac{1}{N-1}\sum_{j=1,\,j\neq i}^{N}\frac{1}{\sqrt{2\pi}\,h}\,e^{-u_{ij}^{2}/2h^{2}}\right]   (A-1)

where u_{ij} = x_i - x_j and

B_i = \frac{1}{N-1}\sum_{j=1,\,j\neq i}^{N}\frac{1}{\sqrt{2\pi}\,h}\,e^{-u_{ij}^{2}/2h^{2}}.

To optimize, we take the derivative of Equation A-1 with respect to the bandwidth h and set it equal to zero by applying the chain rule:

\frac{\partial J_{ML}}{\partial h} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{B_i}\,\frac{\partial B_i}{\partial h},

where

\frac{\partial B_i}{\partial h} = \frac{1}{N-1}\sum_{j=1,\,j\neq i}^{N}\frac{1}{\sqrt{2\pi}}\left(\frac{u_{ij}^{2}}{h^{4}}-\frac{1}{h^{2}}\right)e^{-u_{ij}^{2}/2h^{2}}.

From the chain rule, \partial J_{ML}/\partial h is derived as follows:

\frac{\partial J_{ML}}{\partial h} = \frac{1}{N}\sum_{i=1}^{N}\frac{\sum_{j\neq i}u_{ij}^{2}\,e^{-u_{ij}^{2}/2h^{2}}}{h^{3}\sum_{j\neq i}e^{-u_{ij}^{2}/2h^{2}}}-\frac{1}{h}.   (A-2)

Because \partial J_{ML}/\partial h is nonlinear in h, we apply a first-order Taylor approximation to linearize Equation A-2 and enable solving for h iteratively.
To find \partial^{2}J_{ML}/\partial h^{2}, the quotient rule is applied:

\left(\frac{f(h)}{g(h)}\right)' = \frac{f'(h)\,g(h)-f(h)\,g'(h)}{g(h)^{2}},

where

g(h) = h^{3}\sum_{j\neq i}e^{-u_{ij}^{2}/2h^{2}}, \qquad g'(h) = 3h^{2}\sum_{j\neq i}e^{-u_{ij}^{2}/2h^{2}}+\sum_{j\neq i}u_{ij}^{2}\,e^{-u_{ij}^{2}/2h^{2}},

f(h) = \sum_{j\neq i}u_{ij}^{2}\,e^{-u_{ij}^{2}/2h^{2}}, \qquad f'(h) = \sum_{j\neq i}\frac{u_{ij}^{4}}{h^{3}}\,e^{-u_{ij}^{2}/2h^{2}}.

We get the following for \partial^{2}J_{ML}/\partial h^{2}:

\frac{\partial^{2}J_{ML}}{\partial h^{2}} = \frac{1}{N}\sum_{i=1}^{N}\frac{f'(h)\,g(h)-f(h)\,g'(h)}{h^{6}\left(\sum_{j\neq i}e^{-u_{ij}^{2}/2h^{2}}\right)^{2}}+\frac{1}{h^{2}}.   (A-3)

Through Taylor linearization, \partial J_{ML}/\partial h = 0 is expressed as

\left.\frac{\partial J_{ML}}{\partial h}\right|_{h_{new}} \approx \left.\frac{\partial J_{ML}}{\partial h}\right|_{h_{old}}+\left.\frac{\partial^{2}J_{ML}}{\partial h^{2}}\right|_{h_{old}}(h_{new}-h_{old}) = 0.

Rearranging to solve for h_{new}, we get the Newton-Raphson estimate

h_{new} = h_{old}-\left.\frac{\partial J_{ML}}{\partial h}\right|_{h_{old}}\Big/\left.\frac{\partial^{2}J_{ML}}{\partial h^{2}}\right|_{h_{old}}.   (A-4)

Substituting Equation A-2 and Equation A-3 into Equation A-4, and writing for brevity S_{0,i} = \sum_{j\neq i}e^{-u_{ij}^{2}/2h^{2}}, S_{2,i} = \sum_{j\neq i}u_{ij}^{2}e^{-u_{ij}^{2}/2h^{2}}, and S_{4,i} = \sum_{j\neq i}u_{ij}^{4}e^{-u_{ij}^{2}/2h^{2}}, we get

h_{new} = h_{old}-\left.\frac{\dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\dfrac{S_{2,i}}{h^{3}S_{0,i}}-\dfrac{1}{h}}{\dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\dfrac{S_{4,i}S_{0,i}-S_{2,i}\left(3h^{2}S_{0,i}+S_{2,i}\right)}{h^{6}S_{0,i}^{2}}+\dfrac{1}{h^{2}}}\right|_{h=h_{old}}.   (A-5)

The ML optimal bandwidth h can be found by selecting an initial value for h_{old} and then iterating Equation A-5 until convergence.

APPENDIX B
IMPLEMENTATION DETAILS

The logistic regression models were fit using a routine developed in Matlab that implements the IRLS algorithm of Equation 8-5. An iterative scheme was then used to compute AIC and BIC scores for all model subsets. Matlab also has a package for fitting GLMs that is part of the Statistics Toolbox. This package computes deviance residuals, standard errors, and significance values for each coefficient. The GLM package was used to compute the significance values for ordinary logistic regression. For implementation of the GEE, the Matlab toolbox GEEQBOX (Ratcliffe and Shults, 2008) was used. For implementation of Lasso regression, a Matlab routine was developed that uses the gee.m function within GEEQBOX for estimating the correlation function.
All other coding and classification was implemented in Matlab.

LIST OF REFERENCES

Albada, E., Goshow, C., Dompe, P., 2006. Effect of beach nourishment on surfing: observations from the St. Johns County shore protection project. St. Johns County Shore Protection Project.

Allison, P.D., 2008. Convergence failures in logistic regression. SAS Conference Proceedings, 360-2008.

Archambeau, C., Assenza, A., Valle, M., Verleysen, M., 2006. Assessment of probability density estimation methods: Parzen window and finite Gaussian mixtures. IEEE International Symposium on Circuits and Systems, ISCAS Proceedings.

Augustin, N., Mugglestone, M., Buckland, S., 1996. An autologistic model for the spatial distribution of wildlife. Journal of Applied Ecology 33, 339.

Baltsavias, E.P., 1999. Airborne laser scanning: basic relations and formulas. ISPRS Journal of Photogrammetry and Remote Sensing 54, 199-214.

Barker, L., Brown, C., 2001. Logistic regression when binary predictor variables are highly correlated. Statistics in Medicine 20, 1431-1442.

Benedet, L., Finkl, C.W., Hartog, W.M., 2007. Processes controlling the development of erosional hot spots on a beach nourishment project. Journal of Coastal Research 23 (1), 33-48.

Besag, J., 1974. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, 192.

Boak, E.H., Turner, I.L., 2005. Shoreline definition and detection: a review. Journal of Coastal Research 21 (4), 688-703.

Bowman, A., 1984. An alternative method of cross-validation for the smoothing of density estimates. Biometrika 65, 521-528.

Brock, J.C., Krabill, W.B., Sallenger, A.H., 2004. Barrier island morphodynamic classification based on lidar metrics for north Assateague Island, Maryland. Journal of Coastal Research 20 (2), 498-509.

Carl, G., Kuhn, I., 2007. Analyzing spatial autocorrelation in species distributions using Gaussian and logit models. Ecological Modeling 207, 159-170.
Carter, W.E., Shrestha, R.L., Slatton, K.C., 2004. Photon-counting airborne laser swath mapping (PC-ALSM). Proceedings of SPIE, 4th International Asia-Pacific Environmental Remote Sensing Symposium 5661, 78-85.

Carter, W.E., Shrestha, R.L., Slatton, K.C., 2007. Geodetic laser scanning. Physics Today, 41-46.

Coeurjolly, J.F., Drouilhet, R., Robineau, J.F., 2007. Normalized information-based divergences. Problems of Information Transmission 43 (3), 167-189.

Cover, T.M., Thomas, J.A., 2006. Elements of Information Theory, second ed. John Wiley & Sons, Hoboken, New Jersey.

Cressie, N.A., 1991. Statistics for Spatial Data. John Wiley and Sons, New York.

Crossett, K.M., et al., 2008. Population trends along the coastal United States: 1980-2008. NOAA Coastal Trends Report Series.

Dean, R.G., 2003. Beach Nourishment: Theory and Practice. World Scientific Publishing Company, New Jersey.

Dean, R.G., Dalrymple, R.A., 2002. Coastal Processes with Engineering Applications. Cambridge Univ. Press, Cambridge.

Dormann, C.F., 2007. Assessing the validity of autologistic regression. Ecological Modeling 207, 234-242.

Draper, B.A., Baek, K., Bartlett, M.S., Beveridge, J.R., 2003. Recognizing faces with PCA and ICA. Computer Vision and Image Understanding 91 (1-2), 115-137.

Duda, R.O., Hart, P.E., Stork, D.G., 2001. Pattern Classification, second ed. John Wiley and Sons, New York.

Duin, R.P.W., 1976. On the choice of the smoothing parameters for Parzen estimators of probability density functions. IEEE Transactions on Computers 25 (11), 1175-1179.

El-Rabbany, A., 2002. Introduction to GPS. Artech House, Boston, MA.

Endres, D.M., Schindelin, J.E., 2003. A new metric for probability distributions. IEEE Transactions on Information Theory 49 (7), 1858.

Erdogmus, D., Jenssen, R., Rao, Y.N., Principe, J.C., 2004. Multivariate density estimation with optimal marginal Parzen density estimation and Gaussianization. IEEE Workshop on Machine Learning, 73-82.
Fernadez, J.C., Singhania, A., Caceres, J., Slatton, K.C., Starek, M.J., Kumar, R., 2007. An overview of lidar point cloud processing software. University of Florida GEM Center: Report_2007_12_001. (Available at www.ncalm.ufl.edu as of October 2008.)

Filin, S., 2004. Surface classification from airborne laser scanning data. Computers & Geosciences 30, 1033-1041.

Foster, E.R., Spurgeon, D.L., Cheng, J., 2000. Shoreline change rates St. Johns County. Florida DEP: Report No. BCS0003.

Fu, W.J., 1998a. A statistical shrinkage model and its applications. Dissertation, University of Toronto Department of Health Science.

Fu, W.J., 1998b. Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical Statistics 7 (3), 397-416.

Fu, W.J., 2003. Penalized estimating equations. Biometrics 59, 126-132.

Gelb, A., 1974. Applied Optimal Estimation. MIT Press, Cambridge, MA.

Guha, R.K., Sarkar, S., 2006. Characterizing temporal SNR variation in 802.11 networks. IEEE Wireless Communications and Networking Conference, WCNC 2006.

Gumpertz, L.M., Wu, C., Pye, J.M., 2000. Logistic regression for southern pine beetle outbreaks with spatial and temporal autocorrelation. Forest Science 46 (1), 95-107.

Gutelius, G., Carter, W.E., Shrestha, R.L., Medvedev, E., Gutierrez, R., Gibeaut, J.G., 1998. Engineering applications of airborne scanning lasers: reports from the field. Photogrammetric Engineering and Remote Sensing 64 (4), 246.

Hastie, T., Tibshirani, R., 1986. Generalized additive models. Statistical Science 1 (3), 297-318.

Hastie, T., Tibshirani, R., Friedman, J., 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York.

Herzel, H., Grobe, I., 1995. Measuring correlations in symbol sequences. Physica A 216, 518-542.

Hofmann-Wellenhof, B., Lichtenegger, H., Collins, J., 2001. Global Positioning System: Theory and Practice. Springer-Verlag, New York.

Kampa, K., Slatton, K.C., 2004.
An Adaptive Multiscale Filter for Segmenting Vegetation in ALSM Data. Proceedings of IEEE International Geoscience and Remote Sensing Symposium, IGARSS 6, 3837-3840.

Komar, P.D., 1998. Beach Processes and Sedimentation, 2nd ed. Prentice Hall, New Jersey.

Lee, K., Daniels, M.J., 2007. A class of Markov models for longitudinal ordinal data. Biometrics 63, 1060.

Legendre, P., Legendre, L., 1998. Numerical Ecology, 2nd English ed. Elsevier, Amsterdam.

Liang, K.Y., Zeger, S.L., 1986. Longitudinal data analysis using generalized linear models. Biometrika 73, 13-22.

Lin, J., 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory 37 (1), 145-151.

Liu, H., Sherman, D., Gu, S., 2007. Automated extraction of shorelines from airborne light detection and ranging data and accuracy assessment based on Monte Carlo simulation. Journal of Coastal Research 23 (6), 1359-1369.

Loader, C.R., 1999. Bandwidth selection: classical or plug-in?. Annals of Statistics 27, 415.

Luzum, B., Slatton, K.C., Shrestha, R.L., 2004. Identification and analysis of airborne laser swath mapping data in a novel feature space. IEEE Geoscience and Remote Sensing Letters 1 (4).

Maas, H.G., 1999. The potential of height texture measures for the segmentation of airborne laser scanner data. Proceedings of 4th International Airborne Remote Sensing Conference, Ottawa, ON, Canada, 21.

Mader, G.L., 1992. Rapid static and kinematic global positioning system solutions using the ambiguity function technique. Journal of Geophysical Research 97 (B3), 3271-3283.

Marron, J.S., 1988. Automatic smoothing parameter selection: a survey. Empirical Economics 13, 187-208.

MathWorks, 2005. Dilation and erosion. In: Matlab Image Processing Toolbox User's Guide. MathWorks, Inc.

Meyer, T.H., Roman, D.R., Zilkoski, D.B., 2004. What does height really mean? Part I: Introduction. Surveying and Land Information Science 64 (4), 223-233.

Miller, J., Franklin, J., Aspinall, R., 2007.
Incorporating spatial dependence in predictive vegetation models. Ecological Modeling 202, 225-242.

Mitchell, T.M., 2005. Machine Learning. McGraw Hill.

Moore, L.J., Ruggiero, P., List, J.H., 2006. Comparing mean high water and high water line shorelines: should proxy-datum offsets be incorporated into shoreline change analysis?. Journal of Coastal Research 22 (4), 894.

Myers, R.H., Montgomery, D.C., Vining, C.G., 2002. Generalized Linear Models with Applications in Engineering and the Sciences. John Wiley & Sons, New York.

Neuenschwander, A., Crawford, M., Weed, C., Gutierrez, R., 2000. Extraction of digital elevation models for airborne laser terrain mapping data. Proceedings of IEEE Geoscience and Remote Sensing Symposium, IGARSS.

Ratcliffe, S.J., Shults, J., 2008. GEEQBOX: A Matlab toolbox for generalized estimating equations and quasi-least squares 25 (14).

Robertson, W., Whitman, D., Zhang, K., Leatherman, S.P., 2004. Mapping shoreline position using airborne laser altimetry. Journal of Coastal Research 20 (3), 884.

Robertson, W., Zhang, K., Whitman, D., Leatherman, S.P., 2005. Shoreline and beach volume change before and after the 2004 hurricane season, Palm Beach County, Florida. Shore & Beach 73 (2-3).

Robertson, W., Zhang, K., Whitman, D., 2007. Hurricane-induced beach change derived from airborne laser measurements near Panama City, Florida. Marine Geology 237, 191.

Rogers, R.M., 2003. Applied Mathematics in Integrated Navigation Systems, second ed. American Institute of Aeronautics and Astronautics, Reston, VA.

Rudemo, M., 1982. Consistent choice of linear smoothing methods. Report 821, Department of Mathematics, Royal Danish Agricultural and Veterinary University, Copenhagen.

Ruppert, D., Wand, M.P., Carroll, R.J., 2003. Semiparametric Regression. Cambridge University Press, New York.

Sallenger, A.H., et al., 2003. Evaluation of airborne topographic lidar for quantifying beach changes. Journal of Coastal Research 19 (1), 125-133.
Saye, S.E., van der Wal, D., Pye, K., Blott, S.J., 2005. Beach-dune morphological relationships and erosion/accretion: an investigation at five sites in England and Wales using LIDAR data. Geomorphology 72, 128.

Shan, S., Bevis, M., Kendrick, E., Mader, G., Raleigh, D., Hudnut, K., Sartori, M., Phillips, D., Carter, W.E., 2007. Kinematic GPS solutions for aircraft trajectories: identifying and minimizing systematic height errors associated with atmospheric propagation delays. Geophysical Research Letters 43, L23S07, doi:10.1029/2007GL030889.

Shtatland, E.S., Cain, E., Barton, M.B., 2001. The perils of stepwise logistic regression and how to escape them using information criteria and the Output Delivery System. SUGI 26 Proceedings, 222.

Shrestha, R.L., Carter, W.E., 1998. Instant evaluation of beach storm damage using airborne laser terrain mapping. EOM, 42-44.

Shrestha, R.L., Carter, W.E., Lee, M., Finer, P., Sartori, M., 1999. Airborne laser swath mapping: accuracy assessment for surveying and mapping applications. Journal of American Congress on Surveying and Mapping 59 (2), 83-94.

Shrestha, R.L., Carter, W.E., Sartori, M., Luzum, B.J., Slatton, K.C., 2005. Airborne laser swath mapping: quantifying changes in sandy beaches over time scales of weeks to years. ISPRS Journal of Photogrammetry and Remote Sensing 59 (4), 222-232.

Shrestha, R.L., Carter, W.E., Slatton, K.C., Dietrich, W., 2007. Research quality airborne laser swath mapping: the defining factors, ver. 1.2. National Center for Airborne Laser Mapping, Gainesville, FL. (White paper available at www.ncalm.ufl.edu as of October 2008.)

Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 59, 85-101.

Slatton, K.C., Carter, W.E., Shrestha, R.L., Dietrich, W., 2007.
Airborne laser swath mapping: achieving the resolution and accuracy required for geosurficial research. Geophysical Research Letters 34, L23S10, doi:10.1029/2007GL031939.

Starek, M.J., Vemula, R.K., Slatton, K.C., Shrestha, R.L., Carter, W.E., 2007a. Automatic feature extraction from airborne lidar measurements to identify cross-shore morphologies indicative of beach erosion. Proceedings of IEEE International Geoscience and Remote Sensing Symposium, IGARSS 07, 2511-2514.

Starek, M.J., Vemula, R.K., Slatton, K.C., Shrestha, R.L., 2007b. Shoreline based feature extraction and optimal feature selection for segmenting airborne lidar intensity images. Proceedings of IEEE International Conference on Image Processing, ICIP 4, 369-372.

Starek, M.J., Slatton, K.C., Shrestha, R.L., Carter, W.E., 2008. Airborne lidar measurements to quantify change in sandy beaches. In: Laser Scanning for the Environmental Sciences, Blackwell Publishing, Oxford (in press).

Stockdon, H.F., Sallenger, A.H., List, J.H., Holman, R.A., 2002. Estimation of shoreline position and change using airborne topographic lidar data. Journal of Coastal Research 18 (3), 502-513.

Tebbens, S., Burroughs, S.M., Nelson, E., 2002. Wavelet analysis of shoreline change on the Outer Banks of North Carolina: an example of complexity in the marine sciences. National Academy of Sciences Colloquium: Self-organized Complexity in the Physical, Biological, and Social Sciences, March 23-24.

Tibshirani, R., 1996. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society B (58), 267-288.

USACE, 1998. St. Johns County, Florida Shore Protection Project: General Reevaluation Report with Final Environmental Assessment. Jacksonville, Florida: Department of the Army, Jacksonville District, Corps of Engineers.

Vries, S.O., Fidler, V., Kuipers, W.D., Hunink, M., 1998. Fitting multistate transition models with autoregressive logistic regression: supervised exercise in intermittent claudication.
Medical Decision Making 18, 52-60.

Wang, B., Wang, X., 2008. Bandwidth selection for weighted kernel density estimation. Electronic Journal of Statistics 0 (0).

Wehr, A., Lohr, U., 1999. Airborne laser scanning: an introduction and overview. Journal of Photogrammetry & Remote Sensing 54, 68.

White, S.A., Wang, Y., 2003. Utilizing DEMs derived from LIDAR data to analyze morphologic change in the North Carolina coastline. Remote Sensing of the Environment 85.

Woolard, J.W., Aslaksen, M., Longenecker, J., Ryerson, A., 2003. Shoreline mapping from airborne LIDAR in Shilshole Bay, Washington. The Hydrographic Society of America, US Hydrographic Conference 03.

Wozencraft, J.M., Lillycrop, J.W., 2003. SHOALS airborne coastal mapping: past, present, and future. Journal of Coastal Research 38, 207-215.

Wu, T.T., Lange, K., 2008. Coordinate descent algorithms for lasso penalized regression. Annals of Applied Statistics 2 (1), 224-244.

Zhang, K., Whitman, D., Leatherman, S., Robertson, W., 2004. Quantification of beach changes caused by Hurricane Floyd along Florida's Atlantic coast using airborne laser surveys. Journal of Coastal Research 21 (1), 122.

Zorn, J.W., 2001. Generalized estimating equation models for correlated data: a review with applications. American Journal of Political Science 45 (2), 470-490.

Zychaluk, K., Patil, P.N., 2008. A cross-validation method for data with ties in kernel density estimation. Annals of the Institute of Statistical Mathematics 60, 21-44.

BIOGRAPHICAL SKETCH

Michael John Starek was born and raised in Texas and is the second of four children, graduating from Grapevine High School. He attended Texas A&M University, Corpus Christi to be close to his favorite place, the ocean, where he graduated summa cum laude with a B.S. in environmental science and an M.S. in computer science in 1998 and 2002, respectively. In 2004, Michael applied to the geosensing systems engineering Ph.D.
program at the University of Florida to pursue his interest in remote sensing. Upon completion of his Ph.D., Michael plans to pursue a career in academia. Michael's interests include surfing, fishing, tennis, guitar, Dallas Cowboys football, and, of course, the Gators.