EFFICIENT MULTISCALE IMAGE FUSION AND FEATURE REDUCTION FOR ELEVATION DATA IN COASTAL URBAN AREAS

By

SWEUNG WON CHEUNG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2009

© 2009 Sweung Won Cheung

To my family and friends

ACKNOWLEDGMENTS

First of all, I would like to thank my parents, who have kept encouraging and inspiring me to pursue my dreams in many ways with unconditional love. I also thank my beautiful wife, Sookjung Yun, who willingly dedicated her dream and life to mine. She brought two precious lives into the world, our son and daughter (Brian and Mitchell), and has cared for them through many long nights while I was working. For all of this I will always be grateful and in awe of her. My father-in-law, who unexpectedly passed away, always prayed for me; he continues to live on and be with me in spirit.

I wish to thank my advisor, Prof. Kenneth C. Slatton. Without his patience, insightful guidance, and encouragement this dissertation would never have been realized. His ambition and hard work gave me a good model to follow. I would also like to thank the members of my Ph.D. committee (Prof. Fred J. Taylor, Prof. John G. Harris, and Prof. Ramesh L. Shrestha). I am grateful for their willingness to serve on my committee.

I especially thank Dr. Andrew Kennedy and Dr. Robert G. Dean. They willingly shared their time with me and helped me better understand the hydrologic aspects of my application. I also thank my colleagues at the Adaptive Signal Processing Laboratory (ASPL) in the Electrical and Computer Engineering Department and at the Geosensing Engineering and Mapping (GEM) Center in the Civil and Coastal Engineering Department. The discussions with my colleagues always gave me pleasant new realizations I will never forget.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT

CHAPTER

1 INTRODUCTION

2 REMOTE SENSING DATA AND DATA FUSION
  2.1 Study Site
  2.2 Remote Sensing Data Sets
    2.2.1 NOAA NGDC
    2.2.3 Topographic LiDAR
    2.2.4 Bathymetric LiDAR
    2.2.5 SRTM
  2.3 Data Fusion
    2.3.1 Kalman Filter and Smoother
    2.3.2 Multiscale Kalman Filter and Smoother

3 MULTISCALE ELEVATION IMAGE FUSION OVER COASTAL AREAS
  3.1 Introduction
  3.2 Reduced-Complexity MKS
    3.2.1 Simplified Equation for Calculating Process Noise Variance
    3.2.2 Pruning Quadtree Structure
    3.2.3 Results
  3.3 Landscape-dependent Evaluation of SRTM Accuracy
    3.3.1 Approximating the Hydrologic Surface by Region Properties
    3.3.2 Assessing the Measurement Error Variance of SRTM
    3.3.3 Large Measurement Error of SRTM over the Coastline
    3.3.4 Results

4 MULTISCALE FEATURE REDUCTION FOR HYDROLOGIC MODELING OF URBAN AREAS
  4.1 Introduction
  4.2 Decomposition
  4.3 Reduction and Scale Selection
    4.3.1 Ground
    4.3.2 Building
  4.4 Evaluation
    4.4.1 Mean Square Error and Correlation Coefficient
    4.4.2 Flow Accumulation Number
    4.4.3 Water Discharge Rate
      4.4.3.1 Simple 2D Hydraulic Model
      4.4.3.2 Subgrid Parameterization
      4.4.3.3 Evaluation

5 CONCLUSIONS AND FUTURE WORK

APPENDIX  DETAILED DERIVATION OF PROCESS NOISE VARIANCE
LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Original data spacing and published RMS error for data sets
3-1 Comparison of memory storage using RC-MKS and standard MKS
3-2 Real values of error variance with respect to the color mapped on the elevation surface
3-3 Estimated measurement error standard deviations for SRTM
4-1 Memory requirements for buildings as a function of scale
5-1 The similarity between gain Qx and MSE, CC, LOSD

LIST OF FIGURES

2-1 A 1° latitude × 1° longitude tile of the NOAA elevation data over the Florida coast
2-2 The solution to recursive minimum mean square estimation
2-3 Quadtree data structure with two measurements
2-4 Procedure for the multiscale Kalman filter (upward sweep)
2-5 Procedure for the multiscale Kalman smoother (downward sweep)
3-1 Data processing structures and procedure for RC-MKS
3-2 Fused and estimated DEM by RC-MKS
3-3 Example of generating a DEM from LBS+ LiDAR data over the urban test site
3-4 Three study sites extracted from Dade County, Florida
3-5 DEMs obtained from LiDAR and SRTM DTED2
3-6 Kalman filter residuals (innovations) for the three terrain types
3-7 Error variance mapping depending on the density of urban development
3-8 A 1.5 km × 1.5 km coastal site shown at the SRTM scale
3-9 Component data sets to be fused together over a 20 km × 20 km area
3-10 MKS fusion results for the 20 km × 20 km area
3-11 Two-meter simulated flood images over the 1.5 km × 1.5 km coastal site
4-1 Ground and non-ground DEM of LiDAR
4-2 Procedure for making the hydrologic surface
4-3 Spectral characteristics of ground and buildings
4-4 Optimal scale selection for ground data
4-5 Encoding buildings and the scale-division procedure
4-6 Optimal scale selection for buildings
4-7 Two kinds of DEMs at different scales
4-8 Evaluation of the hydrologic DEMs
4-9 Procedure for calculating FAN
4-10 Input and resultant images for the FAN metric
4-11 FAN and DHRA at different scales
4-12 Comparison to high-risk area detection (DHRA) by FAN
4-13 Spatial domain discretization scheme
4-14 Pictorial illustration of the criterion
4-15 Convergence of our simple model with changing bottom friction value
4-16 Testing of the ratio of the numerical solution to an analytical solution
4-17 Example of the physical behavior of water by a building
4-18 Procedure for calculating WDR
4-19 Simple simulated DEM and its estimated WDR
4-20 Estimated WDR of the hydrologic DEM made by downsampling
4-21 Estimated WDR of the hydrologic DEM with subgrid parameterization
4-22 Comparison of total discharge rate and running time
4-23 Comparison of total discharge rate over real DEM data
4-24 Estimate of total discharge rate at different sites in Dade County, Miami, FL
4-25 Estimate of total discharge rate on simulated DEMs having different building distributions
5-1 LOSD and its error for the quantization of buildings
5-2 Comparison of measurements (MSE, CC, LOSD) with water discharge rate (Qx)

LIST OF ABBREVIATIONS

ACF    AutoCorrelation Function
ALSM   Airborne Laser Swath Mapping
BB     Building-Block
BH     Building-Hole
CC     Correlation Coefficient
DEM    Digital Elevation Model
DTED2  Digital Terrain Elevation Data Level 2
EGM96  Earth Gravity Model 1996
fBm    fractional Brownian motion
FAN    Flow Accumulation Number
FDE    Finite Difference Equations
FEMA   Federal Emergency Management Agency
FIU    Florida International University
HMM    Hidden Markov Model
IHRC   International Hurricane Research Center
LBS    LiDAR Bottom Surface
LiDAR  Light Detection And Ranging
LMMSE  Linear Minimum Mean Square Error
LTS    LiDAR Top Surface
LULC   Land Use Land Cover
MHHW   Mean Higher High Water
MKS    Multiscale Kalman filter and Smoother
MSE    Mean Square Error
NASA   National Aeronautics and Space Administration
NED    National Elevation Dataset
NGA    National Geospatial-Intelligence Agency
NGDC   National Geophysical Data Center
NOAA   National Oceanic and Atmospheric Administration
RC-MKS Reduced-Complexity MKS
RMS    Root Mean Square
RTS    Rauch-Tung-Striebel
SAR    Synthetic Aperture Radar
SFWMD  South Florida Water Management District
SRTM   Shuttle Radar Topography Mission
SWE    Shallow Water Equation
UF     University of Florida
USGS   United States Geological Survey
UTM    Universal Transverse Mercator
WDR    Water Discharge Rate

ABSTRACT

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

EFFICIENT MULTISCALE IMAGE FUSION AND FEATURE REDUCTION FOR ELEVATION DATA IN COASTAL URBAN AREAS

By Sweung Won Cheung

May 2009
Chair: K. Clint Slatton
Major: Electrical and Computer Engineering

Technological advances in the remote sensing of elevations have made it possible to improve topographic resolutions to the 1-10 meter scale and bathymetric (underwater elevation) resolutions to the 5-90 m scale. To a large extent, however, data collected at different resolutions and from different types of sensors remain largely separate, and their joint information content underexploited. For many applications, such as flood modeling, there is a vital need to combine information from these seemingly disparate data sets and extract characteristics about the surface that can aid Earth scientists and emergency planners. The research discussed in this dissertation consists of two parts that address this need.

In the first component of the work, a simplified formulation for calculating the variance of the process noise in a multiscale Kalman smoother is derived and implemented via a pruning method on the quadtree data structure.
This method exploits the distribution of measurements at the finest scale to efficiently fuse multiresolution data with different areas of spatial coverage, producing seamless single-surface digital elevation models (DEMs). To further improve the accuracy of the Kalman-based fusion, a landscape-dependent measurement error is calculated for the Shuttle Radar Topography Mission (SRTM) data set, providing information at a scale intermediate to the meter-scale DEMs from airborne laser altimetry (LiDAR) and the 90 m coastal DEMs created by the National Oceanic and Atmospheric Administration (NOAA). Analysis of the Multiscale Kalman filter and Smoother (MKS) residuals was employed because only a single (spatially uniform) value for the SRTM measurement error is specified by the US Geological Survey (USGS). This was done by defining a LiDAR-derived hydrologic ground surface that excludes vegetation and bridges, under which flood waters can pass.
In the second component of this work, an alternative method to traditional downsampling and mesh generation (used to reduce DEM resolutions to scales at which flood and storm surge modelers can run their fluid-dynamics algorithms) is described. By recognizing that the two main DEM components in urban areas that control flood water flow (i.e., ground and buildings) exhibit different spatial frequencies, vegetation points can be filtered out, resulting in a DEM that can be further decomposed into ground and building classes. The resolution of each class can then be reduced independently, using downsampling for the ground elevations and rectangular parameterization for the building regions (so as to preserve the channels for water flowing between buildings even when the overall resolution is dramatically decreased). To examine the performance improvements of the new method compared to the traditional method of downsampling DEMs used by the hydrologic modeling community, two DEM-based hydrologic calculations, flow accumulation number and hydraulic water discharge rate, were computed. The results obtained from these two calculations confirm that the new method better preserves flood model accuracy with reduced computation time as a function of DEM resolution.

CHAPTER 1
INTRODUCTION

As population levels and investment in civil infrastructure in coastal areas continue to increase, the need to accurately predict natural hazard risks such as hurricanes, tsunamis, and sea-level rise becomes increasingly important (Small et al., 2000). Inland flooding and extreme wave action caused by storm surge are influenced by nearshore topography and bathymetry (underwater elevations), which are the surface expressions of underlying geologic and sedimentary processes (Dean et al., 2002). There is also a need to characterize the coastal zone environment for improved understanding of nearshore physical and biological processes.

Airborne Laser Swath Mapping (ALSM) is a term for laser-based light detection and ranging (LiDAR) technology used to map terrain elevations at high resolutions. ALSM enables measurement of topography at the scales needed to accurately monitor, predict, and mitigate coastal flooding and erosion (Shrestha et al., 2005). Modern topographic LiDAR data are generally acquired at a wavelength of 1064 nm (the near-infrared portion of the spectrum), which does not penetrate water. These systems are capable of very fast pulse rates, however, achieving 1 to 5 m spatial resolution in the resulting elevation images, which are referred to as Digital Elevation Models (DEMs). Bathymetric LiDAR systems typically use the frequency-doubled wavelength of 532 nm (green light), which penetrates water up to 60 meters deep (depending on water clarity).
Spatial DEM resolution is generally no better than 10 m because the higher pulse energies require slower pulse rates.

A mathematical framework for estimating surface elevations by fusing the major elevation data types is therefore needed so that regional coastal flooding and erosion can be accurately predicted and mitigated. Precise mapping of the nearshore ocean bottom is particularly important for characterizing acoustic propagation and benthic environments (Komar, 1998), (Kuperman, 2004). A robust and scalable framework for combining surface elevation data from above (topography) and below (bathymetry) the waterline is presented. In addition, the framework has the potential to support other applications, such as the modeling of geologic and biologic processes, tactical expeditionary (military) operations, and aid to emergency responders.

The Multiscale Kalman Smoother (MKS) algorithm is a globally optimal estimator for fusing remotely sensed data of different spatial resolutions (Chou et al., 1994), (Fieguth et al., 1995), (Slatton et al., 2005). The MKS algorithm can be readily parallelized because it operates on a Markov tree data structure. However, such an implementation requires a large amount of memory to store the parameters and estimates at each scale in the tree. This becomes particularly inefficient in applications where the observations have very different resolutions and the finest scale data are sparse or locally aggregated. Such cases commonly arise when fusing data to capture both regional structures (generally acquired by lower resolution sensors) and local structures (generally acquired by higher resolution sensors). We develop a reduced-complexity MKS algorithm by reducing the number of floating point operations per node, via a simplified equation for calculating the process noise variance, and by reducing the number of nodes in the quadtree that must be processed.
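As a toy illustration of the node-pruning idea (not the dissertation's actual implementation; all names here are hypothetical), the following Python sketch builds a quadtree over a grid and retains only those nodes whose finest-scale descendants contain at least one measurement. When the fine-scale data are clustered, as in the coastal LiDAR swaths described above, the retained tree is far smaller than the complete quadtree:

```python
# Toy sketch: prune a quadtree so that only nodes with at least one
# finest-scale measurement beneath them are kept. A simplification of
# the reduced-complexity MKS idea, with hypothetical names.

def count_nodes(mask, x0, y0, size, prune):
    """Count quadtree nodes over the square [x0, x0+size) x [y0, y0+size).

    mask  : 2D list of 0/1 giving finest-scale measurement occupancy
    prune : if True, drop subtrees that contain no measurements
    """
    has_data = any(mask[y][x]
                   for y in range(y0, y0 + size)
                   for x in range(x0, x0 + size))
    if prune and not has_data:
        return 0                      # entire subtree skipped
    if size == 1:
        return 1                      # leaf node (finest-scale pixel)
    half = size // 2
    children = sum(count_nodes(mask, x0 + dx, y0 + dy, half, prune)
                   for dx in (0, half) for dy in (0, half))
    return 1 + children

# 8x8 grid with measurements clustered in one 2x2 corner (sparse coverage).
N = 8
mask = [[1 if (x < 2 and y < 2) else 0 for x in range(N)] for y in range(N)]

full   = count_nodes(mask, 0, 0, N, prune=False)   # complete quadtree: 85 nodes
pruned = count_nodes(mask, 0, 0, N, prune=True)    # data-supported nodes: 7
print(full, pruned)
```

In a real MKS implementation each retained node would also carry its state estimate and error variance; the point of the sketch is only that pruning cost scales with the data-supported portion of the tree rather than with the full image extent.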
This leads to a dramatic reduction in the floating point operations required for estimating a DEM from the fusion of topographic and bathymetric elevations.

Data from the Shuttle Radar Topography Mission (SRTM) are shown to provide important intermediate-scale information between large national elevation data sets and small high-resolution LiDAR data sets. We evaluate the accuracy of SRTM elevations over the Florida coast using filter residuals in an adaptive multiscale fusion algorithm. We obtain fused DEMs at multiple scales and demonstrate improved DEM quality for coastal flood prediction.

Modern ranging sensor technology, in particular airborne LiDAR, enables the formation of DEM images with pixel resolutions of 1 m x 1 m or smaller. Such data sets should, in principle, improve our ability to accurately monitor, predict, and mitigate coastal flooding and erosion (Shrestha et al., 2005). In general, however, the flood and storm surge modeling community struggles to incorporate high resolution data into its flood predictions (Bates, 2004). The fundamental equation of fluid dynamics is the well-known Navier-Stokes equation (Acheson, 1990), from which many specialized equations are derived to predict water level, velocity, and momentum. For all but the simplest geometries and momentum conservation assumptions, these equations offer no closed form solution, thus requiring computationally intensive numerical solutions. Solving the equations iteratively for every pixel and every possible direction of water flow on a very large DEM often leads to unacceptable computational times (Alemseged et al., 2005). We present a method for decomposing the hydrologic surface into two distinct components, ground and buildings, and then reducing their resolutions independently in the spatial domain for the purpose of reducing the computational time needed by flood models.
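The two-component reduction described above can be sketched as follows. This is an illustrative simplification under stated assumptions (block-mean downsampling for ground, a single axis-aligned bounding rectangle with mean height per building, hypothetical function names), not the dissertation's actual algorithm:

```python
# Illustrative sketch: downsample the ground surface by block averaging,
# and keep each building as a rectangular footprint with a representative
# height. Hypothetical simplification of the ground/building reduction.

def block_mean(grid, k):
    """Downsample a square 2D list by averaging non-overlapping k x k blocks."""
    n = len(grid)
    out = []
    for by in range(0, n, k):
        row = []
        for bx in range(0, n, k):
            block = [grid[y][x] for y in range(by, by + k)
                                for x in range(bx, bx + k)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def building_rectangle(mask, heights):
    """Parameterize one building as (xmin, ymin, xmax, ymax, mean_height)."""
    cells = [(x, y) for y, r in enumerate(mask) for x, v in enumerate(r) if v]
    xs, ys = [c[0] for c in cells], [c[1] for c in cells]
    h = sum(heights[y][x] for x, y in cells) / len(cells)
    return (min(xs), min(ys), max(xs), max(ys), h)

# 4x4 ground DEM downsampled to 2x2 by block means
ground = [[1.0, 1.0, 2.0, 2.0],
          [1.0, 1.0, 2.0, 2.0],
          [3.0, 3.0, 4.0, 4.0],
          [3.0, 3.0, 4.0, 4.0]]
print(block_mean(ground, 2))          # [[1.0, 2.0], [3.0, 4.0]]

# a single building occupying a 2x2 patch, 10 m tall
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
h = [[0.0] * 4 for _ in range(4)]
for x, y in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    h[y][x] = 10.0
print(building_rectangle(mask, h))    # (1, 1, 2, 2, 10.0)
```

Because the rectangle is stored as five numbers regardless of the ground resolution, the building's footprint (and hence the flow channels around it) survives coarsening that would smear the building into neighboring ground pixels under plain downsampling.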
To give the modeler a guideline for deciding a proper scale, we propose an optimal scale selection that balances memory usage against the user's accuracy requirements. While still a relatively new topic, some approaches to encoding the impact of buildings in flood or surge routing have been explored (Alemseged et al., 2005), (Neelz et al., 2007). We suggest a new method to encode this impact, derived by combining the drag force of buildings with the shear stress of the ground, including its dependence on the water depth.

Chapter 2 describes the study site and the different remote sensing data sets used (e.g., 90 m resolution NOAA NGDC, 30 m resolution SRTM) to make a seamless combined topographic-bathymetric DEM. Background information is given about the multiscale Kalman filter and how it is applied to fuse multiple images of different resolutions. Chapter 3 then presents an example of the traditional application of MKS and the difficulties, primarily high computational complexity, that arise. Motivation is given for the proposed method, known as the reduced-complexity multiscale Kalman Smoother (RC-MKS). In Section 3.3, we evaluate the accuracy of the SRTM data over the Miami Beach area through statistical characterization of the Kalman innovation terms, creating a terrain-dependent measurement uncertainty. Accurate delineation of the coastline is then achieved by using the fine-scale LiDAR data to boost the uncertainty values at the coarse SRTM scale. In Chapter 4, we compare the performance of RC-MKS in an urban environment to the traditional method of downsampling using a number of error metrics (e.g., root mean square error and correlation coefficient). A novel method for estimating the impact of buildings in hydraulic models while maintaining low memory usage is described. Conclusions and topics for future research are discussed in Chapter 5.
CHAPTER 2
REMOTE SENSING DATA AND DATA FUSION

2.1 Study Site

The investigated site is located in Dade County, FL and consists mainly of sandy beaches, barrier islands, and underwater shoals. The site encompasses the Miami Beach barrier island and the lip of the continental shelf. Beyond the shelf break, the surface drops down into the Straits of Florida (Figure 2-1). The data also include a part of the densely populated urban center of the city of Miami. This region of the North American coast is prone to hurricanes and tropical storms. It also experiences seasonal beach erosion and rip currents (Komar et al., 1998). We focused on this one site because there are relatively few locations with as wide an array of topographic and bathymetric data sets available as this one. The data types that are the most limited in their coverage around the world are the topographic and bathymetric LiDAR data sets. Such data are, however, being collected more and more, and will thus play increasingly important roles in future flood modeling.

Figure 2-1. A 1° latitude × 1° longitude tile of the NOAA elevation data over the Florida coast [NGDC, 2005]. The outlined box indicates the region of interest over Miami Beach. Coverage of the multiscale observations: (shaded relief) 40 km × 40 km NGDC data, (diagonal hatch) bathymetric LiDAR coverage, and (checker hatch) topographic LiDAR coverage.

Table 2-1. Original data spacing and published RMS error for data sets (NGDC, 2005; USGS, 2000; Gesch et al., 2001; LADS, 2003; IHRC, 2004).
Data source    Data type          RMS error
NOAA NGDC      90 m topography    7 m
NOAA NGDC      90 m bathymetry    0.3 m for 0 m <= d <= 20 m;
                                  1.0 m for 20 m < d <= 100 m;
                                  (0.01 × d) m for d > 100 m
                                  (d: depth of bathymetry, meters)
SRTM           30 m topography    6~10 m
Tenix LADS     10 m bathymetry    0.50 m
UF/FIU ALSM    5 m topography     0.12 m

2.2 Remote Sensing Data Sets

2.2.1 NOAA NGDC

The National Oceanic and Atmospheric Administration (NOAA) created the National Geophysical Data Center (NGDC), the purpose of which is to create and maintain an official elevation data set consisting of integrated topographic and bathymetric elevations along the U.S. coastline (NGDC, 2005). The primary acquisition modality for the bathymetric data is acoustic sonar deployed from boats. The standard grid spacing of the NGDC DEM is 3 arcseconds (approximately 90 m × 90 m). The coverage of the sonar soundings used to derive the bathymetric grid data varies with location, but is often dense near populated shores and grows sparser with distance from the shore. For this work, a 40 km × 40 km area was extracted from the 1° × 1° NGDC tile shown in Figure 2-1. Since the MKS quadtree requires integer scale changes, the NGDC data were upsampled to 80 m. The expected measurement uncertainty in the bathymetric elevations is a function of water depth (NGDC, 2005) and is given in Table 2-1. The soundings were acquired over many years, and are therefore not necessarily representative of the current state of the shallow-water
The standard NED DEMs area available from USGS are at 1 arcsecond postings (approximately 30m 30m), but for consistency with the bathymetry, NOAA formats the t opographic component of the NGDC data set at a 3 arcsecond spacing. Thus, the topographic com ponent of the NGDC can be signifi cantly out of date in areas undergoing urbanization or development and is t oo coarse to capture much of the important topography that can influence floodw ater routing. Furthermore, the USGS NED data are derived by interpreting elevation contours from the aeria l photography. Thus, buildings and many small landforms, such as stream channels, are comple tely lost. The primary advantage to the NGDC data set is its uniform coverage of the entire US coastline, consisti ng of a mosaic of 1 1 tiles that are publicly available (NGDC, 2005). Thus, it is ideal for filling in gaps between higher resolution images. 2.2.3 Topographic LiDAR LiDARderived elevations over th e barrier islands and coastal uplands were acquired by an ALSM system that was jointly owned by the University of Florida (UF) a nd Florida International University (FIU). The sensor consists of an Optech 1233 laser altimeter that pulses at 33 kHz and operates at a 1064 nm (nearinfrared) wave length, which does not penetrate water (Carter et al. 2001). First and last laser returns are recorded for each outgoing pulse. The data used in this work were acquired as part of a Federal Emer gency Management Agency (FEMA) grant in 2002 and 2003. Acquisition and processing details can be found in th e project final report in International Hurricane Research Center (IH RC, 2004). This acquisition was conducted by FIU 20 PAGE 21 personnel and processed to FEMA specifications. Thus, only the DEMs were available, not the raw LiDAR point data. With a maximum scan angle of only +/20 de grees from nadir, such ALSM systems can only image a swath width of roughly 400 m when flying at an altitude of 600 m above ground. 
In order to decrease the number of flight lines necessary for complete coverage of the Miami area, the aircraft was flown at relatively hi gh altitudes (between 900 m and 1200 m above ground level), increasing the effective swath width, f ootprint spacing, and footprint diameter. The average root mean squared error of the ALSM DEM elevations was still very good at 12cm (IHRC, 2004), but data voids and dropouts were common, particularly over lowreflectance surfaces (e.g. canals) and shadowed areas, due to higher path length losses and increased footprint spacing. For this study, the LiDAR points representi ng the georeferenced laser reflections in the IHRC data set were interpolated to form DEMs with 5m pixel spacing. 2.2.4 Bathymetric LiDAR The 3 arcsecond NGDC DEMs do not have suffi cient resolution to capture smallscale bathymetric features in the near shore region, such as breaks in unde rwater sand bars that can lead to rip currents. In the fall of 2002, a bathymetric LiDAR system operated by the Tenix LADS Corporation contracted with the st ate of Florida to acquire nears hore bathymetry off the coast of Dade County. The bathymetric elevations were acquired by the LADS MkII system using a 532 nm wavelength laser (a frequencydoubled output from a 1064 nm la ser). The laser pulses at 900 Hz while scanning over a 240 m wide swath (Stumpf et al., 2003), resulting in a nominal horizontal spot spacing of 4 m. The lase r footprint diameter on the water surface is approximately 2.5 m with 300~400m flight altitu de. The Dade County acquisition covered an area of roughly 7km 30km with a grid spacing of 10 m in the resulting DEM (Figure 21). We obtained the LADS data from the Florida Depart ment of Environmental Protection. Only the 21 PAGE 22 DEM was available, not that raw LiDAR point data. 
Prior to the MKS data fusion, the bathymetric LiDAR data were upsampled to 5 m and combined with the 5 m topographic LiDAR data, since the coverage areas were small and mutually exclusive. While not strictly necessary, this was done early in the analysis so that each level in the quadtree data structure that would eventually be populated with data could be associated with a given sensor type or data product (e.g., LiDAR, space-based SRTM radar, and the NOAA NGDC data). Although the maximum measurable depth for the Tenix LADS Mk II is specified as 70 m, data for the Dade County acquisition were limited to a maximum depth of about 60 m due to limited water clarity.

2.2.5 SRTM

SRTM was a cooperative project between the National Geospatial-Intelligence Agency (NGA), the National Aeronautics and Space Administration (NASA), and the German and Italian space agencies. The entire SRTM data set was acquired during a single Space Shuttle mission in February 2000, yielding a temporally consistent snapshot of the Earth's surface that is more recent than the NGDC data from NOAA. The primary instrument was a dual-antenna C-band (5.6 cm wavelength) radar carried onboard the Space Shuttle Endeavour. Limited X-band (3 cm wavelength) data were also acquired. Data were collected over 80% of the Earth's surface (between 60° N and 56° S latitude) (USGS, 2004). Unlike most other large-scale elevation data sets, SRTM elevations were generated from one consistent source (a single sensor on a single flight), resulting in smaller and more consistent elevation errors (Gesch et al., 2001). Only the C-band system on SRTM was operated in a Scan Synthetic Aperture Radar (ScanSAR) mode, which allowed it to obtain contiguous coverage of the terrain. Thus, only the C-band data were used to generate the Digital Terrain Elevation Data Level 2 (DTED2) data products for NGA, where level 2 refers to 1 arc-second spacing.
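The upsample-and-combine step described above can be sketched with NumPy. This is an illustrative toy (nearest-neighbor 2× upsampling and NaN-coded voids on tiny hypothetical grids), not the project's actual gridding code:

```python
import numpy as np

def upsample_2x(grid):
    """Nearest-neighbor upsample of a 2-D grid by a factor of 2 per axis
    (e.g. 10 m postings -> 5 m postings)."""
    return np.repeat(np.repeat(grid, 2, axis=0), 2, axis=1)

def combine_exclusive(topo, bathy):
    """Combine two co-registered grids whose valid (non-NaN) coverages are
    mutually exclusive: take whichever grid has data at each pixel."""
    return np.where(np.isnan(topo), bathy, topo)

# Toy 4x4 bathymetric grid at 10 m postings (negative = below sea level)
bathy_10m = -np.ones((4, 4))
bathy_5m = upsample_2x(bathy_10m)   # now 8x8 at 5 m postings
bathy_5m[:, :4] = np.nan            # no bathymetry over land

# Toy topographic grid at 5 m postings covering only the left (land) half
topo_5m = np.full((8, 8), np.nan)
topo_5m[:, :4] = 2.0                # 2 m elevations on land

merged = combine_exclusive(topo_5m, bathy_5m)
```

Because the coverages are mutually exclusive, the order of the two arguments does not matter; with overlapping coverage one would instead need a rule (or the MKS machinery itself) to reconcile the two values.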
Due to the ScanSAR acquisition method and the orbital trajectory, the coverage density of individual SAR images varies with location. Coverage is dense at high latitudes and relatively low near the equator. The wavelength and variable coverage have implications for phase unwrapping and vegetation penetration. Radar energy at C-band wavelengths typically exhibits limited penetration of vegetation canopies. Thus, SRTM DTED2 elevations represent an intermediate height between the bare surface and the canopy tops. The precise height bias due to landcover is unknown because it is highly dependent on the vegetation type and density, the number of component SRTM images in the particular DTED2 tile, and their incidence angles.

At present, data at 1 arc-second (~30 m) horizontal resolution for the United States and 3 arc-second (~90 m) for the rest of the world are distributed by the USGS. Since the MKS quadtree requires integer scale changes, the SRTM data were upsampled to 20 m for this work. The mission specifications for absolute horizontal and vertical accuracies are 20 meters (circular error at 90% confidence) and 16 meters (linear error at 90% confidence), respectively. The vertical accuracy is often significantly better than the 16 meter specification, and is actually closer to 10 meters (Farr et al., 2007).

NGA provides finished topographic data from SRTM after performing quality control checks on the unfinished data. Unfinished SRTM contains occasional voids or gaps where the terrain is in the radar beam's shadow or in areas of low radar backscatter, such as calm seas and lakes. The finished SRTM DEMs have the elevations of areas corresponding to water bodies set to a constant value (zero for oceans, the mean of the surrounding elevations for inland lakes, and monotonically decreasing values along their length for large rivers). Small voids are filled in via interpolation.
The finished NGA product is a uniform grid of elevation values indexed to specific points on the ground in a standardized DTED2 format. One important application that utilizes SRTM digital elevation data is delineating hydrologic surface features, such as watersheds and large river networks. Since the SRTM dataset covers most of the Earth's land surface, it offers opportunities to extend hydrologic modeling research to nearly the entire world, where accurate modeling has not previously been feasible due to the low resolution of existing data sets (Curkendall et al., 2003).

The topographic LiDAR data are provided in orthometric heights referenced to the NAD83 datum using the geoid model Geoid03 (IHRC, 2004), but the SRTM data are referenced to the WGS84 ellipsoid using the older geoid model EGM96 (NGA, 2005), as shown in Table 2-1. Before the LiDAR and SRTM data can be compared, they must be brought into a consistent georeferenced coordinate system. In an effort to provide seamless coverage, NGA had the SRTM data mosaicked into continental-scale data sets, which then underwent continental-scale least-squares (bundle) adjustments. While the SRTM accuracy on continental scales is excellent, over smaller areas on the order of tens of square kilometers the georeferencing of the LiDAR data is more accurate. Unfortunately, there is a limit to how precisely the SRTM heights can be corrected to reflect more recent geoid models, such as Geoid03. The process used by the Jet Propulsion Laboratory to generate SRTM "orthometric" heights was as follows (Slater, 2005).

1. Convert elevations from SRTM sensor coordinates to ellipsoidal heights for the 1 arc-second posting of terrain elevations.
2. Evaluate the spherical harmonic expansion of the EGM96 gravity model (geoid) at 0.1° (6 minute) intervals in each 1° × 1° cell to generate geoid undulation values at this interval.
3. Use bilinear interpolation to compute geoid undulations at the SRTM 1 arc-second postings.
4.
Subtract the geoid undulation from the ellipsoid height at each 1 arc-second post to generate a floating-point orthometric height.
5. Round the floating-point value to the nearest integer (meter). These integer height values are the values found in the SRTM DTED2 data files.

As mentioned above, the EGM96 geoid was embedded in the SRTM data at 0.1° intervals for each 1° × 1° tile before the elevations were quantized to 1 meter integer values. This results in a ±0.5 m quantization effect that cannot be removed. While SRTM was bundle adjusted for each continent using GPS ground truth, over smaller areas there can be differences between SRTM and accurate LiDAR orthometric heights on the order of a few meters due to the older geoid model embedded in the SRTM data product. If an area with complete LiDAR coverage is to be fused with SRTM data, the average quantization error can be subtracted out of the SRTM data prior to fusion. When fusing data over areas with partial or no LiDAR coverage, however, we must convert the SRTM data to orthometric heights using the following steps (NGA, 2005) and accept the residual ±0.5 m quantization errors. The two-step process is as follows:

1. Convert the orthometric heights of the SRTM data to ellipsoid heights by subtracting the geoid, which is calculated by the EGM geoid height calculator version 1.0 (NGA, 2005).
2. Convert the ellipsoid heights back into orthometric heights in NAVD88 using Corpscon version 6 (USACE, 2004). The WGS84 ellipsoid is approximately the same as the GRS80 ellipsoid (geocentric), with slight differences documented in the WGS84 Technical Report (NGA, 2005).

2.3 Data Fusion

Data fusion techniques combine data from multiple sensors, or related information from associated databases, to achieve improved accuracies and more specific inferences than could be achieved by the use of a single sensor alone (Hall et al., 1997).
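The bilinear interpolation, subtraction, and integer-rounding steps of the JPL procedure, and the resulting ±0.5 m quantization bound, can be illustrated with a small sketch. The corner undulations and ellipsoid height below are hypothetical numbers chosen for illustration, not EGM96 output:

```python
def bilinear(g00, g01, g10, g11, fx, fy):
    """Bilinear interpolation of four corner geoid undulations (meters).
    fx, fy in [0, 1] are fractional positions inside one coarse 0.1-deg cell."""
    top = g00 * (1 - fx) + g01 * fx
    bot = g10 * (1 - fx) + g11 * fx
    return top * (1 - fy) + bot * fy

# Hypothetical geoid undulations (m) at the corners of one 0.1-deg cell
undulation = bilinear(-26.4, -26.1, -26.3, -26.0, fx=0.25, fy=0.5)

ellipsoid_height = 5.30                        # hypothetical ellipsoidal height (m)
ortho_float = ellipsoid_height - undulation    # step 4: floating-point orthometric height
ortho_dted = round(ortho_float)                # step 5: integer meters stored in DTED2

quantization_error = ortho_float - ortho_dted  # always within +/-0.5 m
```

The last line makes the point in the text explicit: once the heights are rounded to integer meters, a residual error of up to half a meter is baked into the product and cannot be undone by later geoid corrections.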
Observational data may be fused at the measurement level, feature level, or decision level. Measurement data can be combined directly if the sensors measure the same physical phenomena, using methods like weighted least squares (Sorenson, 1970). In feature-level fusion, features extracted from multiple sensors are combined into a single feature vector. The feature vector can then be used in pattern recognition algorithms, such as clustering (Gunatilaka et al., 2001), (Gao et al., 2005), or in a state-variable estimator, such as a Kalman filter (Gan et al., 2001), (Willsky, 2002). In decision-level fusion, final decisions are made by fusing the preliminary decisions made for each sensor.

Depending on the particular state-to-observation mapping used, state-variable methods can perform either measurement-level or feature-level fusion. One of the more common approaches is the application of a sequential estimation technique, such as the Kalman filter. Chou et al. (1994) and Fieguth et al. (1995) introduced a recursive estimator consisting of a multiscale variant of the well-known Kalman smoother, constructed on a Markov tree data structure. We refer to that basic method as the multiscale Kalman smoother (MKS). It is a methodology for the efficient, statistically optimal fusion of measurements of a Gaussian random process. The approach can solve problems in which the measurement data may convey information about random processes at very different scales (Daniel et al., 1997).

The standard Kalman filter and smoother are first briefly presented below, followed by the theoretical framework for the basic MKS algorithm. In Chapter 3, we discuss the extensions to the standard MKS that we developed for making single-surface DEMs by fusing the component elevation data sets.
2.3.1 Kalman Filter and Smoother

The Kalman filter is a recursive stochastic estimator that attempts to minimize the mean squared error between the estimates and the random variables being estimated (i.e., the state variables) (Haykin, 2002), (Hwang et al., 1997), (Kay, 1993). It is often used to smooth noisy observations, but unlike Fourier methods, it operates in the time/space domain. Because it incorporates a stochastic model of the underlying state process, it can continue to produce estimates in the event of data gaps. Since the solution is computed recursively using only the previous estimate and the present observation, the standard Kalman filter has relatively low memory requirements (Haykin, 2002). It can also be formulated in matrix form because only second-order statistics are used, thus giving it multiple-input/multiple-output capability. If the model input parameters are known to be correct, the Kalman filter is the optimal linear minimum mean-square error (LMMSE) estimator, and it becomes the optimal MMSE estimator when the signal and noise distributions are Gaussian (Kay, 1993). The state model is given as

x(t+1) = Φ(t)x(t) + w(t)    (2-1)

where x(t) is the process state vector at time t and Φ(t) is the state transition matrix. w(t) is the process noise (also sometimes called the detail process), which is assumed to be a white Gaussian process with zero mean and variance Q(t). The corresponding observation model is given as

y(t) = H(t)x(t) + v(t)    (2-2)

where y(t) is the observation vector at time t, H(t) is the observation-state relation matrix, and v(t) is the measurement noise process, which is assumed to be a white Gaussian process with zero mean and variance R(t).
In the recursion of the Kalman filter (Figure 2-2), the future state variable at time t+1 is first predicted (the a priori estimate), along with the error covariance, from the present state and covariance at time t, using the stochastic information encoded in the state transition operator and the process noise variance.

Figure 2-2. The solution to recursive minimum mean square estimation portrayed as a predictor-corrector (Haykin, 2002).

x̂(t+1|t) = Φ(t)x̂(t|t)    (2-3)
P(t+1|t) = Φ(t)P(t|t)Φᵀ(t) + Q(t)    (2-4)

The a posteriori estimates of the state and covariance at time t+1 are then calculated using the present observations at time t+1 and the priors,

x̂(t+1|t+1) = x̂(t+1|t) + K(t+1)[y(t+1) − H(t+1)x̂(t+1|t)]    (2-5)
P(t+1|t+1) = [I − K(t+1)H(t+1)]P(t+1|t)    (2-6)

where K(t+1) is the Kalman gain, defined as

K(t+1) = P(t+1|t)Hᵀ(t+1)[H(t+1)P(t+1|t)Hᵀ(t+1) + R(t+1)]⁻¹    (2-7)

Because the measurement noise variance R appears in the Kalman gain as an inverse factor (or the denominator in the scalar case), the Kalman filter balances uncertainty in the observations with uncertainty in the process. Considering the scalar H = 1 case for simplicity, one can observe that when R is very small, K is close to unity and the filter trusts the observations and tracks them closely. When R is very large, K is close to zero and the filter relies more on the stochastic model, because the posterior estimate is dominated by the prior estimate.

The standard Kalman filter makes one pass through the data in one direction, and is thus ideal for processing streaming data. An extension of the Kalman filter, known as the Kalman smoother, can be employed to make both a forward and a backward sweep through the data. While it requires processing the data in batch mode, the Kalman smoother has the advantage of achieving optimal estimates conditioned on all of the observations rather than just on the past observations (Hwang et al., 1997).
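In scalar form, the predict-correct recursion of Eqs. 2-3 to 2-7, plus a backward fixed-interval smoothing sweep of the kind just mentioned, takes only a few lines. The sketch below is illustrative, not production code; it uses a constant-state model (Φ = 1, H = 1, Q = 0) and hypothetical observations:

```python
def kalman_filter(ys, phi, H, Q, R, x0, P0):
    """Scalar Kalman filter: predict (Eqs. 2-3, 2-4), then correct
    (Eqs. 2-5 to 2-7). Returns posteriors and one-step predictions."""
    x, P = x0, P0
    xs_f, Ps_f, xs_pr, Ps_pr = [], [], [], []
    for y in ys:
        xs_pr.append(phi * x)                          # a priori state (Eq. 2-3)
        Ps_pr.append(phi * P * phi + Q)                # a priori variance (Eq. 2-4)
        K = Ps_pr[-1] * H / (H * Ps_pr[-1] * H + R)    # Kalman gain (Eq. 2-7)
        x = xs_pr[-1] + K * (y - H * xs_pr[-1])        # a posteriori state (Eq. 2-5)
        P = (1 - K * H) * Ps_pr[-1]                    # a posteriori variance (Eq. 2-6)
        xs_f.append(x)
        Ps_f.append(P)
    return xs_f, Ps_f, xs_pr, Ps_pr

def rts_smoother(xs_f, Ps_f, xs_pr, Ps_pr, phi):
    """Backward (fixed-interval) sweep conditioning each estimate on all
    observations. Note that the smoothed variance is not needed to
    compute the smoothed estimates themselves."""
    xs_s = list(xs_f)
    for t in range(len(xs_f) - 2, -1, -1):
        J = Ps_f[t] * phi / Ps_pr[t + 1]               # smoother gain
        xs_s[t] = xs_f[t] + J * (xs_s[t + 1] - xs_pr[t + 1])
    return xs_s

# Hypothetical noisy observations of a constant state
obs = [9.8, 10.3, 9.9, 10.1, 10.2]
xs_f, Ps_f, xs_pr, Ps_pr = kalman_filter(obs, 1.0, 1.0, 0.0, 1.0, 0.0, 100.0)
xs_s = rts_smoother(xs_f, Ps_f, xs_pr, Ps_pr, 1.0)
```

With Q = 0 and Φ = 1 the filter reduces to Bayesian averaging (posterior precision 1/P0 + n/R), and the smoother pulls every earlier estimate up to the final, all-data posterior, which illustrates the causal/noncausal distinction discussed next.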
The Kalman filter and Kalman smoother therefore constitute a causal filter and a noncausal filter, respectively (Anderson, 1999).

There are three main types of Kalman smoothing methods: 1) fixed-interval smoothing, 2) fixed-point smoothing, and 3) fixed-lag smoothing (Anderson, 1999). Fixed-interval smoothing, also called the Rauch-Tung-Striebel (RTS) algorithm, is the form implemented in our multiscale Kalman smoother. It is of interest to note that the smoothing error variance matrix is not needed for the computation of the estimate in the backward sweep. This is in contrast to the situation in the filter (forward) sweep, where the error covariance matrix is needed for the gain and associated estimate computations (Hwang et al., 1997).

The Kalman filter can also be viewed as an analogue to the Hidden Markov Model (HMM). The primary difference is that the underlying state process is assumed continuous and Gaussian in the Kalman filter, whereas the state is often considered discrete and of arbitrary distribution in the HMM. Both approaches can be derived from the Bayesian estimator (Roweis et al., 1999).

2.3.2 Multiscale Kalman Filter and Smoother

Due to the difficulty of establishing suitable 2-D causal models and the high dimensionality of the resulting state vectors, the application of the standard Kalman filter to image processing has been limited (Woods, 1984). But in the 1990s, Chou et al. (1994) and Fieguth et al. (1995) developed a method that replaced the 2-D recursion in space (pixel rows and columns) with a recursion in scale. This allowed them to develop a causal Kalman filter sweep for images. One of the main motivations for trying to employ causality is to develop computationally efficient algorithms, especially for the large images often encountered in remote sensing. Figure 2-3 shows a multiscale pyramid, or quadtree data structure, which is used in the multiscale Kalman filter and smoother (Fieguth et al., 1995).
The structure is similar to wavelet decompositions applied to multiresolution analysis, but the Kalman-based approach is able to accommodate irregularly spaced data and indirect measurements via the observation-state relation, as well as provide error statistics via the covariance matrix (Slatton et al., 2001).

Figure 2-3. Quadtree data structure with two measurements (annotated: merge data of different resolutions; fill in all data dropouts), where m denotes scale. The support of the tree at the finest scale is 2^M × 2^M, where m ∈ (1, ..., M).

x(s) = Φ(s)x(Bs) + Γ(s)w(s),  s ∈ S    (2-8)
y(s) = H(s)x(s) + v(s),  s ∈ T    (2-9)

where x(s) is the state variable at node s. s is the node index on the tree, where s = 1 denotes the root node. Each node can also be referenced by a three-tuple (i, j, m), where (i, j) is the pixel row and column of the node at scale (level) m in the tree. y(s) represents the observation (if any) at node s. The stochastic forcing function w(s) is a Gaussian white noise process with unit variance, and the measurement error v(s) is a Gaussian white noise process with scale-dependent variance R(s). S represents the set of all nodes on the quadtree, and T denotes the collection of nodes at which an observation is available. B is a backshift operator in scale, such that Bs is one scale coarser (higher up in the tree) than s. Φ(s) is the coarse-to-fine state transition operator, Γ(s) is the coarse-to-fine stochastic detail scaling function, H(s) is the measurement-state model, and R(s) represents the measurement variance of the observations. A complete description of the MKS algorithm can be found in Chou et al. (1994) and Fieguth et al. (1995). The algorithm is non-iterative and has deterministic computational complexity, with O(M_S) operations, where M_S is the number of nodes at the finest scale m = M. A residual, known as the innovation, ν(s) = y(s) − H(s)x̂(s|s−), is computed during the Kalman filter recursion, where x̂(s|s−) is the a priori state estimate at node s.
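The tree bookkeeping just described, the backshift B to a parent, the four children of each node, and the (i, j, m) three-tuple referencing, can be sketched as follows. This is an illustrative helper with names of our own choosing, not the dissertation's implementation:

```python
def parent(i, j, m):
    """Backshift operator B: node (i, j) at scale m maps to its parent
    at the next coarser scale m - 1."""
    return (i // 2, j // 2, m - 1)

def children(i, j, m):
    """The four children of node (i, j, m) at the next finer scale m + 1."""
    return [(2 * i + di, 2 * j + dj, m + 1) for di in (0, 1) for dj in (0, 1)]

def total_nodes(M):
    """Total node count of a quadtree whose finest scale M holds
    2^M x 2^M leaves: the sum of 4^m over m = 0..M, which is just
    under 4/3 of the leaf count."""
    return sum(4 ** m for m in range(M + 1))

# Example: a node at scale 3 and its parent/children relations
node = (5, 7, 3)
```

The closed form (4^(M+1) − 1)/3 for the node total is what makes the overall complexity proportional to the number of leaf nodes, as discussed next.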
Total complexity of the scalar MKS is proportional to M_S, since the total number of nodes in the tree is (4/3)M_S and each node has constant complexity (four children and one parent) (Fieguth et al., 1995). With the generative coarse-to-fine model in place, the MKS algorithm starts with a fine-to-coarse sweep up through the quadtree using a Kalman filter recursion with an added merge step. The fine-to-coarse sweep is followed by a coarse-to-fine downward sweep that corresponds to Kalman smoothing (i.e., the RTS algorithm mentioned earlier). The whole procedure of multiscale Kalman filtering and smoothing is shown in Figures 2-4 and 2-5, respectively.

The standard Kalman filter with the Gaussian assumption provides optimal estimates in the mean squared sense if perfect a priori information regarding the model parameters is known. Since a wide range of natural stochastic processes, such as topography, exhibit power-law behavior in their power spectra, they can be effectively modeled as fractional Brownian motion (fBm) processes (Fieguth et al., 1995). We therefore assume that our state process (surface elevation) follows a 1/f^μ model in scale (Fieguth et al., 1995), (Slatton et al., 2001). Using this model, the power spectrum of the state variable x(s) is represented by the multiscale model in (Eq.
2-8) by specifying the coarse-to-fine state transition operator Φ(s) = 1 and process noise standard deviation Γ(s) = Γ₀ · 2^(−μ(m+1)/2), where m is the scale of node s. The values of Γ₀ and μ are determined by first-order regression of the power spectra of the observations and a realization of the fBm model in log-log space. Since our data sets consist of direct measurements of surface elevation, H(s) is simply 1 where s ∈ T and 0 otherwise.

Figure 2-4. Procedure for the multiscale Kalman filter (upward sweep).

Figure 2-5.
Procedure for the multiscale Kalman smoother (downward sweep).

CHAPTER 3
MULTISCALE ELEVATION IMAGE FUSION OVER COASTAL AREAS

3.1 Introduction

Oceanic phenomena, such as hurricanes, tsunamis, and sea-level rise, as well as terrestrial processes, such as fluvial erosion and subsidence, continually modify the world's coastlines. Inland flooding caused by storm surge and extreme wave action is influenced by nearshore topography and bathymetry. As population levels and property development in coastal areas continue to increase, there is a critical need to characterize the coastal zone environment for improved nearshore physical and biological process monitoring. Precise mapping of the ocean bottom is also important for characterizing acoustic propagation for sonar (Komar et al., 1998), (Kuperman et al., 2004).

A mathematical framework for fusing data and estimating coastal zone surface elevations is therefore needed so that coastal flooding and erosion mechanisms can be more accurately predicted and mitigated. A robust and scalable framework for combining surficial elevation data from above (topography) and below (bathymetry) the waterline is presented. The framework could subsequently be used to support geo- and biophysical modeling as well as military functions, such as tactical deployments and post-strike damage assessment in the coastal zone (Hicks et al., 2000). In one study, Starek et al. (2005) generated a DEM of a coastal area (St. Augustine, FL) using airborne LiDAR data. They selected the mean higher high water (MHHW) tidal datum, referenced to a nearby tidal gauge, and used the corresponding elevation contour as an approximation for the shoreline (the MHHW is the average of the higher high water of each tidal day observed over a 19-year period). The resulting elevation contour was then projected onto the DEM and compared to results derived from conventional methods for determining shoreline position (Starek et al., 2005), (Robertson et al., 2004).
When used in conjunction with high-resolution LiDAR DEMs, this approach to deriving the shoreline contour is very accurate spatially. But the LiDAR data usually only cover a thin strip along the beach, and are thus of limited use when establishing the regional elevation surface for flood modeling. Furthermore, no matter how precise a DEM is, it can only reflect the surface conditions (e.g., urban infrastructure) that were present when the data were collected. So ideally, one would want a method to merge different elevation data sets to get continuous coverage of the region, high resolutions wherever possible, and the most recent acquisitions to better reflect recent urban development. This method should also have the ability to encode uncertainty in the data sets based on the data set age, resolution, and/or known sensor characteristics.

Chou et al. (1994) and Fieguth et al. (1995) introduced a recursive estimator consisting of an MKS constructed on a Markov tree data structure that accommodates multisensor observations of differing resolutions. At each node in the tree, the estimator optimally (in a least mean squared error sense) blends a stochastic multiscale model with the available observations according to a Kalman gain that accounts for the error characteristics of each sensor type. The estimator also accommodates sparse and irregularly spaced data and provides an estimate error variance (uncertainty measure) at each node in the tree. Slatton et al. (2001) successfully used MKS to fuse topographic elevations derived from high-resolution interferometric SAR and airborne LiDAR data. However, in many remote sensing applications, it is desirable to extract both regional and local structure. In such cases, the resolutions of the component data sets may differ by an order of magnitude or more, and the highest resolution observations may be sparse relative to the coarse-scale observations.
Such data characteristics are common in the case of coastal zone surficial mapping, and lead to numerical inefficiencies in the standard MKS.

3.2 Reduced-Complexity MKS

For a standard implementation of MKS in which two-dimensional surface elevation data are fused, the full set of recursive operations is performed at each node in the quadtree. This approach is highly inefficient if the finest scale (the set of leaf nodes in the quadtree) is sparsely populated with observations. This is often the case with small airborne LiDAR strips along the beaches. So in (Slatton, Cheung et al., 2005), we developed a reduced-complexity version of the MKS algorithm (RC-MKS) to exploit this commonly encountered data configuration. First, we derived an alternative formulation for the scale-wise process noise variance to that found in (Fieguth et al., 1995). Then we developed an efficient implementation of the MKS algorithm that indexes the leaf nodes so that only fine-scale nodes containing data are considered in the full recursion.

3.2.1 Simplified Equation for Calculating Process Noise Variance

Because MKS is recursive, a significant overall reduction in computational complexity is possible with modest per-node reductions. The nominal MKS algorithm has a computational complexity of O(L), where L is the number of leaf nodes in the quadtree, such that L = 2^(2M), where M is the finest scale. From the downward model of (Eq. 2-8) and (Eq. 2-9), which is a Markov process and satisfies the forward orthogonality property, i.e., E[x(0)wᵀ(s)] = 0 for s ≠ 0, a corresponding state equation for the upward model can be written as

x(Bs) = Φ⁻¹(s)x(s) − Φ⁻¹(s)Γ(s)w(s)    (3-1)

This backward state process is still Markov, but the backward orthogonality condition between x(s_i) and w(s_j) is not satisfied for all node pairs. Therefore, the estimate of x(Bs) is not the same as the one-step projected estimate of x(s).
We can define w(s) as the sum of its MMSE estimate and an error term as follows:

w(s) = E[w(s)|x(s)] + w̃(s)    (3-2)

where w̃(s) represents the non-orthogonal contributions to w(s) in the upward model. By the MMSE property, w̃(s_i) is orthogonal to x(s_j). Since we assume that w(s) and x(s) are zero mean and Gaussian, we can write, using the fundamental formula of MMSE estimation (Kay, 1993),

E[w(s)|x(s)] = E[w(s)xᵀ(s)] (E[x(s)xᵀ(s)])⁻¹ x(s)    (3-3)

Substituting the downward state model into (Eq. 3-3) gives

E[w(s)|x(s)] = Γᵀ(s)P_x⁻¹(s)x(s)    (3-4)

where P_x(s) = E[x(s)xᵀ(s)]. The upward Markov model can then be rewritten (Verghese et al., 1979) as

x(Bs) = Φ⁻¹(s)[I − Γ(s)Γᵀ(s)P_x⁻¹(s)]x(s) − Φ⁻¹(s)Γ(s)w̃(s) = F(s)x(s) + w̄(s)    (3-5)

We define Q̄(s) to be

Q̄(s) = E[w̄(s)w̄ᵀ(s)] = Φ⁻¹(s)Γ(s)E[w̃(s)w̃ᵀ(s)]Γᵀ(s)Φ⁻ᵀ(s)
     = Φ⁻¹(s)Γ(s)[I − Γᵀ(s)P_x⁻¹(s)Γ(s)]Γᵀ(s)Φ⁻ᵀ(s)    (3-6)

But from the equation for P_x(s), we can rewrite Q̄(s) and F(s) as follows:

Γ(s)Γᵀ(s) = P_x(s) − Φ(s)P_x(Bs)Φᵀ(s)    (3-7)
F(s) = P_x(Bs)Φᵀ(s)P_x⁻¹(s)    (3-8)

Thus, upon substitution of F(s) and P_x(Bs),

Q̄(s) = P_x(Bs) − F(s)P_x(s)Fᵀ(s) = P_x(Bs) − F(s)Φ(s)P_x(Bs)    (3-9)

We then write the final expression for Q̄(s) as

Q̄(s) = [I − F(s)Φ(s)]P_x(Bs)    (3-10)

which is an alternative to the form presented in (Fieguth et al., 1995):

Q̄(s) = [I − P_x(Bs)Φᵀ(s)P_x⁻¹(s)Φ(s)]P_x(Bs)    (3-11)

The alternative calculation for Q̄(s) in (Eq. 3-10) requires two multiplications per node, compared with four multiplications in the standard expression in (Eq. 3-11). The reduction of two multiplications per iterative step is for the scalar MKS. If incorporated into a vector MKS, the corresponding reduction would increase proportionally with the vector size.

3.2.2 Pruning the Quadtree Structure

Further reduction in the computational complexity can be realized if the sparseness of the fine-scale data is exploited.
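Before turning to the pruning, the equivalence of the two-multiplication form (Eq. 3-10) and the four-multiplication form (Eq. 3-11) from Section 3.2.1 can be spot-checked numerically in the scalar case. The sketch below uses the scalar variance recursion P_x(s) = Φ²P_x(Bs) + Γ² implied by the downward model; parameter values are hypothetical:

```python
def q_standard(P_parent, phi, gamma):
    """Scalar form of Eq. 3-11: Qbar = (1 - P(Bs)*phi*P(s)^-1*phi) * P(Bs),
    which costs four multiplications per node in the matrix case."""
    P = phi * P_parent * phi + gamma * gamma   # P_x(s) from the downward model
    return (1.0 - P_parent * phi * (1.0 / P) * phi) * P_parent

def q_reduced(P_parent, phi, gamma):
    """Scalar form of Eq. 3-10: Qbar = (1 - F*phi) * P(Bs), reusing the
    backward transition F = P(Bs)*phi*P(s)^-1 (Eq. 3-8) already in hand."""
    P = phi * P_parent * phi + gamma * gamma
    F = P_parent * phi / P
    return (1.0 - F * phi) * P_parent

# The two expressions agree over a sweep of hypothetical scalar parameters
pairs = [(q_standard(P, phi, g), q_reduced(P, phi, g))
         for P in (1.0, 4.0) for phi in (0.9, 1.0) for g in (0.5, 2.0)]
```

The saving comes entirely from reusing F(s), which the backward recursion needs anyway; in the scalar case Q̄ reduces to P(Bs)Γ²/(Φ²P(Bs) + Γ²), which is strictly positive as a noise variance must be.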
In data fusion applications where the observation scales differ by an order of magnitude or more, it is quite common that the finest-scale data will only be available over a small subset of the quadtree leaf nodes. New information is presented to a Kalman estimator through observations; in the absence of observations, the prior estimate is simply propagated. Thus, we need only consider those subtrees that contain at least one leaf node at which an observation is available (Figure 3-1C). Let m = N be the level where dense coarse-scale observations are available, e.g., the NGDC elevations, and m = M be the level where the finest-scale data are available, e.g., the LiDAR elevations. We define an indicator (flag) matrix to store the locations of populated subtrees. The flag matrix is 2^N × 2^N, i.e., the size of the NGDC data matrix. To find valid subtrees, we tile the leaf nodes with a 2^(M−N) × 2^(M−N) window and set the corresponding entry in the flag matrix to 0 if no node in the corresponding tile contains an observation. Otherwise, the entry in the flag matrix is set to 1. This can be accomplished at no extra computational cost during the specification of H (the measurement-state model operator).

Figure 3-1. Data processing structures and procedure for utilizing the distribution of measurements at the finest scale. A) Full pyramid structure. B) Pruned pyramid structure. C) Set of subtrees for which finest-scale data are available. D) Mapping the subtrees to square matrices using the flag matrix.
Using the flag matrix, we can obtain the ranges of row and column indices (R_M and C_M) at the finest scale for each valid subtree as

R_M = 2^(M−N)(R_N − 1) + 1 ~ 2^(M−N)R_N
C_M = 2^(M−N)(C_N − 1) + 1 ~ 2^(M−N)C_N    (3-12)

where R_N and C_N are the row and column indices of the flag matrix whose entries are set to 1. Due to the Markov property of MKS, each subtree can be processed independently. In practice, however, it is convenient to concatenate the subtrees such that their bases form a single rectangular matrix. That matrix has support N_st·2^(M−N) × 2^(M−N), where N_st is the number of subtrees. Other MKS parameters, such as H and R, are similarly permuted to correspond to the concatenated subtree structure. Subtrees that connect leaf nodes to scale N but do not contain observations are not explicitly processed, since the priors are propagated directly to scale N.

The reduction in computational complexity afforded by this implicit pruning of the quadtree depends on (i) the size of the subtree blocks, which is determined by the difference in scales M and N, and (ii) the aggregation of the fine-scale data, which determines the number of subtrees N_st. The percent reduction in floating-point operations is determined by the reduction in the number of leaf nodes that must be processed in the recursion, which is given by

E_M(%) = (1 − N_st·2^(M−N)·2^(M−N) / 2^(2M)) × 100    (3-13)

The NOAA NGDC data contain comprehensive coverage (such that there are no data dropouts) at scale N, so we did not apply the memory-saving procedure above scale N. With the quadtree now effectively pruned below scale N, the standard MKS algorithm was used to compute estimates for each subtree in parallel (Figure 3-1D). As we process upwards in the quadtree, the number of data elements processed is reduced by one quarter from the previous scale. These subtrees are used from the finest scale to one scale below the NOAA NGDC scale (scale N+1), for both the upward Kalman filtering sweep and the downward Kalman smoothing sweep.
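The flag-matrix construction and the savings formula (Eq. 3-13) can be sketched with NumPy on a toy configuration. The strip-like observation mask below mimics a narrow LiDAR swath and is purely illustrative; the final check compares Eq. 3-13 against the ≈81.5% reduction reported in Section 3.2.3 for N_st = 48501, M = 13, N = 9:

```python
import numpy as np

def flag_matrix(obs_mask, M, N):
    """Tile the 2^M x 2^M leaf-node observation mask with 2^(M-N) x 2^(M-N)
    windows; flag a coarse cell 1 if any leaf inside it holds an observation."""
    w = 2 ** (M - N)
    tiles = obs_mask.reshape(2 ** N, w, 2 ** N, w)
    return tiles.any(axis=(1, 3)).astype(int)

def percent_reduction(n_subtrees, M, N):
    """Eq. 3-13: percent reduction in leaf nodes processed in the recursion."""
    return (1.0 - n_subtrees * 2 ** (M - N) * 2 ** (M - N) / 2 ** (2 * M)) * 100.0

# Toy configuration: M = 5 (32 x 32 leaves), N = 3 (8 x 8 flag matrix),
# with observations confined to a narrow 3-leaf-wide "swath"
M, N = 5, 3
mask = np.zeros((32, 32), dtype=bool)
mask[:, 4:7] = True
flags = flag_matrix(mask, M, N)
savings = percent_reduction(int(flags.sum()), M, N)
```

Note how the savings depend on aggregation, not just sparsity: the 3-leaf-wide strip occupies only one coarse column of tiles, so a single column of the flag matrix (8 subtrees) survives the pruning.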
When the filtering process reaches scale N+1, the posterior estimates at scale N+1 are used as prior information for estimation at scale N in the usual Kalman recursion. To accomplish this, we use the flag matrix as an indicator function to permute the rectangular matrix of Kalman estimates into a square matrix of prior state information. From scale N, the standard MKS up-sweep proceeds with square matrices. Normal down-sweeping (Kalman smoothing) is performed from the coarsest (root) scale to the NOAA NGDC scale. When the estimator reaches scale N+1, it proceeds down through the subtrees according to the flag matrix. As a final step, we must propagate the estimates at scale N where the flag matrix value is 0 down to scale M. We initially used the nominal quadtree interpolation, i.e., nearest-neighbor (zero-order) interpolation, which retains the Markov property for the propagation. However, propagation on a Markov tree over several scales leads to a well-known problem known as block artifacts (loss of spatial correlation). To retain spatial correlation, we instead employed an upsampling of the NGDC data with linear interpolation. The propagated values were then available at scale M in tiles where no LiDAR data were present.

3.2.3 Results

This section presents the application of the proposed RC-MKS method for fusing data sets over the Florida coast. The topographic LiDAR observations and the bathymetric LiDAR observations were available at a 5 m grid spacing (data size = 2^13 × 2^13) and a 10 m spacing (data size = 2^12 × 2^12), respectively. The NGDC observations were resampled from their original 90 m spacing to 80 m (data size = 2^9 × 2^9) so that the data sets would differ in resolution by integer powers of 2.

Table 3-1. Comparison of memory storage using RC-MKS and standard MKS.
(Mb: megabytes, Gb: gigabytes)

           Memory-saving method           Standard method
m          Up           Down              Up           Down
10         25.22 Mb     13.19 Mb          136.31 Mb    71.30 Mb
11         100.88 Mb    52.77 Mb          545.26 Mb    285.21 Mb
12         403.53 Mb    211.08 Mb         2.181 Gb     1.141 Gb
13         1.614 Gb     844.31 Mb         8.724 Gb     4.563 Gb
Total      3.27 Gb                        17.58 Gb

In this example, the bathymetric LiDAR data are one level up from the leaf nodes, at scale M-1. Situations in which observations are present at intermediate scales between M and N do not preclude significant reductions in complexity if the intermediate-scale data have sparse or aggregated support and are close in scale to the finest scale. The locations of intermediate data can be projected down to the leaf nodes and indicated by 1 in the flag matrix along with the finest-scale data. The fused result is shown in Figure 3-2.

Figure 3-2. Fused and estimated DEM by RCMKS. A) Estimated elevation of a 40 km x 40 km area. B) Estimated elevation in the outlined area of A. C) RMS error of the bounded area of A, where high-resolution data are available. Three-dimensional views of the estimated elevations and their error variance: D) 3-D view of the fused topographic and bathymetric elevations, and E) 3-D view of the normalized square-root error variance. All elevations are in meters.

Table 3-2. Real values of error variance with respect to the color mapped on the elevation surface.

Colors on surface    Normalized values    Real values (m)
Dark red             1.0                  24.96
Red                  0.9                  2.286
Orange               0.75                 2.043
Yellow               0.65                 1.914
Green                0.5                  1.864
Cyan                 0.4                  1.757
Blue                 0.1                  0.999

Given these data, N_st is 48501, so the support of the concatenated subtrees at the finest scale consists of a 776016 x 16 array. From Eq. 3-13, we can calculate the expected computational savings. In the standard MKS implementation, the number of leaf nodes to be processed is 2^M x 2^M, i.e., 2^13 x 2^13. For the investigated data sets in this work, the number of nodes in the reduced leaf node set is N_st x 2^(M-N) x 2^(M-N), i.e., 48501 x 16 x 16.
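Substituting the values just quoted into Eq. 3-13 reproduces the reported savings; a quick check:

```python
# Values reported in the text for the Florida coast data sets.
N_st = 48501            # number of subtrees carrying finest-scale data
M, N = 13, 9            # finest (LiDAR) scale and NGDC scale, so 2^(M-N) = 16

reduced = N_st * 2 ** (M - N) * 2 ** (M - N)   # leaf nodes actually processed
full = 2 ** M * 2 ** M                         # leaf nodes in the full quadtree
E_M = (1 - reduced / full) * 100               # Eq. 3-13
print(round(E_M, 1))                           # -> 81.5
```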
The reduction in the number of nodes that must be processed is therefore 81.5% at each scale below m = N.

It is also possible to measure the impact of pruning the quadtree in terms of the memory required for a non-parallelized implementation of MKS. By reducing the number of subtrees between scale M and scale N that must be processed, we also significantly reduce the amount of memory required to store the values of filter parameters and estimates at those levels in the quadtree. Without loss of generality, we let a 1 x 1 matrix represent a single byte and then compare the amount of memory used for the standard MKS algorithm to the amount of memory required for the proposed method below scale N. The columns in Table 3-1 list the required memory for the up and down sweeps for standard MKS and the proposed RCMKS method. The realized memory savings between the standard MKS that uses the full quadtree and the RCMKS is 81.4%, which corresponds well to the 81.5% savings predicted by Eq. 3-13. Using both the simpler expression for calculating Q(s) at each node and the pruned quadtree, we obtained a total reduction in floating-point operations of 82.59% in the upsweep and 72.87% in the downsweep. The actual values of each normalized square-root error variance in Figure 3-2 are summarized in Table 3-2.

A common problem that arises when fusing data that span a wide range of scales is that the detail of interest is difficult to see in a synoptic view. Important detail exists at both fine (less significant bits) and global (more significant bits) ranges. Simple monotonic transformations of the image histograms are not helpful because they necessarily eliminate valuable detail from one of these extremities. Instead, we partition the color map to reveal both small- and large-range characteristics that occupy different average values in the range (Figure 3-2).
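The memory comparison of Table 3-1 can be sanity-checked by counting nodes per scale, assuming (as in the text) one byte per node. The per-scale ratio is the same N_st / 4^N at every level below N, so the predicted savings match Eq. 3-13:

```python
# Node counts below scale N: full quadtree vs. pruned subtrees.
N_st, M, N = 48501, 13, 9
full = sum(4 ** m for m in range(N + 1, M + 1))            # standard MKS
pruned = sum(N_st * 4 ** (m - N) for m in range(N + 1, M + 1))  # RCMKS
savings = 1 - pruned / full
print(round(savings * 100, 1))   # -> 81.5
```

The slight difference from the realized 81.4% in Table 3-1 reflects bookkeeping overhead not captured by this simple node count.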
3.3 Landscape-dependent Evaluation of SRTM Accuracy

NOAA acquires and maintains many datasets for the U.S. coastal zone. NOAA's NGDC has assembled a single topographic and bathymetric database to provide surface elevations of the U.S. coastline at 3 arc-second (roughly 90 m) postings. We fuse the NGDC data with SRTM, which provides topographic elevations at 1 arc-second (roughly 30 m) postings, along with high-resolution topographic LiDAR and bathymetric LiDAR observations near the shoreline at 5 m and 10 m postings, respectively. In the topographic LiDAR DEMs, it is possible to estimate the actual solid surface on which the water flows (the hydrologic surface) by removing the vegetation but keeping the buildings in. This process is described below. Prior to fusing the data sets, we also estimate the accuracy (encoded in the MKS via the R parameter) of the SRTM elevation data over different landscapes near the coastline. This is accomplished through the statistical characterization of the Kalman filter innovations. Considering the vegetation-penetrating characteristics of the C-band wavelength mentioned previously, we used the LiDAR-derived hydrologic surface for proper evaluation of the SRTM measurement error.

3.3.1 Approximating the Hydrologic Surface by Region Properties

Various field observations and numerical analyses have shown that overland flooding dynamics are strongly influenced by surface morphology (gradient, curvature, and surface roughness) and by the distribution of structures on the ground (Harris, 1963). For better prediction and mitigation of coastal flood hazards, we must be able to evaluate where floodwaters will flow. Floodwaters will generally flow around solid structures such as buildings but flow under elevated roads (overpasses) and vegetation canopies.
Ideally, then, we desire a treatment that classifies overpasses and vegetation separately from buildings when defining our LiDAR-derived hydrologic ground surface (which is then fused with the SRTM data). Several studies have investigated the penetration of vegetation canopies by C-band radar: Wegmuller et al. (1997) investigated vegetation effects on ERS data, Treuhaft et al. (2000) evaluated penetration of vegetation canopies using AIRSAR data, and Dobson et al. (1995) did the same for SIR-C data. Ideally, the original LiDAR point data should be segmented to filter out the vegetation and leave building structures in, yielding a DEM of the impervious surface on which floodwaters could flow. However, the LiDAR data provided by the International Hurricane Research Center (IHRC) at FIU had already been filtered and gridded into DEMs. The DEMs that were distributed consisted of a LiDAR Top Surface (LTS) product and a LiDAR Bottom Surface (LBS) product (IHRC, 2004). The LBS had been filtered in order to remove both vegetation and buildings from the LTS; therefore, it only contains elevations from the underlying topographic surface (Zhang et al., 2003).

In order to fuse SRTM data with the other topographic data, we must specify values for R in the MKS state-space model. We cannot simply use the specified SRTM value in Table 2-1 if we desire optimal estimation of the elevations, since that value is not landscape-dependent. A reasonable approach instead is to use the sample variance of the differences between SRTM and

Figure 3-3. Example of generating a DEM from LBS+ LiDAR data over the 1.5 km x 1.5 km urban test site. A) LTS point data. White spaces denote data voids and dropouts due to the high acquisition altitude and low-reflection surfaces. D) LBS point data, where white spaces denote voids and removed elevated surfaces.
B) Classification of LTS using a 15 m height threshold followed by a 3 x 3 median filter, where unity values represent elevated structures. E) Refined classification after applying region properties to remove bridges and elevated roads. C) LBS+ point data obtained by adding buildings to LBS that were extracted from LTS using the classification. F) LBS+ gridded to form a 5 m DEM. White spaces in the final LiDAR DEMs denote vegetation and elevated structures under which floodwaters can flow, very short buildings, or data voids due to low-return surfaces. The subsequent MKS data fusion fills in these voids using the other data types. All elevations are in meters.

the LiDAR data over different landscapes. However, we cannot derive a good measurement uncertainty (R) value for SRTM with either LTS or LBS individually. Since SRTM blurs the closely packed urban buildings, if we use SRTM - LTS to derive R for SRTM, the uncertainty will be relatively large where there are buildings, due to the deviation of SRTM elevations from the building surfaces. Over vegetation, R would be largest where SRTM penetrates the vegetation the most. This is undesirable because it would cause the SRTM values to be underweighted in the MKS algorithm where the radar penetrates deep into vegetation (and almost reaches the ground) and overweighted where there was little penetration of the vegetation. The converse is also true: by using SRTM - LBS to derive R for SRTM, the values would resemble the desired trends over vegetation but would be skewed over building areas. To estimate our hydrologic surface for comparison with SRTM, a data set with combined information from the LTS and LBS data is therefore needed.
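The classification summarized in the Figure 3-3 caption (a 15 m height threshold, a median filter, and region properties to remove elongated structures such as bridges) can be sketched as follows. This is a loose Python analogue of the MATLAB `regionprops` workflow, with area and bounding-box aspect ratio standing in for the eccentricity and solidity criteria; all names and parameter choices beyond those quoted in the text are illustrative.

```python
import numpy as np
from scipy import ndimage

def building_mask(lts, threshold=15.0, min_area=50, max_aspect=5.0):
    """Threshold the LTS DEM, clean it with a 3x3 median filter, then keep
    only compact, building-sized connected components; elongated regions
    (bridge- and overpass-like) are discarded."""
    elevated = lts > threshold
    elevated = ndimage.median_filter(elevated.astype(np.uint8), size=3).astype(bool)
    labels, n = ndimage.label(elevated)
    keep = np.zeros_like(elevated)
    for lab, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == lab
        h, w = region.shape
        aspect = max(h, w) / max(1, min(h, w))   # proxy for eccentricity
        if region.sum() >= min_area and aspect <= max_aspect:
            keep[sl] |= region
    return keep

# Toy check: a 20x10 building-like block and a 2x36 elongated "bridge".
lts = np.zeros((40, 40))
lts[5:25, 5:15] = 20.0    # building: compact, kept
lts[30:32, 2:38] = 20.0   # bridge-like strip: elongated, discarded
mask = building_mask(lts)
```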
We create a new LiDAR DEM, which we denote LBS+ (Figure 3-3), consisting of the LBS data combined with the reinserted building structures. In order to extract the building data and create the LBS+ DEM, we apply a simple segmentation procedure to the LTS data. A binary threshold of 15 m is applied to the LTS elevations, followed by a 3 x 3 median filter to reduce spurious high points and refine building edges. Elevated highways and bridges were observed in the resulting DEM. Since water can generally flow under these structures, we wish to omit them from our building mask. We therefore applied the morphological filter described in (Gader et al., 2004; Gonzalez et al., 2002) in order to exclude elongated structures like bridges. The algorithm was run with the following parameters: eccentricity > 0.99, solidity < 0.2, and area < 50. The mask was then applied to add the building structures to the LBS DEM and form the LBS+ DEM. Using the LBS+ surface, the Kalman innovations at the SRTM scale yield the expected trends, namely smaller R values for SRTM over vegetation the more SRTM penetrates the vegetation. They also yield larger R values for the SRTM measurements over tall buildings, as SRTM underestimates the true building heights due to the fact that a single SRTM pixel is so large (30 m x 30 m).

3.3.2 Assessing the Measurement Error Variance of SRTM

Before we can optimally combine the datasets in the MKS framework, we must account for the spatially varying measurement error in the SRTM data. This is particularly important over the land surface, since there are many anthropogenic structures that cause heights derived from side-looking radar to deviate markedly from the actual land surface elevations. In addition, each SRTM scene is actually a weighted average of many individual radar subimages that only cover part of the radar swath (Hensley, 2005).
Thus, we focus on estimating the measurement error variance of the SRTM as a function of landscape and land use.

To assess SRTM measurement error, we employed a stochastic approach that uses the MKS innovations at the SRTM scale. Since the stochastic estimation in the MKS algorithm is statistically optimal, we choose to use it instead of deterministic differencing, which simply measures the height differences between LBS+ and SRTM.

Figure 3-4. Three study sites extracted from Dade County, Florida (Miami metro area) to evaluate SRTM accuracy. Coastal (Longitude W / Latitude N): 80.183~80.198 / 25.762~25.776. Urban: 80.200~80.211 / 25.762~25.776. Rural: 80.472~80.487 / 25.612~25.626.

Figure 3-5. DEMs obtained from LiDAR (A, B, and C) and SRTM DTED2 (D, E, and F) over three distinct terrain types. From left to right, the terrain types are coastal urban, urban, and rural. The 1.5 m scale LiDAR DEMs are derived from LBS+ points. The SRTM DEMs are resampled from 1 arc-second (roughly 30 m) spacing to 24 m to correspond to integral powers-of-2 scale levels in the MKS quadtree data structure. Each area is 1.5 km x 1.5 km. All elevations are in meters.

Three representative landscape archetypes were chosen for error analysis from the Dade County study area: urban, coastal, and rural (Figures 3-4, 3-5). The number of archetypes was limited to three to reduce the chance of overfitting to a particular study area and thus to keep the estimator generalizable. The MKS innovations represent the difference between the SRTM data and the a priori estimates from the MKS estimator (Figure 3-6). We examined the first- and second-order statistics of these stochastic differences (innovations), including their 2-D autocorrelation function (ACF). We verified that over a single land-use/land-cover archetype, such as rural, the ACF plots indicate the innovations do not exhibit strong spatial correlation (see Figure 3-6).
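The whiteness check described above can be sketched by estimating the 2-D ACF with FFTs (the Wiener-Khinchin relation); the function below is an illustrative implementation, not the dissertation's code:

```python
import numpy as np

def normalized_acf2d(innov):
    """Zero-mean the innovations, zero-pad to avoid circular wrap-around,
    and compute the 2-D autocorrelation via the FFT; normalized so the
    zero-lag value is 1."""
    x = innov - innov.mean()
    F = np.fft.fft2(x, s=(2 * x.shape[0], 2 * x.shape[1]))
    acf = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)
    return acf / acf.max()

# For white innovations the ACF is a sharp peak at zero lag, as in the
# rural panels of Figure 3-6.
rng = np.random.default_rng(0)
acf = normalized_acf2d(rng.standard_normal((64, 64)))
```

After `fftshift`, the zero-lag peak sits at the center of the padded array; off-lag values near zero support the white-noise assumption in the MKS measurement equation.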
This suggests they can be well approximated as wide-sense stationary and that Kalman-based data fusion should perform well, since the noise term in the MKS measurement equation is assumed to be white. Another important characteristic is that the measurement error varies noticeably between the three terrain archetypes. Using the three simple terrain archetypes, we were thus able to establish the need for a spatially dependent R for the SRTM data. We then used the same concept with a slightly expanded list of classes to estimate R over the study site. Since land in southeast Florida is generally flat, variation in terrain type is largely dependent on the density of anthropogenic structures and proximity to the coastline. The effect

Figure 3-6. Kalman filter residuals (innovations) for the three terrain types in Figure 3-5 used to assess SRTM accuracy. The Kalman residual is the a priori estimate (predicted observation) subtracted from the actual observation. Thus, the residual is negative at the centers of tall buildings, since the prediction based on LiDAR is higher than the SRTM elevation. Large positive values result just outside the perimeters of tall buildings, where the blurred building signal is still present in the SRTM data. From left to right, the terrain types are coastal, urban, and rural. A, B, C) Innovations. D, E, F) Histograms of the normalized innovations (innovations minus their mean). G, H, I) Autocorrelations of the normalized innovations. All innovations are in meters.

Table 3-3. Estimated measurement error standard deviations (square roots of R) for SRTM for the aggregated terrain types in the study area.
Class    Terrain type                     SRTM RMS measurement error
5        Urban (tall buildings) & coast   17.5 m
4        Urban (tall buildings)           12 m
3        Suburban & coast                 10 m
2        Suburban                         5.5 m
1        Rural (low relief)               1.1 m
other    Unclassified                     8 m

of the coastline is to split a given subtree in the SRTM DTED2 data into a terrain portion and a flat non-terrain portion (except in the case of rural sites, where flat terrain predominates both on land and in water). We therefore expanded our initial list of three terrain archetypes into six classes (shown in Table 3-3): urban, urban adjacent to the coast, suburban, suburban adjacent to the coast, rural, and unclassified, in which no specific information about terrain type could be derived. SRTM elevations were found to be very accurate where highly localized features associated with urban development were minimally present, such as in rural areas.

Following the same procedure described for estimating R where LiDAR is present, we estimated R over the six major landscape classes (Table 3-3) based on expected building size and density and the presence of the coastline. This process was applied only in regions where LiDAR data were available. We then took a land-use/land-cover (LULC) shape file provided by the South Florida Water Management District (SFWMD) that covered the entire area, including areas where no LiDAR data were acquired, and aggregated the LULC units into these same six classes (Figure 3-7). The trained R values for the six classes were then assigned throughout the entire region in 1.5 km x 1.5 km tiles. The choice of 1.5 km for assigning R values was dictated by the smallest number of

Figure 3-7. Error variance mapping depending on the density of urban development. A) LULC shape file of the Miami area produced by the South Florida Water Management District. Classes ranging from 1300 to 1600 represent high-density urban development. Classes in the 1800s and 8000s represent intermediate-density urban development.
Classes in the 3000s represent lower-density development. Classes in the 5000s represent water bodies. White space represents unclassified areas. Class numbering details can be found in (SFWMD, 1999). The original UTM coordinates, in feet, are used here. The area is roughly 21 km x 21 km. B) Corresponding area showing the five aggregated terrain classes corresponding to Table 3-3. Classes were aggregated into 1.5 km x 1.5 km tiles because the SRTM measurement noise is assumed stationary over such distances. Although the noise may not actually be stationary over such distances, the tiles must be sufficiently large to allow for robust calculation of the innovation statistics.

SRTM pixels in a given dimension (e.g., Universal Transverse Mercator (UTM) easting or northing) that could consistently provide a robust estimate of sample variance and autocorrelation. For water bodies in the SRTM data, the MKS mapping operator is set to zero (H = 0). Thus, the R value for SRTM has no effect on MKS estimates over water bodies.

3.3.3 Large Measurement Error of SRTM over the Coastline

Accurate delineation of the coastline is critically important for coastal flood modeling. Furthermore, the coastline in this area exhibits many small-scale variations, particularly near developed areas such as marinas, canals, and modified shorelines. Discerning the coastline in SRTM is problematic due to its coarse resolution. Thus, SRTM heights near the coastline are expected to have large uncertainties, and it is desired that the MKS fusion track the high-resolution LiDAR data very closely over the coastline.

In coastal pixels where LiDAR is present, we want to ensure that the estimated R values for the SRTM will have minimal effect, because the MKS estimates should still be dominated by the small-R LiDAR pixels below. In coastal pixels where neither topographic nor bathymetric LiDAR pixels are present, the estimated R value for the SRTM should have a stronger effect.
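One way to sketch such a coastline-dependent adjustment of R is shown below; the 0.05 m land/water threshold and the boosted value of 100 follow the text, while the function and array names are illustrative:

```python
import numpy as np

def boost_coastal_R(R, lidar_dem, srtm_dem, sea_level=0.05, boosted=100.0):
    """Classify both DEMs as land/water at the sea-level threshold and raise
    the SRTM error variance R only where LiDAR indicates water but SRTM
    indicates land (the only coastal SRTM pixels that enter the fusion)."""
    lidar_water = lidar_dem < sea_level
    srtm_land = srtm_dem >= sea_level
    out = R.copy()
    out[lidar_water & srtm_land] = boosted
    return out

# Toy 2x2 grids: only the pixel that is water in LiDAR but land in SRTM is boosted.
lidar = np.array([[1.0, -1.0], [1.0, 1.0]])
srtm = np.array([[1.0, 1.0], [1.0, -1.0]])
R2 = boost_coastal_R(np.full((2, 2), 5.0), lidar, srtm)
```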
Therefore, we need a way to inform the algorithm when it is over coastal pixels so that the R for the SRTM can be increased to better reflect the poor elevation quality of the SRTM over coastal pixels, yet the adjustment should be small enough for the SRTM to influence the MKS estimates in the absence of LiDAR data. As a result (Figure 3-8 C), the coastline pixels in the SRTM are delineated, and we increase the R values of the SRTM to a value of 100 (a value that was determined experimentally to be sufficient) to effect these changes.

First, we classified the downsampled LiDAR and SRTM by height, such that a logical 0 corresponds to land (elevations above 0.05 m) and a logical 1 represents sea water (elevations below 0.05 m). We then calculated the difference between the classified LiDAR and the classified SRTM (Figure 3-8 B). Results are shown with areas of coloration representing pixels (red: water in SRTM / land in LiDAR; blue: land in SRTM / water in LiDAR; green: same class in both SRTM and LiDAR). R was increased only where the LiDAR indicates water and SRTM indicates land, since only SRTM land pixels are used in the MKS fusion.

Figure 3-8. A 1.5 km x 1.5 km coastal site shown at the SRTM scale (24 m). A) Coastlines derived from the downsampled LiDAR and SRTM. B) Differences between classified LiDAR and classified SRTM. Red represents pixels where the LiDAR indicates land and SRTM indicates water. Blue represents the converse. Green indicates the same class in both LiDAR and SRTM. C) Corresponding coastline in the MKS data fusion result at the SRTM scale using both the nominal (constant) measurement noise variance R for SRTM and the terrain-dependent boosted R. Compared to the SRTM coastline in the plot on the left, the coastline in the fused data at the SRTM scale is able to track the LiDAR coastline more closely.

3.3.4 Results

Figure 3-9.
Component data sets to be fused together over a 20 km x 20 km area: A) bare-surface NGDC elevations (80 m spacing) showing deep-water bathymetry and land-surface topography on a nonlinear color scale; B) SRTM elevations (20 m spacing); C) LBS+ and bathymetric LiDAR elevations (5 m spacing); and D) SRTM measurement error variance R estimated as a function of land-use class corresponding to Figure 3-7. R has units of meters squared. Over the ocean, R values for SRTM have no meaning since H = 0 there.

We fused the NGDC, SRTM, bathymetric LiDAR, and topographic LBS+ data sets with the MKS estimator (Figure 3-9). Since we wish to estimate solid-surface elevations, the SRTM pixels containing ocean were treated as missing data by setting the measurement-state model operator to zero, H = 0. Similarly, topographic LiDAR pixels over the water were excluded from the fusion. Table 2-1 shows the nominal measurement error for each data set. The SRTM measurement error was modulated by terrain type. As described previously, the SRTM measurement error over areas containing topographic LiDAR data was determined adaptively from the MKS innovations. For areas with no LiDAR data, aggregated LULC classes were used to propagate spatially dependent R values (Table 3-3). The largest root-mean-square (RMS) values occur where tall buildings lift the SRTM phase scattering center above the ground.

Figure 3-10. MKS fusion results for the 20 km x 20 km area. A) Fused estimates at the SRTM scale (20 m). B) Standard deviation of the expected estimate error at the SRTM scale (in meters). C) Fused estimates at the LiDAR scale (5 m). D) Standard deviation of the expected estimate error at the LiDAR scale. Dark red indicates high uncertainty in areas where only NGDC data were available. Yellow indicates intermediate uncertainty where SRTM data were available. Dark blue indicates low uncertainty where LiDAR data were available. All elevations and error values are in meters.
A value of 8 m is used for unclassified terrain types because it is the median value between the nominal bounds of 6 m and 10 m listed in Table 2-1.

The original grid sizes of SRTM and NGDC were 30 m and 90 m, respectively. Since the MKS quadtree requires integer scale changes, the SRTM and NGDC data were upsampled to 20 m and 80 m, respectively. Prior to the MKS fusion, the bathymetric LiDAR was upsampled to 5 m and combined with the 5 m topographic LiDAR data, since the coverage areas were small and mutually exclusive. The fused elevation estimates and the square roots of the Kalman estimate error variances are shown in Figure 3-10 at both the SRTM and LiDAR scales.

Figure 3-11. Two-meter simulated flood images over the 1.5 km x 1.5 km coastal site. A) SRTM data (20 m). B) MKS fused estimate at the SRTM scale (20 m). C) Fused estimate at the LiDAR scale (5 m). Black indicates the simulated flooded area. Elevations are in meters. D) The flood estimate error with SRTM data: FA (false alarm) rate is 0.0026, M (miss) rate is 0.4628. E) The flood estimate error with the coarse-scale fused DEM: FA rate is 0.00028, M rate is 0.2929. (Panel annotations: D) PF = 0.0026388, PM = 0.4328, PD = 0.56456; E) PF = 0.00028324, PM = 0.29299, PD = 0.70672.)

To further elucidate the benefits of fusing LiDAR and SRTM data, a simulated 2 m flood level is depicted in Figure 3-11. The flood routing prediction using the fused DEM at the LiDAR scale is much more precise than that obtained from the non-fused SRTM observations alone. It is interesting to observe that even at the SRTM scale, the prediction is dramatically improved through the data fusion.
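The false-alarm and miss rates quoted in the Figure 3-11 caption can be computed as follows; this is a simplified sketch on hypothetical toy grids, not the dissertation's flood model:

```python
import numpy as np

def flood_rates(coarse_flood, truth_flood):
    """False-alarm and miss rates of a coarse-scale flood map against the
    fine-scale map treated as truth, normalized by the total pixel count.
    Both inputs are boolean arrays on a common grid."""
    n = truth_flood.size
    fa = np.sum(coarse_flood & ~truth_flood) / n     # flooded in coarse only
    miss = np.sum(~coarse_flood & truth_flood) / n   # flooded in truth only
    return fa, miss

# Hypothetical 2 m flood: cells below 2 m elevation are inundated.
truth = np.array([[0.5, 1.0], [2.5, 3.0]]) < 2.0    # fine-scale "truth"
coarse = np.array([[0.5, 2.5], [2.5, 3.0]]) < 2.0   # coarse-scale model
fa, miss = flood_rates(coarse, truth)               # fa = 0.0, miss = 0.25
```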
To quantify the improvement, we calculated the false alarm (no flood in the LiDAR-scale flood model / flood in the coarse-scale flood model) and miss (flood in the LiDAR-scale flood model / no flood in the coarse-scale flood model) rates by considering the 2 m simulation result over the finest-scale data as truth (Figure 3-11 D and E). In addition, the MKS estimate error images in Figure 3-11 provide an uncertainty measure for every pixel at every scale in the quadtree that reflects the quality of data available at that pixel. Such uncertainty measures are needed for sound decision making in response to, and mitigation of, flooding hazards.

CHAPTER 4
MULTISCALE FEATURE REDUCTION FOR HYDROLOGIC MODELING OF URBAN AREAS

4.1 Introduction

Due to population growth and development in coastal areas, eight of the ten largest cities in the world are located on the coast and 44% of the world's population lives within 150 km of the ocean (Resio et al., 2008), resulting in increased natural hazard risks that must be accurately estimated. Modern ranging sensor technology, in particular airborne LiDAR, enables the formation of DEM images with pixel resolutions of 1 m x 1 m or smaller. Such data sets should, in principle, improve our ability to accurately monitor, predict, and mitigate coastal flooding and erosion (Shrestha et al., 2005). Yet the flood and storm surge modeling communities struggle with how to incorporate high-resolution data into their flood predictions (Bates et al., 2004). In general, the physical phenomenon of fluid dynamics in such models is expressed by nonlinear, coupled, time-varying differential equations, and the terrain and land cover (buildings and vegetation) comprise a set of complex boundary conditions. The fundamental equation for fluid dynamics is the well-known Navier-Stokes equation (Acheson et al., 1990), but from it many specialized equations are derived to predict water level, velocity, and momentum.
For all but the simplest geometries and momentum conservation assumptions, these equations offer no closed-form solution, thus requiring computationally intensive numerical solutions. Solving the equations iteratively for many possible water flow directions at every pixel in a very large DEM often leads to unacceptable computation times, particularly because multiple realizations are often desired for each storm scenario. A 1 km x 1 km area at 1 m resolution would require up to one million nodes if a regular grid is used. In order to reduce the computation time, modelers are often forced to use DEMs with large (coarse) resolution and accept the errors in flow routing and discharge predictions that result from the degraded boundary condition information. Alemseged et al. (2005) concluded that relatively low-resolution DEMs can yield useful 2-D models in areas of relatively simple topography, such as rural settings. They found that the calculation times for estimating inundation levels with the SOBEK flood model over a few tens of square kilometers ranged from a few hours to 13 days for gridding (pixel) resolutions of 15 m and 2.5 m, respectively, on a 1.5 GHz Pentium IV PC. Horritt and Bates (2007) studied the impact of varying spatial resolution from 10 m to 1000 m over larger areas to evaluate their raster-based model of flood flow and concluded that resolutions as coarse as 500 m were adequate for predicting water levels in some rural areas. However, 500 m is far too coarse for urban areas, and the flood modeling community still lacks a general method to balance the need for accuracy against computation time.

In modern storm surge models such as ADCIRC (Luettich et al., 2006), irregular mesh grids are generated to represent the terrain, since they can reduce information loss and storage requirements by adjusting the density of the mesh depending on the amount of variation in the data.
One must realize, however, that a nonuniform grid only offers significant savings if the DEM contains large areas of low topographic relief and few buildings. In areas where most of the hydrologic surface has a great deal of detail, as it does in most cities, the potential savings of a nonuniform grid is mostly lost, and one must again increase the average node spacing to a distance that is computationally feasible but lacks accuracy (Shephard et al., 1983). Also, the very same irregular node spacing used to balance information loss against computer storage can cause high computational complexity in the actual hydrologic models, in that every pixel must be indexed with the particular locations of its irregularly connected neighbors. This also presents a difficulty in fusing data of different resolutions (Wilson, 2004). Considering these issues, we use a regular grid in implementing our hydraulic model. That is not to say that the methods used here could not be adapted for use on an irregular grid, but rather that the implementation on a regular grid makes the impact of our decomposition method easier to understand.

To minimize the number of pixels included in the computations, we exclude the data that fall inside the footprints of buildings when running the hydraulic model. Schubert et al. (2008) refer to this approach as the Building-Hole (BH) method and compared it to the Building-Block (BB) method, in which the terrain height is raised where buildings coincide with the mesh. Their study concluded that the accuracy of BH and BB is similar; however, the computational time for the BH approach is shorter. In some cases, ad hoc methods of manually specifying subpixel boundary conditions have been used. However, these are highly specific to the particular flood model used (e.g., SLOSH, ADCIRC), as well as highly inefficient and subjective (Bates et al., 2000).
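The Building-Hole and Building-Block treatments compared by Schubert et al. can be sketched on a regular grid as follows; the array names and toy values are illustrative, not from their study:

```python
import numpy as np

def building_hole(dem, footprint):
    """Building-Hole (BH): drop building pixels from the computational mesh
    (marked NaN here so a solver can skip them)."""
    out = dem.astype(float).copy()
    out[footprint] = np.nan
    return out

def building_block(dem, footprint, roof_height):
    """Building-Block (BB): raise the terrain to the roof height inside
    building footprints so the solver treats them as solid obstacles."""
    out = dem.astype(float).copy()
    out[footprint] = roof_height
    return out

dem = np.zeros((3, 3))
fp = np.zeros((3, 3), dtype=bool)
fp[1, 1] = True
bh = building_hole(dem, fp)          # center cell excluded from the mesh
bb = building_block(dem, fp, 12.0)   # center cell raised to 12 m
```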
What is needed is a principled and systematic method for preserving information from the high-resolution DEMs that can be used at coarser resolution in the models. Ultimately, the scale must be decided by the modeler's tolerance for error, but we present some guidelines for choosing a proper scale by considering a balance between performance error and computational requirements.

In DEMs of urban areas, the exterior walls of buildings and the ground (including paved areas) form the surface over which floodwaters flow. However, these two components of the hydrologic surface often exhibit very different spatial frequencies. Much of the information contained in the landscape topography, such as hills and valleys, is spatially correlated over hundreds of meters to kilometers (i.e., low frequency). This is particularly true for cities on coastal plains, where the topographic relief is quite small. Examples of such cities in the United States include Houston, TX; New Orleans, LA; Miami, FL; and Jacksonville, FL. It is precisely such low-relief urban areas that are at greatest risk for widespread flooding and devastating storm surge. In contrast, the buildings in these urban areas represent high-frequency information (typically on scales of meters to tens of meters). As a result, it is suboptimal to reduce the spatial resolutions of these two components of the hydrologic surface uniformly, as happens when a high-resolution LiDAR DEM is naively downsampled equally over the entire study area. We present a method for decomposing the hydrologic surface into these two distinct components and then reducing them independently. As further motivation for this approach, we present evidence of the validity of decomposing a mixture distribution into its component distributions to preserve more information (Cover et al., 1991).
If viewed as a lossy data compression problem, the entropy of the mixture distribution is guaranteed to be greater than or equal to the mixture of the component entropies. This implies that reducing the resolution of the DEM components separately preserves more information.

While still a relatively new topic, some approaches to encoding the impact of buildings in flood or surge routing have been explored. For example, the concept of porosity (Chow et al., 1988) can be used to handle the effects of buildings in coarse DEMs. Alemseged et al. (2005) suggested that buildings could be defined as hollow, partially solid, or entirely solid objects depending on the percentage of building area and ground area at any given resolution. The resulting effect of a building is then the same in every direction and for any water depth. This approach is incomplete if we consider an elongated building or narrow channels between buildings, wherein water flows predominantly in one direction. Neelz et al. (2007) presented preliminary results from the 2-D flood model TUFLOW with high-resolution data, coarse-resolution data, and coarse-resolution data with an increased roughness value (based on increasing the Manning coefficients), and noted that the overall effects of buildings in urban areas should be considered. Unfortunately, that method cannot be used generally because the roughness value is actually affected by water depth. In this dissertation we show that a new friction value can be efficiently computed by combining the drag force of buildings with the shear stress of the ground, including its dependence on water depth.

4.2 Decomposition

Our study region comprises the urban area of Miami-Dade County, Florida. Our finest-scale DEM of the study area was generated at 1 m spatial resolution from LiDAR data. This data set was chosen because the density of LiDAR points is representative of typical LiDAR acquisitions over urban environments.
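The entropy claim above (Section 4.1) follows from the concavity of Shannon entropy: H(a p1 + (1-a) p2) >= a H(p1) + (1-a) H(p2). A numeric check with two hypothetical elevation histograms (the distributions are illustrative, not measured):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Two hypothetical component histograms (e.g., ground vs. buildings)
# and their 50/50 mixture.
p1 = np.array([0.7, 0.2, 0.1, 0.0])
p2 = np.array([0.0, 0.1, 0.2, 0.7])
mix = 0.5 * p1 + 0.5 * p2
assert entropy(mix) >= 0.5 * entropy(p1) + 0.5 * entropy(p2)
```

Because the mixture's entropy is at least as large, encoding the components separately at reduced resolution can retain more information than encoding the mixture once.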
In such data sets, it is not uncommon to have small data voids due to low-reflectance water surfaces and shadowing from buildings and trees. In this work, the LiDAR voids and areas of no LiDAR coverage were filled in by fusing the LiDAR data with a coarse-resolution (30 m) dataset from SRTM using MKS (Slatton, Cheung et al., 2005). A subset of the area is shown in Figure 4-1, where the size of a pixel is 1 m x 1 m. The LiDAR data consisted of two surfaces, the LTS and LBS surfaces, as mentioned in Section 3.3.1. The LiDAR DEM is a simple data structure, with every pixel having a single elevation value.

Buildings are considered the sole dominant hydrologic feature in dictating the flow of water, due to the sparsity of vegetation in the urban area (as well as the predominantly narrow trunks of vegetation in this region, which do not provide resistance to fluid motion). Hence, to better estimate the hydrologic response (e.g., water discharge rate), we include only the building and ground information to form a hydrologic surface.

The region property filter (regionprops( )) built into Matlab was used to help estimate the hydrologic surface for flood or surge simulation. Initially, a scalar elevation threshold was applied to the LiDAR elevations to generate a binary image composed of ground and non-ground features, as we did in Section 3.3.1.

Figure 4-1. Ground and non-ground DEM of LiDAR. A) Fused DEM by MKS at the finest scale, top view. B) Shaded relief model of the finest-scale data. C) Ground LiDAR. D) Non-ground LiDAR. Elevations are in meters.
Testing this approach showed it to be flawed: if the threshold height was set too high, building features were mislabeled, while if the threshold was set too low, trees (particularly those adjacent to buildings) were misclassified. In Section 3.3.1 this error could be forgiven, because the exclusion of some building data did not greatly influence the error statistics of SRTM. However, the calculation of hydrologic features is much more sensitive to the accuracy of the exclusion process.

In order to remove vegetation but preserve buildings accurately, we applied multiple height thresholds to the LTS. For each discrete threshold (increments of 2 m in height), a morphological size-of-obstruction operator was applied to determine whether an obstruction was a tree or not. If the size of an object was less than 6 pixels, it was considered a tree. The values of the threshold heights and the size of obstruction were chosen experimentally to get the best visual agreement with aerial images. After filtering the DEM at multiple threshold values through application of the region property filter, the filtered images were combined by a bitwise OR operation. Small holes inside buildings were then filled for each footprint. The procedure and result are depicted in Figure 4-2. It should be noted that more complex methods could have been applied in the segmentation of vegetation and buildings, but since the focus here is on the subsequent scaling of the hydrologic surface, the simple approach used here for producing hydrologic surfaces was deemed sufficient for that purpose.

Figure 4-2. Procedure for making the hydrologic surface. A) The procedure of the morphological filter with multiple threshold heights. B) Non-ground DEM. C) Classified impediments, where white pixels represent areas of bare ground or vegetation where water can easily flow. Small impediments, such as curbs, are still present. Tall impediments to flow, such as buildings, are in black.
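For illustration only, the multi-threshold filtering described above can be sketched in Python (the dissertation's implementation used Matlab's regionprops( )). The 2 m threshold increments and 6-pixel size cutoff come from the text; the function names, the 4-connectivity, and the omission of the final hole-filling step are simplifications of this sketch:

```python
import numpy as np

def _components(mask):
    """Yield the (row, col) pixel lists of each 4-connected component."""
    seen = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack, comp = [(r, c)], []
                seen[r, c] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                yield comp

def hydrologic_mask(lts, thresholds, min_pixels=6):
    """Label tall obstructions (buildings) in an LTS DEM.

    For each height threshold, connected objects smaller than `min_pixels`
    are treated as vegetation and discarded; the surviving masks are
    accumulated with a bitwise OR, as in the multi-threshold procedure.
    """
    combined = np.zeros(lts.shape, dtype=bool)
    for t in thresholds:
        for comp in _components(lts > t):
            if len(comp) >= min_pixels:      # objects of >= 6 px kept as buildings
                for i, j in comp:
                    combined[i, j] = True
    return combined
```

With 1 m pixels, something like `thresholds=range(2, 20, 2)` would mimic the 2 m increments described above.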
Figure 4-3. Spectral characteristics of ground and buildings. A) Bare-ground DEM (interpolated). B) Non-ground DEM. C) Power spectral densities of A) and B).

After creating the hydrologic surface from the LiDAR DEM data, we can decompose the resulting model further, classifying the pixels into ground and building. These two components exhibit very different spatial frequencies, as can be observed in the power spectral densities in Figure 4-3 C. An approach like downsampling, which would reduce the spatial resolution of both components equally, is therefore not appropriate for preserving information. The implemented method must reduce the components separately.

To illustrate the advantages of separate reduction, we investigate the entropy of a general (hypothetical) mixture distribution relative to the entropy of each component distribution.

Theorem 1. Let z be a random variable (RV) distributed according to a pdf p that is governed by a mixture model of three component RVs a, b, and c, such that each component RV is distributed according to a given pdf: a ~ p1, b ~ p2, and c ~ p3. We can express the distribution p as

p = λ1·p1 + λ2·p2 + λ3·p3, (4-1)

where λ1 + λ2 + λ3 = 1. We can then write the entropy as

H(p) = H(λ1·p1 + λ2·p2 + λ3·p3). (4-2)

By the concavity of the entropy function,

H(λ1·p1 + λ2·p2 + λ3·p3) ≥ λ1·H(p1) + λ2·H(p2) + λ3·H(p3). (4-3)

The lower value of the weighted sum of component entropies indicates the advantage available by separating the components and processing them separately.

4.3 Reduction and Scale Selection

In principle, the high spatial resolution of airborne LiDAR observations enables the use of relatively simple hydraulic models to accurately model the flow of water across open terrain in rural areas and in highly developed urban areas. Modelers can also take advantage of the high-spatial-resolution LiDAR data to compare more complex hydraulic models having higher-order terms. However, natural disasters tend to impact relatively large areas. For example, the effective range of Hurricane Katrina was about 200-300 km (Resio et al., 2008). Running meter-scale simulations over areas of hundreds of kilometers results in unacceptable computational times, even with the simplest hydraulic models, making it necessary to reduce the spatial resolution of the observational data (Jain et al., 2006).

Historically, data reduction in imagery has been achieved by (1) transforming the information into a different domain (e.g., JPEG: spatial to frequency domain; JPEG2000: spatial to wavelet domain) and (2) encoding (e.g., Huffman or run-length coding) to reduce redundancy. Both processes (domain transformation and encoding) alter important statistical properties of the original data. As a result, the reduced data must be decoded and transformed back to the original domain in order to extract the desired structural features of the image (Gonzalez et al., 2002). The primary advantage of the method proposed here is that the reduced data do not require any reconversion to extract the original geometric properties. The ground data are reduced by downsampling with an approach that preserves the primary characteristic (elevation). Likewise, the desired features of the building data (building widths and areas) can be calculated directly from the two corner-point values in the reduced data. This is important because the hydrologic models need to run on the DEM directly: if the DEM had to be fully uncompressed prior to running the models, no performance advantage would have been gained.

In addition, basic guidelines will be established for the selection of a scale that balances algorithm memory requirements with the accuracy of hydraulic feature (water discharge rate) estimation. Though the best scale of reduction will ultimately be decided by the user's tolerance for error, these guidelines serve as a framework for further investigation.
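The information-preservation argument of Theorem 1 (Eqs. 4-1 to 4-3) that underlies this strategy is easy to verify numerically. The sketch below uses two hypothetical component distributions with equal mixture weights; all numbers are illustrative, not drawn from the DEM data:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Hypothetical component pdfs (e.g. ground vs. building elevations)
p1 = [0.9, 0.1, 0.0]
p2 = [0.0, 0.1, 0.9]
lam = [0.5, 0.5]                                  # mixture weights, sum to 1

mix = [lam[0] * a + lam[1] * b for a, b in zip(p1, p2)]
H_mix = entropy(mix)                              # H(lam1*p1 + lam2*p2)
H_avg = lam[0] * entropy(p1) + lam[1] * entropy(p2)
# Concavity of entropy guarantees H_mix >= H_avg, so encoding the components
# separately has a smaller total entropy budget than encoding the mixture.
```

Here H_mix ≈ 1.37 bits while H_avg ≈ 0.47 bits, consistent with the inequality of Eq. 4-3.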
4.3.1 Ground

In modern storm surge models, such as ADCIRC, irregular mesh grids are generated to represent the terrain heights, since their use can reduce data storage by adjusting the node density (mesh resolution) depending on the amount of height variation in the data. However, the non-uniform node spacing leads to high memory usage and computational complexity in the subsequent implementation of the surge model. As a result, minimum node spacings are generally limited to 30 m, and are often much larger (Vemulakonda et al., 1999). Moreover, with this irregular dataset, it is difficult to fuse data sources of different resolutions (Wilson et al., 2004). If there is significant elevation variation (as in urban or high-relief areas), then this method requires a large number of nodes, and the solution approaches the same number of nodes that a regular grid would require. To more clearly examine the dependence on scale, we use a regular-grid DEM, but it should be noted that this work could potentially be extended to non-uniform DEM grids.

For the investigated areas, the quantity of information lost by downsampling the bare-ground elevations usually increases slowly relative to the downsampling factor, because most of the topography in coastal-plain cities like Miami is smooth (Figure 4-4 B). However, we still require a criterion for deciding the downsampling factor applied to the ground data for hydrologic modeling. Generally, the optimal scale selection depends on the requirements of the surge modeler. The cost function we choose for selecting the optimal scale is

TC(s) = (1 − λ)·G_W(s) + λ·G_M(s), 0 ≤ λ ≤ 1, (4-4)

where

G_W(s) = (WDR(N) − WDR(s)) / WDR(N), G_M(s) = (MU(N) − MU(s)) / MU(N), (4-5)

s is the scale (analogous to resolution), and N is the finest scale. WDR(s) and MU(s) are the water discharge rate and memory usage at scale s, respectively. G_W is the normalized water discharge rate error incurred by decreasing resolution; we refer to this as the negative gain for the estimation of the water discharge rate. G_M is the normalized difference in memory usage incurred by decreasing resolution; we refer to this as the positive gain for memory reduction achieved by decreasing the scale. The choice of λ is left to the surge modeler. If modelers want to consider only the error of the water discharge rate, they can set λ = 0. If they wish to give equal importance to memory usage and discharge error, they can set λ = 0.5.

If we consider a particular storm surge scenario of a water wave moving from left to right, not flowing out through the bottom and top edges of the DEM shown in Figure 4-4 B, we can compute these gains and evaluate the cost function. For example, from Figure 4-4 C, the water discharge rates at x=6 (64 m resolution) and x=0 (1 m resolution) are 3425 m³/s and 3125 m³/s, respectively. That yields a 9.6% error in the estimate of the water discharge rate if we reduce the resolution to 64 m.

Figure 4-4. Optimal scale selection for ground. A) The finest-scale DEM estimated by MKS. B) The filtered and interpolated ground data. The color bar shows the ground heights in meters, and the x- and y-axis units are also in meters. C) Water discharge rate for a wave moving from left to right, estimated at multiple scales. The x-axis is in units of the base-2 log of the pixel length in meters; for example, a value of 4 on the x-axis indicates a pixel scale of (2^4 =) 16 m. In this particular scenario, the discharge rate increases with scale because the ground becomes smoother as resolution is lost. D) Optimal scale selection in terms of the memory usage gain and water discharge rate gain defined in Eq. 4-5. The cost function curve is computed using Eq. 4-4. The optimal scale is taken to be the scale at which the cost function is maximized; in this case, that occurs at x=4.
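The selection rule of Eqs. 4-4 and 4-5 amounts to a few lines of code. The sketch below is illustrative; the WDR and MU sequences used to exercise it are hypothetical, not the measured values of Figure 4-4:

```python
def optimal_scale(wdr, mu, lam=0.5):
    """Index of the scale maximizing TC(s) = (1 - lam)*G_W(s) + lam*G_M(s).

    wdr[s] and mu[s] are the water discharge rate and memory usage at scale
    index s, with s = 0 the finest scale N (Eq. 4-5).
    """
    tc = []
    for s in range(len(wdr)):
        g_w = (wdr[0] - wdr[s]) / wdr[0]   # negative gain: discharge-rate error
        g_m = (mu[0] - mu[s]) / mu[0]      # positive gain: memory reduction
        tc.append((1.0 - lam) * g_w + lam * g_m)
    return max(range(len(tc)), key=tc.__getitem__)
```

Setting `lam=0` reproduces the error-only choice described above (the finest scale always wins), while `lam=0.5` weighs memory and discharge error equally.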
With regard to the error in the discharge rate and memory usage, the optimal scale occurs where the TC curve is maximized. For the example in Figure 4-4 D, this occurs at x=4, or a scale of 16 m, using Eq. 4-4 with λ = 0.5. It is worth noting that memory usage is closely related to computational time, since the local discharge rate must be calculated at every pixel and for all desired directions.

4.3.2 Building

Now consider a scenario in which two buildings are spaced closely together, one directly east of the other. Flood waters are flowing from north to south, and the water reaches the south side of the buildings by flowing between them. If an overly smoothed or coarse-resolution DEM is used, those two buildings may both appear shorter and wider (larger footprints) due to local averaging effects. This implies that the DEM elevations in the channel between the buildings will be erroneously high and skew the modeling result.

The key concept behind the reduction procedure for the building data is that building elevations above the maximum expected inundation level are not important to modeling the flow of water on the hydrologic surface. Additionally, most flood modeling algorithms treat obstructions to water flow as data voids, since the flood waters do not enter them in sufficient quantities to affect the overall flooding dynamics. Thus, the height characteristics of an obstruction are not used once it is classified as an obstruction. Therefore, we need only preserve information about the footprint of the obstructions.

Building obstructions contain high-(spatial-)frequency information, such as edges and corners. One might initially consider using a frequency-domain or wavelet filtering approach to isolate the high-frequency spectral components for the purpose of data compression. This would unfortunately tend to emphasize only the edges and corners, and not treat the obstructions as solid or convex areas through which water cannot flow.
A different approach is therefore necessary. Our goal is to maintain the shape of the building obstructions while reducing the amount of memory required for an accurate representation of these structures. By considering a building footprint to be a union of rectangles, we can preserve most of the high-frequency information using a simple parameterization. We analyze the elevation data in the spatial domain and locate the two corner points (e.g., the northeast and southwest corners) of the smallest rectangle that circumscribes the building. We thereby maintain sharp edge information with very small memory requirements per building.

The extent to which the building footprints are quantized in this method depends on the scale of the smallest rectangle allowed. Clearly, if one uses large rectangles that enclose several adjacent buildings or very complicated large buildings, a dramatic savings in memory can be achieved; the downside is a large resulting error in the final DEM due to loss of fine detail. To account for the information loss, we use a scale-division method. The procedure for the scale-division method is as follows. First, each building in the filtered binary image is bounded by a rectangular box.

Figure 4-5. Building-encoding and scale-division procedure. A) Two corner points of, and resulting circumscribing rectangles for, three hypothetical buildings. B) The procedure of the scale-division method. Initially, the entire DEM (512 x 512 in this case) is considered (top row). Subsequently, the scale is reduced by cutting the DEM into quadrants (bottom row). This has the effect of cutting some rectangularized buildings into two or more parts, so the number of buildings increases. The results show that downsampling the ground-only data and saving two corner points per building are very effective for preserving information as a function of scale.
The locations of the two corner points (bottom-left corner and upper-right corner) are then saved for each generated rectangle. A new image is generated by replacing the circumscribed buildings with their rectangular representations. This is initially performed on the entire DEM area. Figure 4-5 shows an example of a 512 x 512 pixel image; because of the LiDAR DEM resolution (pixels are 1 m x 1 m), this image corresponds to a 512 m x 512 m area. In the demonstrated case, the first reconstruction results in a building map that preserves the streets (channels) that will allow water to flow between buildings.

For the subsequent reconstruction (Figure 4-5), the DEM area is quartered in a standard quadtree fashion. Buildings that fall along the quadrant boundaries are cleaved and treated as two separate buildings, and each portion of a cleaved building is parameterized by its own rectangle. The process continues such that, with each subsequent reconstruction, more rectangles are used and a greater amount of detail from the original buildings is preserved. As a result, more memory is required to store the corner locations. Table 4-1 shows the memory requirements for the two-point building descriptions by scale.

Table 4-1. Memory requirements for buildings as a function of scale. Total bits = (# of obstructions) x (2 points per obstruction) x (bits per corner point). By comparison, if buildings were not parameterized, the raster representation of a 512 x 512 image would require (512 x 512) pixels x (2 location indices x 9 bits per number) = 4,718,592 bits.

Scale distance (m)   # of buildings or other obstructions   Total number of bits
512                    137                                        4,932
256                    158                                        5,064
128                    209                                        5,916
 64                    294                                        7,428
 32                    504                                       11,816
 16                  1,033                                       23,628
  8                  2,417                                       29,004
  4                  6,696                                       78,624
  2                 21,226                                      424,520
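A minimal Python sketch of the two-corner-point encoding with quadtree scale division follows. The function names and the 4-connectivity are assumptions of this illustration, and real building masks would of course be far larger:

```python
import numpy as np

def _bboxes(mask):
    """(rmin, cmin, rmax, cmax) of each 4-connected component in `mask`."""
    seen = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack = [(r, c)]
                seen[r, c] = True
                rmin = rmax = r
                cmin = cmax = c
                while stack:
                    i, j = stack.pop()
                    rmin, rmax = min(rmin, i), max(rmax, i)
                    cmin, cmax = min(cmin, j), max(cmax, j)
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                boxes.append((rmin, cmin, rmax, cmax))
    return boxes

def encode_at_scale(mask, tile):
    """Two-corner-point encoding after quadtree division into tile x tile blocks.

    Buildings straddling a tile boundary are cleaved, and each part gets its
    own circumscribing rectangle, mirroring the scale-division method.
    """
    rects = []
    for r0 in range(0, mask.shape[0], tile):
        for c0 in range(0, mask.shape[1], tile):
            for rmin, cmin, rmax, cmax in _bboxes(mask[r0:r0 + tile, c0:c0 + tile]):
                rects.append((r0 + rmin, c0 + cmin, r0 + rmax, c0 + cmax))
    return rects
```

Each returned rectangle costs only two corner points, matching the bookkeeping of Table 4-1; halving `tile` refines the footprints at the cost of more rectangles.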
In order to choose an optimal scale at which to represent the building data, we need to balance the memory requirements against an error metric associated with the quantization of the building footprints. Figure 4-6 demonstrates the effect of changing the scale of the encoded building footprint data on the calculated water discharge rate, with the same scenario of water movement used in Section 4.3.1. The result agrees with the expected behavior, in that we can expect some flow blockages to occur by coarsening the data and joining closely spaced or irregularly shaped buildings. For this case, we can achieve a 99% reduction in memory usage (107,888 bits at 2 m scale division, 912 bits at 256 m scale division) at the cost of a 27% error in the water discharge rate (950 m³/s at 2 m scale division, 690 m³/s at 256 m scale division). We chose our optimal scale to be 4 (16 m) by choosing the appropriate balance between the error of the discharge rate (G_W(s)) and the memory usage (G_M(s)) using Eq. 4-4 with λ = 0.5.

Figure 4-6. Optimal scale selection for buildings. A) The finest-scale building footprint. B) The coarsest-scale building footprint. C) The change of discharge rate caused by the quantization of the building footprint, with a wave moving from left to right. The x-axis is in units of the base-2 log of the pixel length in meters. In this particular scenario, the discharge rate decreases with scale because the widths of buildings are increased by the quantization of the building footprint. D) Optimal scale selection in terms of the gains of memory usage and discharge rate defined in Eq. 4-5. The cost function curve is computed using Eq. 4-4; in this case, the maximum occurs at x=4.

4.4 Evaluation

Once the ground data and building data have been individually reduced, we can recombine these two components to synthesize an approximation of the true hydrologic surface.
The resulting elevation image is memory-efficient while preserving most of the information that affects water flow over the terrain. The two types of synthesized DEMs are shown in Figure 4-7. The DNn images show the standard quadtree downsampling approach of MKS at different scales; the subscript n denotes the downsampling factor. The GxBy images are created using our reduction method; the subscripts x and y denote the downsampling factor on the ground data and

Figure 4-7. Two kinds of DEMs at different scales (DN2, DN4, DN8, DN16, DN32, DN64 and G2B64, G4B64, G8B64, G16B64, G32B64, G64B64).
DNn stands for downsampling by a factor of n; GxBy stands for x-scale for the ground and y-scale for the buildings. The arrow represents the change of scale going from fine to coarse.

the scale-division factor on the buildings, respectively. To quantify the performance of our method relative to the original quadtree-based downsampling of MKS (Slatton, Cheung et al., 2005), we selected the mean square error (MSE) and the correlation coefficient (CC) as metrics to evaluate the hydrologic DEM as an elevation image. As hydrologic features, we chose the flow accumulation number (FAN), generated by a hydrologic (flow routing) model, and the water discharge rate (WDR), generated by a hydraulic (fluid dynamics) model.

4.4.1 Mean Square Error and Correlation Coefficient

The MSE and CC are general metrics commonly used to evaluate image quality. The mean square error is the simplest approach to measuring error, via the Euclidean distance between images, and is defined (Bovik, 2005) as

MSE = E[(I_T − I_D)²], (4-6)

where I_T is the true image and I_D is the test image (distorted by a reduction of information). A higher value of MSE corresponds to greater distortion in the resultant image.

The correlation coefficient is a measure of the linear dependence of two random variables. For image data, this metric is used to depict the strength and direction (sign) of the relationship between the image pixels (Bovik, 2005). It is given by

CC = E[ ((I_T − μ_T)/σ_T) · ((I_D − μ_D)/σ_D) ], (4-7)

where μ is the mean value and σ is the standard deviation, and the subscripts T and D denote the true image and the distorted image, respectively. The CC value can vary between -1 (perfect negative correlation) and 1 (perfect positive correlation). In our case, the images before and after reduction will have the same general relationships for positive and negative values. We therefore

Figure 4-8. Evaluation of the hydrologic DEMs of Figure 4-7 as the scale is increased from 1 m to 256 m. The red curve is for naive downsampling.
The blue curves are for various GxBy configurations. A) Evaluation with MSE. B) Evaluation with CC. The results show that downsampling reduction for the ground data and saving two corner points for the building data is very effective for preserving information regardless of scale.

need not worry about differentiating negative from positive correlation, and we can simply use the strength of correlation (the absolute value of the CC) as our accuracy metric.

Evaluation of the two reduction methods (downsampling and our proposed reduction method) based on the MSE and CC values is presented in Figure 4-8. While the performance of downsampling continues to deteriorate as the size of the scale (pixel size) increases, our new approach is much less affected by scale changes, because the vital information (e.g., building footprints) is preserved separately. The blue lines in Figure 4-8 represent the change in performance based on the scale of the building reduction. Even though the performance decreases as a coarser scale is used to represent the building footprints, the error gap of our approach is small relative to that of the downsampling MKS method.

4.4.2 Flow Accumulation Number

Unlike the global MSE and CC metrics discussed above, the FAN is evaluated at every pixel. It represents the total number of cells (pixels) that would contribute runoff water to the currently considered cell, based on a gradient-based routing of flow from each cell to its steepest-descent neighbor cell (ArcGIS, 2008). The flow routing algorithm allows us to resolve the spatial flow patterns of water (or sediment, nutrients, etc.) in a DEM. Thus, we can determine the downslope areas (cells) in a landscape to which the outflow from a cell will be distributed (Desmet et al., 1996). To calculate the FAN, we used and modified the flow routing algorithm presented by Wolfgang (2008).
His code was implemented in MATLAB to enable users and developers to easily customize the algorithm and combine it with other applications, such as ecological cellular automata (Wolfgang, 2008). Since slope is a measure of the potential energy at a specific location, it is one of the most important influences on surface processes. Wolfgang (2008) defines the runoff slope from the considered pixel to each neighbor pixel as the downward angle between the pixels. It should be noted that other definitions of local DEM slope exist, such as area-based methods, and could be used. Another important question in both geomorphology and hydrology is, "Where does the water flow when it rains?" Wolfgang (2008) chose the multiple-flow-direction algorithm, which partitions and transfers the discharge in each cell to all of its lower-elevation neighbor cells (in the local 8-neighbor group), to overcome the weakness of the single-flow-direction algorithm. In general, single-flow algorithms permit only parallel or convergent flows; multiple-flow algorithms can accommodate divergent flow as well (Desmet et al., 1996). The relative amount of discharge transferred from one cell to at most eight downward neighbors is proportional to the slope to each respective neighbor. For the DNn images, the FAN was calculated by the standard procedure (Figure 4-9).

Figure 4-9. Procedure for calculating the FAN. A) The standard procedure (D8-based multiple flow routing): detect the direction of every downslope at each pixel in DNn, then calculate the flow accumulation number. B) The modified procedure for the new data reduction: detect the direction of every downslope at each pixel in Gx, check whether a channel exists for each downslope direction using the reconstructed building image By, then calculate the flow accumulation number.
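In the spirit of the multiple-flow-direction algorithm described above (a simplified sketch, not the actual Wolfgang (2008) MATLAB implementation), accumulation can be computed by visiting cells from highest to lowest elevation and splitting each cell's accumulated count among its lower 8-neighbors in proportion to the downward slope:

```python
import math
import numpy as np

def flow_accumulation(dem, cell=1.0):
    """Multiple-flow-direction flow accumulation number (simplified sketch)."""
    rows, cols = dem.shape
    acc = np.ones((rows, cols))                   # every cell contributes itself
    for idx in np.argsort(dem, axis=None)[::-1]:  # highest elevation first
        i, j = divmod(int(idx), cols)
        slopes, targets = [], []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < rows and 0 <= nj < cols:
                    drop = dem[i, j] - dem[ni, nj]
                    if drop > 0:                  # downward neighbors only
                        slopes.append(drop / (cell * math.hypot(di, dj)))
                        targets.append((ni, nj))
        total = sum(slopes)
        for s, (ni, nj) in zip(slopes, targets):  # split in proportion to slope
            acc[ni, nj] += acc[i, j] * s / total
    return acc
```

On a uniform one-directional slope this reduces to single-direction routing; divergent terrain spreads the count over several neighbors, as the multiple-flow formulation intends.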
However, for the GxBy images generated using our reduction method, the standard procedure does not accurately reflect the preserved detail information of obstructions (e.g., buildings). We therefore modified the standard procedure for application to our reduction method, as indicated in Figure 4-9.

In order to evaluate the performance across scales, we need to approximate the ground-truth FAN data at each scale. The multiscale true FAN values were calculated by downsampling the FAN computed from the finest-scale DEM (Figure 4-10). Detection of high-risk areas (DHRA) was then carried out by setting the average FAN as a binary threshold: areas where the FAN was higher than the average were labeled as high risk for flooding with a value of 1, while the rest were set to a value of 0 (Figure 4-10 C).

Figure 4-10. Input and resultant images for the FAN metric. A) A 1 m scale hydrologic DEM; elevations are in meters. B) The corresponding FAN image; the colorbar units refer to the count of pixels that contribute runoff water to each pixel in the DEM. C) Detected high-risk areas, found by locating pixels with FAN values greater than the average FAN value.

We calculated the FAN values and the DHRA at different scales using both the downsampling method and our method (Figure 4-11). Performance was then evaluated for the coarse-scale DHRAs made from our proposed reduction method and from downsampling, by calculating the strength of the CC with the ground-truth DHRA at each scale. The performance evaluation is plotted in Figure 4-12 in terms of different memory sizes and different building-reduction scales. We see that, as expected, the correlation decreases as the memory size used in the downsampling MKS process is reduced. Applying our new data reduction method, we preserve a higher degree of correlation (more than 0.7) as the memory usage is decreased.
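The DHRA thresholding and the correlation-strength metric used for this comparison can be sketched as follows (`dhra` and `cc_strength` are names of this illustration; the CC is as defined in Eq. 4-7):

```python
import numpy as np

def dhra(fan):
    """High-risk map: pixels whose FAN exceeds the domain-average FAN."""
    return fan > fan.mean()

def cc_strength(true_img, test_img):
    """Strength (absolute value) of the correlation coefficient (Eq. 4-7)."""
    t = (true_img - true_img.mean()) / true_img.std()
    d = (test_img - test_img.mean()) / test_img.std()
    return abs(np.mean(t * d))
```

Taking the absolute value implements the "strength of correlation" accuracy metric introduced in Section 4.4.1.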
In the leftm method on the DEM to compute FAN or DHR A with the modified procedure is here CD EF Figure 411. FAN and DHR A at different scales. For a given 1x3 row triplet, re sults from three ost plot of each triplet, the result using our proposed shown. In the rightmost plot of each trip let, the result using nave downsampling of the DEM and then computing FAN or DHRA w ith the standard procedure is shown. In the middle plot of each tr iplet, the approximation to g round truth is shown w the finestscale FAN or DHRA result is itself downsampled to the corresponding scale. (Atriplet) FAN computed for scal e = 32 m. (Ctriplet) FAN computed for scale = 8 m. (Etriplet) FAN computed fo r scale = 2 m. (Btriplet) DHRA computed 79 PAGE 80 for scale = 32 m. (Dtriplet) DHRA com puted for scale = 8 m. (Ftriplet) DHRA computed for scale = 2 m. 24 81 6 3 2 Figure 412. Comparison ri by FAN. The dotted line with square mark shows the performance of downsampling. The ot her lines show the ns. r), An interesting observation is that, in some cases, traditional downsampling actually perfo ales In to highsk area de tection (DHRA) performance of our new reduction method at di fferent scales of bu ilding descriptio As scales approach those that are feasible for surge modelers (e.g. 16 m and large the proposed method preserves significantly more of the spatial structure of the DHRA, as evidenced by the CC values. rms better for a very limited range of scales close to the finest scale (e.g. resolution sc from 1m to 6 m). This is perhaps not surpri sing since the rectangulariz ation of buildings can reduce more information than simple downsampli ng as one moves from meterscale to the few meter scales. For larger scales (more than 16 m), which are necessary for storm surge modeling the proposed method maintains a much higher correla tion with the original data. It should be noted that the building details seem to have an effect on the performance only in small scales. 
large scales, the very fine details of buildings, (e.g. short channels between small buildings that are close together) do not have a significant effect. 80 PAGE 81 The results from proposed reduction method app ear to be very promising. By preserving the pertinent information like obstruction shape a nd location, we lost only 30% to 40% accuracy even after significant reduction in data volume. In terestingly, the advantage seen with regards to processing time was even larger than the reduced memory footprint. The memory reduction from 512x512 to 256x256 is 75 % while the corresponding d ecrease in processing time (34 seconds for 512 x 512 DEM, 3 seconds for 256x256 DEM) was 90% on a Dell 3Ghz, Intel Core2 Duo. 4.4.3 Water Discharge Rate 4.4.3.1 Simple 2D Hydraulic Model In addition to the static hydr ologic concept of FAN and the resulting metrics we explored above, we now wish to examine a dynamic hydraulic model and the resulting performance of our method. The full differential equation of a twodimensional hydrodynamic modeling platform for shallow waters is known as the Shallow Wa ter St. Venant complete equation (SWE) and can be found in (Covelli et al., 2002). We ignored the convective acceleration and wind induced acceleration terms to arrive at a simple model. These modifications are appropriate since the former is a second order term that requires non linear solutions and the latter is an external forcing factor that is outside our scope. The continuity equation is then given by: y q x q t zy x (48) where t is time and ,, x yz are the three Cartesian dimensions, with referring to the vertical. z x q and denote the flux of water in the yq x and directions along the local surface patch. Likewise, the momentum equa tion can be expressed as: y bx xx z gh t q 1 (49) 81 PAGE 82 by yy z gh t q 1 where is the local gravit ational acceleration, is the water depth., ghbx and by are the shear stress components in x and y direction, and is the water density. 
It is common to solve these equations with computer-aided numerical methods after approximating them as finite difference equations (FDEs), since they do not typically have analytical solutions for general geometries. The appropriate FDEs, of first-order accuracy, are given as follows:

FDE continuity equation:

z_{(i,j)}^{t+\Delta t} = z_{(i,j)}^{t} - \frac{\Delta t}{\Delta x}\left(q_{x(i,j)}^{t} - q_{x(i-\Delta x,j)}^{t}\right) - \frac{\Delta t}{\Delta y}\left(q_{y(i,j)}^{t} - q_{y(i,j-\Delta y)}^{t}\right)    (4-10)

FDE momentum equations:

q_{x(i,j)}^{t+\Delta t} = q_{x(i,j)}^{t} - g\,\bar{h}_{x(i,j)}\frac{\Delta t}{\Delta x}\left(z_{(i,j)}^{t} - z_{(i-\Delta x,j)}^{t}\right) - \frac{f_{x(i,j)}\,q_{x(i,j)}^{t}\left|q_{x(i,j)}^{t}\right|\,\Delta t}{8\,\bar{h}_{x(i,j)}^{2}}    (4-11)

q_{y(i,j)}^{t+\Delta t} = q_{y(i,j)}^{t} - g\,\bar{h}_{y(i,j)}\frac{\Delta t}{\Delta y}\left(z_{(i,j)}^{t} - z_{(i,j-\Delta y)}^{t}\right) - \frac{f_{y(i,j)}\,q_{y(i,j)}^{t}\left|q_{y(i,j)}^{t}\right|\,\Delta t}{8\,\bar{h}_{y(i,j)}^{2}}    (4-12)

where \Delta x (\Delta y) is the distance (in meters) between two pixels (i.e., the horizontal/vertical resolution of the image); q_{x(i,j)} (q_{y(i,j)}) is the water discharge rate (m^3/s) into location (i,j) in the x (y) direction; \bar{h}_{x(i,j)} (\bar{h}_{y(i,j)}) is the averaged water depth (m) between (i,j) and (i-\Delta x, j) in the x (y) direction, such that \bar{h}_{x(i,j)} = (h_{(i-\Delta x,j)} + h_{(i,j)})/2 (\bar{h}_{y(i,j)} = (h_{(i,j-\Delta y)} + h_{(i,j)})/2); and f_{x(i,j)} (f_{y(i,j)}) is the friction coefficient (dimensionless) of the channel between (i,j) and (i-\Delta x, j) in the x (y) direction.

Figure 4-13. Spatial domain discretization scheme.

Figure 4-13 shows the numerical approximation in the spatial domain, which has first-order accuracy for the least computational complexity. For the finite difference scheme to have a unique solution, we must define boundary conditions and initial values. For the boundary conditions, arbitrary different water heights were set to ensure some flow from the left to the right edges of the DEM. We prescribed no-flow boundary conditions at the upper and lower boundaries of the DEM to prevent more complex water flows in and out of the DEM. The initial value of the water surface height (z(i,j)) was defined by the boundary conditions.
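The first-order explicit update of Eqs. 4-10 through 4-12, together with the bottom-stress convention τ = ρfV²/8 used in this chapter, can be sketched as follows. This is a minimal illustration and not the dissertation's implementation; the function name, array layout, and toy grid are assumptions.

```python
import numpy as np

def step(z, qx, qy, h, f, dx, dy, dt, g=9.81):
    """One explicit first-order update of the simplified SWE.

    z  : water surface height (m), shape (ny, nx)
    qx : discharge in the x direction, qy : discharge in the y direction
    h  : water depth (m); f : friction coefficient (dimensionless)
    """
    # Continuity (Eq. 4-10): height changes with the divergence of discharge.
    z_new = z.copy()
    z_new[:, 1:] -= (dt / dx) * (qx[:, 1:] - qx[:, :-1])
    z_new[1:, :] -= (dt / dy) * (qy[1:, :] - qy[:-1, :])

    # Averaged depths between neighboring cells (h-bar in Eqs. 4-11, 4-12).
    hx = 0.5 * (h[:, 1:] + h[:, :-1])
    hy = 0.5 * (h[1:, :] + h[:-1, :])

    # Momentum (Eqs. 4-11, 4-12): gravity-driven slope term plus bed friction.
    qx_new = qx.copy()
    qx_new[:, 1:] += (-g * hx * (dt / dx) * (z[:, 1:] - z[:, :-1])
                      - f * qx[:, 1:] * np.abs(qx[:, 1:]) * dt / (8.0 * hx**2))
    qy_new = qy.copy()
    qy_new[1:, :] += (-g * hy * (dt / dy) * (z[1:, :] - z[:-1, :])
                      - f * qy[1:, :] * np.abs(qy[1:, :]) * dt / (8.0 * hy**2))
    return z_new, qx_new, qy_new
```

Applying the update to a flat water surface with zero initial discharge leaves the state unchanged, while a west-to-east surface slope drives positive x-discharge, which is a quick sanity check of the sign conventions.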
While initial values of the discharge rates (q_x(i,j), q_y(i,j)) are generally set to zero, we used an approximate solution of the Darcy-Weisbach equation (Chow et al., 1988) as the initial values to achieve faster convergence. We then used the Courant-Friedrichs-Lewy condition (Covelli et al., 2002)

\Delta t \le \frac{\min(\Delta x, \Delta y)}{2\sqrt{g\,h_{\min}}}    (4-13)

imposed on the time step to maintain numerical stability and help ensure convergence.

In order to verify our new model under the conditions mentioned above, we have to check whether the final solution can reach a steady state. Since a steady state implies no change in the water surface height regardless of the lapse of time, we can set the continuity equation to zero as shown below:

\frac{dq_{x(i,j)}}{dx} + \frac{dq_{y(i,j)}}{dy} + \frac{dz_{(i,j)}}{dt} = 0    (4-14)

We define the criteria to stop the iterative calculation procedure by summing over a column of pixels j = 1, ..., J:

\sum_{j=1}^{J}\left(\frac{dq_{x(i,j)}}{dx} + \frac{dq_{y(i,j)}}{dy}\right)\Delta y = 0    (4-15)

\frac{d}{dx}\sum_{j=1}^{J} q_{x(i,j)}\,\Delta y + \frac{d}{dy}\sum_{j=1}^{J} q_{y(i,j)}\,\Delta y = 0    (4-16)

\frac{d}{dx}\,Q_x(i) + Q_{y(i,J)} - Q_{y(i,1)} = 0    (4-17)

where Q_x(i) = \sum_{j=1}^{J} q_{x(i,j)}\Delta y is the column-integrated discharge.

Figure 4-14. Pictorial illustration of the criterion (Q_x(1) = Q_x(2) = \cdots = Q_x(n)).

and we provide the pictorial illustration in Figure 4-14. Because the no-flow conditions at the upper and lower boundaries eliminate the Q_y terms, the criterion to stop the procedure is globally

Q_x(i) = Q_x \quad \text{for all } i    (4-18)

In practice, excessive processing time is required to strictly satisfy the steady-state criterion. We therefore apply a less strict stopping criterion as follows:

\max_i Q_x(i) - \min_i Q_x(i) \le 10^{-5}    (4-19)

Even with the above criterion, we checked the results for several simulations. In some cases, the solutions did not reach steady state. Since our simple model does not consider every physical force related to topographical features and is only a first-order approximation, we could observe the occasional divergence of the solution in cases where some excessive inflow or outflow existed initially. This creates an unstable localized state that eventually spreads through all other locations. This was the cause of the observed divergence of the total solution.
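The time-step bound and the relaxed steady-state test described above might be coded as below; the function names and the handling of the tolerance are illustrative assumptions, not the dissertation's code.

```python
import numpy as np

def cfl_timestep(dx, dy, h_min, g=9.81):
    """Courant-Friedrichs-Lewy bound on the time step (Eq. 4-13)."""
    return min(dx, dy) / (2.0 * np.sqrt(g * h_min))

def reached_steady_state(qx, dy, tol=1e-5):
    """Relaxed steady-state test: the column-integrated discharge Q_x(i)
    should be (nearly) the same at every column i (Eqs. 4-18, 4-19)."""
    Qx = qx.sum(axis=0) * dy  # Q_x(i) = sum over j of q_x(i,j) * dy
    return (Qx.max() - Qx.min()) <= tol
```

A uniform discharge field passes the test immediately, while any column-to-column imbalance larger than the tolerance keeps the iteration running.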
To prevent the divergence, we initially set the friction coefficient, f(0), to an arbitrarily high value f_0 and decreased it exponentially. The minimum value should not be lower than the real value we want to use, so we expressed the friction coefficient as follows:

f(n) = f_t + f_0\,e^{-an}    (4-20)

where n is the iteration number, f_0 is an initial value much bigger than f_t, and a is the rate of decrease of f_0. The value of a was chosen experimentally, since the iteration can potentially be stopped by the steady-state criterion before f(n) can reach f_t for very small values of a. On the other hand, divergence can occur if a is chosen too large. To choose a proper value for a, we decreased a gradually whenever we observed divergence. Figure 4-15 shows that our method of preventing the numerical solution from diverging on a surface having obstructions is effective.

Figure 4-15. Convergence of our simple model with a changing bottom friction value. A) DEM, B) friction value f(n), C) averaged water discharge rate, D) difference between the min and max of Q_x(i).

After we had established that our solution reached steady state, we wanted to test the accuracy of said solution. First, the simple model was evaluated by comparing the numerical solution to an analytical solution for a hypothetical flat bottom, since the analytical solution is only applicable to the flat-bottom case. The analytical solution can be calculated from the equation of water head loss (Chow et al., 1988), rewritten in terms of the discharge rate per unit width, as follows:

\frac{dh}{dx} = -\frac{f\,q_x^2}{8\,g\,h^3}, \qquad q_x = \sqrt{\frac{2g\left(h_1^4 - h_2^4\right)}{f\,(x_2 - x_1)}}    (4-21)

where h_1, h_2 (x_1, x_2) are the boundary conditions at the left edge and right edge, respectively. Since the analytical solution can be derived from the numerical equations under the condition of a flat surface, the numerical solution should be the same as the analytical solution without regard to water depth.

Figure 4-16. Testing of the ratio of the numerical solution to an analytical solution. A, D) The ratio for a hypothetical flat-bottom DEM. B, E) The ratio for real bare-ground data. C, F) The ratio for a flat surface with buildings. The abscissas (h_1, h_2) of D, E, F represent the initial water depth at the left and right boundaries, respectively. The ratio in C is low because the impact of buildings is significant.

With the assumption that the water depths (h_1 and h_2) are given at the left edge and right edge, the width (x) of the DEM is 256 meters, and the bottom friction factor is 0.05, the ratio (q_n) of the numerical solution to the analytical solution is shown in Figure 4-16. The results in Figure 4-16 A and D show that our simple model is valid, since the numerical solution exactly agrees with the analytical solution for the case of a perfectly flat bottom with no obstructions. Figure 4-16 B and E depict results from this simple model applied to real ground data in the Miami area. The results show a very small (9%) difference between the numerical solution for the real ground data and the analytical solution for the assumed flat surface, since the real DEM data is nearly flat. In other words, in this case the effect of the actual ground data is very small, such that one can use the analytical solution for the ground instead of the numerical solution.

The agreement between the numerical solution applied to a flat surface with buildings and the analytical solution is low, with an accuracy of less than 40% (Figure 4-16, C, F), because the impact of buildings is not reflected in the analytical solution. Therefore, we must somehow compensate for the impact of buildings when we coarsen urban areas. This is perhaps not surprising, since buildings tend to lose their perceived surface area as we decrease the resolution of the DEMs. The following section shows how we compensate for this phenomenon.
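Integrating the flat-bottom head-loss relation (using the bottom-stress convention τ = ρfV²/8, so that dh/dx = −f q_x²/(8gh³)) yields a closed-form discharge per unit width, and the snippet below sketches that calculation. The function name is an assumption; the 256 m width and f = 0.05 match the values used above.

```python
import math

def analytic_discharge(h1, h2, f, width, g=9.81):
    """Closed-form flat-bottom discharge per unit width obtained by
    integrating dh/dx = -f*q**2 / (8*g*h**3) from depth h1 (left edge)
    down to h2 (right edge) over the given channel length (m)."""
    return math.sqrt(2.0 * g * (h1**4 - h2**4) / (f * width))

# Depths of 2 m and 1 m across a 256 m wide DEM with f = 0.05.
q = analytic_discharge(h1=2.0, h2=1.0, f=0.05, width=256.0)
```

Because the relation is analytic, it provides the reference against which the numerical steady-state solution can be ratioed, as in the flat-bottom panels of Figure 4-16.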
4.4.3.2 Subgrid Parameterization

Since there is no physical model of fluid flow that considers buildings, we must create a method to compensate for the effect of these structures. Based on the underlying physical behavior, we derived new friction values, in consultation with Prof. Bob Dean of the University of Florida, that are equivalent to the shear stress on the ground. To derive the equivalent friction factor, first we define the drag force due to a building as follows:

F_D = \frac{C_D\,\rho\,W\,V^2\,h}{2}    (4-22)

where W is the projected width of the building relative to the flow vector (m) (i.e., this quantity will change with flow direction), C_D is the drag coefficient (dimensionless, around 1.0), V is the water speed in the direction of flow (m/s), h is the water depth, and \rho is the water density.

Figure 4-17. Example of the physical behavior of water around a building. W is the width of the building that is perpendicular to the water direction. A_G is the area of ground, A_B is the area of building, and A_T is the total area (A_T = A_G + A_B).

Since shear stress is force per unit area, the equivalent shear stress considering a building is the sum of the drag force induced by the building and the shear stress induced by the ground, divided by the total area (Figure 4-17):

\tau_{eq} = \frac{F_T}{A_T} = \frac{C_D\,\rho\,W\,h\,V^2}{2(A_B + A_G)} + \frac{A_G}{A_B + A_G}\cdot\frac{\rho f V^2}{8} = \frac{4\,C_D\,W\,h + f A_G}{A_B + A_G}\cdot\frac{\rho V^2}{8}    (4-23)

where A_G is the area of ground, A_B is the area of building, A_T is the total area of building and ground, f is the friction factor of the ground, F_D is the drag force from the building, and F_T is the total drag force from the ground and building. The bottom stress used in general fluid models is \tau = \rho f V^2/8, so the friction factor when considering multiple buildings is

f_M = \frac{4\,C_D\,h \sum_n W_n + f\left(A_T - \sum_n A_{Bn}\right)}{A_T}    (4-24)

where n indexes the buildings. Note that f_M is dependent on the flow depth h as well as on the widths and areas of the buildings. From the equivalent friction equation we see that the vital features are the widths and sizes of buildings. These features can be directly extracted from the encoded building data depending on the scale of the data.

Figure 4-18. Procedure for calculating WDR. The additional red procedure is for the new reduction approach.

The procedure to calculate the fluid discharge rate is shown in Figure 4-18. The steps outlined in the red box are done to use the decomposed data separately and to parameterize buildings at subgrid size.

4.4.3.3 Evaluation

Figure 4-19. Simple simulated DEM and its estimated WDR. A) Simulated DEM at the finest (1 meter) scale, B) input DEM after voiding buildings, C) initial input water surface from the boundary conditions, D) (first) water surface height z(i,j), (second) local discharge rate in the x direction q_x(i,j), (third) local discharge rate in the y direction q_y(i,j).

In order to evaluate the performance of our approach, we compare results with those from naive downsampling. First we calculated the change of the local discharge rates, q_x(i,j) and q_y(i,j), and the global discharge rate, Q_x(i), using a simple but plausible DEM simulation (Figure 4-19). The simulated DEM clearly shows the effect of downsampling, locally and globally, as we increase the scale size. The water surface height and the local discharge rates in the x and y directions are provided in Figure 4-19. The same results for the downsampled DEM and for the decomposed and parameterized DEM (our method) are shown in Figures 4-20 and 4-21, respectively.

Figure 4-20. Estimated WDR of a hydrologic DEM made by downsampling. (left) Water surface height, (center) local discharge rate q_x(i,j) in the x direction, (right) local discharge rate q_y(i,j) in the y direction of the DEM made by naive downsampling. A) 2 meter resolution DEM, B) 8 meter resolution DEM, C) 32 meter resolution DEM.

While the two buildings become merged by the downsampling method at the 8 meter resolution, our approach preserved the impact of the separate buildings on the discharge. As a result, the discharge between the two buildings disappears in the downsampling method while it remains in our approach (compare the second rows in Figures 4-20 and 4-21). The performance difference between downsampling and our reduction at 32 m resolution is very significant (0 m^3/s for downsampling, 1150 m^3/s for our reduction, whereas the true rate is 1200 m^3/s). While the discharge rate is calculated as zero due to total blockage in the downsampling case, our approach makes a good estimate of the discharge rate (compare the third rows in Figures 4-20 and 4-21). The result shows that, at the same scale, the local detail information is preserved in our reduced image and discarded in the naively downsampled image.

Figure 4-21. Estimated WDR of a hydrologic DEM with subgrid parameterization made by our reduction method. (left) Water surface height, (center) local discharge rate q_x(i,j) in the x direction, (right) local discharge rate q_y(i,j) in the y direction of the DEM made by our approach. A) 2 meter resolution DEM, B) 8 meter resolution DEM, C) 32 meter resolution DEM. At 8 m resolution the local discharges between buildings can be seen, while the local discharges in Figure 4-20 disappear because the two buildings merge under downsampling. At 32 m resolution we can still see the existence of the discharge, while the discharge rate is 0 in Figure 4-20 due to a complete blockage caused by downsampling.

Figure 4-22 demonstrates that the decomposed/parameterized approach also makes an accurate estimate with regard to global information. Figure 4-22 also shows the difference in running time between our method and the downsampling approach. One can find small differences between the two methods at each scale because our method has an additional procedure to encode the impact of buildings. However, the additional time required is small.

Figure 4-22. Comparison of total discharge rate and running time between the downsampling method and our reduction method at different scales. A) The Q_x(i) comparison between naive downsampling and the new method; the vertical axis is the discharge in the x direction, Q_x(i) (m^3/s). B) The real computation time to calculate the water discharge rate at different scales with the different methods (red is downsampling, blue is our new method).

We then repeated this process, comparing results for the real Miami DEM (Figure 4-23). Figure 4-23 shows the performance comparison in terms of coarsening the building parameterization and the scale of the ground for real data (Miami, FL). The notation B*G is used to represent the scale of building footprints, where * is the corresponding scale/resolution in meters. Using fine-scale ground data, the results deteriorate depending on the degree of coarsening of the building details. With coarse-scale ground data, the difference in performance is much less. In other words, similar performance can be obtained while saving memory usage for building information when the surge modeler wants to use a very coarse resolution hydrologic DEM to save computation time.

Figure 4-23. Comparison of total discharge rate over real DEM data at different scales with different levels of building rectangularization. A) Real Miami DEM, B) the comparison between naive downsampling and the new method with the real data. We desire the discharge to change as little as possible as scale is increased. We see that the proposed method allows us to maintain reasonable estimates of discharge across a large range of scales, whereas downsampling does not.

Comparing the results from our algorithm with those of downsampling over the real Miami data, the error trend is almost equivalent to the error trend observed for the simulation. As the scale of downsampling is increased, the water discharge rate decreases due to the merging of closely spaced adjacent buildings. For small downsampling factors, the effect of the reduction is not noticeable and the performance of downsampling is generally acceptable. However, as the scale of downsampling is increased, the effects get worse and the advantages of our method become obvious.

Figure 4-24. Estimate of total discharge rate at different sites in Dade County, Miami, FL. A, C, E) Different sites; B, D, F) corresponding estimates of total discharge rate. These show the aberrant behavior common to the estimate of Q_x(i).

When we applied our reduction method to different sites, we observed a strange behavior common to the estimate of Q_x(i): the estimate was observed to rise suddenly after a steady decrease as a function of pixel size (Figure 4-24).

In order to find the cause, we applied our reduction method to simulated DEMs with different building distributions (Figure 4-25). As a result, we determined that the reason for the aberrant behavior in the performance curve was the new friction value (Eq. 4-24) that accounts for the impact of buildings. The new friction value is decided by the sum of the forces of the buildings and the shear stress of the ground. In this process, we did not consider the effect of the distribution of buildings because the impact was generally thought to not be significant.

Figure 4-25. Estimate of total discharge rate on simulated DEMs having different building distributions. A, C, E) Simulated DEMs having different building distributions; B, D, F) corresponding estimates of total discharge rate. After two buildings are merged by downsampling, downsampling cannot generate the total discharge rate, while our reduction method still can.
In other words, the new friction value cannot perfectly account for the impact of multiple buildings. At small scales, individual buildings reside inside a single ground pixel, and the friction value properly predicts the impact of the building structure. As the scale of the ground pixels increases, any given ground pixel will likely be a composite of multiple buildings, so that our friction value systematically underestimates the actual combined impact. As future work, we plan to research a method that accurately parameterizes the impact of multiple buildings independent of scale.

CHAPTER 5
CONCLUSIONS AND FUTURE WORK

The fusion of diverse multisensor observations allows us to identify appropriate surface models and scaling relationships for topographic and bathymetric surface features in the coastal zone. However, in order to run hydrologic process models capable of predicting such things as hurricane storm surge, evacuation routes, flood risk, coastal wave spectra, and littoral-zone erosion, improved seamless elevation maps must be obtained that cover the land surface and the underwater surface. In our work we used four types of elevation data sets: 90 m resolution NGDC data, 30 m resolution SRTM, 10 m resolution bathymetric LiDAR, and 15 m resolution topographic LiDAR. The topography and bathymetry in the NGDC were derived from stereo aerial photography and sonar sensors, respectively. The SRTM was acquired from the space shuttle with a dual-antenna C-band SAR. The high resolution topography and bathymetry data derived from the LiDAR were acquired by airborne lasers using different wavelengths (532 nm for bathymetry, 1064 nm for topography) and flight altitudes (300~400 m for bathymetry, 900~1200 m for topography).

In section 3.2, we investigated a computationally demanding application of MKS data fusion.
When the data sets to be fused differ in resolution by an order of magnitude or more and the fine-scale observations are sparse and aggregated, the standard MKS algorithm proves inefficient. We reduced the floating point operations per node by two by deriving a new equation for calculating the covariance of the process noise (the detail process). The equation has a general form, so we can reduce the operations per node without regard to the distribution of the data in the quadtree. We also reduced the number of nodes in the quadtree that must be processed, depending on the distribution of the finest-scale data, which led to a dramatic reduction (more than 80% for our data sets) in the floating point operations required for estimating a fused DEM from the component data.

In section 3.3, the accuracy of the SRTM DTED2 data was evaluated by analyzing the statistical characteristics of the Kalman innovations (the difference between measurements and projected prior estimates) in the MKS estimator to estimate a terrain-dependent measurement uncertainty for the SRTM data. Fused DEMs can be produced at every scale, such that the benefits of global coverage and finer details are both present. Fused DEMs can provide improved boundary conditions for predicting areas affected by floodwaters as compared to a DEM at any single scale (see Figure 3-11). The uncertainty associated with each elevation estimate (the Kalman estimate error variance) was also obtained at every pixel; thus, the fused DEMs at each scale are accompanied by a confidence map at each scale. The SRTM data were found to provide important intermediate-scale information for bridging the gap between large national elevation data sets like the NGDC and small high resolution elevation data sets like the LiDAR. In future work, refined methods for estimating terrain-dependent SRTM uncertainty for more specific land-use classes will be investigated.
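The process-noise covariance identity behind the per-node saving can be checked numerically in the scalar case. The sketch below compares the standard form Q(s) = P_x(parent) − F(s)P_x(s)F(s)^T against the reduced form Q(s) = (I − F(s)Φ(s))P_x(parent), assuming the scalar coarse-to-fine model x(s) = Φ x(parent) + B w(s); the parameter values are arbitrary illustrative choices.

```python
# Scalar check of the two process-noise covariance forms used in the MKS
# backward (fine-to-coarse) model. Assumed model: x(s) = phi*x(parent) + b*w(s),
# so P_x(s) = phi^2 * P(parent) + b^2 and F(s) = P(parent) * phi / P_x(s).
# The numbers below are arbitrary illustrative choices.
phi, b, p_parent = 0.9, 0.5, 2.0

p_s = phi**2 * p_parent + b**2            # variance propagated to the finer scale
f_s = p_parent * phi / p_s                # backward-model gain F(s)

q_standard = p_parent - f_s * p_s * f_s   # Q(s) = P(parent) - F(s) P(s) F(s)^T
q_reduced = (1.0 - f_s * phi) * p_parent  # Q(s) = (I - F(s) Phi(s)) P(parent)

assert abs(q_standard - q_reduced) < 1e-12
```

The reduced form reuses F(s), which is already available from the upward sweep, and that reuse is where the two-multiplication-per-node saving in the scalar case comes from.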
In principle, the high spatial resolution of airborne LiDAR observations makes it possible to use relatively simple hydraulic models to accurately model the flow of water across open terrain in rural areas and in highly developed urban areas. Modelers can also take advantage of high spatial resolution LiDAR data to compare the relative advantage of incorporating more complex hydraulic models with higher order terms. However, because natural disasters tend to impact relatively large areas, we are also interested in the analysis of areas of much larger scale. For example, the effective range of Hurricane Katrina was about 200~300 km (Resio et al., 2008). Running meter-scale simulations over areas of hundreds of square kilometers results in unacceptable computation times, even with the simplest hydraulic models, making it necessary to reduce the spatial resolution of the observational data.

Historically, the reduction of image data, such as in JPEG and MPEG encoding, was done by transforming the data into a different domain (from spatial to frequency, or from spatial to wavelet) and bit encoding (run-length coding) to reduce redundancy. However, this process of transforming and encoding data can transmute inherent properties of the data in the original domain (e.g., elevation, and the area and width of buildings). As a result, the reduced data must be decoded and transformed back to the original domain to retrieve the desired features.

The main advantage of the method proposed herein is that the reduced data do not require any procedure to transform back to the original domain. The ground data are reduced by downsampling, such that the topographic information is reduced gradually with scale. In reducing the building data, the desired features (building width and areas) can be calculated directly from two corner points. This parameterization leads to a much milder loss of information about hydraulic boundary conditions as scale increases.
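For instance, with each building encoded by two opposite corner points, the width projected across the flow direction and the footprint area fall out directly, and they feed the equivalent friction factor of Eq. 4-24. The sketch below is illustrative only: the encoding layout, function names, and parameter values are assumptions, and flow is taken along the x axis.

```python
def building_features(c1, c2):
    """Width perpendicular to x-directed flow and footprint area of an
    axis-aligned building encoded by two opposite corners (x, y)."""
    (x1, y1), (x2, y2) = c1, c2
    width = abs(y2 - y1)                 # extent across the flow direction
    area = abs(x2 - x1) * abs(y2 - y1)
    return width, area

def equivalent_friction(buildings, a_total, h, f_ground, c_d=1.0):
    """Equivalent friction factor in the spirit of Eq. 4-24:
    f_M = (4*C_D*h*sum(W_n) + f*(A_T - sum(A_Bn))) / A_T."""
    widths, areas = zip(*(building_features(c1, c2) for c1, c2 in buildings))
    return (4.0 * c_d * h * sum(widths)
            + f_ground * (a_total - sum(areas))) / a_total

# One 10 m x 20 m building in a 100 m x 100 m cell, 1 m water depth.
f_m = equivalent_friction([((0.0, 0.0), (10.0, 20.0))],
                          a_total=1e4, h=1.0, f_ground=0.05)
```

As expected, the building raises the effective friction above the bare-ground value, and the increase scales with the building width presented to the flow.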
Our scale reduction method was evaluated using various measures: MSE, CC, FAN, and WDR. MSE and CC were used to evaluate the synthesized hydrologic DEM and to compare it to a naively downsampled image. The results show that our reduction method can preserve important information over a large range of scales. For example, the CC between the true and reduced high-risk area detection (DHRA) is able to remain near 0.7 as the scale increases from 4 m to 32 m (see Figure 4-12). Over that same range of scales, the downsampling method shows a much more rapid deterioration of image quality (CC = 0.63 to 0.3) as the scale is increased. Since image quality is often determined subjectively, we used FAN and WDR to provide objective, hydrologically meaningful metrics of performance. When looking at the total discharge rate in Figure 4-22, it was seen that our method approximates the true discharge much more faithfully as scale is increased than does downsampling.

Another important concept introduced in this work is the methodology to encode the impact of buildings for hydraulic modelers working at coarse scales over urban areas. We derived an equivalent friction factor, based on the physical behavior of water flowing over complex structural elements (buildings), which can be used generally. The friction factor is a function of the size and width of a particular building along with the water surface height. This work represents the first systematic framework for the efficient multiscale fusion of multiple-source DEMs and the decomposition of urban DEMs for the preservation of hydraulic boundary information as a function of increasing scale.

As an exploratory method, we attempted to find an alternative measure of water discharge rate that could be used to choose an optimal scale for the building reduction. Since topographic relief in urban areas on coastal plains (e.g., Houston, TX; New Orleans, LA; Miami, FL; and Jacksonville, FL) is quite small, buildings are the main features that impact the estimation of water discharge rate. We devised a line-of-sight distance (LOSD) metric, which measures the linear distance one can travel from a given pixel without being impeded by a building. We measured LOSD in the four principal directions (North, East, South, and West; Figure 5-1).

Figure 5-1. LOSD and its error for the quantization of buildings. LOSD for the principal directions (North, East, South, and West) is computed.

By determining that one or more of the MSE, CC, and LOSD metrics tracks the error of the discharge rate as scale is increased, we contend that they can be used as computationally efficient alternative error measures to a highly complicated hydrodynamic model that keeps all higher order terms. From the results in Figure 5-2 and Table 5-1, we found that the north and south direction LOSDs are most similar to the Q_x(i) generated in a case where the direction of water flow is set from west to east with no flow from north and south. In other words, the discharge rate is more affected by the channel widths than by the channel lengths between buildings, as long as most of the channels are not completely blocked. The buildings quantized by our method preserve most of the channels between buildings. Therefore, we can conclude that the loss of channel width between buildings, where the width is perpendicular to the direction of water flow, can be a major feature in deciding the performance of the water discharge rate.

Figure 5-2. Comparison of measurements (MSE, CC, LOSD) with the water discharge rate (Q_x) for building reduction. The vertical axis is the normalized loss of performance and the horizontal axis is the scale used to coarsen buildings (2 to 32 m); the curves are Gain Qx, Gain LOSDS, Gain LOSDN, Gain LOSDE, Gain LOSDW, Gain CC, and Gain MSE.

Table 5-1.
The similarity between Gain Qx and MSE, CC, and LOSD.

                           MSE       Difference of slope in log-log scale
Gain Qx vs Gain LOSD       0.0005    0.0322
Gain Qx vs Gain LOSDS      0.00032   0.0261
Gain Qx vs Gain LOSDN      0.00029   0.0178
Gain Qx vs Gain LOSDE      0.0016    0.0691
Gain Qx vs Gain LOSDW      0.0017    0.0789
Gain Qx vs Gain CC         0.00072   0.3006
Gain Qx vs Gain MSE        0.0012    0.1556

APPENDIX
DETAILED DERIVATION OF PROCESS NOISE VARIANCE

Let the spatial interval be 0 \le s \le M, and let the spatial coarse-to-fine state model be

x(s) = \Phi(s)\,x(s\bar{\gamma}) + B(s)\,w(s)    (A-1)

where s\bar{\gamma} denotes the parent of node s, w(s) is a white process uncorrelated with x(0), E[w(s)w^T(t)] = I\,\delta(s,t), E[x(0)] = 0, and E[x(0)x^T(0)] = P_x(0). The forward orthogonality condition is satisfied, i.e., E[x(0)w^T(s)] = 0 for s \neq 0.

The backward model can be derived from (A-1) by reversing the direction of the spatially dependent state variable:

x(s\bar{\gamma}) = \Phi^{-1}(s)\,x(s) - \Phi^{-1}(s)B(s)\,w(s)    (A-2)

The backward state process is still Markov, but it does not satisfy the backward orthogonality condition between x(s\bar{\gamma}) and w(s) for 0 \le s \le M. By the Markovity of the minimum mean square error (MMSE) estimate, we can define

\tilde{w}(s) = w(s) - E[w(s)\,|\,x(s), x(s+1), \ldots, x(M)] = w(s) - E[w(s)\,|\,x(s)] = w(s) - \hat{w}(s)    (A-3)

where \hat{w}(s) = E[w(s)|x(s)] is the MMSE estimate and, by the property of the MMSE, \tilde{w}(s) is orthogonal to x(s). We assume that w(s) and x(s) are zero-mean Gaussian, so that the following applies (Kay, 1993):

E[w(s)|x(s)] = E[w(s)x^T(s)]\left(E[x(s)x^T(s)]\right)^{-1}x(s)    (A-4)

Since x(s) = \Phi(s)x(s\bar{\gamma}) + B(s)w(s), we can substitute x(s) into (A-4), and because E[w(s)x^T(s)] = B^T(s),

\hat{w}(s) = E[w(s)|x(s)] = B^T(s)\,P_x^{-1}(s)\,x(s)    (A-5)

The backward Markov model can then be rewritten (Verghese, 1979).
Substituting w(s) = \hat{w}(s) + \tilde{w}(s) = B^T(s)P_x^{-1}(s)x(s) + \tilde{w}(s) into (A-2) gives

x(s\bar{\gamma}) = \Phi^{-1}(s)x(s) - \Phi^{-1}(s)B(s)B^T(s)P_x^{-1}(s)x(s) - \Phi^{-1}(s)B(s)\tilde{w}(s) = \Phi^{-1}(s)\left[I - B(s)B^T(s)P_x^{-1}(s)\right]x(s) - \Phi^{-1}(s)B(s)\tilde{w}(s)    (A-6)

Let

F(s) = \Phi^{-1}(s)\left[I - B(s)B^T(s)P_x^{-1}(s)\right]    (A-7)

\bar{w}(s) = -\Phi^{-1}(s)B(s)\tilde{w}(s)    (A-8)

As a result, the backward Markov model is

x(s\bar{\gamma}) = F(s)\,x(s) + \bar{w}(s)    (A-9)

Since

P_x(s) = \Phi(s)P_x(s\bar{\gamma})\Phi^T(s) + B(s)B^T(s)    (A-10)

F(s) can be rewritten as

F(s) = \Phi^{-1}(s)\left[P_x(s) - B(s)B^T(s)\right]P_x^{-1}(s) = P_x(s\bar{\gamma})\,\Phi^T(s)\,P_x^{-1}(s)    (A-11)

Let Q(s) = E[\bar{w}(s)\bar{w}^T(s)]. Since \tilde{w}(s) is the MMSE estimation error of w(s),

E[\tilde{w}(s)\tilde{w}^T(s)] = E\left[(w(s) - \hat{w}(s))(w(s) - \hat{w}(s))^T\right] = I - B^T(s)P_x^{-1}(s)B(s)    (A-12)

It then follows that

Q(s) = \Phi^{-1}(s)B(s)\,E[\tilde{w}(s)\tilde{w}^T(s)]\,B^T(s)\Phi^{-T}(s) = \Phi^{-1}(s)B(s)\left[I - B^T(s)P_x^{-1}(s)B(s)\right]B^T(s)\Phi^{-T}(s)    (A-13)

Since, from (A-10),

B(s)B^T(s) = P_x(s) - \Phi(s)P_x(s\bar{\gamma})\Phi^T(s)    (A-14)

Q(s) can be rewritten as

Q(s) = P_x(s\bar{\gamma}) - P_x(s\bar{\gamma})\Phi^T(s)P_x^{-1}(s)\Phi(s)P_x(s\bar{\gamma})    (A-15)

Since, from (A-11),

F(s)\,P_x(s) = P_x(s\bar{\gamma})\,\Phi^T(s)    (A-16)

it then follows that

Q(s) = P_x(s\bar{\gamma}) - F(s)\,P_x(s)\,F^T(s)    (A-17)

Since F(s)P_x(s)F^T(s) = P_x(s\bar{\gamma})\Phi^T(s)P_x^{-1}(s)\Phi(s)P_x(s\bar{\gamma}) = F(s)\Phi(s)P_x(s\bar{\gamma}), the expression for Q(s) becomes

Q(s) = P_x(s\bar{\gamma}) - F(s)\Phi(s)P_x(s\bar{\gamma}) = \left[I - F(s)\Phi(s)\right]P_x(s\bar{\gamma})    (A-18)

The equation for Q(s) that was written in (Fieguth et al.
1995) can be derived from (A-15):

Q(s) = P_x(s\bar{\gamma})\left[I - \Phi^T(s)P_x^{-1}(s)\Phi(s)P_x(s\bar{\gamma})\right]    (A-19)

As a result, we obtain two equations for Q(s):

Q(s) = P_x(s\bar{\gamma})\left[I - \Phi^T(s)P_x^{-1}(s)\Phi(s)P_x(s\bar{\gamma})\right] = \left[I - F(s)\Phi(s)\right]P_x(s\bar{\gamma})    (A-20)

Since the calculation of F(s) is already necessary in the middle of changing the posterior estimate at the previous scale into the prior estimate at the present scale, we achieve a reduction of two multiplications in the scalar-type MKS. This new equation has a general form, so we can save even more operations in the vector-type MKS.

LIST OF REFERENCES

Acheson, D. J., 1990, Elementary Fluid Dynamics, Oxford Applied Mathematics and Computing Science Series (Oxford University Press).

Alemseged, T. H. and Rientjes, T. H. M., 2005, Effects of LiDAR DEM resolution in flood modelling: A model sensitivity study for the city of Tegucigalpa, Honduras, Proceedings of the ISPRS Workshop Laser Scanning, Netherlands, pp. 12-14.

Anderson, B. D. O., 1999, From Wiener to hidden Markov models, In IEEE IDC99, page 239, Adelaide, South Australia.

ArcGIS, 2008, Desktop Help. Available online at: http://webhelp.esri.com/arcgisdesktop/9.2 Copyright Environmental Systems Research Institute, Inc.

Bates, P., 2004, Remote sensing and flood inundation modeling, Hydrological Processes, 18, pp. 2593-2597.

Bates, P. D. and Roo, A. P. J., 2000, A simple raster-based model for flood inundation simulation, Journal of Hydrology, 236, pp. 54.

Bovik, A., 2005, Handbook of Image and Video Processing, 2nd edition (Elsevier).

Brown, R. G. and Hwang, P., 1996, Introduction to Random Signals and Applied Kalman Filtering, 3rd ed. (John Wiley and Sons).

Bruneau, P., Gascuel-Odoux, C., Robin, P., Merot, P. and Beven, K., 1995, The sensitivity to space and time resolution of a hydrological model using digital elevation data, Hydrological Processes, 9, pp.
69.

Carter, W., Shrestha, R., Tuell, G., Bloomquist, D., and Sartori, M., 2001, Airborne laser swath mapping shines new light on Earth's topography, Eos, Transactions, American Geophysical Union, 82, pp. 549, 550, 555.

Cheung, S., Jhee, H., and Slatton, K.C., unpublished, Fusing airborne and SRTM data for improved multiscale DEMs over coastal zone and landscape-dependent evaluation of SRTM accuracy, Photogrammetric Engineering and Remote Sensing (conditionally accepted).

Chou, K., Willsky, A., and Benveniste, A., 1994, Multiscale recursive estimation, data fusion and regularization, IEEE Transactions on Automatic Control, 39, pp. 464-478.

Chow, V., Maidment, D., and Mays, L.W., 1988, Applied Hydrology, (McGraw-Hill).

Covelli, P., Marsili-Libelli, S., and Pacini, G., 2002, SWAMP: a two-dimensional hydrodynamic and quality modeling platform for shallow waters, Numerical Methods for Partial Differential Equations, Wiley Periodicals, 18, pp. 663-687.

Cover, T.M. and Thomas, J.A., 1991, Elements of Information Theory, (John Wiley & Sons).

Curkendall, D., Fielding, E., Cheng, T., and Pohl, J., 2003, A computational-grid based system for continental drainage network extraction using SRTM digital elevation models, In Proceedings of the International Conference on Parallel Processing Workshops (ICPPW'03), 6-9 October, pp. 181-190.

Daniel, M. and Willsky, A., 1997, A multiresolution methodology for signal-level fusion and data assimilation with applications to remote sensing, In Proceedings of the IEEE, 85, January, pp. 164-180.

Dean, R., and Dalrymple, R., 2002, Coastal Processes with Engineering Applications, (Cambridge University Press).

Desmet, P.J.J. and Govers, G., 1996, Comparison of routing algorithms for digital elevation models and their implications for predicting ephemeral gullies, International Journal of Geographical Information Systems, 10, pp. 311-331.

Dobson, M., Ulaby, F., Pierce, L., Sharik, T., Bergen, K., Kellndorfer, J., Kendra, J., Li, E., Lin, Y., Nashashibi, A., Sarabandi, K., and Siqueira, P., 1995, Estimation of forest biophysical characteristics in northern Michigan with SIR-C/X-SAR, IEEE Transactions on Geoscience and Remote Sensing, 33, July, pp. 877-895.

Fairfield, J., and Leymarie, P., 1991, Drainage networks from grid digital elevation models, Water Resources Research, 27, pp. 709-717.

Farr, T. G., Rosen, P.A., Caro, E., Crippen, R., Duren, R., Hensley, S., Kobrick, M., Paller, M., Rodriguez, E., Roth, L., Seal, D., Shaffer, S., Shimada, J., and Umland, J., 2007, The Shuttle Radar Topography Mission, Rev. Geophys., 45.

Fieguth, P., Karl, W., Willsky, A., and Wunsch, C., 1995, Multiresolution optimal interpolation and statistical analysis of TOPEX/POSEIDON satellite altimetry, IEEE Transactions on Geoscience and Remote Sensing, 33, March, pp. 280.

Gan, Q., and Harris, C.J., 2001, Comparison of two measurement fusion methods for Kalman-filter-based multisensor data fusion, IEEE Transactions on Aerospace and Electronic Systems, 37, issue 1, pp. 273-279.

Gader, P., Lee, W., and Wilson, J., 2004, Detecting landmines with ground-penetrating radar using feature-based rules, order statistics, and adaptive whitening, IEEE Transactions on Geoscience and Remote Sensing, 42, November, pp. 2522-2534.

Gao, Y., and Maggs, M., 2005, Feature-level fusion in personal identification, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pp. 468-473.

Gesch, D., William, J., and Miller, W., 2001, A comparison of U.S. Geological Survey seamless elevation models with Shuttle Radar Topography Mission data, In Geoscience and Remote Sensing Symposium (IGARSS '01), IEEE 2001 International, 2, 9-13 July, pp.
754-756.

Gonzalez, R., and Woods, R., 2002, Digital Image Processing, 2nd ed., (Prentice Hall).

Gunatilaka, A.H., and Baertlein, B.A., 2001, Feature-level and decision-level fusion of noncoincidently sampled sensors for land mine detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, issue 6, pp. 577-589.

Haykin, S., 2002, Adaptive Filter Theory, 4th ed., (Prentice Hall).

Hall, D., and Llinas, J., 1997, An introduction to multisensor data fusion, In Proceedings of the IEEE, 85, January, pp. 6-23.

Harris, D., 1963, Characteristics of the hurricane storm surge, Technical Paper No. 48, Weather Bureau, U.S. Department of Commerce, Washington, D.C., pp. 139.

Hensley, S., 2005, From raw data to digital elevation map, In Workshop of the Shuttle Radar Topography Mission Data Validation and Applications, 14-16 June, Virginia.

Hicks, S., Sillcox, R., Nichols, C., Via, B., and McCay, E., 2000, Tide and Current Glossary, U.S. Department of Commerce, National Oceanic and Atmospheric Administration, January.

Horritt, M.S., and Bates, P.D., 2001, Effects of spatial resolution on a raster-based model of flood flow, Journal of Hydrology, 253, pp. 239-249.

Hunter, N. M., Bates, P. D., Horritt, M. S., and Wilson, M. D., 2007, Simple spatially-distributed models for predicting flood inundation: a review, Geomorphology, pp. 208.

IHRC, 2004, WSMP (Windstorm Simulation & Modeling Project): airborne LiDAR data and digital elevation models in Miami-Dade, Florida, final report, International Hurricane Research Center (IHRC).

Iwasa, Y., Inoue, K., and Mizutori, M., 1980, Hydrologic analysis of overland flood flows by means of numerical method, Annual Report, Disaster Prevention Research Institute, Kyoto University.

Jain, I., Chittibabu, P., Agnihotri, N., Dube, S.K., Sinha, P.C., and Rao, A.D., 2006, Numerical storm surge model for India and Pakistan, Natural Hazards, Springer, 42, no. 1, pp. 67-73.

Kamen, E.W. and Su, J.K., 1999, Introduction to Optimal Estimation, (London: Springer-Verlag).

Kay, S.M., 1993, Fundamentals of Statistical Signal Processing: Estimation Theory, (Upper Saddle River, NJ: Prentice Hall).

Komar, P. D., 1998, Beach Processes and Sedimentation, 2nd ed., (Upper Saddle River, NJ: Prentice Hall).

Kuperman, W., and Lynch, J., 2004, Shallow-water acoustics, Physics Today, 55, October, pp. 55-57.

LADS, 2003, Dade County LADS Survey Report, Coastal Planning and Engineering, Florida Department of Environmental Protection, Tallahassee, FL.

Luettich, R., and Westerink, J., 2006, A (Parallel) Advanced Circulation Model for Oceanic, Coastal and Estuarine Waters, Technical Report, Available online at: http://www.adcirc.org/.

McMillan, H., and Brasington, J., 2007, Reduced complexity strategies for modeling urban floodplain inundation, Geomorphology, pp. 226-243.

Neelz, S. and Pender, G., 2007, Sub-grid scale parameterisation of 2D hydrodynamic models of inundation in the urban area, Acta Geophysica, 55, no. 1, pp. 65-72.

NGA, 2005, Geosciences: Coordinate Systems Analysis, National Geospatial-Imagery Agency, Bethesda, MD.

NGDC, 2005, Available online at: www.ngdc.noaa.gov, National Geophysical Data Center (NGDC), National Oceanic & Atmospheric Administration, Washington, DC.

NOAA CSC, 2007, Part II of a roadmap to a seamless topobathy surface, Technical Report, National Oceanic and Atmospheric Administration Coastal Services Center, Washington, DC.

O'Callaghan, J.F. and Mark, D.M., 1984, The extraction of drainage networks from digital elevation data, Computer Vision, Graphics, and Image Processing, 28, pp. 323-344.

Resio, D. T., and Westerink, J. J., 2008, Modeling the physics of storm surges, feature article, Physics Today.

Robertson, W., Whitman, D., Zhang, K., and Leatherman, S.P., 2004, Mapping shoreline position using airborne laser, Journal of Coastal Research, 20, pp. 884-892.

Roweis, S. and Ghahramani, Z., 1999, A unifying review of linear Gaussian models, Neural Computation, 11, pp. 305-345.
Schubert, J. E., Sanders, B. F., Smith, M. J., and Wright, N. G., 2008, Unstructured mesh generation and landcover-based resistance for hydrodynamic modeling of urban flooding, Advances in Water Resources, 31, issue 12, December, pp. 1603-1621.

SFWMD, 1999, 1999 Land Cover Land Use, Section 3 (Miami-Dade Co.), Available online at: www.sfwmd.gov/org/gisit/index.html, South Florida Water Management District (SFWMD), West Palm Beach, FL.

Shepard, M.S., and Yerry, M.A., 1983, Approaching the automatic generation of finite element meshes, Computers in Mechanical Engineering, 1, pp. 49-56.

Shrestha, R.L., Carter, W.E., Sartori, M., Luzum, B.J. and Slatton, K.C., 2005, Airborne laser swath mapping: quantifying changes in sandy beaches over time scales of weeks to years, ISPRS Journal of Photogrammetry and Remote Sensing, 59, pp. 222-232.

Slater, J., 2005, personal communication, National Geospatial Agency (NGA).

Slatton, K.C., Cheung, S., and Jhee, H., 2005, Reduced-complexity fusion of multiscale topography and bathymetry data over the Florida coast, IEEE Geoscience and Remote Sensing Letters, 2, pp. 389-393.

Slatton, K.C., Crawford, M.M., and Evans, B.L., 2001, Fusing interferometric radar and laser altimeter data to estimate surface topography and vegetation heights, IEEE Transactions on Geoscience and Remote Sensing, 39, pp. 2470-2482.

Sorenson, H.W., 1970, Least-squares estimation: from Gauss to Kalman, IEEE Spectrum, pp. 63-68.

Starek, M., Slatton, K.C., and Adams, P., 2005, High-resolution measurement of beach morphological response to hurricane-induced wave dynamics, Eos, Transactions, American Geophysical Union, Fall Meet. Suppl., Abstract G34A-06.

Stumpf, R. P., Holderied, K., and Sinclair, M., 2003, Determination of water depth with high-resolution satellite imagery over variable bottom types, Limnology and Oceanography, 48, pp. 547-556.

Treuhaft, R. N. and Siqueira, P., 2000, Vertical structure of vegetated land surfaces from interferometric and polarimetric radar, Radio Science, 35, pp. 141-177.

Turcotte, D. L., 1997, Fractals and Chaos in Geology and Geophysics, 2nd ed., (Cambridge University Press).

USACE, 2004, CORPSCON Coordinate Conversion Software, Available online at: crunch.tec.army.mil/software/corpscon/corpscon.html, US Army Corps of Engineers, Alexandria, VA.

USGS, 2000, Fact Sheet 040-00, US GeoData Digital Elevation Models, Available online at: erg.usgs.gov/isb/pubs/factsheets/fs04000.html, U.S. Geological Survey, Reston, VA.

USGS, 2004, Seamless Data Distribution System, Available online at: seamless.usgs.gov/, U.S. Geological Survey, Reston, VA.

Verghese, G. and Kailath, T., 1979, A further note on backward Markovian models, IEEE Transactions on Information Theory, IT-25, pp. 121-124.

Vemulakonda, S.R., Scheffner, N.W., Mark, D.J., and Brown, M.E., 1999, Water-surface elevation frequency estimates for the Louisiana coast, Coastal Engineering Research Center, CERC-99-1.

Wang, M., and Hjelmfelt, A., 1998, DEM based overland flow routing model, Journal of Hydrologic Engineering, 3, January, pp. 1-8.

Wegmuller, U. and Werner, C., 1997, Retrieval of vegetation parameters with SAR interferometry, IEEE Transactions on Geoscience and Remote Sensing, 35, pp. 18-24.

Wilson, M. D., 2004, Evaluating the effect of data and data uncertainty on predictions of flood inundation, PhD dissertation.

Willsky, A.S., 2002, Multiresolution Markov models for signal and image processing, Proceedings of the IEEE, 90, no. 8, pp. 1396-1458, August.

Wolfgang, S., 2008, personal communication, Institute of Geographic Sciences, Freie Universitat Berlin, Germany.

Woods, J., 1984, Two-Dimensional Digital Signal Processing I, Chapter 5, (Springer).

Zhang, K., Chen, S.C., Whitman, D., Shyu, M., Yan, J.
and Zhang, C., 2003, A progressive morphological filter for removing nonground measurements from airborne LiDAR data, IEEE Transactions on Geoscience and Remote Sensing, 41, pp. 872-882.

Zhang, X., Long, W., Xie, H., Zhu, J. and Wang, J., 2007, Numerical simulation of flood inundation processes by 2D shallow water equation, research article, Frontiers of Architecture and Civil Engineering in China, 1, pp. 107-113.

BIOGRAPHICAL SKETCH

Sweungwon Cheung was born in Seoul, South Korea. He received his B.S. and M.S. degrees in Information Engineering from Korea University, South Korea, in 1994 and 1996, respectively. After obtaining a second M.S. degree in Electrical and Computer Engineering from the University of Colorado in 2000, he began pursuing a Ph.D. degree in Electrical and Computer Engineering at the University of Florida. Since 2003 he has worked under the supervision of Dr. Kenneth C. Slatton. His research interests include remote sensing, image fusion and reduction, and stochastic estimation.
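The equivalence of the two expressions for Q(s) in (A20) of Appendix A can be checked numerically. The sketch below is illustrative only: the matrices A, B, and the parent-scale prior covariance P_parent are randomly generated stand-ins for A(s), B(s), and P_x(sγ̄), and none of the values come from the dissertation's data. It verifies that the Fieguth et al. (1995) form and the reduced form [I − F(s)A(s)]P_x(sγ̄) agree.

```python
import numpy as np

# Hypothetical model matrices for one node s of the multiscale tree.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # stand-in for A(s)
B = rng.standard_normal((n, n))                   # stand-in for B(s)
M = rng.standard_normal((n, n))
P_parent = M @ M.T + n * np.eye(n)                # stand-in for P_x(s gamma-bar), SPD

# Prior covariance at the current scale: P_x(s) = A P_parent A^T + B B^T
P = A @ P_parent @ A.T + B @ B.T
P_inv = np.linalg.inv(P)

# Backward-model gain, equation (A11): F(s) = P_x(s gamma-bar) A^T(s) P_x^{-1}(s)
F = P_parent @ A.T @ P_inv

# Fieguth et al. (1995) form, equation (A19)
Q_fieguth = P_parent @ (np.eye(n) - A.T @ P_inv @ A @ P_parent)

# Reduced form, equation (A18): reuses the already-computed F(s)
Q_reduced = (np.eye(n) - F @ A) @ P_parent

# The two expressions are algebraically identical.
assert np.allclose(Q_fieguth, Q_reduced)
print("max |Q_fieguth - Q_reduced| =", np.abs(Q_fieguth - Q_reduced).max())
```

Because F(s) is needed anyway to propagate the estimate between scales, the reduced form skips the products A^T P_x^{-1} and the extra multiplication by P_x(sγ̄), which is the source of the operation-count savings claimed in the appendix.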