
Benthic Mapping of Coastal Waters Using Data Fusion of Hyperspectral Imagery and Airborne Laser Bathymetry



BENTHIC MAPPING OF COASTAL WATERS USING DATA FUSION OF HYPERSPECTRAL IMAGERY AND AIRBORNE LASER BATHYMETRY

By

MARK LEE

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA 2003


Copyright 2003 by Mark Lee


ACKNOWLEDGMENTS

First, I wish to thank the members of my supervisory committee for their help throughout this effort. Dr. Grady Tuell, the committee chair, was invaluable in his assistance, knowledge, and enthusiasm for the work. His efforts went above and beyond the normal expectations, ensuring the completion of this research. Dr. William Carter added his knowledge and expertise to this effort, and was always willing to help answer my questions at a moment's notice. Dr. Bon Dewitt was incredibly supportive and helpful, and his advice and understanding throughout my journey were greatly appreciated. Dr. Ramesh Shrestha provided significant financial support throughout my graduate education, in addition to his insight and knowledge, for which I am grateful. Dr. Jasmeet Judge showed great interest and enthusiasm for this research, and her knowledge was very helpful. There are many others I would like to thank for their contributions as well. From the Remote Sensing Division of the National Geodetic Survey, Capt. Jon Bailey provided much financial support for this work, and the efforts of his associates Mike Aslaksen, Chris Parrish, and Jason Woolard were invaluable. From the JALBTCX group at the U.S. Army Corps of Engineers, Jeff Lillycrop and his associates Mary Whittington and Jennifer Wozencraft contributed significant financial and technical assistance toward this research. Gary Guenther, with Optech International, provided vital technical expertise in this work.


I would also like to thank Joong Yong Park and Paul Demkowicz, two friends who, having been there before, provided much support and advice. Thanks also go to Levent Genc and Balaji Ramachandran, fellow Ph.D. candidates also nearing the end of their academic endeavors, whose friendship and understanding were a great help. I also thank my other friends, and my family, for their support and prayers. Above all, I thank God for helping me throughout this process. He gave me the strength and ability to complete this work, and has given me an education during these years that is worth more than any university degree could ever be.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Data Fusion
        Applications of Data Fusion
        Levels of Data Fusion
        Evidence Combination Methods
    Organization of the Dissertation

2 BENTHIC MAPPING OF COASTAL WATERS BY REMOTE SENSING
    Hyperspectral Imagery
        Radiance and Reflectance
        Spectral Matching
        Pure Pixel Matching Algorithms
        Mixed Pixel Matching Algorithms
    Airborne Laser Bathymetry
        Theory
        Limitations
    Benthic Mapping Methods
        Neural Networks
        Band Ratios
        Radiative Transfer Model
        Other Techniques
        Modified Radiative Transfer Model

3 EXPERIMENT
    Datasets
    Preprocessing


        AVIRIS
        SHOALS
    Water Attenuation Removal
        AVIRIS
        SHOALS
    Classification of Datasets
    Fusion of Classified Images
    Statistical Analysis
    Summary

4 DISCUSSION AND RECOMMENDATION FOR FURTHER WORK

APPENDIX

A SPECIFICATIONS OF THE DATA ACQUISITION SYSTEMS
    AVIRIS
    ASD FieldSpec Portable Spectrometer
    SHOALS

B ASSESSING THE ACCURACY OF REMOTELY SENSED DATA
    Training and Test Pixels
    Sample Size
    Sample Acquisition
    Evaluation

C DEMPSTER-SHAFER EVIDENTIAL REASONING
    Background
    Rules of Combination

D VARIABLE DEFINITIONS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

3-1. Linear regressions between overlapping flight data and associated r-squared values
3-2. Overall accuracies for the three classifications
3-3. Error matrix for AVIRIS classification accuracies
3-4. Error matrix for SHOALS classification accuracies
3-5. Error matrix for AVIRIS-plus-depths classification accuracies
3-6. AVIRIS class-to-information table
3-7. SHOALS class-to-information table
3-8. Evidence combination matrix for AVIRIS and SHOALS classifications
3-9. Accuracies for Dempster-Shafer classification image
3-10. Error matrix for Dempster-Shafer classification accuracies
3-11. Kappa coefficients and variances for each classification
3-12. Test statistics and confidence levels for each classification comparison
A-1. AVIRIS calibration information
A-2. SHOALS performance values
A-3. AVIRIS spectral calibration values for channels 1-50
B-1. Example error matrix for four classes
B-2. Producer and user accuracies for example error matrix
C-1. Dempster's probability mass combination rules


LIST OF FIGURES

1-1. Using redundant and complementary data to discriminate objects
2-1. Comparison of the spectral sensitivities of Landsat TM bands 2 and 3, and AVIRIS bands 17-32
2-2. Contributions to at-sensor radiance
2-3. AVIRIS radiance spectra for grass
2-4. AVIRIS reflectance spectra for grass
2-5. Illustration of linear unmixing
2-6. Interaction of ALB laser pulse with water body
2-7. Laser pulse return waveform (logarithmic) from SHOALS system
2-8. A neural network
3-1. Georegistered AVIRIS image of Kaneohe Bay, Hawaii
3-2. Plot of ground points with their reflectance values and corresponding AVIRIS radiance values for band 5 (413 nm)
3-3. AVIRIS reflectance image (band 15, 510 nm) of the research area
3-4. AVIRIS image (band 15, 510 nm) corrected for surface waves using FFT method
3-5. SHOALS mean depth image
3-6. AVIRIS image (band 15, 510 nm) corrected for water attenuation
3-7. Spatial layout of SHOALS datasets collected over project area
3-8. Plot of overlapping APD pixels from Areas 26a and 26b
3-9. Plot of overlapping pixels from Areas 26 and 12


3-10. Plot of overlap pixels from APD and PMT receivers
3-11. APD regressed pseudoreflectance image of research area
3-12. Depth image of research area
3-13. Ground truth image for our research area
3-14. Class color legend for ground truth image
3-15. Regions of Interest (ROIs) draped over ground truth image
3-16. Regions of Interest (ROIs) draped over AVIRIS bottom reflectance image, band 15
3-17. Plot of ±2 standard deviation spread of pseudoreflectance values for each class
3-18. Classification of AVIRIS bottom reflectance dataset
3-19. Class color legend for classification images
3-20. Classification of SHOALS 2-band (pseudoreflectance and depth) image
3-21. Classification of AVIRIS bottom reflectance-plus-depth dataset
3-22. Difference image between AVIRIS classification and ground truth image
3-23. Difference image between SHOALS classification and ground truth image
3-24. Difference image between AVIRIS-plus-depths classification and ground truth image
3-25. Result of Dempster-Shafer fusion of AVIRIS and SHOALS classifications
3-26. Difference image between D-S classification and ground truth image


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

BENTHIC MAPPING OF COASTAL WATERS USING DATA FUSION OF HYPERSPECTRAL IMAGERY AND AIRBORNE LASER BATHYMETRY

By Mark Lee

May 2003

Chair: Grady Tuell
Major Department: Civil and Coastal Engineering

One goal of mapping, the accurate classification of the object space, can be achieved by visual interpretation or analysis of relevant data. Most mapping of earth features relies on the latter method, and is realized using remote sensing. Various airborne sensors are used today for generating topographic and hydrographic mapping products. In this research, we combined data from airborne hyperspectral imagery and airborne laser bathymetry, using data fusion techniques, to map the benthic environment of coastal waters. Airborne laser bathymetry (ALB) uses laser pulse return waveforms to estimate water depth. These signals are attenuated by the water depth and clarity. A portion of the waveform signal, the peak bottom return, is a function of the bottom reflectance, and therefore, the bottom type. The purpose of this research is to exploit the peak bottom return signal of ALB to obtain benthic information, and then use the information, in combination with spectral imaging information, to aid in benthic classification.


We used AVIRIS hyperspectral data and SHOALS ALB data, obtained over Kaneohe Bay, Hawaii, for this research. After preprocessing the datasets, the water attenuation effects were removed from the AVIRIS data using a radiative transfer model. A variant of this model, developed for this research, was used on the ALB dataset to correct for water attenuation, resulting in a parameter we defined as pseudoreflectance. We classified the resulting datasets using the Maximum Likelihood supervised classification technique. Accuracy assessments of the classifications showed overall accuracies of 80.2% and 66.9% for the AVIRIS classification and the SHOALS classification, respectively. The two classifications were merged using the Dempster-Shafer (D-S) decision-level data fusion method, using a priori weights from the Maximum Likelihood classifications. The resulting D-S classification had an overall accuracy of 87.2%. For comparison, we classified the AVIRIS data (corrected for water attenuation) combined with a depth channel, producing an overall accuracy of 85.3%. Kappa coefficient analysis of all four classifications resulted in 82% confidence that the Kappa coefficients of the D-S classification and the AVIRIS-plus-depth classification are different. Kappa confidence levels greater than 99% were calculated for all the other pairs of classifications. The results indicate that ALB pseudoreflectance, computed from the peak bottom return waveform signals, contains information that aids in the benthic mapping process, and can be used in a sensor fusion algorithm with hyperspectral data to achieve greater accuracy in bottom classification. Further research into the computation of bottom reflectance from the ALB bottom return waveform may yield additional improvements.


CHAPTER 1
INTRODUCTION

The goal of mapping is to create an accurate iconic representation of the object space. A map contains information, which can be obtained directly by visual observation, or by the analysis of relevant data. Within our focus of the mapping of earth features, much of the mapping process is performed using the latter method, and is often realized by using remote sensing. Numerous data acquisition devices and analysis methods have emerged within remote sensing. These include passive devices such as airborne multispectral and hyperspectral sensors, which detect reflected visible and infrared electromagnetic energy from the earth's surface. For example, in the emerging discipline of imaging spectroscopy, information is obtained from the sensor data by first correcting for atmospheric effects, converting to reflectance, then applying some type of matching algorithm, such as multi-channel clustering with multispectral data or the Spectral Angle Mapper (SAM) with hyperspectral data. Also available are active data acquisition technologies, such as Airborne Laser Swath Mapping (ALSM) and Interferometric Synthetic Aperture Radar (IFSAR), which are used to measure digital elevation data for the generation of accurate topographic products. More recent work into the combination, or fusion, of data from multiple sensors has resulted in improved mapping accuracy over using data from a single sensor (Madhok and Landgrebe 1999, Park 2002). Remote sensing is also used for benthic mapping. A benthic map is an iconic representation of the spatial distribution of land cover types located beneath a water body.


These maps sometimes include associated bathymetry, or depths, of the water body. Aerial imagery acquired over water must be corrected for the same atmospheric effects as spectral data over land; however, benthic mapping presents an additional challenge to remote sensing due to the attenuation of light by the water column. Several approaches have been developed to remove these water column effects using only passive data (Philpot 1989), and the fusion of passive and laser data (Borstad and Vosburg 1993, Estep et al. 1994, Lyzenga 1985). Most of this research has been centered on developing models to obtain improved bathymetry. However, these models can also be applied to benthic classification, which is the focus of our research. Typical benthic mapping from remotely sensed data exploits the spectral energy detected by passive sensors. A sensed spectrum is matched against a library of known spectra to help determine the identity of the reflecting surface. However, active sensors, such as airborne topographic laser systems, also detect a reflected spectral power. Normally, the time difference between the transmitted and received laser power is used to calculate target range. Yet, some researchers have used the variation in this detected laser power, or intensity, to successfully map topographic features (Park 2002). Perhaps bathymetric laser intensity could be exploited to map benthic features as well. In our research, we investigated a new method of benthic mapping that makes use of the bathymetric laser system's return signal strength to aid in benthic classification. It was necessary to normalize these intensity values, so we used the term pseudoreflectance to represent the normalized result. Specifically, we adopted a data fusion approach, combining passive hyperspectral data with laser intensity and depth data, to improve the benthic mapping accuracy over that obtained by either system separately.


Our research focused on classifying the benthic environment of coastal waters, which we defined as the water bodies along the seashore with depths up to 40 meters. These waters are an important natural resource, containing plant and animal species vital to the overall ecology of our oceans, as well as providing commercial and recreational uses. Researchers recognize the importance of our coastal waters (Bierwirth et al. 1993, Stumpf 2002), and the need to monitor the benthic characteristics of these waters, which can reflect changes due to natural and artificial influences. Our research involving the benthic mapping of coastal waters was realized through data fusion. Data fusion is the process of combining data from multiple sources and obtaining a better result than what could be obtained from any of the sources independently. Data fusion research has been applied to military, commercial and industrial uses for decades (Abidi and Gonzalez 1992). The remainder of this chapter provides an overview of data fusion, followed by an outline of the dissertation.

Data Fusion

Data fusion can be defined as the process of combining data from multiple sources in order to obtain better information about an environment than could be obtained from any of the sources independently. This process is also referred to by other terms, depending on the area of research, which include sensor fusion, correlation, tracking, estimation, and data mining (Hall and Llinas 2001). Regardless of the terminology, the common motivation is that if data from one sensor can improve our ability to interpret the environment, data from multiple sensors should improve it even more (Abidi and Gonzalez 1992). In some areas of research, data fusion is considered a component of a larger process, known as multisensor integration. Within this concept, multisensor integration


is defined as the use of information from multiple sensors to help a system perform a task, while fusion is considered a stage in the integration process where the combining of data occurs (Abidi and Gonzalez 1992). Steinberg et al. (1999) define data fusion in terms of state estimation, where it is the process of combining data or information to estimate or predict entity states. This definition lends itself well to Kalman filtering. Others, within the machine intelligence community, view data fusion as a method of giving intelligence to systems without complete human interaction (Abidi and Gonzalez 1992). The concept of data fusion is not new, and can be observed in many areas of nature. For example, dolphins are equipped with sonar and vision, both of which are used for locating potential prey, and pit vipers use a combination of vision and infrared sensing to determine the angle at which to enact a strike (Mitiche and Aggarwal 1986). Humans use data fusion in everyday life, combining visual, auditory, and tactile stimuli, as well as other senses, to make some kind of inference about their environment. Each of these examples provides insight into what can be achieved through the fusion of intelligent systems (Abidi and Gonzalez 1992). Many of the potential advantages of data fusion can be categorized as qualitative or quantitative benefits (Hall 1992). Some qualitative benefits include robust system performance and reliability, as well as reduced ambiguity, while quantitative benefits include increased accuracy of the reported information, less time to receive the information, and less cost to acquire the information (Abidi and Gonzalez 1992, Hall 1992). These benefits are realized through the concepts of redundancy and complementarity. The use of redundant sensors is inherently beneficial to system


performance and reliability, while redundant data have been shown to improve the accuracy of the information, as well as lower the time and cost of acquisition (Waltz 1986). Complementary data increase the dimensionality of knowledge about the sensed environment, which can reduce the ambiguity and increase the accuracy of the information related to features of interest (Abidi and Gonzalez 1992, Hall 1992).

Figure 1-1. Using redundant and complementary data to discriminate objects. Adapted from Abidi and Gonzalez (1992).

The concepts of redundancy and complementarity can be better described with the following example, adapted from Abidi and Gonzalez (1992) and illustrated in Figure 1-1. Figure 1-1(a) shows four objects, differing in reflectance and height. Three sensors are used for detecting these objects, with sensors 1 and 2 capable of detecting height, and sensor 3 capable of detecting reflectance. Figures 1-1(b) and 1-1(c) show sensor response curves for sensors 1 and 2, respectively, and their capabilities to discriminate between short objects (objects A and C) and tall objects (objects B and D), with a


measure of object height along the x-axis. The black areas under the curve intersections indicate situations where height determination is uncertain. The sensor response curve for the combination of the redundant sensors (1 and 2) is represented in Figure 1-1(d). Note the increase in certainty (shown by the steeper and taller peaks) and the decrease in uncertainty (shown by the smaller black area under the curves). The fusion of redundant sensors 1 and 2 improves the ability to discern between short and tall objects beyond what could be discerned by either sensor independently. Figure 1-1(e) shows the addition of a complementary sensor (sensor 3) to the fusion process. The resulting fusion of sensors 1 and 2 (height sensors) is fused with sensor 3 (reflectance sensor), providing discrimination among all four objects. The black areas again show areas of uncertainty in the discrimination of the objects. The complementary information provided by sensor 3 gives the added dimension of knowledge (reflectance) necessary to discern among all the objects.

Applications of Data Fusion

Many of the applications for data fusion are found in the military. In a battlefield environment, situation awareness is of vital importance. Data from only one source may provide information that is ambiguous, uncertain and perhaps inaccurate. However, fusion can combine relevant information from several sources to create consistent, accurate, comprehensive and global situation awareness. The application of this concept improves performance in many military instances, including ocean surveillance, air-to-air defense, battlefield intelligence, surveillance and target acquisition, and strategic warning and defense (Hall and Llinas 2001). Outside of the military uses, many other applications for data fusion have been developed. In the area of industrial robotics, three-dimensional imaging and tactile


sensors are combined for robotic object manipulation, enabling a robot to handle materials that are randomly dispersed in a container (Abidi and Gonzalez 1992). This concept has been applied to develop a robot to grasp randomly oriented connectors and place them into a printed circuit board (Mochizuki et al. 1985). Researchers in the medical imaging field use data fusion concepts to combine magnetic resonance (MR) and computer tomography (CT) imagery into composites that are more useful during surgery than the individual components (Hill et al. 1994). Data fusion is also used for complex mechanical equipment monitoring. Several types of sensors within a helicopter transmission (e.g., temperature sensors, oil debris monitors) provide data that, when combined, can identify and predict areas of failure, which reduces maintenance costs and improves safety (Hall and Llinas 2001). Data fusion methods have also been successfully applied to the mapping of earth features using remote sensing. In the area of land cover classification, Park (2002) applied two methods of combining airborne laser intensity data with aerial photography, each producing improved results over classifying the photography independently. Lei et al. (2001) merged multitemporal Landsat TM and SPOT images for better land cover change detection. There have also been advancements in mapping urban areas, where the fusion of hyperspectral imagery and digital elevation models has enhanced the delineation of building rooftops (Madhok and Landgrebe 1999). Additional urban mapping improvements were obtained by merging panchromatic and multispectral imagery (Fanelli et al. 2001). Data fusion methods have also been applied to mapping water depths, as Lyzenga (1985) demonstrated by combining airborne laser bathymetry with hyperspectral imagery.


Levels of Data Fusion

In the process of fusing multiple datasets together, several requirements must be met. First, the datasets must be in registration. Registration refers to the amount of spatial or temporal alignment among the multiple datasets, highlighting the importance that the data about a particular object, from each sensor, refer to the same object in the environment (Abidi and Gonzalez 1992, Hall and Llinas 2001). Spatial registration of imagery from multiple sensors is usually determined using a coordinate transformation, and then implemented by resampling the pixels in the images to a common size, location and orientation by application of the coordinate transformation parameters. Temporal registration is usually handled by collecting each dataset at exactly, or nearly, the same point in time. For datasets not in temporal registration, a correction may be used, if applicable, to bring the datasets into a common time frame, or an assumption made that the object space did not change between dataset acquisitions. Another requirement in the data fusion process is that the datasets are modeled in a common fashion. A model is a representation of the uncertainty or error in each dataset. Usually it is assumed that the error in the data from each sensor is best represented using a Gaussian model (Abidi and Gonzalez 1992). With the above requirements met, each of the datasets must be brought to a common level of representation, or data abstraction, before the fusion can proceed. The research community recognizes different levels of data fusion, which coincide with the amount of data abstraction present at the time of fusion. One accepted taxonomy used for data fusion levels consists of signal-, pixel-, feature-, and symbol-level data fusion, listed by increasing data abstraction. Sensors producing data of similar semantic content could possibly be fused at any of the levels, while sensors with dissimilar modalities may


produce datasets with different semantic content, which would require a higher level of data abstraction before fusion could take place. The following descriptions of data fusion levels follow those provided by Abidi and Gonzalez (1992). Signal-level fusion is the combination of similar signals from one or more sensors in order to obtain a resultant signal of higher quality. This level of fusion requires the highest level of registration, both spatial and temporal. Richardson and Marsh (1988) have shown that redundant data almost always improve signal-level fusion, when based on optimal estimation. When used in real-time applications, signal-level fusion is usually considered an additional step in signal processing, and lends itself well to use in a Kalman filter. Pixel-level (or data-level) fusion is used to combine multiple images into a composite image containing pixels with an improved quality of information or an increased amount of information. The abstraction level of the data in each pixel is low, with each pixel containing either raw sensor data or the result of some type of image enhancement. In the fused result, each pixel may contain data from some mathematical combination of the component datasets, or contain additional dimensions (bands) corresponding to the component datasets (e.g., merging a radiance image with a height image to create a two-band radiance-height image). A high level of spatial registration is necessary in pixel-level fusion, ensuring that corresponding pixels refer to the same area of the object space.
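To make the pixel-level case concrete, the following sketch stacks a co-registered radiance image and a height image into a single two-band radiance-height image, as just described. This is a minimal illustration with random stand-in arrays, not any particular system's processing chain.

```python
import numpy as np

# Hypothetical co-registered single-band images: radiance from a passive
# sensor and height from a laser system, resampled to a common 100 x 100 grid.
radiance = np.random.rand(100, 100).astype(np.float32)  # stand-in for real data
height = np.random.rand(100, 100).astype(np.float32)    # stand-in for real data

# Pixel-level fusion by band stacking: each pixel now carries a
# (radiance, height) vector, i.e., a two-band radiance-height image.
fused = np.stack([radiance, height], axis=-1)            # shape (100, 100, 2)
assert fused.shape == (100, 100, 2)
```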


Features represent a higher level of data abstraction, some semantic meaning of interest, and are derived from the processing of data that are from a lower level of abstraction. Feature-level fusion is the combination of feature information that has been independently extracted from the datasets of multiple sensors. The types of features extracted from imagery include edges and areas having a constant data value (e.g., reflectance, height). Several methods have been developed for combining feature data, including the tie statistic (Flachs et al. 1990) and model-based approaches (Hall and Llinas 2001). The resulting fused information is used to show an increase in the likelihood of the existence of an extracted feature (based on the redundant reporting of similar features among the multiple datasets), or to create composite features composed of the primary features in the component datasets. The needed level of spatial registration is not as high as that for pixel-level fusion. It is assumed that high spatial registration was used in the extraction of features within each component dataset. Symbol-level fusion (also referred to as decision-level fusion) is the combination of information that is at the highest level of abstraction. Data from multiple sensors, which have been independently classified using feature matching, are fused into a composite dataset. The classified datasets generated from each sensor contain associated measures of accuracy, which are used as input into some logical or statistical inference in the fusion process. This type of fusion requires the lowest level of spatial registration. High spatial registration is usually in place during the generation of the symbols within each dataset. Due to its high level of data abstraction, symbol-level fusion may be the only option available for combining information obtained from highly dissimilar sensors. The various taxonomies for fusion, which have been proposed to date, do not fully capture the complexity of the issues involved when applying fusion techniques to mapping problems. It is possible, for example, to use high-level fusion processes, but to apply them to a basic data structure that is still at the pixel level. This approach may


adopt sophisticated algorithms (e.g., rule-based decision algorithms) but may not consider the neighboring pixels in the algorithm. We follow this strategy in our approach. Specifically, we apply a decision-level fusion algorithm to combine data from a hyperspectral instrument and an airborne laser bathymetric system, but conduct our resulting mapping at the pixel level as defined by the raster of the hyperspectral data. We use the Dempster-Shafer algorithm as a fusion technique. In the next section, we discuss the relationship of the Dempster-Shafer approach to other decision-level techniques.

Evidence Combination Methods

The information arising from a data-fusion process should be better than what could be obtained from any of the sensors independently. In the previous section, we stated that measures of accuracy are associated with the information used in decision-level fusion. Because of the statistical context of this term, many researchers have adopted the term evidence to describe this information, and we adopt it as well. Below we briefly describe some of the accepted methods of evidence combination, including rule-based, Bayesian estimation, and Dempster-Shafer. Rule-based decision-level fusion is a heuristic method of combining evidence derived from multiple sensors. This method uses production rules that are formed from the analysis of the information from each dataset. These rules are normally in the form of a logical implication, as in "if A then B." Each implication could then lead to additional levels of implications before reaching a decision. Also, additional rules could be added that are not derived from the sensors, creating an even higher-level decision system, or expert system (Abidi and Gonzalez 1992).
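As a minimal illustration of such production rules, the sketch below chains two "if A then B" implications over hypothetical height and reflectance evidence. The class labels and thresholds are invented for this example and do not come from any particular system.

```python
# A toy rule base of "if A then B" implications, combining evidence from
# two hypothetical sensors. Labels and thresholds are illustrative only.
def classify(height_m: float, reflectance: float) -> str:
    # Rule 1: if the laser reports a tall return, infer a raised structure.
    if height_m > 2.0:
        # Rule 2 (chained implication): a tall, bright object is a rooftop.
        return "rooftop" if reflectance > 0.4 else "tree canopy"
    # Rule 3: short and bright implies bare ground; otherwise low vegetation.
    return "bare ground" if reflectance > 0.4 else "grass"

print(classify(height_m=3.1, reflectance=0.55))  # -> rooftop
```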


Bayesian estimation is named after Thomas Bayes, an English clergyman who lived in the 18th century and helped develop what is known today as Bayes' rule. This method of evidence combination works by updating an a priori probability of a hypothesis with evidence provided from observations, resulting in an a posteriori final probability determination (Hall 1992). It assumes an exhaustive set of hypotheses (all possible events), with all the hypotheses mutually exclusive. Dempster-Shafer evidential reasoning was introduced by Glenn Shafer in 1976 in a book entitled A Mathematical Theory of Evidence, in which he reiterated some of the work in statistical inference performed by Arthur Dempster. This method is a generalization of Bayesian estimation, allowing for a general level of uncertainty (Hall 1992). It is modeled after Dempster's analysis of the human decision making process, in which the set of hypotheses does not have to be exhaustive, nor mutually exclusive. Instead, measures of belief are assigned to propositions, which are combinations of hypotheses that may overlap or even be in conflict. As with Bayesian estimation, this method will update a priori information with evidence provided by observations. However, instead of producing a final probability determination, the result is an a posteriori evidential interval, with lower and upper bound values representing a measure of belief and a plausibility, respectively. Additional detail on the Dempster-Shafer method is given in Appendix C.
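The sketch below applies Dempster's rule of combination (detailed in Appendix C) to two hypothetical bodies of evidence over a two-class frame of discernment; all mass values are invented. Mass assigned to the full frame expresses the general level of uncertainty mentioned above, and the belief and plausibility computed at the end form the lower and upper bounds of the evidential interval.

```python
from itertools import product

# Frame of discernment with two benthic classes. Mass may also be assigned
# to the full frame (THETA), expressing uncertainty. Values are illustrative.
SAND, CORAL = frozenset({"sand"}), frozenset({"coral"})
THETA = frozenset({"sand", "coral"})

m1 = {SAND: 0.6, CORAL: 0.1, THETA: 0.3}   # evidence from source 1
m2 = {SAND: 0.5, CORAL: 0.3, THETA: 0.2}   # evidence from source 2

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to the empty set
    # Normalize by (1 - K) to redistribute the conflicting mass K.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m12 = dempster(m1, m2)
belief_sand = m12[SAND]                                   # lower bound
plaus_sand = sum(v for k, v in m12.items() if k & SAND)   # upper bound
print(round(belief_sand, 3), round(plaus_sand, 3))        # 0.74, 0.818
```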


Organization of the Dissertation

The goal of this research is to examine the potential of combining pseudoreflectance, derived from the returned power of a bathymetric laser, with other data for benthic mapping. We begin by examining some of the current research methods of benthic mapping in Chapter 2. This includes an overview of hyperspectral imaging, as well as the discipline of imaging spectroscopy, which best exploits the high dimensionality of hyperspectral data. We also provide background on airborne laser bathymetry, with emphasis on the return waveform and its relationship to intensity in topographic systems. We then explore the common radiative transfer methods used for benthic mapping, and explain how we applied a variant of one of these methods to laser intensity data to compute estimates of pseudoreflectance. In Chapter 3, we describe our experiment to test the use of laser intensity data for benthic mapping. This experiment is realized using data fusion, combining hyperspectral and ALB data to improve the description of the object space. We discuss the datasets used in the experiment, and the preprocessing steps performed on the datasets, such as georegistration and surface wave removal. We then explain the processing involved in removing the effects of water attenuation from both the hyperspectral and the ALB datasets. Next, we discuss the supervised classification of three datasets, including the hyperspectral, ALB, and hyperspectral-plus-depth datasets. These classifications, which result in three separate benthic maps of our research area, are assessed for accuracy. We then describe the data fusion of the hyperspectral and ALB classifications, resulting in a fourth benthic map for which we also assess the accuracy. Lastly, we discuss the statistical significance of the results from the four accuracy assessments. In Chapter 4 we discuss the research and the results, and recommend future research.


CHAPTER 2
BENTHIC MAPPING OF COASTAL WATERS BY REMOTE SENSING

The use of remote sensing for benthic mapping has been researched for more than 40 years. Polcyn et al. (1970) developed a depth extraction algorithm for passive data, and implied that a pair of wavelength bands could be found whose ratio would not change for different benthic types within an area. Lyzenga (1978) helped develop methods to determine accurate depths from multispectral data by adding a deep-water radiance term to his model. Others have applied sensor fusion techniques using passive sensors and bathymetric laser systems to improve the accuracy of estimated depths (Borstad and Vosburg 1993, Lyzenga 1985). Most of the research has focused on obtaining accurate depths; however, the methods developed can also be applied to determining benthic types. Our research is centered on the fusion of data from two types of sensors, a hyperspectral system and a bathymetric laser system. In the following pages, we discuss both of these sensors and their associated data processing methods. We then examine some of the current methods of analyzing data from these sensors to obtain benthic information, and describe a new method to obtain additional information from the laser bathymeter.

Hyperspectral Imagery

Hyperspectral sensors provide imagery with high spectral dimensionality, narrow spectral channel sensitivity, and contiguous band channel acquisition. Figure 2-1 demonstrates the latter two qualities, showing a comparison between the Landsat TM


multispectral sensor and the AVIRIS hyperspectral sensor. Landsat TM has 6 bands which sense electromagnetic energy from 450 nm to 2350 nm, and 1 band that senses thermal infrared energy. AVIRIS has 224 bands which sense electromagnetic energy from 380 nm to 2400 nm.

Figure 2-1. Comparison of the spectral sensitivities of Landsat TM bands 2 and 3, and AVIRIS bands 17-32. Braces indicate the extent of spectral sensitivities per band.

The high spectral dimensionality of a hyperspectral sensor allows for each pixel to be evaluated as an n-dimensional vector, where n is the number of bands. The acquisition and analysis of these pixel vectors, or spectra, from hyperspectral imagery is called imaging spectroscopy. Object space classification within imaging spectroscopy is performed by matching hyperspectral spectra against a library of known object space spectra (Tuell 2002a). This differs from the method typically used for multispectral data, which classifies pixels by examining the clustering of their values from two or more bands. The type of spectra used for classification is usually reflectance spectra. However, hyperspectral sensors measure radiance spectra. Since reflectance is an intrinsic property of the object space, and radiance is not, it is preferable to convert spectra from at-sensor radiance to object-space reflectance, then match against known reflectance spectra. Also, spectral matching


assumes that each pixel is pure, containing the same type of material, with the same reflectance, throughout the pixel area. Obviously this is not always true. Methods such as Spectral Mixture Analysis (SMA) can be applied to these non-pure, or mixed, pixels to estimate their composition. In the following sections we explain radiance and reflectance, and the methods used for obtaining reflectance, and then address methods of spectral matching for pure pixels and mixed pixels.

Radiance and Reflectance

Radiance is the amount of spectral flux per unit area per unit solid angle, and is the measurement provided by a hyperspectral sensor. However, radiance is not an intrinsic property of the object space. Its value is dependent upon many other factors, as shown in Equation 2-1 (Tuell 2002a):

L_img(λ) = (ρ(λ)/π) · t(λ)^(1/cos φ) · [E_sun(λ) · t(λ)^(1/cos θ) · cos θ + E_diff(λ) + E_adj(λ)] + L_path(λ) + L_adj(λ)   (2-1)

The terms of Equation 2-1 are explained as follows. The (λ) notation implies that the values are wavelength dependent.

L_img(λ) = upwelling radiance measured at the sensor
L_path(λ) = upwelling path radiance
L_adj(λ) = upwelling radiance due to adjacent objects
E_sun(λ) = downwelling solar irradiance
t(λ) = atmospheric transmittance
E_diff(λ) = diffuse irradiance (not directly from sun)
E_adj(λ) = irradiance due to solar reflection off adjacent objects
ρ(λ) = object reflectance
θ = sun angle
φ = sensor angle
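As a numerical illustration of Equation 2-1, the sketch below evaluates the at-sensor radiance at a single wavelength from assumed values of each term. All input values are invented, and the slant-path transmittance exponents follow the form of the equation as reconstructed above.

```python
import math

def at_sensor_radiance(rho, t, E_sun, E_diff, E_adj, L_path, L_adj,
                       theta_deg, phi_deg):
    """Equation 2-1 for a single wavelength (irradiance terms in consistent
    units; angles in degrees). All values here are illustrative only."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    # Total downwelling irradiance reaching the target: attenuated direct
    # solar term plus diffuse and adjacency terms.
    E_total = (E_sun * t ** (1.0 / math.cos(theta)) * math.cos(theta)
               + E_diff + E_adj)
    # Reflected (rho/pi) and attenuated along the slant path to the sensor;
    # this intermediate quantity is L_tar of Equation 2-2 below.
    L_tar = (rho / math.pi) * t ** (1.0 / math.cos(phi)) * E_total
    return L_tar + L_path + L_adj

print(at_sensor_radiance(rho=0.3, t=0.8, E_sun=1000.0, E_diff=100.0,
                         E_adj=10.0, L_path=15.0, L_adj=2.0,
                         theta_deg=30.0, phi_deg=10.0))
```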


Figure 2-2. Contributions to at-sensor radiance.


Figure 2-3. AVIRIS radiance spectra for grass. Radiance units are W/cm²/sr.

Figure 2-2 illustrates Equation 2-1, with L_tar(λ) defined as the contribution to L_img(λ) from the ground target, as shown in Equation 2-2:

L_tar(λ) = (ρ(λ)/π) · t(λ)^(1/cos φ) · [E_sun(λ) · t(λ)^(1/cos θ) · cos θ + E_diff(λ) + E_adj(λ)]   (2-2)

Reflectance is an intrinsic property of the object space, and is defined as the ratio of the amount of spectral energy reflected by an object to the spectral energy incident upon the object (Lillesand and Kiefer 1994). An example of the differences between radiance and reflectance is shown in Figures 2-3 and 2-4, showing radiance and reflectance spectra for grass, respectively. The reflectance spectra were obtained by dividing radiance, measured by a hand-held spectrometer, by a reference radiance measured over a Spectralon panel (a near perfect reflector). The resultant spectra were then convolved to the AVIRIS wavelengths. The radiance spectra are somewhat misleading, showing peaks in the blue (band 10) and the green (band 20). However, the reflectance spectra show a peak


only in the green among the visible bands, and display a well-defined red edge in the near infrared (band 40), typical of healthy vegetation.

Figure 2-4. AVIRIS reflectance spectra for grass.

This example shows the need to invert Equation 2-1 for reflectance, denoted ρ(λ). One method for obtaining reflectance is by applying a radiative transfer code, such as the moderate resolution transmission code, or MODTRAN. MODTRAN makes use of the absorption and scattering properties of the atmosphere, as well as the solar and sensor angles, to predict radiance above the earth (Tuell 2002a). An estimate of reflectance is then obtained by comparing the predicted radiance to the measured radiance. Another well-established method for obtaining reflectance is the Empirical Line Method, or ELM. This procedure assumes a linear relationship between at-sensor radiance and object-space reflectance. The linear relationship is represented in Equation 2-3, which is a simplification of Equation 2-1, with slope m(λ) and y-intercept b(λ) values defined in Equations 2-4 and 2-5, respectively (Tuell 2002a). Provided known


reflectance data for several image-identifiable points, a least-squares solution for m(λ) and b(λ) can be obtained for each wavelength of the image spectra:

L_img(λ) = m(λ) · ρ(λ) + b(λ)   (2-3)

m(λ) = (1/π) · t(λ)^(1/cos φ) · [E_sun(λ) · t(λ)^(1/cos θ) · cos θ + E_diff(λ) + E_adj(λ)]   (2-4)

b(λ) = L_path(λ) + L_adj(λ)   (2-5)

The assumption of linearity for the ELM procedure means that the solved m(λ) and b(λ) values are constant across the entire image. For this to be true, it follows that t(λ), E_sun(λ), E_diff(λ), and E_adj(λ) are constant, and differences in θ and φ are small. It also assumes that L_path(λ) is constant and L_adj(λ) is negligible. Even with these assumptions, ELM is widely used as a simple, reliable technique for the conversion from radiance to reflectance. The primary difficulty in using the ELM procedure is that it requires in situ measurements of radiance and irradiance for several points in the object space, which are then used to calculate reflectance for each measured point. These measurements should be taken for bright, medium, and dark objects (Tuell 2002a) to avoid problems with the regression. The resulting regression parameters are then used in Equation 2-3 to solve for reflectance for the entire image.
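A minimal sketch of the ELM fit for a single band follows, assuming a few image-identifiable points with known reflectances and measured radiances (invented values). In practice the regression of Equation 2-3 is repeated for every band, and the fitted line is inverted to convert the whole band to reflectance.

```python
import numpy as np

# Known ground reflectances and at-sensor radiances for one band at several
# image-identifiable points (bright, medium, and dark). Values are invented.
reflectance = np.array([0.05, 0.20, 0.45, 0.60])
radiance = np.array([310.0, 920.0, 1950.0, 2560.0])

# Least-squares fit of L = m * rho + b (Equation 2-3) for this band.
m, b = np.polyfit(reflectance, radiance, deg=1)

# Invert the fitted line to convert measured radiances to reflectance.
band = np.array([500.0, 1500.0, 2400.0])
rho_hat = (band - b) / m
print(m, b, rho_hat)
```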


Spectral Matching

Upon deriving a reflectance image, a matching algorithm must be implemented to classify the reflectance spectra. However, the type of algorithm used depends on the nature of the pixels to classify. The pixels may be pure pixels, which contain spectra reflected from only one type of material, or mixed pixels, which contain spectra made up of a combination of reflectance spectra from several materials. In the next sections we discuss the matching algorithms used for both pure pixels and mixed pixels.

Pure Pixel Matching Algorithms

Matching algorithms for pure pixels focus on finding the level of similarity between a given pixel's spectra and a library of known spectra. Two examples of these algorithms are the Maximum Likelihood Classifier and the Spectral Angle Mapper. The Maximum Likelihood Classifier assigns to a given pixel the class with an associated spectra that is most probable to have produced the given pixel (Jensen 1996). This method is a supervised classifier in that it requires training sets, consisting of groups of pixels from a known class, in order to calculate a mean vector and covariance matrix for each class, and assumes the training data are normally distributed. For each class c, the value p_c is calculated for classifying pixel X using the following equation (Jensen 1996):

p_c = -0.5 · ln(det V_c) - 0.5 · (X - M_c)^T · V_c^(-1) · (X - M_c)   (2-6)

In this equation, M_c is the mean vector for class c (derived from the training set), and V_c is its covariance matrix. The class with corresponding p_c greater than that of the other classes is the class assigned to the pixel X.
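The discriminant of Equation 2-6 can be sketched directly. The two-band class statistics below are invented for illustration; in practice, M_c and V_c come from the training sets described above.

```python
import numpy as np

def ml_discriminant(X, M, V):
    """Equation 2-6: log-likelihood discriminant for pixel vector X,
    class mean M, and class covariance V (constant terms dropped)."""
    d = X - M
    return -0.5 * np.log(np.linalg.det(V)) - 0.5 * d @ np.linalg.inv(V) @ d

# Invented training statistics for two classes in a two-band image.
M1, V1 = np.array([0.2, 0.5]), np.array([[0.01, 0.0], [0.0, 0.02]])
M2, V2 = np.array([0.6, 0.3]), np.array([[0.02, 0.0], [0.0, 0.01]])

X = np.array([0.25, 0.45])                  # pixel vector to classify
scores = [ml_discriminant(X, M1, V1), ml_discriminant(X, M2, V2)]
print("assigned class:", int(np.argmax(scores)) + 1)  # -> class 1
```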


The Spectral Angle Mapper, or SAM, uses the vector dot product to calculate the angle between two vectors in n-space (Equation 2-7), where n is the number of bands in the imagery. The two vectors represent two spectra, one unknown to be classified, and one known, perhaps from a spectral library. The assumption is that the smaller the angle between the two vectors, the more similar the two spectra are. Values for SAM range from 0 to π/2 radians, with 0 indicating the greatest amount of similarity between spectra:

SAM = arccos[(x^T · y) / (||x|| · ||y||)]   (2-7)

Mixed Pixel Matching Algorithms

Mixed pixels contain a combination of reflectance spectra from multiple material sources. The abundance of each source contributing to the mixed pixel spectra can be determined using Spectral Mixture Analysis, or SMA. SMA assumes that a mixed pixel spectrum is a linear combination of individual material spectra, each weighted by its geometric abundance within the pixel (Sabol et al. 1992, Tuell 2002a). The process of determining the abundances of individual materials within mixed spectra is known as linear unmixing. This concept is illustrated in Figure 2-5, which shows a mixed pixel spectrum consisting of a combination of three material spectra. The equation used for modeling linear unmixing is shown in Equation 2-8, with Y representing a mixed pixel vector, x the abundance vector containing percentages of each material in Y, A the matrix containing spectral vectors for individual materials (i.e., the endmember matrix), and e the measurement error:

Y = A · x + e   (2-8)

A least-squares solution for x can be obtained assuming the number of bands exceeds the number of endmembers in A. It is assumed that the endmembers in A span the object space. Constraints can be included to ensure that the abundance vector components sum to unity, and that each component is positive, although implementing the former is more straightforward than the latter (Tuell 2002a).
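The sketch below illustrates both matchers: the spectral angle of Equation 2-7 and an unconstrained least-squares solution of Equation 2-8. The four-band endmember spectra are invented; a constrained solution would additionally force the abundances to sum to unity and remain positive.

```python
import numpy as np

def sam(x, y):
    """Equation 2-7: angle (radians) between two spectra; 0 = identical shape."""
    cos_angle = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# Invented 4-band endmember spectra for three materials (columns of A).
A = np.array([[0.10, 0.40, 0.30],
              [0.20, 0.45, 0.25],
              [0.60, 0.30, 0.20],
              [0.70, 0.20, 0.15]])

Y = 0.5 * A[:, 0] + 0.3 * A[:, 1] + 0.2 * A[:, 2]  # synthetic mixed pixel

# Equation 2-8 without constraints: least-squares abundance estimate.
x_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(np.round(x_hat, 3))   # recovers [0.5, 0.3, 0.2]
print(sam(Y, A[:, 0]))      # spectral angle to the first endmember
```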


Figure 2-5. Illustration of linear unmixing (adapted from Columbia University).

Another type of SMA method was developed by Harsanyi (1993) called the Orthogonal Subspace Projection, or OSP. This method is similar to linear unmixing; however, the endmember matrix is separated into two distinct matrices, one containing a material of interest, the other containing materials considered to be noise. It then uses a projection operator to remove the effects of the noise materials from the mixed pixel, resulting in a linear mixture model for only the endmember of interest (Ren and Chang 2000).

Airborne Laser Bathymetry

Airborne laser bathymetry, also referred to as ALB or LIDAR, is a method of measuring the depths of coastal and inland waters using water penetrating, scanning, pulsed laser technology emitted from an aircraft. ALB differs from the typical method of bathymetric measurement, shipboard sonar, which tends to be time consuming, inefficient in shallow waters, and somewhat dangerous to the ship and crew. ALB can


measure bathymetry and near-shore topography faster than sonar, obtain accurate results in shallow water, and provide a safer method of data collection (Guenther 2001). ALB technology was not developed to replace sonar or other depth measurement methods, but to augment the process of obtaining bathymetry. ALB is limited by water clarity and depth, as well as by its ineffectiveness in detecting small objects. Very high-density surveys can be conducted to enhance small object detection; however, these surveys are expensive and minimize the benefits of ALB (Guenther 2001). To ensure a navigation channel is free from small objects, multibeam and side-scan sonar are still the technologies of choice (ibid). ALB systems have been utilized for various applications. Dense bathymetric data provide information on the extent of the shoaling of navigation channels. Repeated surveys over a particular area can help determine sediment transport rates. ALB systems have also been used for emergency response, including the assessment of hurricane damage and ship grounding damage to coral reefs (Irish et al. 2000).

Theory

The concept of ALB is based on measuring time differences between different returns received from a single laser pulse. When an ALB system emits a laser pulse, part of the pulse energy reflects off the water surface (surface return), but some of the energy continues downward through the water. Some of the remaining downwelling energy is reflected upward from the water particles (volume backscatter), but some of it reaches the bottom and is reflected upward (bottom return). Figure 2-6 illustrates this process. An avalanche photodiode (APD) or photomultiplier tube (PMT) is typically used in the aircraft to detect the energy from the returned laser pulse (Guenther 2001). The detector uses a temporal filter, based on the estimated arrival time for the return pulse, with a large


enough time window to detect returns from the water surface as well as the water bottom. The returns are digitized, usually at 1-nanosecond time intervals, and can be plotted as return power over time, producing a return waveform (Figure 2-7). Figure 2-7 graphs the return power detected by the SHOALS ALB system from one emitted laser pulse, logarithmically corrected to enhance the bottom return. The difference in time between the rising edges of the surface return and the bottom return is used to determine the depth.

Figure 2-6. Interaction of ALB laser pulse with water body.
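A minimal sketch of that depth computation follows, assuming a nadir-pointing beam and invented rising-edge times. An operational system such as SHOALS also corrects for the off-nadir scan angle and refraction at the air-water interface.

```python
# Depth from the time difference between surface and bottom returns, for a
# nadir-pointing beam. Real systems additionally correct for the scanner's
# off-nadir angle and for refraction at the air-water interface.
C_VACUUM = 0.299792458      # speed of light, meters per nanosecond
N_WATER = 1.33              # approximate index of refraction of sea water

def depth_m(t_surface_ns: float, t_bottom_ns: float) -> float:
    dt = t_bottom_ns - t_surface_ns          # two-way travel time in water
    return 0.5 * dt * (C_VACUUM / N_WATER)   # halve for the round trip

# Invented rising-edge times read from a digitized waveform (1 ns samples).
print(round(depth_m(t_surface_ns=120.0, t_bottom_ns=208.0), 2))  # ~9.92 m
```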


26 water bottom, respectively. The green signal is necessary for water penetration, although a portion of it is reflected from the surface. However, this green surface reflection can be biased by the volume backscatter, making it unreliable for surface detection. Also, in shallow water, it is difficult to separate surface and bottom returns using the green signal (Guenther 2001, Guenther and Mesick 1988). Infrared light has very little penetration into water, providing a much cleaner return for surface detection in most circumstances. However, infrared surface returns can become weak in calm winds, and produce false returns above the surface from spray or sea smoke. In these situations, a red detector (645 nm) can be used to sense the green-excited Raman backscatter from the surface (Guenther et al. 1994). This occurs from the green signal exciting the surface water molecules, which absorb some of the signal energy and emit the remainder (Raman effect). Unlike the infrared signal, the Raman signal does not weaken in calm winds, and is only produced from interfacing with the water surface. However, this return is weaker than a typical infrared return, and is normally used as a check or backup for the infrared. Figure 2-7. Laser pulse return waveform (logarithmic) from SHOALS system.
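To make the time-of-flight geometry concrete, the following sketch locates the surface and bottom rising edges in a digitized return waveform and converts the two-way travel time to depth. This is only an illustration, not the SHOALS detection algorithm: the thresholds, the simple edge detector, and the nadir-beam assumption (no correction for the refracted scan angle) are all invented here.

    import numpy as np

    C = 2.998e8      # speed of light in vacuum, m/s
    N_WATER = 1.33   # refractive index of water

    def rising_edge(waveform, start, threshold):
        """Index of the first sample at or after `start` that exceeds
        `threshold` (a crude rising-edge detector)."""
        idx = np.nonzero(waveform[start:] >= threshold)[0]
        return start + int(idx[0]) if idx.size else None

    def depth_from_waveform(waveform, dt_ns=1.0, surf_thresh=50.0, bot_thresh=5.0):
        """Estimate depth (m) from a waveform digitized at dt_ns intervals.
        Assumes a nadir-pointing beam; a real system must also correct
        for the refracted nadir angle."""
        i_surf = rising_edge(waveform, 0, surf_thresh)
        if i_surf is None:
            return None
        # skip past the surface return before searching for the bottom
        i_bot = rising_edge(waveform, i_surf + 10, bot_thresh)
        if i_bot is None:
            return None
        delta_t = (i_bot - i_surf) * dt_ns * 1e-9   # two-way travel time, s
        return C * delta_t / (2.0 * N_WATER)        # one-way depth, m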


ALB systems use up to four sensors for detecting return signals, with two sensing the surface return (infrared and Raman) and two sensing the bottom return. In the case of the bottom returns, an Avalanche Photodiode (APD) is employed to detect returns in shallow water, and a Photomultiplier Tube (PMT) for returns in deeper water. A problem that occurs for these bottom sensors is that the surface returns may be six or seven orders of magnitude stronger than the bottom returns, due to the exponential attenuation effects from the water column (Guenther 2001). This change in signal strength can occur in just tens of nanoseconds. In order to handle this signal strength dynamic range, some ALB systems employ a minimum nadir angle on the scanned beam, which decreases the dynamic range for all return signals, allowing the sensors to be highly sensitive to weak return signals without concern for saturating the sensors. Angles between 15 and 20 degrees are typically used. Additional benefits of avoiding small nadir angles include minimizing the variations in depth biases, and increasing the likelihood of detecting small objects (Guenther 2001).

ALB systems include a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS) for precise determination of the aircraft position and orientation for each laser pulse. Simultaneous data collection from the aircraft GPS receiver and multiple ground receivers enables the use of kinematic GPS (KGPS) techniques to solve for aircraft positions with sub-decimeter accuracy. These positions are referenced to the WGS-84 ellipsoid, which allows for the collection of topographic and bathymetric data referenced to the ellipsoid and eliminates the need for water level data (Guenther 2001). The INS records the rotations of the aircraft in three dimensions (roll, pitch, and yaw),


28 necessary to correct the geometric effects of these rotations on the location of each laser pulse. Limitations A measure of the effectiveness of ALB is the maximum surveyable depth (MSD), which is defined as the maximum measured depth that meets existing accuracy standards. Several factors, derived from the system and the environment, contribute to limiting the MSD. System factors include the green laser pulse energy, electronic noise, and flight altitude. Environmental factors include water clarity and bottom reflectivity (Guenther 2001). Water clarity is generally the most significant factor for limiting the MSD (Guenther and Goodman 1978) because it has a negative exponential effect, while bottom reflectivity has a negative linear effect (Guenther 2001). The MSD can range from 50 meters in clear water to 10 meters in murky waters. Typical results will be between two and three times the Secchi depth (Guenther 2001). The Secchi depth refers to the maximum depth at which a black and white Secchi disk is visible when lowered into the water (Tyler 1968). The properties of the water that dominate the water attenuation effect will determine which multiplicative factor will apply. Water attenuation effects are due to absorption and scattering components. If absorption is the dominant component, the maximum surveyable depth will be closer to two times the Secchi depth. With scattering the dominant effect, three times the Secchi depth can be expected (Guenther 2001). Another limitation of ALB is its ability to detect small objects. Clearing a navigation channel is of utmost importance for shipping, and small objects can be a hazard. ALB systems have difficulty in detecting objects on the order of a one-meter cube (Guenther 2001). The problem is the inability to separate small object returns from


29 bottom returns. Objects with larger surface areas and smaller heights, or with smaller surface areas and larger heights, are much more easily detected due to the better separation of the object returns from the bottom returns. This limitation is one reason why current ALB technology cannot replace sonar (ibid). Benthic Mapping Methods All mapping that uses airborne remote sensing must account for atmospheric effects, however the biggest challenge for mapping the benthic environment is removing the attenuation effects of the water column. Over the past several decades, many different methods of benthic mapping have been attempted, including using band ratios (Polcyn et al. 1970, Stumpf et al. 2002), radiative transfer models (Bierwirth et al. 1993, Lyzenga 1978), and neural networks (Sandidge and Holyer 1998). These researched methods mostly focus on obtaining depths, however the related problem of benthic classification can also be investigated using these techniques. In the following sections, we will investigate the above benthic mapping techniques, as well as other techniques, and then introduce a modified radiative transfer model using ALB bottom return data. Neural Networks A neural network is an architecture that uses parallel computer processing to train a computer program how to perform non-linear mappings (Lippman 1987). The network consists of layers of processing elements, called neurons, each of which form a weighted sum of inputs and generate an output using a non-linear transfer function. Outputs from one layer of neurons are fed as inputs into the next layer. The weights in each neuron are determined during a supervised training process, in which inputs and corresponding known outputs are presented to the system, and the weights are solved for using an


iterative least-squares method (Sandidge and Holyer 1998). Figure 2-8 provides an illustration of the neural network process.

Figure 2-8. A neural network.

This training process was applied by Sandidge and Holyer (1998), using spectral data as input and measured depth as output to train a neural network to determine depths from hyperspectral data. Using sonar depth soundings and AVIRIS hyperspectral data for the training process, the resulting neural network obtained sub-meter RMS accuracy values for its estimated depths. The network also showed potential for generalizing, or adapting to conditions different from the training set data.

Band Ratios

A more deterministic approach to benthic mapping consists of using the ratio of certain bands for depth determination and bottom classification. Polcyn et al. (1970) used the model shown in Equation 2-9, the components of which are described below.

L_i = L_{si} + k_i r_{Bi} e^{-\alpha_i f z}    (2-9)


L_i = measured upwelling radiance, for band i.
L_{si} = radiance measured over deep water, due to surface reflection and atmospheric scattering.
k_i = a constant that includes solar irradiance.
r_{Bi} = bottom reflectance for bottom type B and band i.
\alpha_i = water attenuation coefficient.
f = a geometric factor to account for the path through the water.
z = water depth.

The algorithm developed by Polcyn et al. (1970) assumed that two wavelength bands existed such that the ratio of the bottom reflectance values in those bands remained constant, regardless of the changing bottom types. This assumption is shown in Equation 2-10, for bottom types A and B, and bands 1 and 2. Using the model in Equation 2-9 and the above assumption, the depth could be calculated using Equation 2-11. The value R is the ratio shown in Equation 2-12.

\frac{r_{A1}}{r_{A2}} = \frac{r_{B1}}{r_{B2}} = \cdots = R_b    (2-10)

z = \frac{\ln(k_1/k_2) + \ln R_b - \ln R}{f(\alpha_1 - \alpha_2)}    (2-11)

R = \frac{L_1 - L_{s1}}{L_2 - L_{s2}}    (2-12)

This algorithm also assumes that the difference between the attenuation coefficients (\alpha_1 - \alpha_2) is constant. Choosing bands that satisfy this assumption as well as the constant ratio assumption in Equation 2-10 was found to be difficult. However, this method was applied to airborne and space-borne multispectral data in shallow, clear water with some success (Polcyn and Lyzenga 1973). A variant of this method (Stumpf et al. 2002) was applied using IKONOS satellite data, resulting in depth estimates within 2-3 meters of ALB values.
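As a worked illustration of Equations 2-11 and 2-12, the sketch below computes depth from two-band radiances. The parameter values in the example call are invented for illustration and are not taken from Polcyn et al. (1970); the equation arrangement follows the reconstruction above.

    import numpy as np

    def polcyn_depth(L1, L2, Ls1, Ls2, k1, k2, Rb, alpha1, alpha2, f=2.0):
        """Depth from the two-band ratio model (Equations 2-11 and 2-12).
        Rb is the assumed constant bottom-reflectance ratio r_B1/r_B2,
        and f is the geometric path factor (roughly 2 for a two-way,
        near-vertical path)."""
        R = (L1 - Ls1) / (L2 - Ls2)   # Equation 2-12
        return (np.log(k1 / k2) + np.log(Rb) - np.log(R)) / (f * (alpha1 - alpha2))

    # illustrative values only
    z = polcyn_depth(L1=0.050, L2=0.030, Ls1=0.010, Ls2=0.008,
                     k1=1.0, k2=1.1, Rb=1.2, alpha1=0.06, alpha2=0.12)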


The model in Equation 2-9 was similarly applied to determine bottom types using multispectral data, with an assumption that the radiance ratio R (Equation 2-13) would be independent of water depth as long as the attenuation coefficients are the same in both bands (Lyzenga 1978, Wezernak and Lyzenga 1975).

R = \frac{k_1 r_{B1}}{k_2 r_{B2}}    (2-13)

This value of R would represent an index of bottom type, assuming the benthic areas mapped have different ratios for the bands selected. This method was successful for mapping algae at varying depths along the shore of Lake Ontario (Wezernak and Lyzenga 1975). However, its success was limited to separating only algae and sand within the same water type. The addition of multiple water and benthic types caused difficulty with the algorithm. Also, this method is restricted to using only two bands, failing to take advantage of the information available in the full spectrum of a given benthic type (Lyzenga 1978).

Radiative Transfer Model

Due to some of the limitations with the above ratio methods, the development of a radiative transfer model was a natural progression. The focus is to mathematically describe the attenuation of light as it passes through a water body. As photons travel through water, they undergo scattering and absorption processes with the particles in the water and with the water molecules themselves (Jerlov 1976). These processes attenuate the energy flux, which is defined as the product of the energy per photon and the number of photons per unit area per unit time. This downwelling energy flux will continue to decrease as it continues through the water, and will eventually reach zero at a depth that is dependent upon the water properties. The modeling of this process can be described


using an explanation of Beer's Law, which is given below and adapted from Bukata et al. (1995). The change in downwelling energy flux is proportional to the change in the number of photons N in the flux, since the energy per photon, h\nu, is constant (h is Planck's constant, \nu the light frequency). Also, the chance of attenuation increases with increasing thickness of the medium the light passes through. Given N photons incident upon a medium (e.g., water) of thickness \Delta r, the reduction in the number of emergent photons, \Delta N, would be proportional to the product of N and \Delta r. This is shown in Equation 2-14, where the constant of proportionality \alpha is the attenuation coefficient.

\Delta N = -\alpha N \Delta r    (2-14)

Taking the limit as both \Delta N and \Delta r approach zero produces Equation 2-15 (Beer's Law). Integrating Equation 2-15 from zero to a thickness r of an absorbing medium produces Equation 2-16. In this description, it is assumed that the attenuation property of the medium is constant, so \alpha will be invariant with respect to r.

dN = -\alpha N \, dr    (2-15)

N(r) = N_0 e^{-\alpha r}    (2-16)

Because of the proportional relationship between the energy flux \Phi and N, Equation 2-16 can be modified for energy flux producing Equation 2-17, which shows the exponential decrease of energy flux as it passes through the medium. The term \lambda is added to indicate wavelength dependency, and a(\lambda) is used for the attenuation coefficient.

\Phi(\lambda, r) = \Phi(\lambda, 0) e^{-a(\lambda) r}    (2-17)
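A quick numerical check of Equation 2-17, with an illustrative (not measured) coefficient: if a(\lambda) = 0.1 m^{-1}, then after passing through r = 10 m of the medium, \Phi(\lambda, 10) = \Phi(\lambda, 0) e^{-1} \approx 0.37 \Phi(\lambda, 0); the flux falls to about 37% of its incident value, and each additional 10 m multiplies it by the same factor again.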


This model, describing the attenuation of light in a medium, has been applied by several researchers (Bierwirth et al. 1993, Philpot 1989) in order to develop a model for light attenuation in water. Most of the motivation for developing these models is for depth determination. However, they can also be used for benthic classification. Equations 2-18 and 2-19 are fundamental equations used for modeling radiative transfer in water. The components of the equations are described below.

L_{surface} = L_{bottom} e^{-2kd} + L_{water}    (2-18)

L_{water} = L_{deep} (1 - e^{-2kd})    (2-19)

L_{surface} = upwelling radiance measured just below water surface, at wavelength \lambda.
L_{bottom} = upwelling radiance due to reflection from the water bottom, measured just above the bottom.
L_{water} = upwelling radiance due to scattering within the water column.
L_{deep} = upwelling radiance of optically deep water, measured just below the water surface.
k = diffuse attenuation coefficient of water.
d = depth.

Equation 2-18 describes the upwelling radiance measured just below the water surface, consisting of additive components from the water bottom and the water column. The bottom radiance is attenuated exponentially as a function of depth d and a water attenuation coefficient k. Note the 2kd term in the exponent rather than the expected kd term. This is necessary because the energy flux is attenuated twice, since it passes through water of depth d from the surface to the bottom, and again from the bottom back to the surface. If the kd term is used instead (without the 2), then k would represent a two-way attenuation coefficient (Philpot 1989).

Equation 2-19 describes upwelling radiance of the water column as being a maximum in optically deep water (water too deep for light to reach the bottom), and


decreasing exponentially as a function of k and d. Substituting Equation 2-19 into 2-18 produces Equation 2-20.

L_{surface} = (L_{bottom} - L_{deep}) e^{-2kd} + L_{deep}    (2-20)

Equation 2-20 is simply a modification of Equation 2-18 to account for changes in water column radiance due to changes in depth. Note that all the parameters in Equation 2-20, with the exception of depth d, are wavelength dependent. Also, this model assumes the water is vertically homogeneous in its optical properties (Philpot 1989).

In order to use this model for benthic classification, we must solve for upwelling bottom radiance. This requires that some assumptions be made about several of the parameters in the model. Many researchers have used the assumptions of a constant bottom type, a constant diffuse attenuation coefficient (both horizontally and vertically) over an area, and the use of a deep-water radiance measurement for L_{deep} (Brown et al. 1971, Lyzenga 1978, Philpot 1989). Initial estimates for L_{bottom} and k could then be obtained using a linearized version of Equation 2-20, given below.

\ln(L_{surface} - L_{deep}) = \ln(L_{bottom} - L_{deep}) - 2kd    (2-21)

Using Equation 2-21 and having the above assumptions in place, a minimum of two surface radiances and corresponding depths (three for a least-squares solution) would be needed to solve for L_{bottom} and k. The results could then be used as beginning estimates for an iterative, non-linear least-squares solution of Equation 2-20.
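In practice the linearized solution is an ordinary linear regression of ln(L_surface - L_deep) against depth. A minimal sketch with synthetic numbers (this is not the ENVI/IDL implementation used later in this work):

    import numpy as np

    def linearized_estimates(L_surface, L_deep, d):
        """Initial estimates of L_bottom and k from Equation 2-21, fitted
        over pixels of a single (constant) bottom type."""
        y = np.log(L_surface - L_deep)                    # observations
        A = np.column_stack([np.ones_like(d), -2.0 * d])  # design matrix
        (b0, k), *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.exp(b0) + L_deep, k                     # L_bottom, k

    # three or more (radiance, depth) samples give a least-squares solution
    L_surf = np.array([0.080, 0.055, 0.030])
    depths = np.array([5.0, 10.0, 20.0])
    L_bottom, k = linearized_estimates(L_surf, L_deep=0.020, d=depths)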


Using a Taylor series expansion, and truncating the higher order terms, Equation 2-22 (in matrix form) was produced for determining a non-linear solution.

L_{surface} - \left[ (L_{bottom} - L_{deep}) e^{-2kd} + L_{deep} \right] = \begin{bmatrix} e^{-2kd} & -2d (L_{bottom} - L_{deep}) e^{-2kd} \end{bmatrix} \begin{bmatrix} \Delta L_{bottom} \\ \Delta k \end{bmatrix}    (2-22)

For multiple wavelengths (as with multispectral and hyperspectral imagery) there would be an L_{bottom} and k to solve for at each wavelength. However, there would also be an additional equation with each added wavelength. So the above minimum of two surface radiances and corresponding depths would still hold with the assumptions of constant L_{bottom} and k over the area. Note that the above equations are given using spectral radiance L. However, these equations would also be valid for remote sensing reflectance, simply by normalizing to the downwelling irradiance at the water surface. This has been demonstrated by Lee et al. (1998) and Bierwirth et al. (1993).

Other Techniques

Some researchers have combined passive imagery with lidar bathymetry in order to interpolate/extrapolate depths for the entire passive image (Kappus et al. 1998, Lyzenga 1985). Lyzenga (1985) used regression analysis between depth and a linear combination of all the possible band pairs from a multispectral data collection. The band pair with the largest correlation coefficient was selected as the best choice for depth determination. Results from that research showed the potential for lidar bathymetry to be used to calibrate passive imagery for depth extraction.

Eigenspace analysis uses linear algebra techniques to represent the variation of spectral data. Instead of being represented by band or wavelength, the spectral data are


rotated into a system of eigenvectors and corresponding eigenvalues, with each eigenvector orthogonal to the others. The eigenvector with the greatest eigenvalue is the direction of maximum variance in the data. Philpot (1989) used eigenspace analysis for bathymetric mapping using multispectral imagery. Using an assumption of constant bottom type and water attenuation, and applying a linearized radiative transfer model (as in Equation 2-21), multi-band data can be combined such that the first eigenvector (maximum variance) is correlated to varying depth (Philpot 1989).

Modified Radiative Transfer Model

The methods given above describe techniques for water attenuation removal for passive, remotely sensed data. Included among the methods is a radiative transfer model, which exploits upwelling radiance, or estimated reflectance, to obtain benthic information. In this section we introduce a new type of radiative transfer model, which exploits the ALB waveform return in order to obtain benthic information. However, the parameter provided by an active system, such as an ALB, for representing a waveform return is not radiance or reflectance, but returned power, denoted P_r. P_r is a function of many parameters (see Equation 2-23), including system characteristics, atmospheric effects, and reflectance of the target at the laser wavelength (Lee and Tuell 2003). Our interest in P_r is its relationship to reflectance, which is an intrinsic property of the target. Within the topographic laser mapping community, P_r (or a measurement proportional to P_r) is often referred to as intensity. Some researchers have used intensity for scene classification by draping intensity images over DEMs (Carter et al. 2001), while others have combined intensity with passive imagery by applying data fusion algorithms (Park et al. 2001, Tuell 2002b).


We introduce a modified radiative transfer model, designed for use with the SHOALS ALB system (see Appendix A). Each data point in the SHOALS dataset consists of a depth, UTM coordinates, flightline, output laser energy, and a bottom return amplitude measurement from the APD and the PMT. The bottom return amplitude measurements are peak power values, representing the height of the highest point of the bottom return (see Figure 2-7), and are recorded in photoelectrons per nanosecond. Our goal is to extract benthic information from the bottom return values due to variation of bottom reflectance, so we need to account for any other factors that could influence these values.

The bottom return peak power measurement is a function of several environmental and system factors, and is described in the bathymetric laser radar equation, given in Equation 2-23 (Guenther 1985). These factors include transmitted power P_T, depth D, aircraft altitude H, bottom reflectance \rho, water attenuation k, the beam nadir angle \theta, the refracted nadir angle \phi, the refractive index of water n_w, the effective area of the receiver optics A_r, the receiver field of view loss factor F_P, a pulse stretching factor n(s, \omega_0), where s is the scattering coefficient and \omega_0 the single scattering albedo, a combined optical loss factor \eta for the transmitter and receiver optics, and an empirical scaling factor m, used to account for air path loss and system detuning.

P_r = \frac{m P_T \rho \eta F_P A_r \cos^2\theta \; n(s, \omega_0)}{(n_w H + D)^2} e^{-2kD \sec\phi}    (2-23)

Solving for \rho would be desirable; however, it would require access to all of the above parameters. Therefore, we make use of the P_r measurement in place of \rho, and


assume a constant effect from the atmosphere, the optics, and the receiver. We have ALB depth measurements, so we can model the water attenuation and solve for k. Since the output laser energy can vary from pulse to pulse, we can correct for it by normalizing the bottom return value by the output laser energy measurement. The result is a modified peak bottom return value, which we refer to as the Normalized Bottom Amplitude (NBA), and we denote using the symbol \nu. Normalizing the P_r measurement by the output energy to generate \nu parallels the conversion to reflectance for hyperspectral data. However, the \nu value is not a unitless reflectance value, since the bottom return is in photoelectrons per nanosecond, and the output laser energy in millijoules. We could account for this to obtain a unitless value; however, the bottom return is extremely small compared to the outgoing energy, so the resulting values are very small and difficult to work with. Since this conversion from photoelectrons to millijoules involves multiplicative constants, the difference between a unitless normalization and \nu is simply a matter of scale. Therefore, for convenience, we use the \nu values.

We would like to exploit these \nu values to extract benthic information; however, they are subject to exponential water attenuation just as passive data are in the radiative transfer model in Equation 2-18. However, this equation includes a term L_{water} for water column radiance, which is necessary for passive data, but does not apply for ALB bottom return data. The signal that returns to the ALB airborne sensor is digitized at 1-nanosecond intervals, which provides for the separation of the return into different parts based on depth, as shown in Figure 2-7. The volume backscatter part of the return waveform represents the contribution from the water column. In terms of time, the


bottom return is detected by the airborne sensor after the volume backscatter is detected, so the bottom return measurement does not include any contribution from the water column. Therefore, an L_{water} term is not needed in our radiative transfer model for ALB data.

Additional modifications are necessary to Equation 2-18 as well. The L_{surface} term is replaced by the NBA value \nu, and the L_{bottom} value with pseudoreflectance \tilde{\rho}. We use the pseudo prefix because \tilde{\rho} is a function of reflectance, is not unitless, and we have not accounted for all the parameters in Equation 2-23. We also remove the \lambda subscripts since the values are the result of only one wavelength (532 nm green). These changes to Equation 2-18 are reflected in Equation 2-24 below.

\nu = \tilde{\rho} e^{-2kd}    (2-24)

Subsequent modifications to the linearized and iterative solutions are needed as well, shown in Equations 2-25 and 2-26.

\ln(\nu) = \ln(\tilde{\rho}) - 2kd    (2-25)

\nu - \tilde{\rho} e^{-2kd} = \begin{bmatrix} e^{-2kd} & -2d \tilde{\rho} e^{-2kd} \end{bmatrix} \begin{bmatrix} \Delta\tilde{\rho} \\ \Delta k \end{bmatrix}    (2-26)

These modified radiative transfer model equations can then be applied in similar fashion to the original equations to solve for \tilde{\rho} and k for the ALB bottom return. As with the passive model, an assumption of constant \tilde{\rho} and k is necessary. The application of these modified equations will be demonstrated in the next chapter.
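In code, the whole modified model is only a few lines. A minimal Python sketch (the variable names are hypothetical, and the actual processing in this work was implemented in ENVI/IDL): form \nu by normalizing each peak bottom return by its pulse energy, estimate \tilde{\rho} and k over a constant bottom type via Equation 2-25, then invert Equation 2-24.

    import numpy as np

    def estimate_k_rho(nba, depth):
        """Fit ln(nu) = ln(rho_tilde) - 2*k*d (Equation 2-25) over
        points of constant bottom type (e.g., sand)."""
        A = np.column_stack([np.ones_like(depth), -2.0 * depth])
        (ln_rho, k), *_ = np.linalg.lstsq(A, np.log(nba), rcond=None)
        return np.exp(ln_rho), k

    def pseudoreflectance(peak_return, pulse_energy, depth, k):
        """nu = P_peak / E_out, then rho_tilde = nu * exp(2*k*d)
        (Equation 2-24 inverted)."""
        nu = peak_return / pulse_energy   # normalized bottom amplitude
        return nu * np.exp(2.0 * k * depth)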


CHAPTER 3
EXPERIMENT

In Chapter 2, we provided examples of research into removing the effects of water attenuation found in passive imagery. The resulting water-corrected imagery, obtained over coastal waters, could then be used for benthic mapping. We then introduced pseudoreflectance, a parameter derived from the peak bottom return in ALB data. Our goal is to combine these data to produce a benthic map. In this chapter, we explain the experiment performed to test the validity of obtaining benthic information from ALB bottom waveform returns. In the experiment, we used georegistered datasets from hyperspectral and ALB systems. We corrected the hyperspectral data for water attenuation using the radiative transfer model described previously. We then corrected the ALB bottom return data for water attenuation using the modified radiative transfer model from Chapter 2. Both datasets were then classified using a supervised classification scheme. We also performed a supervised classification on a third dataset, consisting of hyperspectral bands plus a band containing ALB depths. Next, we merged the hyperspectral and ALB classifications using a data fusion approach. Finally, we performed an accuracy assessment of the classifications, including the hyperspectral, ALB, hyperspectral-plus-depths, and data fusion results, with the expectation that the data fusion classification would produce a higher mapping accuracy than that from either the hyperspectral or the ALB classification.


42 This chapter begins with an overview of the datasets used in this experiment, including hyperspectral data provided by the AVIRIS system, ALB data from the SHOALS system, and ground-measured hyperspectral data obtained from an ASD handheld spectrometer. We then move into the preprocessing steps, which include the georegistration of the datasets, the conversion to reflectance and pseudoreflectance, and the removal of wave effects from the water surface. Next, we describe the details of the water attenuation removal process, showing similarities and differences in this process between the AVIRIS and SHOALS datasets. We then explain the supervised classification process, using the Maximum Likelihood classifier on both water-corrected AVIRIS and SHOALS datasets, and the AVIRIS-plus-depths dataset. Afterward, we describe the use of the Dempster-Shafer decision-level data fusion of the AVIRIS and SHOALS classified datasets, providing a fourth classification of the project area. Lastly, we provide an accuracy assessment of the four classifications, including Overall accuracies, Kappa coefficients, User accuracies, and Producer accuracies. Most of the work described in this chapter was implemented using the ENVI/IDL software suite (Research Systems Incorporated 2002). This package provided the freedom to write our own computer programs, in the IDL language, to ensure the algorithms were implemented correctly. Any other software packages used are specified in the text. Datasets The data collected for this research were obtained over Kaneohe Bay, Hawaii, an area known to contain coral, algae and seagrass within relatively clear waters. The hyperspectral data were collected using the Airborne Visual/Infrared Imaging Spectrometer (AVIRIS) system, flown in April 2000. The AVIRIS system obtained


43 imagery along northwest-southeast flightlines covering all of Kaneohe Bay, as shown in Figure 3-1. The red box represents the area of focus for our research, which measures about 4600 meters by 2600 meters. The AVIRIS system collects 20-meter pixels with an 11-kilometer wide swath per flightline. Additional specifications on the AVIRIS system are provided in Appendix A. Figure 3-1. Georegistered AVIRIS image of Kaneohe Bay, Hawaii. The red box indicates the area of research (4600 m x 2600 m). In addition to the hyperspectral data, ALB data were obtained using the Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS) system, flown in August 2000. The SHOALS system was flown along northwest-southeast flightlines at an altitude of 300 meters, collecting bathymetric data for most of Kaneohe Bay, and topographic data for most of the shoreline areas. Depths from the SHOALS system were obtained at a nominal spacing of about 4 meters, with a 110-meter wide swath per flightline. Depths within the outlined research area in Figure 3-1 range from 1 meter in


44 the lower left, to 35 meters in the upper right. Additional specifications on the SHOALS system are provided in Appendix A. Spectral data were also collected at ground level using a FieldSpec Pro hand-held spectrometer, manufactured by Analytical Spectral Devices (ASD). Measurements were acquired in June 2000 over several locations surrounding the bay, each with different types of ground cover, such as asphalt, grass and sand. In addition to each ground cover measurement, a measurement was acquired over a spectralon panel at each location, immediately after each ground measurement. Spectralon is a nearly perfect diffuse reflector. Therefore, the spectralon measurements provided an estimate of irradiance, which when divided into the corresponding ground radiance measurement gives an estimate of reflectance for that ground type. The spectrometer measures in 1-nanometer channels in the visible and infrared wavelengths (350 nm to 2400 nm). Additional specifications for the FieldSpec Pro are provided in Appendix A. Preprocessing In this section we describe the preprocessing steps applied to each dataset before attempting to remove water attenuation effects. These steps include georegistration, conversion to reflectance, and removal of surface wave effects. We first describe these steps as applied to the AVIRIS data, and then for the SHOALS data. AVIRIS Our area of research consists of a significant portion of Kaneohe Bay. This area resides within one AVIRIS flightline, flown from the southeast to the northwest, and can be observed from the linear imaging in that direction in Figure 3-1. The original imagery was not georegistered, which is necessary for merging geospatial data. Georegistration requires selecting image-identifiable points with known coordinates, called control


45 points, and performing a transformation to a known coordinate system. Because of the difficulty with selecting points in water, we decided to georegister the entire bay, including the topography surrounding it, and use image-identifiable points from the topography to generate the transformation parameters. Our area of research could then be clipped from the georegistered image of the bay. Two different sources were used for providing control points around the bay. One source was an orthoimage of the southeastern portion of the bay, which was provided by the Remote Sensing Division of NOAA/NGS. The other source was a USGS quadrangle map, in digital format, which covered most of the bay. Both sources were referenced to the NAD83 datum, and common points from the sources compared to within 15 meters, well within the pixel size (20 meters) of the AVIRIS imagery. The orthoimage was generated using high quality aerial photography obtained in April 2000. The photography was scanned into digital format, and imported into a softcopy photogrammetric software package called Socet Set, produced by LH Systems. Using image-identifiable points with corresponding three-dimensional coordinates (obtained from GPS observations and post-processing), the software georegistered and mosaicked the imagery, generated a digital terrain model, and then produced the orthographically corrected image (Woolard, J., NOAA/NGS, personal correspondence, December 2002). The georegistration of the AVIRIS image was performed using ERDAS Imagine software (Leica Geosystems 2002). Fifteen control points were selected from the topography surrounding the bay, and an Affine transformation was calculated for converting the image coordinates to the NAD83 datum. The results from this


46 transformation produced RMS values of 0.48 pixels in the X direction, and 0.56 pixels in the Y direction. The image was then resampled to the UTM coordinate system using the cubic convolution method, producing the image in Figure 3-1. Our area of research was then clipped from this resultant image for further processing. Each pixel of AVIRIS imagery contains data in the form of at-sensor measured radiance. However, as mentioned in the previous chapter, radiance is not an inherent property of the object space, but is a function of solar irradiance, atmospheric transmittance, and additive radiance from sources other than the target pixel. Therefore, it was necessary to convert the AVIRIS radiance image to units of reflectance. Due to the availability of ground reflectance data obtained with the hand-held spectrometer, we employed the Empirical Line Method (ELM) to produce a reflectance image. The ASD FieldSpec Pro hand-held spectrometer records radiance for 2100 channels with bandwidths of about 1 nanometer per channel. However, the AVIRIS channels have bandwidths of about 10 nanometers. In order to use the ASD ground spectra for the ELM process, we convolved the 1-nanometer ground spectra to the 10-nanometer AVIRIS bandpass channels. This was done by appropriately weighting the contributions from the ASD channels to create new channels at the same bandwidth as the AVIRIS channels. The weighting was based on the spectral response curves for the AVIRIS channels (assumed to be Gaussian) and generated using spectral calibration information (channel center wavelength and full-width half-maximum values) for the Kaneohe Bay flight (Appendix A). After converting the ASD ground spectra to the AVIRIS channel format, we used these reflectance spectra to perform the ELM procedure. We selected points from the


AVIRIS imagery co-located with the recorded ASD ground spectra, and performed a linear regression of AVIRIS radiance to ASD reflectance for each AVIRIS channel, following the ELM procedure described in Chapter 2. Since the AVIRIS imagery and the ground reflectance data were not temporally registered, we selected points where the reflectance values were likely to be temporally invariant (e.g. concrete, asphalt, beach sand) to perform the regression. A plot of several of these selected points, showing reflectance on the x-axis and radiance on the y-axis, is given in Figure 3-2. Note the linear relationship between the radiance and reflectance values (additional points to help verify the linear relationship were difficult to locate). Results from the regression for each band were applied to the AVIRIS radiance image, producing a reflectance image for the project area. AVIRIS channel 15 (510 nm) of this reflectance image is shown in Figure 3-3.

Figure 3-2. Plot of ground points with their reflectance values and corresponding AVIRIS radiance values for band 5 (413 nm).
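The ELM reduces to one linear fit per band. A minimal sketch (the radiance and reflectance values below are invented calibration pairs, not the measured Kaneohe Bay data):

    import numpy as np

    def elm_band(radiance_pts, reflectance_pts, radiance_band):
        """Empirical Line Method for one band: fit reflectance = g*L + b
        from co-located image/ground pairs, then apply to the full band."""
        g, b = np.polyfit(radiance_pts, reflectance_pts, 1)
        return g * radiance_band + b

    # hypothetical temporally invariant targets (concrete, asphalt, sand)
    L = np.array([3400.0, 4800.0, 7100.0])   # image radiance, one band
    R = np.array([0.05, 0.14, 0.28])         # ASD ground reflectance
    band = np.full((4, 4), 5000.0)           # toy 4x4 radiance band
    reflectance_band = elm_band(L, R, band)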


48 Figure 3-3. AVIRIS reflectance image (band 15, 510 nm) of the research area. The last step in the preprocessing phase for the AVIRIS imagery was to remove the effects caused by reflection off the surface waves. These waves are clearly visible in Figure 3-3, showing the waves moving mostly from the northeast to the southwest. Several methods for wave removal were attempted, with the best results obtained from just two techniques. The first was to subtract the reflectance of an infrared band from that of the visual bands, similar to the method used by Estep et al. (1994). Since only the visual bands will penetrate water, any reflectance over water in an infrared band should be from the surface. The second method was to apply a Fast-Fourier Transform (FFT) to an infrared band and to each of the visual bands. Then subtract the infrared FFT image from each of the visible FFT images, and then perform an inverse FFT on each of the differenced visible FFT images. This method is shown in Equation 3-1. Results from this second method proved to be visually superior to the first one, and the resulting wave-removed AVIRIS reflectance image, band 15 (510 nm), is shown in Figure 3-4. Note the


significant reduction in surface wave effects in the deeper water (northeast quadrant) area of the image, and the increase in bottom contrast.

\mathrm{IFFT}\left[\mathrm{FFT}(\mathrm{Visible}) - \mathrm{FFT}(\mathrm{Infrared})\right]    (3-1)

Figure 3-4. AVIRIS image (band 15, 510 nm) corrected for surface waves using FFT method.

SHOALS

The amount of preprocessing needed with the SHOALS dataset was considerably less than that with the AVIRIS dataset. The SHOALS data are provided in point format, with UTM northing and easting coordinates, referenced to the NAD83 datum, given for each point. The data are georegistered because the SHOALS system includes a GPS receiver and an INS system, providing accurate position and orientation measurements of the aircraft for each emitted laser pulse (Chapter 2). Each data point in the SHOALS dataset consists of a depth, UTM coordinates, flightline, output laser energy, and a bottom return amplitude measurement from the APD and the PMT.


Following our modified radiative transfer model described in Chapter 2, we normalized the bottom return amplitude measurements from the APD and the PMT, for each pulse, by dividing each measurement by its associated output laser energy. This produced a Normalized Bottom Amplitude value, denoted \nu, for each pulse.

The AVIRIS imagery contains reflected radiance measurements, with contributions from the water surface as well as from the water column and bottom. The water surface effects were removed using an FFT subtraction technique. For the SHOALS system, the reflected measurement for a laser pulse is temporally digitized, at 1-nanosecond intervals. This causes returns from objects closer to the aircraft to be detected first, as is the surface return shown in Figure 2-7. The bottom return is therefore temporally separated from the surface effects, eliminating the need to correct for this effect in the SHOALS data.

The SHOALS data consist of depths and peak bottom return values for each laser pulse, and there is roughly a 4-meter spacing between adjacent pulses. This point format is different from that of the AVIRIS data, which is rasterized. Therefore the SHOALS data points, both depth and \nu, were binned into 20-meter pixels, coincident with the 20-meter pixel locations from the AVIRIS image, providing an active image (from the \nu values) and a depth image coincident with the AVIRIS image. The \nu value in each binned pixel simply consists of the mean of the data points that were located within the boundaries of that pixel. Figure 3-5 shows the SHOALS mean depth image, which indicates increasing depths from left to right (west to east). Black pixels indicate areas where no depth data were acquired.

Water Attenuation Removal

In the process of extracting benthic information from the datasets, we have discussed the preprocessing steps, including the georegistration and rasterization of the


data, handling of system variables, and accounting for atmospheric effects and surface wave effects. The next step is to remove the effects caused by water attenuation. We apply the radiative transfer equation theory presented in Chapter 2 in order to accomplish this process. In the next sections we describe the water attenuation removal for the AVIRIS dataset, followed by that for the SHOALS dataset.

Figure 3-5. SHOALS mean depth image. Brown < 2.5m, Tan < 5m, Green < 10m, Blue < 15m, Yellow < 25m, Dark Green < 40m, Black = no data.

AVIRIS

In Chapter 2, we provided radiative transfer equations for hyperspectral data in terms of radiance, and mentioned that reflectance could be used as well. The AVIRIS data have been converted to reflectance; therefore the appropriate radiative transfer equations, given below for convenience, are shown using reflectance \rho.

\rho_{surface} = (\rho_{bottom} - \rho_{deep}) e^{-2kd} + \rho_{deep}    (3-2)

\ln(\rho_{surface} - \rho_{deep}) = \ln(\rho_{bottom} - \rho_{deep}) - 2kd    (3-3)

\rho_{surface} - \left[ (\rho_{bottom} - \rho_{deep}) e^{-2kd} + \rho_{deep} \right] = \begin{bmatrix} e^{-2kd} & -2d (\rho_{bottom} - \rho_{deep}) e^{-2kd} \end{bmatrix} \begin{bmatrix} \Delta\rho_{bottom} \\ \Delta k \end{bmatrix}    (3-4)
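Once k and \rho_{deep} have been estimated (described next), Equation 3-2 can be inverted pixel by pixel to recover bottom reflectance. A one-line sketch (array names hypothetical; pixels without depth data must be masked beforehand):

    import numpy as np

    def bottom_reflectance(rho_surface, rho_deep, depth, k):
        """Invert Equation 3-2 for each pixel:
        rho_bottom = (rho_surface - rho_deep) * exp(2*k*d) + rho_deep."""
        return (rho_surface - rho_deep) * np.exp(2.0 * k * depth) + rho_deep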


As stated in Chapter 2, in order to solve for \rho_{bottom} and k in Equation 3-2, we must select points for which both of these parameters are constant. To satisfy the constant \rho_{bottom} requirement, we selected pixels only located in sand areas, which are easily observed on the right side of the image in Figure 3-4, channeling down to the bottom center. These sand pixels are the input points for the regression. To satisfy the constant k requirement, we assumed a constant k for the entire research area. This assumption is not unreasonable, given the usually clear waters found off the coasts of Hawaii and the small research area.

Using the AVIRIS image in Figure 3-4, we selected 117 points in sand areas, with corresponding depths ranging from 6-30 meters. We also chose a pixel in optically deep water (estimated depth > 40 meters) to represent \rho_{deep}. Using these inputs, we obtained a linearized least-squares solution for Equation 3-3, providing estimates for \rho_{bottom} and k. Using these estimates as initial values for Equation 3-4, we obtained an iterative least-squares solution for AVIRIS bands 5-28, providing updated estimates for \rho_{bottom} and k. The updated estimates for k and \rho_{deep} are not particular to just the sand areas, and can be applied to the entire image. So using the new k and \rho_{deep} values in Equation 3-2, and solving for \rho_{bottom} at each pixel, we were able to generate a \rho_{bottom} image (i.e., water attenuation removed) of the research area. AVIRIS channel 15 of this water-corrected \rho_{bottom}


image is shown in Figure 3-6. The black pixels near the corners represent pixels where no depth data were available.

Figure 3-6. AVIRIS image (band 15, 510 nm) corrected for water attenuation.

SHOALS

The procedure used for removing water column attenuation effects from the SHOALS data was similar to that described for the AVIRIS data, except that Equations 2-25 and 2-26 were used for obtaining the linearized and iterative least-squares solutions, respectively. However, the application of this procedure was not as straightforward as with the AVIRIS data. The SHOALS data in our research area were collected on August 12, 2000 and August 26, 2000. Within each collection day there were three flights over the research area (flights a, b, and c). Figure 3-7 spatially depicts the six flights over our area, with the number representing the day and the letter the flight. Note that the eastern data were collected on August 12, and the western side was obtained two weeks later on August 26.


Figure 3-7. Spatial layout of SHOALS datasets collected over project area.

The multiple datasets over the research area provided several challenges for processing. First, the two-week temporal separation between the eastern and western data collections necessitated the separate processing of each area for water attenuation removal. An assumption of constant k is necessary for applying Equations 2-25 and 2-26, but that assumption cannot be justified for the entire research area given the two-week time gap among the datasets. Therefore the datasets needed to be separately corrected for water attenuation, and a method of combining the two datasets examined. Another issue was with combining data from adjacent flights. A comparison of overlapping data from adjacent flights indicated differences in the data values between flights, forcing the development of a strategy for combining these datasets. A third challenge resulted from working with both APD and PMT waveform data. As described in Chapter 2, the APD can measure depths from 1-14 meters, and the PMT from 8-40 meters. Our research area contains depths ranging from 0-40 meters, as shown in the depth image in Figure 3-5. A


method of combining data from these two receivers in their overlapping depth areas (8-14 meters) needed to be developed.

Figure 3-8. Plot of overlapping APD pixels from Areas 26a and 26b.

It was determined that these issues could be resolved by applying linear regressions. A linear relationship was found between overlapping data for flights occurring on the same day. An example is shown in Figure 3-8, which plots APD values from overlapping pixels for Datasets 26a and 26b. Successive linear regressions for Datasets 26a, 26b, and 26c produced a combined dataset for APD Day 26 (these areas were too shallow for PMT returns). A similar procedure was used to combine 12a, 12b, and 12c for a combined dataset for APD Day 12, and also a combined dataset for PMT Day 12. A list of the linear regressions and associated r-squared values is shown in Table 3-1. The resulting datasets (\nu for APD Day 26, APD Day 12, and PMT Day 12) were then corrected for water attenuation using Equations 2-25 and 2-26 to model the radiative


transfer. As with the AVIRIS data, sand points were selected from each dataset, and results from the least-squares solutions provided estimates for \tilde{\rho} and k. The k values were then used in Equation 2-24 to generate \tilde{\rho} images for APD Day 26, APD Day 12, and PMT Day 12.

Table 3-1. Linear regressions between overlapping flight data and associated r-squared values.

Linear Regression                    R-squared Value
Flight 26c APD to Flight 26b APD     0.85
Flight 26b APD to Flight 26a APD     0.83
Flight 12c APD to Flight 12b APD     0.87
Flight 12b APD to Flight 12a APD     0.88
Flight 12c PMT to Flight 12b PMT     0.85
Flight 12b PMT to Flight 12a PMT     0.78

The \tilde{\rho} images for APD Day 26 and APD Day 12 were then combined using a linear regression. Figure 3-9 provides a plot of the overlap pixels for these datasets, indicating a degree of linear relationship. However, the grouping on the right side of the plot does not linearly fit with the grouping on the left side. The pixels associated with the right-side grouping are spatially located in the lower (southern) third of the overlap area. We do not yet understand what caused this behavior. One possibility is that k was not homogeneous in one of the datasets, contrary to one of our simplifying assumptions. It is difficult to determine which grouping is anomalous. Since most of the overlap area is represented by the left-side grouping, we chose to only use the left-side points in the


regression. The red line in Figure 3-9 indicates the regression result, which had an associated r-squared value of 0.69.

Figure 3-9. Plot of overlapping pixels from Areas 26 and 12.

The resulting APD dataset was then combined with the PMT Day 12 dataset to create a \tilde{\rho} image of the entire research area. Again, a linear regression was employed. Pixels with corresponding depths from 10-12 meters were selected as overlap pixels for the regression. This range is well within the 8-14 meter sensitivity overlap of the APD and PMT receivers. Figure 3-10 shows a plot of the overlap pixels from the APD and PMT datasets, demonstrating the linear relationship. The associated r-squared value for this regression was 0.85. Results from the linear regression were then used to regress PMT pixels of 11 meters or deeper to APD values. The resulting APD-regressed \tilde{\rho} image is shown in Figure 3-11. Note the seam visible between Areas 26a and 12c, indicative of the imperfections in combining these datasets. Another seam is visible between Areas 12a and 12b. However, the image has classification value as can be


58 discerned from the distinct sand channel between adjacent coral and seagrass. The depth image of the research area is given in Figure 3-12. Figure 3-10. Plot of overlap pixels from APD and PMT receivers. Figure 3-11. APD regressed pseudoreflectance image of research area.
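Each of the merges described above (flight to flight, day to day, and APD to PMT) is the same operation: fit a line on the overlapping pixels, then map one dataset into the other's scale. A minimal sketch (arrays are hypothetical):

    import numpy as np

    def regress_to_reference(values_other, overlap_other, overlap_ref):
        """Fit overlap_ref ~ g*overlap_other + b on overlapping pixels,
        then apply the mapping to all of the other dataset's values."""
        g, b = np.polyfit(overlap_other, overlap_ref, 1)
        return g * values_other + b

For example, regressing Flight 26b onto Flight 26a would pass the 26b overlap pixels as overlap_other and the co-located 26a pixels as overlap_ref; the r-squared values in Table 3-1 measure how well each such fit explains the overlap.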


59 Figure 3-12. Depth image of research area. Classification of Datasets At this point, the AVIRIS and SHOALS datasets have been preprocessed and corrected for water attenuation. The resulting datasets represent bottom reflectance values for AVIRIS bands 5-28, and pseudoreflectance values for SHOALS waveform data. The next step is to use a supervised classification technique to generate benthic characterizations from these images. We used the Maximum Likelihood classifier provided in the ENVI software suite. This classifier has been found to work well for data fusion applications (Park 2002) due to the class probability information produced for each pixel, which can be used as a priori input to the Dempster-Shafer data fusion method (see next section). Supervised classification requires ground truth data in order to train the classifier. Our ground truth data came from a paper by the Analytical Laboratories of Hawaii (2001) on the benthic habitats of the Hawaiian islands. This paper contained a benthic map of Kaneohe Bay derived from aerial photography, scanned at 1-meter pixels, that was


60 photointerpreted for benthic types. Using ArcView software (Environmental Systems Research Institute Incorporated 2000), we digitized the map, storing it in an ArcView shapefile. Using ERDAS Imagine (Leica Geosystems 2002), we converted the shapefile to raster format, and georegistered it to the same raster as our research area (RMS 1.2 pixels). Using the resulting image, five benthic types were identified in our research area, including sand, colonized pavement, uncolonized pavement, macroalgae (10-50%) and macroalgae (50-90%). This image provided ground truth information for the western two-thirds of our research area. However, the sand in the northeast corner is easily identifiable from the AVIRIS and SHOALS imagery. The ground truth image is shown in Figure 3-13, and a class color legend for the image in Figure 3-14. Figure 3-13. Ground truth image for our research area. Figure 3-14. Class color legend for ground truth image.


61 Using the ENVI software, Regions of Interest (ROIs) were selected corresponding to each of the five benthic types. AVIRIS and SHOALS pixels within these ROIs could then be used to train the Maximum Likelihood classifier. Figures 3-15 and 3-16 show the ROIs selected, draped over the ground truth image, and the AVIRIS water-corrected image (band 15), respectively. Red areas indicate pixels used to train the classifier, and blue areas are pixels used to assess the accuracy of the classification. Figure 3-15. Regions of Interest (ROIs) draped over ground truth image. Figure 3-16. Regions of Interest (ROIs) draped over AVIRIS bottom reflectance image, band 15.


Before performing the classification, some analysis was done to determine the spectral separability of the benthic classes among the three datasets to classify (AVIRIS bottom reflectance bands 5-28, SHOALS pseudoreflectance, and AVIRIS bottom reflectance-plus-depths). An accepted method of determining class separability is the Transformed Divergence method (Jensen 1996). This method calculates a metric for multi-band images between 0.0 and 2.0, for each ROI pair, that indicates the statistical separability of that pair. Values greater than 1.9 indicate good separability (ibid). This method was applied to both the AVIRIS bottom reflectance dataset and the AVIRIS-plus-depths dataset, using the training ROIs. We determined that each possible pair, for both datasets, had a Transformed Divergence value greater than 1.9, indicating good separability among all the identified benthic classes for these images.

Since the SHOALS pseudoreflectance dataset has only one band, Transformed Divergence could not be used to determine spectral separability. Therefore, we calculated the mean and standard deviation within each ROI for the SHOALS pseudoreflectance dataset. A plot of the spread of each class, showing the mean ±2 standard deviations, is shown in Figure 3-17. This analysis shows that, using only pseudoreflectance values, it would be difficult to distinguish among the two densities of macroalgae and uncolonized pavement, or between colonized pavement and sand. Therefore, we added depth as a second band to the SHOALS dataset, and calculated Transformed Divergence values for a two-band (pseudoreflectance-plus-depth) SHOALS dataset. Results for each class pair were greater than 1.9, with the exception of the uncolonized pavement and macroalgae (10%-50%) pair, which had a value of 1.7. This indicated good separability among the five classes using the 2-band SHOALS dataset,


with some question of the separability between uncolonized pavement and macroalgae (10%-50%). Therefore, the SHOALS dataset was classified using a pseudoreflectance band and a depth band.

After obtaining a satisfactory indication of spectral separability, the Maximum Likelihood classification was performed separately on the AVIRIS (24-band) dataset, the SHOALS (2-band) dataset, and the AVIRIS-plus-depth dataset, using the training ROIs selected earlier. The resulting classifications are shown in Figures 3-18, 3-20, and 3-21, with a benthic class legend provided in Figure 3-19. Note that the unclassified pixels in the lower left and upper right quadrants of the images are due to missing depth data. Other unclassified pixels are located along boundaries between benthic classes, and may be caused by mixed pixels.

Figure 3-17. Plot of ±2 standard deviation spread of pseudoreflectance values for each class.
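The Transformed Divergence metric used above can be computed directly from each class pair's training means and covariances. A sketch following the formulation in Jensen (1996), scaled to the 0.0-2.0 range quoted above:

    import numpy as np

    def transformed_divergence(m1, C1, m2, C2):
        """Transformed Divergence between two classes with mean vectors
        m1, m2 and covariance matrices C1, C2 (range 0.0 to 2.0)."""
        C1i, C2i = np.linalg.inv(C1), np.linalg.inv(C2)
        dm = (m1 - m2).reshape(-1, 1)
        div = 0.5 * np.trace((C1 - C2) @ (C2i - C1i)) \
            + 0.5 * np.trace((C1i + C2i) @ dm @ dm.T)
        return 2.0 * (1.0 - np.exp(-div / 8.0))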


64 We then assessed the accuracy of the three classifications, based on accepted methods described in Appendix B. Test ROIs were selected for each benthic class (Figures 3-15 and 3-16), separate from the training ROIs used for producing the classifications. The ENVI software includes post-classification accuracy utilities, which used the test ROIs to assess the accuracy of each classification. The utilities generated an overall accuracy and Kappa coefficient for each classification, which are shown in Table 3-2. Also generated were error matrices (described in Appendix B) for each classification image, given in Tables 3-3, 3-4, and 3-5. Figure 3-18. Classification of AVIRIS bottom reflectance dataset. Figure 3-19. Class color legend for classification images.


65 Figure 3-20. Classification of SHOALS 2-band (pseudoreflectance and depth) image. Figure 3-21. Classification of AVIRIS bottom reflectance-plus-depth dataset.


In addition to the error matrices, we also generated difference images between each classification and the ground truth image (Figure 3-13), shown in Figures 3-22, 3-23, and 3-24. Colored pixels indicate a classification match between the ground truth and the classification, and black pixels indicate a mismatch. The right side of these images is all black since there is no ground truth imagery in that area.

Table 3-2. Overall accuracies for the three classifications.

                     AVIRIS    SHOALS    AVIRIS + Depths
Overall Accuracy     80.2%     66.9%     85.3%
Kappa Coefficient    0.762     0.603     0.821

Fusion of Classified Images

The next step was to combine the classifications of the AVIRIS and SHOALS results using data fusion. As discussed in Chapter 1, data fusion is defined as the process of combining data from multiple sources in order to obtain better information about an environment than could be obtained from any of the sources independently. Data fusion can take place at different levels of data abstraction, including pixel-, feature-, and decision-level fusion (listed in order of increasing data abstraction). The data fusion for our research takes place at the decision level, since object classification has already taken place. Several methods exist for data fusion at the decision level, including rule-based methods, Bayesian Estimation, and Dempster-Shafer. We use the Dempster-Shafer method of evidence combination, applied in a similar fashion as that of Park (2002). More detail on Dempster-Shafer evidence combination is provided in Appendix C. For the rest of this section, we assume the reader is familiar with the information in the appendix.


Table 3-3. Error matrix for AVIRIS classification accuracies (rows = classified, columns = reference).

                        Sand   Colonized  Uncolonized  Macroalgae  Macroalgae  Total
                               Pavement   Pavement     (50-90%)    (10-50%)
Unclassified               3          9           73          43          53    181
Sand                     198          0            0           0           0    198
Colonized Pavement         0        212           17           0           0    229
Uncolonized Pavement       0          1          153           1           3    158
Macroalgae (50-90%)        0          0            3         166           0    169
Macroalgae (10-50%)        0          0           10           3         157    170
Total                    201        222          256         213         213   1105

Figure 3-22. Difference image between AVIRIS classification and ground truth image.


Table 3-4. Error matrix for SHOALS classification accuracies (rows = classified, columns = reference).

                        Sand   Colonized  Uncolonized  Macroalgae  Macroalgae  Total
                               Pavement   Pavement     (50-90%)    (10-50%)
Unclassified              20         29           53          19          75    196
Sand                     149         28            0           0           0    177
Colonized Pavement        29        165            0           0           0    194
Uncolonized Pavement       0          0          156           0          32    188
Macroalgae (50-90%)        0          0            0         185          22    207
Macroalgae (10-50%)        3          0           47           9          84    143
Total                    201        222          256         213         213   1105

Figure 3-23. Difference image between SHOALS classification and ground truth image.


Table 3-5. Error matrix for AVIRIS-plus-depths classification accuracies. Columns are reference classes; rows are classified classes. S=Sand, C=Colonized Pavement, U=Uncolonized Pavement, M9=Macroalgae (50-90%), M1=Macroalgae (10-50%).

Classified       S     C     U    M9    M1   Total
Unclassified     1     7    59    16    26     109
S              200     0     0     0     0     200
C                0   215    20     0     0     235
U                0     0   161     0     9     170
M9               0     0     0   194     5     199
M1               0     0    16     3   173     192
Total          201   222   256   213   213    1105

Figure 3-24. Difference image between AVIRIS-plus-depths classification and ground truth image.


There are five benthic classes to be discerned in the research area, which, in Dempster-Shafer terminology, can be referred to as basic propositions. The probability of the occurrence of one of these propositions, for one sensor, is calculated by summing the probability masses for the general propositions that support the occurrence of that basic proposition. The AVIRIS and SHOALS classification images contain information from which general propositions can be obtained. By inspecting the error matrices from each classification (from the perspective of a User, not a Producer), we created class-to-information tables for each sensor. For a given row in an error matrix, any bottom type making up more than 10% of the total pixels classified for that row was included as information for that row's class. Tables 3-6 and 3-7 list the general propositions represented by the benthic classifications for each sensor. As an example, Table 3-7 implies that pixels labeled as sand in the SHOALS classification image are either sand or colonized pavement.

Table 3-6. AVIRIS class-to-information table.
Class                    Information
Sand                     Sand
Colonized Pavement       Colonized Pavement
Uncolonized Pavement     Uncolonized Pavement
Macroalgae (50-90%)      Macroalgae (50-90%)
Macroalgae (10-50%)      Macroalgae (10-50%)
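A minimal sketch of the 10% rule follows; the array layout and names are our own illustration (this research used ENVI/IDL), with the values taken from the SHOALS error matrix of Table 3-4. Run as shown, it reproduces Table 3-7.

import numpy as np

# Derive a class-to-information table from an error matrix using the 10%
# rule. Rows = classified, columns = reference; Unclassified row omitted.
classes = ["Sand", "Colonized Pavement", "Uncolonized Pavement",
           "Macroalgae (50-90%)", "Macroalgae (10-50%)"]
errmat = np.array([[149,  28,   0,   0,   0],   # classified Sand
                   [ 29, 165,   0,   0,   0],   # classified Colonized Pavement
                   [  0,   0, 156,   0,  32],   # classified Uncolonized Pavement
                   [  0,   0,   0, 185,  22],   # classified Macroalgae (50-90%)
                   [  3,   0,  47,   9,  84]])  # classified Macroalgae (10-50%)

for i, cls in enumerate(classes):
    share = errmat[i] / errmat[i].sum()          # user's-accuracy perspective
    info = [classes[j] for j in np.argsort(-share) if share[j] > 0.10]
    print(f"{cls}: {', '.join(info)}")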


Using the class-to-information tables, we could construct a matrix representing the evidence combination for the Dempster-Shafer fusion process. The matrix is based on defined rules that explain how to combine data from the AVIRIS and SHOALS classifications to obtain probability masses for both basic and general propositions. This matrix is provided in Table 3-8. The row and column labeled Information correspond to the entries in the class-to-information tables. The shaded boxes represent the probability masses, which are the products of the evidence values from their associated row and column information inputs.

Table 3-7. SHOALS class-to-information table.
Class                    Information
Sand                     Sand, Colonized Pavement
Colonized Pavement       Colonized Pavement, Sand
Uncolonized Pavement     Uncolonized Pavement, Macroalgae (10-50%)
Macroalgae (50-90%)      Macroalgae (50-90%), Macroalgae (10-50%)
Macroalgae (10-50%)      Macroalgae (10-50%), Uncolonized Pavement

Table 3-8. Evidence combination matrix for AVIRIS and SHOALS classifications. S=Sand, C=Colonized Pavement, U=Uncolonized Pavement, M9=Macroalgae (50-90%), M1=Macroalgae (10-50%), K=Conflicting Evidence.

                                            SHOALS class (Information)
AVIRIS class (Information)    Sand (S v C)   C (C v S)   U (U v M1)   M9 (M9 v M1)   M1 (M1 v U)
Sand (S)                      S              S           K            K              K
Colonized Pavement (C)        C              C           K            K              K
Uncolonized Pavement (U)      K              K           U            K              U
Macroalgae 50-90% (M9)        K              K           K            M9             K
Macroalgae 10-50% (M1)        K              K           M1           M1             M1


Input to the evidence combination matrix comes from rule images for the AVIRIS and SHOALS classifications. During the classification process, the Maximum Likelihood classifier in the ENVI software generates a rule image for each class. Each pixel in a rule image contains a statistical estimate of the likelihood that the pixel belongs to the class associated with the rule image. Values range from 0 to 1, with higher values representing greater likelihood. For each pixel in our research area, the associated rule image values from the AVIRIS and SHOALS classifications are entered into the Information column and row, respectively, in the evidence combination matrix. Associated probability masses are then computed for that pixel by filling in the shaded area of the evidence combination matrix. Each box in the shaded area is simply the product of the associated Information column and row values for that box. Class probabilities are then calculated using the formulas in Appendix C. For each pixel, a probability is determined for each class, representing the likelihood of that pixel belonging to each given class. The class with the maximum associated probability is the class assigned to that pixel. For the Dempster-Shafer classification image we computed, if the maximum class probability was less than 0.9, the pixel was considered unclassified.
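A minimal per-pixel sketch of this procedure follows, with the combination outcomes taken from Table 3-8. The function name, the example mass values, and the simple 1/(1 - conflict) normalization are illustrative assumptions; the actual processing used ENVI/IDL and the full Appendix C formulas.

import numpy as np

CLASSES = ["S", "C", "U", "M9", "M1"]

# Combination outcome for each (AVIRIS row, SHOALS column) pair, following
# Table 3-8; None marks conflicting evidence (K).
COMBO = [["S",  "S",  None, None, None],
         ["C",  "C",  None, None, None],
         [None, None, "U",  None, "U"],
         [None, None, None, "M9", None],
         [None, None, "M1", "M1", "M1"]]

def fuse_pixel(aviris_m, shoals_m, threshold=0.9):
    """aviris_m, shoals_m: rule-image values for S, C, U, M9, M1 at one pixel."""
    mass = {c: 0.0 for c in CLASSES}
    conflict = 0.0
    for i in range(5):
        for j in range(5):
            product = aviris_m[i] * shoals_m[j]
            if COMBO[i][j] is None:
                conflict += product
            else:
                mass[COMBO[i][j]] += product
    total = 1.0 - conflict            # renormalize by non-conflicting evidence
    if total <= 0.0:
        return "Unclassified"
    best = max(CLASSES, key=lambda c: mass[c] / total)
    return best if mass[best] / total >= threshold else "Unclassified"

# Example: AVIRIS strongly favors sand, SHOALS mildly favors sand -> "S".
print(fuse_pixel([0.96, 0.01, 0.01, 0.01, 0.01], [0.7, 0.2, 0.05, 0.03, 0.02]))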


Figure 3-25 shows our resulting Dempster-Shafer (D-S) data fusion image, computed using the previously described procedure. As with the previous classification images, the unclassified pixels in the lower left and upper right quadrants of the image are due to missing depth data. The other unclassified pixels are located along boundaries between benthic classes, and may be caused by mixed pixels.

We assessed the accuracy of the D-S classification image using the same procedure and test ROIs as for the previous classifications. The Overall Accuracy and Kappa coefficient are shown in Table 3-9, and the error matrix is in Table 3-10. We also generated a difference image between the classification and the ground truth image (Figure 3-13), shown in Figure 3-26. Colored pixels indicate a classification match between the ground truth and the classification, and black pixels indicate a mismatch. The right side of the image is all black since there is no ground truth imagery in that area.

Figure 3-25. Result of Dempster-Shafer fusion of AVIRIS and SHOALS classifications.

Statistical Analysis

The resulting Kappa coefficients for the classifications were 0.844 for the Dempster-Shafer, 0.762 for the AVIRIS, 0.603 for the SHOALS, and 0.821 for the AVIRIS-plus-depths. In order to measure the significance of the differences between the classifications, estimates of the Kappa coefficient variances were computed, and test statistics were calculated, as described in Appendix B. Each Kappa coefficient and associated variance is shown in Table 3-11. Each test statistic and associated confidence level for each comparison is given in Table 3-12.


Table 3-9. Accuracies for Dempster-Shafer classification image.
                     Dempster-Shafer
Overall Accuracy     87.2%
Kappa Coefficient    0.845

Table 3-10. Error matrix for Dempster-Shafer classification accuracies. Columns are reference classes; rows are classified classes. S=Sand, C=Colonized Pavement, U=Uncolonized Pavement, M9=Macroalgae (50-90%), M1=Macroalgae (10-50%).

Classified       S     C     U    M9    M1   Total
Unclassified     0     1    34    43    39     117
S              201     0     0     0     0     201
C                0   221     4     0     0     225
U                0     0   208     1     5     214
M9               0     0     0   165     0     165
M1               0     0    10     4   169     183
Total          201   222   256   213   213    1105

Figure 3-26. Difference image between D-S classification and ground truth image.


Table 3-11. Kappa coefficients and variances for each classification.
                             Dempster-Shafer   AVIRIS + Depths   AVIRIS      SHOALS
Kappa Coefficient            0.844             0.821             0.762       0.603
Kappa Coefficient Variance   0.0001416         0.0001593         0.0001895   0.0002614

Table 3-12. Test statistics and confidence levels for each classification comparison.
Comparison                   Test Statistic (Z)   Confidence Level
AVIRIS vs. SHOALS            7.48                 >99%
AVIRIS vs. AVIRIS + Depths   3.16                 >99%
AVIRIS vs. D-S               4.52                 >99%
SHOALS vs. AVIRIS + Depths   10.62                >99%
SHOALS vs. D-S               12.01                >99%
D-S vs. AVIRIS + Depths      1.33                 82%

Based on the Kappa analysis, we can say with 82% confidence that the Dempster-Shafer Kappa coefficient is different from the AVIRIS-plus-depths Kappa coefficient. We can say with more than 99% confidence that all the other classification pairs have Kappa coefficients that are different.
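As a worked check, substituting the Table 3-11 values for the D-S and AVIRIS-plus-depths pair into the test statistic of Equation B-3 reproduces the tabulated value:

\[
Z = \frac{|0.844 - 0.821|}{\sqrt{0.0001416 + 0.0001593}} = \frac{0.023}{0.01735} \approx 1.33
\]

matching the 1.33 entry in Table 3-12.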


Summary

The goal of this research is to investigate a new method of benthic mapping that makes use of the airborne laser bathymetry waveform returns to aid in benthic classification. The method uses a data fusion approach, combining passive hyperspectral data with ALB waveform return and depth data to attempt to improve the benthic mapping accuracy over that obtained by either system separately. In this chapter, we performed an experiment to test this method.

We began by briefly describing the datasets used in the experiment, including the times and locations of the AVIRIS, SHOALS, and hand-held spectrometer dataset acquisitions. Next, we discussed the preprocessing steps applied to the AVIRIS and SHOALS datasets. These steps included image georegistration, conversion to reflectance, normalization, surface wave removal, and rasterization of point data. We then described the process of water attenuation removal for both datasets, from which we produced an AVIRIS bottom reflectance dataset and a SHOALS pseudoreflectance dataset.

In the next step we performed a Maximum Likelihood supervised classification of three datasets: the AVIRIS bottom reflectance, the SHOALS pseudoreflectance, and the AVIRIS-plus-depths. We then assessed the accuracy of these classifications, producing overall accuracy metrics as well as error matrices. Next, we applied the Dempster-Shafer decision-level data fusion approach to combine the AVIRIS and SHOALS classification images, generating a new classification image containing information from both sensors. We then performed an accuracy assessment of the new data fusion classification image, using the same procedure as used with the previous classification images.

The resulting overall accuracy of the Dempster-Shafer data fusion image was 87.2%. Overall accuracies for the AVIRIS and SHOALS classification images were 80.2% and 66.9%, respectively. The overall accuracy of the AVIRIS-plus-depths classification was 85.3%. Statistical analysis of the Kappa coefficients and associated variances for each classification indicates that the Dempster-Shafer and AVIRIS-plus-depths Kappa coefficients differ at 82% confidence. The Kappa coefficients of the other classification pairs differ at greater than 99% confidence.




CHAPTER 4
DISCUSSION AND RECOMMENDATION FOR FURTHER WORK

Airborne Laser Bathymetry (ALB) has been shown to be an efficient, accurate, and safe method of obtaining depths of coastal waters. Its use of scanning, pulsed laser technology produces depths with accuracies useful for many hydrographic applications. Results obtained from this technology have been applied toward mapping navigation channels, monitoring sediment transport, and assessing storm damage.

The return waveform for each laser pulse is used in ALB systems to obtain depth measurements. In our research, we investigated a method to exploit the measured power of the bottom return portion of the waveform in order to discriminate among benthic types, providing information to aid in generating a benthic map. Specifically, we introduced a new parameter, pseudoreflectance, which we computed from a partial inversion of the bathymetric laser radar equation and by normalizing by the output laser energy. To the best of our knowledge, our pseudoreflectance image is the first such image generated using ALB waveforms.

Our research combined information from ALB waveforms and hyperspectral data to generate a benthic map of coastal waters. We merged the information using data fusion techniques, with results more accurate than those obtained from either dataset independently. Two different levels of data fusion were applied in this research.

SHOALS ALB depths were used in a radiative transfer model to correct the AVIRIS hyperspectral data for water attenuation. This fusion was applied at the pixel level (i.e., data level), since the data were still in a format similar to the original data, having only been through a preprocessing stage. A similar pixel-level fusion was performed to correct the SHOALS bottom return data for water attenuation as well. However, this correction is not data fusion in the strict sense, because the two datasets that were combined (bottom return and depth) were both obtained from the same sensor (SHOALS).
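As an illustration of this pixel-level step, the sketch below inverts a common two-flow water-column simplification (in the spirit of Lyzenga 1978) using a co-registered depth raster. The function and the constants k and r_inf are illustrative assumptions, not the exact modified radiative transfer model of Chapter 2.

import numpy as np

# Illustrative pixel-level fusion: use co-registered SHOALS depths to undo
# water-column attenuation in one reflectance band. This inverts the common
# two-flow simplification r_obs = r_inf + (r_b - r_inf) * exp(-2 k z); it is
# a sketch only, and k and r_inf are assumed values.
def bottom_reflectance(r_obs, depth, k=0.07, r_inf=0.02):
    """r_obs: water-leaving reflectance band (2-D array, unitless);
    depth: SHOALS depth raster (m); k: diffuse attenuation (1/m)."""
    return r_inf + (r_obs - r_inf) * np.exp(2.0 * k * depth)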


The second type of data fusion used was a decision-level technique, combining benthic classifications derived from the water-corrected AVIRIS and SHOALS datasets, although the end result was a classification in the AVIRIS pixel raster. This decision-level fusion was realized using the Dempster-Shafer evidence combination approach.

The Dempster-Shafer (D-S) decision-level fusion image of our research area produced an overall mapping accuracy of 87.2%. The AVIRIS and SHOALS classification images produced overall mapping accuracies of 80.2% and 66.9%, respectively. As expected, the D-S image had a higher mapping accuracy than either of the independent sensor images. For comparison, another dataset was classified using the AVIRIS water-corrected bands plus depths from the SHOALS dataset. The resulting classification had an overall accuracy of 85.3%.

We performed an analysis of the Kappa coefficients, calculated from each classification, to quantitatively compare the results. The calculated Kappa coefficients were 0.844, 0.762, 0.603, and 0.821 for the D-S, AVIRIS, SHOALS, and AVIRIS-plus-depths classifications, respectively. After performing an analysis of the Kappa values (using estimates of the Kappa variances), we can say with 82% confidence that the D-S Kappa value differs from the AVIRIS-plus-depths Kappa value. The calculated confidence in comparing the Kappa values of the other classification pairs (e.g., AVIRIS to SHOALS, AVIRIS to D-S) was greater than 99%.


The results from the statistical analysis indicate that the D-S classification is more accurate than the other classifications, although the statistical confidence that its Kappa coefficient is different from that of the AVIRIS-plus-depths classification is only 82%. It would be preferable to obtain a confidence value greater than 90%; however, the result still indicates that the SHOALS-derived pseudoreflectance values contain significant benthic information.

The accuracy difference between the AVIRIS classification and the AVIRIS-plus-depths classification shows the significant contribution made by depth toward benthic classification. It is likely that much of the improvement in the D-S classification over the AVIRIS classification is also due to the depth component. The D-S classification included pseudoreflectance information as well; however, the significance of the pseudoreflectance contribution is not as apparent.

Using only the depth data, we would not have been able to create a reasonable benthic classification of the SHOALS dataset. Without a SHOALS-derived classification, we could not have taken advantage of the Dempster-Shafer decision-level fusion, which, based on previous data fusion research (Park 2002), produces better results than other decision-level fusion methods. In order to implement this decision-level method, we had to use two separately classified images. The pseudoreflectance component added the extra dimension of information necessary to create a separate benthic classification, and allowed us to take advantage of the Dempster-Shafer method.

The results indicate that the SHOALS bottom return data contain information beneficial for benthic mapping, and may have implications for further research. For instance, ALB bottom return data, corrected for water attenuation, could be analyzed to determine which benthic types, if any, are prone to greater depth errors. Then, if the SHOALS waveform data could be corrected for water attenuation in real time, in-flight operators could observe the results and make decisions about repeated passes over problem areas, or other areas of interest.


One of the biggest problems experienced during this research was combining SHOALS bottom return datasets that were acquired from different flights. We used an empirical approach to this problem, observing the overlapping data and fitting a line to the relationship. However, the resulting pseudoreflectance images of our research area show obvious seams between some datasets, indicating the inadequacy of this approach. This is likely a result of neglecting some of the parameters in the bathymetric laser radar equation (Equation 2-23). Further research should be performed to investigate the physics behind the dataset differences, in order to develop a more rigorous method of dataset combination. An obvious solution would be to fly the entire area in one flight; however, this would create added stress for the pilot, operator, and perhaps the system as well, and could lead to an unsafe flying environment. Collecting data along some cross-flights (in a direction across all the datasets) would also help with merging the datasets, providing a common dataset against which to compare. Another approach, albeit tedious, would be to correct each returned power value for flying height and variations in incidence angles.
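For reference, a minimal sketch of the empirical overlap adjustment follows; the sample arrays are hypothetical, and in practice the fit would use many co-located pseudoreflectance values from the overlap region.

import numpy as np

# Fit a line between pseudoreflectance values observed at the same locations
# in two overlapping flights, then map flight B onto flight A's scale.
overlap_a = np.array([0.12, 0.18, 0.25, 0.31, 0.40])  # flight A, overlap pixels
overlap_b = np.array([0.10, 0.17, 0.22, 0.29, 0.37])  # flight B, same pixels

gain, offset = np.polyfit(overlap_b, overlap_a, deg=1)  # least-squares line
flight_b_adjusted = gain * overlap_b + offset           # apply to all of B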


The ground truth image, used for classification and accuracy assessment, was generated from interpretation of aerial photography (Analytical Laboratories of Hawaii 2001), and covered the western 2/3 of our research area. However, the sand in the northeast quadrant is easily observed in all the imagery. Therefore, we had ground truth for about 70% of the research area. Since our research was completed, some new ground truth imagery has become available, interpreted from aerial photography, hyperspectral imagery, and space-borne multispectral imagery (Coyne et al. 2003), and covering over 90% of our research area. Future related research in Kaneohe Bay would benefit from using this new data along with the ground truth image we used.

We removed the effects of waves at the water surface by differencing the Fast Fourier Transforms (FFT) of visible and infrared bands, and then applying an inverse transform to the result. This method removed much of the surface noise; however, recent research (Wozencraft et al. 2003) has shown another method for removing wave effects. Using an irradiance curve (obtained from a hand-held spectrometer measurement), a ratio can be calculated between the irradiance for a visible band and that for a chosen infrared band. This irradiance ratio can then be used to scale the upwelling radiance measurement for the infrared band to that of the visible band, and the scaled infrared radiance is then subtracted from the visible radiance measurement. Wozencraft et al. (2003) used this method on hyperspectral data to remove surface effects that varied across the image, caused by acquiring the data with flightlines oriented obliquely to the solar azimuth. This method could be applied to the hyperspectral data in our research as well, and compared with the FFT method.
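A minimal sketch of this irradiance-ratio correction is shown below; the function and argument names are our own, and the bands would in practice be chosen from the sensor's visible and infrared channels.

# Irradiance-ratio surface correction (after Wozencraft et al. 2003).
# E_vis, E_ir: downwelling irradiances from the hand-held spectrometer;
# L_vis, L_ir: co-registered upwelling radiance bands from the imagery.
def remove_surface_effects(L_vis, L_ir, E_vis, E_ir):
    ratio = E_vis / E_ir           # scales the IR radiance to the visible band
    return L_vis - ratio * L_ir    # subtract the scaled infrared radiance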


Our SHOALS classification image was generated using pseudoreflectance and depth as a two-band dataset. The depth component, in this instance, contributed added information for discerning among bottom types. However, depth is not necessarily an intrinsic property of the benthic object space, so in different areas it may not be appropriate as classification input. For future research, metrics other than depth (perhaps texture-based) should be considered (in conjunction with pseudoreflectance) for benthic classification.

We assumed in this research that the benthic environment did not change between the AVIRIS and SHOALS flights over the research area. This is not always a safe assumption, so it would be preferable to collect data from both systems with as little temporal separation as possible. The best arrangement would be to mount a hyperspectral and an ALB system on the same aircraft, with simultaneous data collection during flights. Also, ground reflectance measurements should be obtained with minimal temporal separation from the hyperspectral data collection, allowing for a more accurate generation of reflectance imagery.

Another simplifying assumption we made in this research was that the water attenuation coefficient, k, was horizontally and vertically constant throughout the research area. Given the clarity and consistency of the Hawaiian coastal waters, this is a fair assumption. However, additional research should be performed to allow for horizontal and vertical variation in k, which can be significant in other areas. Perhaps the return waveform itself could provide clues to varying values of k, especially in the volume backscatter portion of the waveform, where changes in slope may indicate vertical differences in k within the water column.

Another consideration for further research would be to account for internal reflectance at the air/water interface. Upwelling radiance in a water column, upon reaching the surface, can have a portion of its energy reflected back downward into the water. This effect, known as Fresnel reflectance, is dependent upon several factors, including the angle of the upwelling radiance vector relative to the water surface, and the polarization of the electromagnetic energy.
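For reference, the standard unpolarized Fresnel reflectance (textbook optics, quoted here only to make the angular dependence explicit) is

\[
R = \frac{1}{2}\left[\frac{\sin^{2}(\theta_i - \theta_t)}{\sin^{2}(\theta_i + \theta_t)} + \frac{\tan^{2}(\theta_i - \theta_t)}{\tan^{2}(\theta_i + \theta_t)}\right],
\qquad n_w \sin\theta_i = n_a \sin\theta_t ,
\]

where theta_i is the angle of incidence measured from the surface normal and theta_t the refracted angle. For upwelling light at a water-to-air interface (n_w approximately 1.34, n_a = 1.00), total internal reflection occurs beyond roughly 48 degrees.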


We did not address this phenomenon since its effects are minimal except in very shallow water with highly reflective bottom types.

It should be noted that the SHOALS ALB system was not designed with benthic classification mapping in mind, but strictly for depth measurement. Perhaps future designs of ALB systems will include enhancements beneficial for benthic mapping as a secondary product. These enhancements might include more consistent and accurate measurement of the outgoing laser energy, as well as easier access to the full waveform for each return pulse.


APPENDIX A
SPECIFICATIONS OF THE DATA ACQUISITION SYSTEMS

The data used in our research consist of hyperspectral imagery from the AVIRIS system, hyperspectral data from an ASD portable spectrometer, and Airborne Laser Bathymetry (ALB) data from the SHOALS system. All datasets were collected over Kaneohe Bay, Hawaii, with the AVIRIS data obtained in April 2000, and the ASD and SHOALS data in June 2000. This appendix provides a description of the three systems.

AVIRIS

The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) was developed by the Jet Propulsion Laboratory (JPL) in 1987. It was designed to provide earth remote sensing data for many areas of scientific research, including botany, geology, hydrology, and oceanography (Vane et al. 1984). AVIRIS is a whisk-broom sensor, operating at a 12 hertz scanning rate. It is usually mounted in a NASA ER-2 airplane, and is flown at an altitude of 20 kilometers at about 730 km/hr. With a 1 milliradian IFOV and a 30 degree total FOV, it produces 614 pixels per scan at a 20 meter pixel size and an 11 kilometer wide swath (Lundeen 2002). For each pixel, AVIRIS collects 224 channels of upwelling radiance data at 10 nanometers (nm) of spectral bandwidth per channel, for the range of 380 nm to 2500 nm. The recorded radiance units are µW/cm^2/sr. The scanning mirror focuses light onto four optical fibers, which carry the light to four spectrometers. An aspheric grating is used to disperse the light across the detectors in the spectrometers. One spectrometer is for visible light, has 32 silicon detectors, and detects radiance in the range of 410 nm to 700 nm.


The other three spectrometers are for infrared light, have 64 indium-antimonide detectors each, and detect radiance in the ranges of 680 nm to 1270 nm, 1250 nm to 1860 nm, and 1840 nm to 2450 nm (Goetz 1992).

In order to detect changes in plant health, or differentiate among various minerals, it is necessary to have an instrument that not only has high spectral resolution, but also is well calibrated (Vane et al. 1984). Therefore JPL designed the AVIRIS system so it could operate with a high level of spectral and radiometric calibration. Table A-1 shows the required and achieved calibration levels for the AVIRIS system. These levels are obtained using laboratory and in-flight calibration methods (Vane et al. 1984), and indicate the high level of spectral and radiometric accuracy achievable with AVIRIS. At the end of this appendix, Table A-3 lists the spectral calibration values generated from the calibration for the April 2000 flight. This list gives the center and full-width at half-maximum (FWHM) wavelength values based on the spectral response curves for AVIRIS channels 1-50.

Table A-1. AVIRIS calibration information (Vane et al. 1984).
Calibration Parameter     Required   Achieved
Spectral Calibration      5 nm       2.1 nm
Absolute Radiometry       10%        7.3%
Band-to-Band Radiometry   0.5%       0.4%

ASD FieldSpec Portable Spectrometer

Analytical Spectral Devices, Incorporated (ASD), located in Boulder, Colorado, manufactures the FieldSpec portable spectrometer. This device measures ground-level solar-reflected spectra in the visible and infrared regions of the electromagnetic spectrum.


Its sensitivity covers a spectral wavelength range of 350 nanometers to 2500 nanometers, recording 2150 channels at 1-nanometer bandwidths. The spectrometer is housed in a rectangular lightweight case, which is worn by the user with the help of shoulder straps. It has an attached fiber-optic cable with a pointing device at the end for aiming the receiver field of view. Measurements are recorded using software running on an attached notebook computer, which can display the measured spectra in graphic format immediately after each measurement. The software allows for exporting the recorded spectra into ASCII textual format, enabling the use of the data in other software packages.

SHOALS

The SHOALS system (Scanning Hydrographic Operational Airborne Lidar Survey) was developed by the US Army Corps of Engineers (USACE) in 1994. It was designed to gather near-shore bathymetric data for USACE coastal projects, and has since evolved to collect near-shore topographic data as well (Irish et al. 2000). SHOALS is an airborne pulsed-laser system, operating at a pulse rate of 168 to 900 Hz. Its operating platform is usually a Bell 212 helicopter or a Twin Otter DHC-6 airplane, and it is flown at an altitude of 200 to 400 meters at speeds of 175 to 300 km/hr. Swath widths vary from 110 meters to 220 meters, and nominal point separations are typically around 4 to 5 meters. It has a constant nadir angle of 20 degrees, so the point swaths sweep back and forth in front of the aircraft. The maximum depth of measurement varies with water clarity and bottom type, but typical values are up to three times the Secchi (visible) depth. In clear waters, SHOALS can be expected to determine depths as great as 60 meters (Guenther 2001, Irish et al. 2000).


The SHOALS system generates laser pulses at two wavelengths, using one to detect the water surface (1064 nm infrared), and one to detect the water bottom (532 nm green). This is accomplished using a solid-state Nd:YAG laser source producing 1064 nm radiation, which is then frequency doubled to produce the simultaneous 532 nm radiation (Guenther 2001). The need for the infrared channel for surface detection arises due to the inconsistency of the green surface return in varying environmental conditions. The green surface return can sometimes be weaker than the volume backscatter return, causing a time bias (and subsequent depth bias) in the identification of the water surface in the green waveform (Guenther 2001). The laser pulse returns are detected using both photomultiplier tubes (PMTs) and avalanche photodiodes (APDs). The PMT is more sensitive for detecting bottom returns in deep water (8-40 meters) and the APD in shallow water (1-14 meters) (Guenther 2002). Two surface returns are measured by detecting the infrared signal, as well as the green-excited Raman backscatter in the red (645 nm). A GPS receiver and an INS unit are included in the system, which provide the information necessary to georegister the data points recorded during a flight (Estep et al. 1994, Irish et al. 2000). Accuracies obtained from the system are shown in Table A-2.

Table A-2. SHOALS performance values (Irish et al. 2000).
Dimension                           Accuracy (1-sigma)
Vertical                            +/- 15 cm
Horizontal using Coast Guard DGPS   +/- 2 m
Horizontal using Kinematic GPS      +/- 1 m


Table A-3. AVIRIS spectral calibration values for channels 1-50.
Channel   Center (nm)   FWHM (nm)   Center Sigma (nm)   FWHM Sigma (nm)
 1        374.37        15.45       0.56                0.32
 2        384.46        11.53       0.33                0.23
 3        394.12        11.38       0.14                0.19
 4        403.77        11.23       0.10                0.17
 5        413.43        11.09       0.10                0.14
 6        423.09        10.96       0.09                0.12
 7        432.75        10.83       0.09                0.12
 8        442.42        10.71       0.09                0.12
 9        452.08        10.59       0.10                0.14
10        461.75        10.49       0.10                0.13
11        471.41        10.38       0.10                0.13
12        481.08        10.29       0.09                0.12
13        490.75        10.20       0.09                0.12
14        500.41        10.12       0.09                0.11
15        510.08        10.04       0.09                0.11
16        519.76        9.97        0.09                0.11
17        529.43        9.91        0.09                0.11
18        539.10        9.85        0.09                0.11
19        548.78        9.80        0.09                0.11
20        558.45        9.76        0.09                0.11
21        568.13        9.72        0.09                0.11
22        577.81        9.69        0.09                0.11
23        587.49        9.66        0.09                0.11
24        597.17        9.64        0.09                0.11
25        606.85        9.63        0.09                0.11
26        616.53        9.63        0.09                0.11
27        626.21        9.63        0.09                0.11
28        635.90        9.64        0.09                0.11
29        645.58        9.65        0.10                0.11
30        655.27        9.67        0.09                0.11
31        664.96        9.70        0.09                0.11
32        676.31        12.67       0.09                0.11
33        655.02        10.88       0.07                0.06
34        664.89        9.56        0.07                0.06
35        674.43        9.52        0.07                0.06
36        683.97        9.50        0.07                0.06
37        693.52        9.48        0.07                0.06
38        703.07        9.47        0.07                0.06
39        712.62        9.47        0.07                0.06
40        722.17        9.47        0.07                0.06
41        731.73        9.49        0.07                0.06
42        741.29        9.51        0.07                0.06
43        750.86        9.54        0.07                0.06


Table A-3. Continued.
Channel   Center (nm)   FWHM (nm)   Center Sigma (nm)   FWHM Sigma (nm)
44        760.42        9.58        0.07                0.06
45        770.00        9.62        0.07                0.06
46        779.57        9.68        0.07                0.06
47        789.15        9.74        0.08                0.06
48        798.72        9.81        0.07                0.06
49        808.31        9.89        0.08                0.06
50        817.89        9.97        0.07                0.06


APPENDIX B
ASSESSING THE ACCURACY OF REMOTELY SENSED DATA

Classification of the object space is the goal of mapping. However, the classification must be accurate for the resulting map products to be useful. This requires some type of assessment to be performed to determine the accuracy of the classification. In this appendix, we explain accepted methods of assessing the accuracy of object space classifications generated from remotely sensed data.

In order to perform a map accuracy assessment, a comparison must be made between reference information and that obtained from remotely sensed data. However, several issues concerning the reference information and the comparison must be addressed in the assessment process. These issues include training and test pixels, sample size, sample acquisition, the level of assessment detail, and evaluation. The following sections address these issues, following the treatments provided by Jensen (1996) and Congalton and Green (1999).

Training and Test Pixels

In the process of performing a supervised classification of remotely sensed data, training pixels are selected from the remotely sensed image using information provided by the reference image. These pixels are samples selected from each class to be identified in the remotely sensed image. The classification process then uses these training pixels as an identification tool for each class, and applies that information to the entire image, assigning an estimated class to each pixel.


The use of training pixels as reference information for supervised classifications is widely accepted. However, some researchers will then use the training pixels again as test pixels for an accuracy assessment. This is bad practice because the selection of the training pixels is biased. During the selection of the training pixels, the user has a priori knowledge of the locations of different classes, so the selection is not random. The resulting bias will usually cause the classification accuracy to be higher for the training pixels than for the other pixels in the image. Therefore, it is preferable to select test pixels, separate from the training pixels, to assess the accuracy of a classification.

Sample Size

The appropriate number of sample test pixels needed to assess the accuracy of a classification is difficult to determine. One method is to use an equation based on the normal approximation of the binomial distribution to compute the sample size. This technique works well for determining the total number of pixels to sample, and for computing an overall accuracy for a classification. However, it is not useful for determining the number of pixels to select for each class. Congalton and Green (1999) suggest selecting 50 pixels for each class. This number should be increased to 75 or 100 for areas greater than one million acres or for classifying more than 12 classes. More samples can be added for classes of greater importance or interest, or those that show increased variability in their data.

Sample Acquisition

The best methods of accuracy assessment assume that the test pixels used are randomly sampled. However, random samples across an entire classification image may result in fewer samples selected for smaller classes. In order to ensure a minimum number of samples is selected for each class (e.g., the 50 mentioned above), most researchers recommend using a stratified random sampling method, as sketched below. This technique performs a separate random sampling for each class. Stratified random sampling can be implemented using a random number generator for the (row, column) locations in the reference image, selecting only coordinates associated with the current class. This would then be repeated for each class to be assessed.
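A minimal sketch of this procedure follows; the reference raster, class codes, and seed are illustrative assumptions, and each class is assumed to contain at least the requested number of pixels.

import numpy as np

# Stratified random sampling of test pixels from a reference (ground truth)
# raster: a separate random draw of (row, column) locations per class.
rng = np.random.default_rng(seed=42)

def stratified_sample(reference, class_codes, n_per_class=50):
    samples = {}
    for code in class_codes:
        rows, cols = np.nonzero(reference == code)      # pixels of this class
        pick = rng.choice(rows.size, size=n_per_class, replace=False)
        samples[code] = list(zip(rows[pick], cols[pick]))
    return samples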


Evaluation

After collecting randomized test data, a relationship must be established between the class assigned to each test pixel and the actual class of each pixel as determined by the reference information. This relationship is best represented using an error matrix, as shown in Table B-1. An error matrix is a square array, laid out such that the columns represent the reference data (one column for each class), and the rows represent the remotely sensed data (one row for each class). The numbers in each box represent the number of test pixels assigned to a particular class (associated row), and the corresponding true class for those pixels (associated column). A diagonal error matrix would indicate no misclassified test pixels.

The layout of the error matrix simplifies the process of determining an overall accuracy, as well as individual accuracies for each class. The Overall Accuracy is calculated by adding all the correct pixel classifications (along the diagonal), then dividing by the total number of test pixels (bottom right corner). For the example below, the Overall Accuracy is 72.3%, indicating the percentage of pixels classified correctly.

Individual accuracies can also be assessed from the error matrix. However, these accuracies can be evaluated in two different ways. Those interested in the production of the classification would want to know, for each class, how many pixels were correctly classified relative to the total number of reference test pixels sampled for that class.


This is known as the Producer Accuracy, shown for our example in Table B-2. Those who would use the classification image, however, would want to know, for each class, how many pixels were correctly classified relative to the total number of pixels assigned to that class. This is known as the User Accuracy, and is also shown in Table B-2.

Table B-1. Example error matrix for four classes. Columns are reference classes; rows are observed (classified) classes.

Observed     Vegetation   Water   Concrete   Sand   Total
Vegetation       62          6       20        25     113
Water             5         83        7         7     102
Concrete          2         13       83        20     118
Sand              6          5        5        88     104
Total            75        107      115       140     437

Overall Accuracy = (62 + 83 + 83 + 88) / 437 = 316 / 437 = 72.3%

Table B-2. Producer and user accuracies for example error matrix.

             Producer Accuracy    User Accuracy
Vegetation   62 / 75 = 82.7%      62 / 113 = 54.9%
Water        83 / 107 = 77.6%     83 / 102 = 81.4%
Concrete     83 / 115 = 72.2%     83 / 118 = 70.3%
Sand         88 / 140 = 62.9%     88 / 104 = 84.6%


95 column. The resulting value is 82.7%. However, the User accuracy would be calculated by dividing 62 by the total number of test pixels classified as vegetation, 113, located at the end of the vegetation row. Here, the result is only 54.9%. In this case, a high Producer Accuracy does not necessarily result in a high User Accuracy. Noting the sand accuracies in Table B-2, high User Accuracy does not always yield a high Producer Accuracy. Another statistic used to evaluate the accuracy of a classification is the Kappa coefficient. The Kappa coefficient is a multivariate statistic, taking into account the off-diagonal elements of the error matrix. Its value provides an overall accuracy of a classification, and is also used to determine if one error matrix is significantly different from another. Equation B-1 calculates the Kappa coefficient. piiipiiipiiinnnnnnnK1211 (B-1) For Equation B-1, n i+ is the sum of the error matrix elements in row i, n +i is the sum of the elements in column i, p is the number of classes, n is the total number of test pixels, and n ii is the value located at row i and column i. In order to test for significant differences between error matrices, the variance of the Kappa values must be computed for each error matrix. Equation B-2 approximates the large sample variance of Kappa (Congalton and Green 1999).


In order to test for significant differences between error matrices, the variance of the Kappa values must be computed for each error matrix. Equation B-2 approximates the large-sample variance of Kappa (Congalton and Green 1999):

\[
\widehat{\mathrm{var}}(K) = \frac{1}{n}\left[\frac{\theta_{1}(1-\theta_{1})}{(1-\theta_{2})^{2}} + \frac{2(1-\theta_{1})(2\theta_{1}\theta_{2}-\theta_{3})}{(1-\theta_{2})^{3}} + \frac{(1-\theta_{1})^{2}(\theta_{4}-4\theta_{2}^{2})}{(1-\theta_{2})^{4}}\right]
\]   (B-2)

where

\[
\theta_{1} = \frac{1}{n}\sum_{i=1}^{p} n_{ii}, \qquad
\theta_{2} = \frac{1}{n^{2}}\sum_{i=1}^{p} n_{i+} n_{+i}, \qquad
\theta_{3} = \frac{1}{n^{2}}\sum_{i=1}^{p} n_{ii}(n_{i+} + n_{+i}), \qquad
\theta_{4} = \frac{1}{n^{3}}\sum_{i=1}^{p}\sum_{j=1}^{p} n_{ij}(n_{j+} + n_{+i})^{2}.
\]

Using Equation B-2 for two independent error matrices (matrix 1 and matrix 2), a test statistic can be calculated to determine if the matrices are significantly different. Equation B-3 calculates the test statistic:

\[
Z = \frac{|K_{1} - K_{2}|}{\sqrt{\mathrm{var}(K_{1}) + \mathrm{var}(K_{2})}}
\]   (B-3)

Given the null hypothesis H_0: (K_1 - K_2) = 0 and the alternative hypothesis H_1: (K_1 - K_2) ≠ 0, H_0 is rejected if Z ≥ Z_{α/2}. Here, α is the significance level of the two-tailed Z test, and the degrees of freedom are assumed infinite.
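A short sketch of Equations B-2 and B-3 follows; the function names are our own, and the final line reproduces the D-S versus AVIRIS-plus-depths entry of Table 3-12 from the published Kappa values and variances.

import numpy as np

# Large-sample variance of Kappa (Equation B-2) for an error matrix m
# (rows = classified, columns = reference), and the pairwise Z statistic
# of Equation B-3.
def kappa_variance(m):
    m = np.asarray(m, dtype=float)
    n = m.sum()
    rows, cols = m.sum(axis=1), m.sum(axis=0)
    t1 = np.trace(m) / n
    t2 = (rows * cols).sum() / n**2
    t3 = (np.diag(m) * (rows + cols)).sum() / n**2
    t4 = (m * np.add.outer(cols, rows)**2).sum() / n**3  # n_ij*(n_j+ + n_+i)^2
    term1 = t1 * (1 - t1) / (1 - t2)**2
    term2 = 2 * (1 - t1) * (2 * t1 * t2 - t3) / (1 - t2)**3
    term3 = (1 - t1)**2 * (t4 - 4 * t2**2) / (1 - t2)**4
    return (term1 + term2 + term3) / n

def z_statistic(k1, var1, k2, var2):
    return abs(k1 - k2) / np.sqrt(var1 + var2)

print(z_statistic(0.844, 0.0001416, 0.821, 0.0001593))   # ~1.33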


APPENDIX C
DEMPSTER-SHAFER EVIDENTIAL REASONING

This appendix provides a description of the Dempster-Shafer method of data fusion. This technique is one of several methods used to perform decision-level fusion, combining data from multiple sensors that have been independently classified for object identification. The resulting fused dataset should represent the object space better than any of the separate input classifications.

Background

In 1976, Glenn Shafer wrote a book entitled A Mathematical Theory of Evidence, which reiterated some of the work done by Arthur Dempster on statistical inference. In the book, Shafer outlined what is known as the Dempster-Shafer evidential reasoning method of data fusion. This technique attempts to mimic the human method of assigning evidence, based on measures of belief. Bayesian inference attempts to assign probabilities to hypotheses, which are defined as fundamental statements about something in nature. These hypotheses are mutually exclusive and exhaustive. The Dempster-Shafer method assumes that humans assign measures of belief to propositions, which are hypotheses or combinations of hypotheses. Propositions can contain overlapping and even conflicting hypotheses, and are not mutually exclusive or exhaustive, creating a general level of uncertainty (Hall 1992, Hall and Llinas 2001).

The following description is adapted from Hall (1992) and Hall and Llinas (2001). The Dempster-Shafer, or D-S, method defines a set of basic propositions (i.e., hypotheses), mutually exclusive and exhaustive, called the frame of discernment.


If there are n basic propositions in the frame of discernment, then there exist 2^n - 1 general propositions created by the possible combinations of the basic propositions. For example, if B represents the frame of discernment, and 2^B the set of general propositions derived from B, the two sets could be represented as in Equations C-1 and C-2. The symbol v denotes a Boolean OR.

B = {b1, b2, ..., bn}   (C-1)

2^B = {b1, b2, ..., b1 v b2, b1 v b3, ..., b1 v b2 v ... v bn}   (C-2)

One particular general proposition, B`, is the Boolean disjunction of all the basic propositions:

B` = b1 v b2 v ... v bn   (C-3)
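A quick sketch (with hypothetical proposition labels) confirms the 2^n - 1 count by enumerating every non-empty disjunction of basic propositions:

from itertools import combinations

# Enumerate the general propositions of a small frame of discernment as all
# non-empty combinations (disjunctions) of basic propositions.
basic = ["b1", "b2", "b3"]                        # n = 3 basic propositions
general = [c for r in range(1, len(basic) + 1)
           for c in combinations(basic, r)]
print(len(general))                               # 7 = 2**3 - 1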


The D-S method assigns evidence to propositions using the concept of probability mass m(p), where p is a proposition. A probability mass may be assigned for basic or general propositions, such as m(b1) or m(b1 v b2). The values are such that 0 <= m(p) <= 1. Also, the sum of all probability masses for all propositions is 1. The likelihood of the occurrence of a proposition can then be obtained by summing the probability masses pertinent to that proposition. For instance, to obtain the likelihood of the proposition (b1 v b2), we would sum m(b1), m(b2), and m(b1 v b2). Note that this method assigns evidence both to mutually exclusive propositions (i.e., b1) and to overlapping, nonexclusive propositions. The general proposition B` is also assigned a probability mass, implying a general level of uncertainty. For a given sensor, if m(B`) = 1, then the sensor cannot distinguish between any basic propositions.

The D-S method defines the terms support and plausibility for each proposition. Support, Spt(p), is simply the likelihood of the occurrence of a proposition, as described above. The plausibility of a proposition, Pls(p), represents the lack of evidence supporting its negation (~p). Therefore, Pls(p) = 1 - Spt(~p). The relationship between these two terms is defined by the evidential interval, [Spt(p), Pls(p)]. This interval represents the minimum evidence, Spt(p), and maximum evidence, Pls(p), for the occurrence of proposition p. Output from the D-S process consists of a set of evidential intervals, while the inputs are the probability masses assigned by the sensors. The generation of the evidential intervals is dependent upon the combination of the probability masses. The D-S method has certain combination rules for merging probability masses from multiple sensors. These rules are explained in the next section.

Rules of Combination

Dempster defined a set of rules for combining probability masses from multiple independent sensors. Below, we describe an example case for two sensors, adapted from Hall (1992). This case can be repeated for additional sensors using results from the previous combination. We assume two sensors, S1 and S2, assign evidence for three propositions, P1, P2, and P3, defined as follows:

P1 = object is sand
P2 = object is macroalgae
P3 = object is sand or macroalgae

Sensor S1 assigns probability masses to the three propositions, named m1(P1), m1(P2), and m1(P3). Probability masses are also assigned by sensor S2, producing m2(P1), m2(P2), and m2(P3). The combination of these masses is well represented using a matrix notation, shown in Table C-1, with the probability masses from sensor S1 listed along the left side and those for sensor S2 along the top.


Table C-1. Dempster's probability mass combination rules.

                        S2: m2(P1) sand        S2: m2(P2) macroalgae   S2: m2(P3) sand or macroalgae
S1: m1(P1) sand         m(P1) = m1(P1)m2(P1)   K12 = m1(P1)m2(P2)      m(P1) = m1(P1)m2(P3)
                        sand                   conflicting             sand
S1: m1(P2) macroalgae   K21 = m1(P2)m2(P1)     m(P2) = m1(P2)m2(P2)    m(P2) = m1(P2)m2(P3)
                        conflicting            macroalgae              macroalgae
S1: m1(P3) sand or      m(P1) = m1(P3)m2(P1)   m(P2) = m1(P3)m2(P2)    m(P3) = m1(P3)m2(P3)
    macroalgae          sand                   macroalgae              sand or macroalgae

The matrix in Table C-1 presents three different types of combinations: matching combinations, overlapping combinations, and conflicting combinations. A matching combination consists of assigned probability masses from each sensor for the same proposition. For example, in the upper-left box of the matrix, the combined mass is the product of separate masses from each sensor, but for the same proposition P1. An overlapping combination exists when combining probability masses for propositions P1 and P3, or P2 and P3. Proposition P3 is the disjunction of P1 and P2, so it can be combined with either proposition in support of its evidence. An example of this combination is in the lower-left box in the matrix, combining m1(P3) from sensor S1 with m2(P1) from sensor S2, providing a combined probability mass for proposition P1. Again, the combined mass is simply the product of the individual masses.

A conflicting combination is produced from two assigned probability masses for conflicting propositions. In the above example, propositions P1 and P2 are conflicting, and any instance of sensor S1 assigning evidence to P1 and sensor S2 assigning evidence to P2 (or vice versa) would result in a conflicting combination. These instances are represented by K12 and K21 in the matrix.


Once again, the resulting combined value is simply the product of the individual probability masses. However, these values are used to compute a normalizing factor c, which is the sum of all the combined masses for conflicting combinations. Equation C-4 calculates c for our example:

c = K12 + K21   (C-4)

Equations C-5 and C-6 are used for combining evidence from two independent sensors using Dempster's rules of combination:

\[
m(P_d) = \frac{\sum_{A_i \wedge B_j = P_d} m_1(A_i)\, m_2(B_j)}{1 - c}
\]   (C-5)

\[
c = \sum_{A_i \wedge B_j = \emptyset} m_1(A_i)\, m_2(B_j)
\]   (C-6)

For Equations C-5 and C-6, P_d denotes a general proposition defined as a Boolean combination of basic propositions A_i and B_j, and ∅ denotes the empty set. As an example, we can apply Equations C-5 and C-6 to calculate the combined probability mass for sand. Using the matrix in Table C-1, we would sum the boxes labeled sand, as well as the box labeled sand or macroalgae in the lower-right corner. These boxes represent evidence from a matching combination (upper-left box), from overlapping combinations (lower-left and upper-right boxes), and from a matching combination for the general proposition of sand or macroalgae (lower-right box). This sum would then be normalized by the factor 1/(1 - c), where c is the sum of the conflicting evidence from Equation C-4.
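To make the bookkeeping concrete, a small worked sketch with hypothetical mass values follows; the combined masses re-normalize to sum to one.

# Worked sketch of Dempster's combination rules for the two-sensor
# sand/macroalgae example above. The mass values are hypothetical.
m1 = {"P1": 0.6, "P2": 0.1, "P3": 0.3}   # sensor S1: sand, macroalgae, either
m2 = {"P1": 0.5, "P2": 0.2, "P3": 0.3}   # sensor S2

# Conflict (Equation C-4): c = K12 + K21
c = m1["P1"] * m2["P2"] + m1["P2"] * m2["P1"]            # 0.17

# Combined masses (Equation C-5), normalized by 1 - c:
combined = {
    "P1": (m1["P1"] * m2["P1"] + m1["P1"] * m2["P3"]
           + m1["P3"] * m2["P1"]) / (1 - c),             # sand boxes
    "P2": (m1["P2"] * m2["P2"] + m1["P2"] * m2["P3"]
           + m1["P3"] * m2["P2"]) / (1 - c),             # macroalgae boxes
    "P3": m1["P3"] * m2["P3"] / (1 - c),                 # sand-or-macroalgae
}
print(combined, sum(combined.values()))                  # masses sum to 1.0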


APPENDIX D
VARIABLE DEFINITIONS

This appendix contains a list of variables used in this dissertation, including a brief definition and appropriate units for each.

L = spectral radiance (W m^-2 sr^-1)
E = spectral irradiance (W m^-2)
t = atmospheric transmittance (unitless)
ρ = spectral reflectance (unitless)
θ = sun angle (radians)
φ = sensor angle (radians)
P_r = returned power measured at ALB receiver (photoelectrons ns^-1)
α = normalized bottom amplitude (photoelectrons ns^-1 mJ^-1)
ρ~ = pseudoreflectance (photoelectrons ns^-1 mJ^-1)


LIST OF REFERENCES

Abidi, M.A., Gonzalez, R.C., 1992, Data Fusion in Robotics and Machine Intelligence, Academic Press, San Diego, California.

Analytical Laboratories of Hawaii, 2001, Benthic habitats of the Hawaiian islands: a comparison of accuracy of digital maps prepared from color aerial photography and hyperspectral imagery, Final Report for the National Ocean Service and National Geodetic Survey, Center for Coastal Monitoring and Assessment, April.

Bierwirth, P.N., Lee, T.J., Burne, R.V., 1993, Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery, Photogrammetric Engineering and Remote Sensing, vol. 59, no. 3, pp. 331-338, March.

Borstad, G., Vosburg, J., 1993, Combined active and passive optical bathymetric mapping: using the Larsen LIDAR and the CASI imaging spectrometer, Proceedings from the Canadian Symposium on Remote Sensing, Sherbrooke, Quebec, June.

Brown, W.L., Polcyn, F.C., Stewart, S.R., 1971, A method for calculating water depth, attenuation coefficients, and bottom reflectance characteristics, Proceedings of Seventh International Symposium on Remote Sensing of the Environment, ERIM, Ann Arbor, Michigan, pp. 663-680.

Bukata, R.P., Jerome, J.H., Kondratyev, K.Y., Posdnyakov, D.V., 1995, Optical Properties and Remote Sensing of Inland and Coastal Waters, CRC Press, Boca Raton, Florida.

Carter, W., Shrestha, R., Tuell, G., Bloomquist, D., Sartori, M., 2001, Airborne laser swath mapping shines new light on Earth's topography, EOS, Transactions, American Geophysical Union, vol. 82, no. 46, pp. 549-555, November 13.

Congalton, R.G., Green, K., 1999, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, Lewis Publishers, Boca Raton, Florida.

Coyne, M.S., Battista, T.A., Anderson, M., Waddell, J., Smith, W., Jokiel, P., Kendall, M.S., Monaco, M.E., 2003, Benthic Habitats of the Main Hawaiian Islands, NOAA Technical Memorandum NOS NCCOS CCMA 152, available from URL: http://biogeo.nos.noaa.gov/projects/mapping/pacific/, site last visited April 2003.


Environmental Systems Research Institute (ESRI) Incorporated, 2000, ArcView GIS (version 3.2a), computer program, available from ESRI, 380 New York Street, Redlands, California 92373-8100.

Estep, L.L., Lillycrop, W.J., Parson, L.E., 1994, Sensor fusion for hydrographic applications, Proceedings, U.S. Army Corps of Engineers 1994 Training Symposium, Surveying and Mapping, Remote Sensing/GIS, New Orleans, Louisiana, pp. SM:2B 1-7.

Fanelli, A., Leo, A., Ferri, M., 2001, Remote sensing images data fusion: a wavelet transform approach for urban analysis, Proceedings from IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Rome, Italy, November 8-9, pp. 112-116.

Flachs, G.M., Jordan, J.B., Beer, C.L., Scott, D.R., Carlson, J.J., 1990, Feature space mapping for sensor fusion, Journal of Robotic Systems, vol. 7, no. 3, pp. 373-393.

Goetz, A.F.H., 1992, Principles of narrow band spectrometry in the visible and IR: instruments and data analysis, Imaging Spectroscopy: Fundamentals and Prospective Applications, ECSC, Brussels, Belgium, F. Toselli and J. Bodechtel, editors.

Guenther, G.C., 1985, Airborne Laser Hydrography, NOAA Professional Paper Series, National Ocean Service 1, U.S. Department of Commerce, NOAA, Silver Spring, Maryland.

Guenther, G.C., 2001, Airborne lidar bathymetry: a primer for the 2nd Airborne Hydrography Workshop, August 15-17, 2001, excerpt from Digital Elevation Models, published by American Society of Photogrammetry and Remote Sensing, September.

Guenther, G.C., Goodman, L.R., 1978, Laser applications for near-shore nautical charting, Proceedings from SPIE Ocean Optics V, vol. 160, pp. 174-183.

Guenther, G.C., LaRocque, P.E., Lillycrop, W.J., 1994, Multiple surface channels in SHOALS airborne lidar, Proceedings from SPIE Ocean Optics XII, Bergen, Norway, June, vol. 2258, pp. 422-430.

Guenther, G.C., Mesick, H.C., 1988, Analysis of airborne laser hydrography waveforms, Proceedings from SPIE Ocean Optics IX, Orlando, Florida, April, vol. 925, pp. 232-241.

Hall, D.L., 1992, Mathematical Techniques in Multisensor Data Fusion, Artech House, Norwood, Massachusetts.

Hall, D.L., Llinas, J., 2001, Handbook of Multisensor Data Fusion, CRC Press, Boca Raton, Florida.


Hill, D.L.G., Hawkes, D.J., Gleeson, M.J., Cox, T.C.S., Strong, A.J., Wong, W-L., Ruff, C.F., Kitchen, N., Thomas, D.G.T., Crossman, J.E., Studholme, C., Gandhe, A.J., Green, S.E.M., Robinson, G.P., 1994, Accurate frameless registration of MR and CT images of the head: applications in surgery and radiotherapy planning, Radiology, vol. 191, pp. 447-454.

Irish, J.L., McClung, J.K., Lillycrop, W.J., 2000, Airborne lidar bathymetry: the SHOALS system, PIANC Bulletin, no. 103, pp. 43-53.

Jensen, J.R., 1996, Introductory Digital Image Processing: a Remote Sensing Perspective, Second Edition, Prentice Hall, Upper Saddle River, New Jersey.

Jerlov, N.G., 1976, Marine Optics, Second Edition of Optical Oceanography, Elsevier Scientific Publishing Company, Amsterdam, The Netherlands.

Kappus, M., Davis, C., Rhea, W., 1998, Bathymetry from fusion of airborne hyperspectral and laser data, SPIE Conference on Imaging Spectrometry IV, San Diego, California, vol. 3438, pp. 40-51, July.

Lee, M., Tuell, G., 2003, A technique for generating bottom reflectance images from SHOALS data, presented at U.S. Hydro 2003 hydrographic conference, Biloxi, Mississippi, March 24-27.

Lee, Z., Carder, K.L., Mobley, C.D., Steward, R.G., Patch, J.S., 1998, Hyperspectral remote sensing for shallow waters. A semianalytical model, Applied Optics, vol. 37, no. 27, September, pp. 6329-6338.

Lei, Y., Jinwen, T., Jian, L., 2001, Application of land-cover change detection based on remote sensing image analysis, Proceedings of the SPIE Conference on Multispectral and Hyperspectral Image Acquisition and Processing, Wuhan, China, October 22-24, pp. 184-188.

Leica Geosystems, 2002, ERDAS Imagine (version 8.5), computer program, available from Leica Geosystems GIS and Mapping, 2801 Buford Highway, N.E., Atlanta, Georgia 30329-2137.

Lippman, R.P., 1987, An introduction to computing with neural nets, IEEE Acoustics, Speech and Signal Processing, vol. 4, pp. 4-22.

Lundeen, S.R., 2002, AVIRIS concept, Jet Propulsion Laboratory, California Institute of Technology, available from URL: http://popo.jpl.nasa.gov/html/aviris.concept.html, site last visited April 2003.

Lyzenga, D.R., 1978, Passive remote sensing techniques for mapping water depth and bottom features, Applied Optics, vol. 17, no. 3, pp. 379-383, February.


Lyzenga, D.R., 1985, Shallow-water bathymetry using combined lidar and passive multispectral scanner data, International Journal of Remote Sensing, vol. 6, no. 1, pp. 115-125.

Madhok, V., Landgrebe, D., 1999, Supplementing hyperspectral data with digital elevation, Proceedings from IEEE International Geoscience and Remote Sensing Symposium, Hamburg, Germany, June 28-July 2, vol. 1, pp. 59-61.

Mitiche, A., Aggarwal, J.K., 1986, Multiple sensor integration/fusion through image processing: a review, Optical Engineering, vol. 25, no. 3, March, pp. 380-386.

Mochizuki, J., Takahashi, M., Hata, S., 1985, Unpositioned workpieces handling robot with visual and force sensors, Proceedings of IEEE International Conference of Industrial Electronics, Control and Instrumentation, San Francisco, California, November, pp. 299-302.

Park, J.Y., 2002, Data fusion techniques for object space classification using airborne laser data and airborne digital photographs, Ph.D. Dissertation, University of Florida, Department of Civil and Coastal Engineering, Gainesville, Florida.

Park, J.Y., Shrestha, R.L., Carter, W.E., Tuell, G.H., 2001, Land-cover classification using combined ALSM (LIDAR) and color digital photography, presented at American Society of Photogrammetry and Remote Sensing Conference, St. Louis, Missouri, April 23-27.

Philpot, W.D., 1989, Bathymetric mapping with passive multispectral imagery, Applied Optics, vol. 28, no. 8, pp. 1569-1578, April.

Polcyn, F.C., Brown, W.L., Sattinger, I.J., 1970, The measurement of water depth by remote sensing techniques, Report 8973-26-F, Willow Run Laboratories, University of Michigan, Ann Arbor.

Polcyn, F.C., Lyzenga, D.R., 1973, Calculation of water depth from ERTS-MSS data, from Proceedings, Symposium on Significant Results Obtained from ERTS-1, NASA Publication SP-327.

Ren, H., Chang, C.-I., 2000, A generalized orthogonal subspace projection approach to unsupervised multispectral image classification, IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 6, November.

Research Systems Incorporated (RSI), 2002, ENVI/IDL (version 3.5), computer program, available from Research Systems, Incorporated, 4990 Pearl East Circle, Boulder, Colorado 80301.

Richardson, J.M., Marsh, K.A., 1988, Fusion of multisensor data, International Journal of Robotic Research, vol. 7, no. 6, pp. 78-96.


Sabol, D.E., Adams, J.B., Smith, M.O., 1992, Quantitative subpixel spectral detection of targets in multispectral images, Journal of Geophysical Research, vol. 97, issue E2, pp. 2659-2672.

Sandidge, J.C., Holyer, R.J., 1998, Coastal bathymetry from hyperspectral observations of water radiance, Remote Sensing of Environment, vol. 65, pp. 341-352.

Shafer, G., 1976, A Mathematical Theory of Evidence, Princeton University Press, Princeton, New Jersey.

Steinberg, A.N., Bowman, C.L., White, Jr., F.E., 1999, Revisions to the JDL data fusion model, Proceedings from the SPIE Conference, Orlando, Florida, April 5-9, pp. 430-441.

Stumpf, R.P., Holderied, K., Sinclair, M., 2002, Mapping coral reef bathymetry with high-resolution, multispectral satellite imagery, presented at the Seventh International Conference on Remote Sensing for Marine and Coastal Environments, Miami, Florida, May 20-22.

Tuell, G.H., 2002a, A multichannel restoration approach to radiance refinement in imaging spectroscopy, Ph.D. Dissertation, Ohio State University, Columbus, Ohio.

Tuell, G.H., 2002b, Data fusion of airborne laser data with passive spectral data, presented at 3rd Annual Airborne Hydrography Workshop, Corte Madiera, California, July.

Tyler, J.E., 1968, The Secchi disk, Limnological Oceanography, vol. 13, pp. 1-6.

Vane, G., Chrien, T.G., Miller, E.A., Reimer, J.H., 1987, Spectral and radiometric calibration of the Airborne Visible/Infrared Imaging Spectrometer, Jet Propulsion Laboratory, California Institute of Technology, JPL Publication 87-38.

Waltz, E.L., 1986, Data fusion for C3I: a tutorial, Command, Control, Communications Intelligence (C3I) Handbook, EW Communications, Palo Alto, California.

Wezernak, C.T., Lyzenga, D.R., 1975, Analysis of cladophora distribution in Lake Ontario using remote sensing, Remote Sensing of Environment, vol. 4, pp. 37-48.

Wozencraft, J., Lee, M., Tuell, G., Philpot, W., 2003, Use of SHOALS data to produce spectrally-derived depths in Kaneohe Bay, Hawaii, presented at U.S. Hydro 2003 hydrographic conference, Biloxi, Mississippi, March 24-27.


BIOGRAPHICAL SKETCH

Mark Patrick Lee was born in New Jersey in 1968. His family moved to Lyons, Colorado, in 1972, where he stayed through his junior high school years. His family then moved to Longmont, Colorado, and Mark graduated from Longmont High School in 1985. He attended the University of Central Florida in Orlando, Florida, where he received a Bachelor of Science degree in Computer Science in 1991. Mark worked in the software engineering industry for three years in the central Florida area before deciding to enter graduate school. In 1996, he received a Master of Science degree in the Geomatics program at the University of Florida. After working a few years as a researcher, Mark entered the Ph.D. program in Geomatics. Upon graduation, Mark plans to pursue a career within the geospatial sciences, utilizing his interests in spatial data analysis and computer programming.


LIST OF TABLES

Table  page

3-1. Linear regressions between overlapping flight data and associated r-squared values. 56
3-2. Overall accuracies for the three classifications. 66
3-3. Error matrix for AVIRIS classification accuracies. 67
3-4. Error matrix for SHOALS classification accuracies. 68
3-5. Error matrix for AVIRIS-plus-depths classification accuracies. 69
3-6. AVIRIS class-to-information table. 70
3-7. SHOALS class-to-information table. 71
3-8. Evidence combination matrix for AVIRIS and SHOALS classifications. 71
3-9. Accuracies for Dempster-Shafer classification image. 74
3-10. Error matrix for Dempster-Shafer classification accuracies. 74
3-11. Kappa coefficients and variances for each classification. 75
3-12. Test statistics and confidence levels for each classification comparison. 75
A-1. AVIRIS calibration information. 86
A-2. SHOALS performance values. 88
A-3. AVIRIS spectral calibration values for channels 1-50. 89
B-1. Example error matrix for four classes. 94
B-2. Producer and user accuracies for example error matrix. 94
C-1. Dempster's probability mass combination rules. 100
















LIST OF FIGURES

Figure  page

1-1. Using redundant and complementary data to discriminate objects.
2-1. Comparison of the spectral sensitivities of Landsat TM bands 2 and 3, and AVIRIS bands 17-32. 15
2-2. Contributions to at-sensor radiance. 17
2-3. AVIRIS radiance spectra for grass. 18
2-4. AVIRIS reflectance spectra for grass. 19
2-5. Illustration of linear unmixing. 23
2-6. Interaction of ALB laser pulse with water body. 25
2-7. Laser pulse return waveform (logarithmic) from SHOALS system. 26
2-8. A neural network. 30
3-1. Georegistered AVIRIS image of Kaneohe Bay, Hawaii. 43
3-2. Plot of ground points with their reflectance values and corresponding AVIRIS radiance values for band 5 (413 nm). 47
3-3. AVIRIS reflectance image (band 15, 510 nm) of the research area. 48
3-4. AVIRIS image (band 15, 510 nm) corrected for surface waves using FFT method. 49
3-5. SHOALS mean depth image. 51
3-6. AVIRIS image (band 15, 510 nm) corrected for water attenuation. 53
3-7. Spatial layout of SHOALS datasets collected over project area. 54
3-8. Plot of overlapping APD pixels from Areas 26a and 26b. 55
3-9. Plot of overlapping pixels from Areas 26 and 12. 57
3-10. Plot of overlap pixels from APD and PMT receivers. 58
3-11. APD regressed pseudoreflectance image of research area. 58
3-12. Depth image of research area. 59
3-13. Ground truth image for our research area. 60
3-14. Class color legend for ground truth image. 60
3-15. Regions of Interest (ROIs) draped over ground truth image. 61
3-16. Regions of Interest (ROIs) draped over AVIRIS bottom reflectance image, band 15. 61
3-17. Plot of +/- 2 standard deviation spread of pseudoreflectance values for each class. 63
3-18. Classification of AVIRIS bottom reflectance dataset. 64
3-19. Class color legend for classification images. 64
3-20. Classification of SHOALS 2-band (pseudoreflectance and depth) image. 65
3-21. Classification of AVIRIS bottom reflectance-plus-depth dataset. 65
3-22. Difference image between AVIRIS classification and ground truth image. 67
3-23. Difference image between SHOALS classification and ground truth image. 68
3-24. Difference image between AVIRIS-plus-depths classification and ground truth image. 69
3-25. Result of Dempster-Shafer fusion of AVIRIS and SHOALS classifications. 73
3-26. Difference image between D-S classification and ground truth image. 74















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

BENTHIC MAPPING OF COASTAL WATERS USING DATA FUSION OF
HYPERSPECTRAL IMAGERY AND AIRBORNE LASER BATHYMETRY

By

Mark Lee

May 2003

Chair: Grady Tuell
Major Department: Civil and Coastal Engineering

One goal of mapping, the accurate classification of the object space, can be

achieved by visual interpretation or analysis of relevant data. Most mapping of earth

features relies on the latter method, and is realized using remote sensing. Various

airborne sensors are used today for generating topographic and hydrographic mapping

products. In this research, we combined data from airborne hyperspectral imagery and

airborne laser bathymetry, using data fusion techniques, to map the benthic environment

of coastal waters.

Airborne laser bathymetry (ALB) uses laser pulse return waveforms to estimate

water depth. These signals are attenuated by the water depth and clarity. A portion of

the waveform signal, the peak bottom return, is a function of the bottom reflectance, and

therefore, the bottom type. The purpose of this research is to exploit the peak bottom

return signal of ALB to obtain benthic information, and then use the information, in

combination with spectral imaging information, to aid in benthic classification.









We used AVIRIS hyperspectral data and SHOALS ALB data, obtained over

Kaneohe Bay, Hawaii, for this research. After preprocessing the datasets, the water

attenuation effects were removed from the AVIRIS data using a radiative transfer model.

A variant of this model, developed for this research, was used on the ALB dataset to

correct for water attenuation, resulting in a parameter we defined as pseudoreflectance.

We classified the resulting datasets using the Maximum Likelihood supervised

classification technique. Accuracy assessments of the classifications showed overall

accuracies of 80.2% and 66.9% for the AVIRIS classification and the SHOALS

classification, respectively. The two classifications were merged using the Dempster-

Shafer (D-S) decision-level data fusion method, using a priori weights from the

Maximum Likelihood classifications. The resulting D-S classification had an overall

accuracy of 87.2%. For comparison, we classified the AVIRIS data (corrected for water

attenuation) combined with a depth channel, producing an overall accuracy of 85.3%.

Kappa coefficient analysis of all four classifications resulted in 82% confidence that the

Kappa coefficients of the D-S classification and the AVIRIS-plus-depth classification are

different. Kappa confidence levels greater than 99% were calculated for all the other

pairs of classifications.

The results indicate that ALB pseudoreflectance, computed from the peak bottom

return waveform signals, contains information that aids in the benthic mapping process,

and can be used in a sensor fusion algorithm with hyperspectral data to achieve greater

accuracy in bottom classification. Further research into the computation of bottom

reflectance from the ALB bottom return waveform may yield additional improvements.














CHAPTER 1
INTRODUCTION

The goal of mapping is to create an accurate iconic representation of the object

space. A map contains information, which can be obtained directly by visual

observation, or by the analysis of relevant data. Within our focus of the mapping of earth

features, much of the mapping process is performed using the latter method, and is often

realized by using remote sensing.

Numerous data acquisition devices and analysis methods have emerged within

remote sensing. These include passive devices such as airborne multispectral and

hyperspectral sensors, which detect reflected visible and infrared electromagnetic energy

from the earth's surface. For example, in the emerging discipline of imaging

spectroscopy, information is obtained from the sensor data by first correcting for

atmospheric effects, converting to reflectance, then applying some type of matching

algorithm, such as multi-channel clustering with multispectral data or the Spectral Angle

Mapper (SAM) with hyperspectral data. Also available are active data acquisition

technologies, such as Airborne Laser Swath Mapping (ALSM) and Interferometric

Synthetic Aperture Radar (IFSAR), which are used to measure digital elevation data for

the generation of accurate topographic products. More recent work into the combination,

or fusion, of data from multiple sensors has resulted in improved mapping accuracy over

using data from a single sensor (Madhok and Landgrebe 1999, Park 2002).

Remote sensing is also used for benthic mapping. A benthic map is an iconic

representation of the spatial distribution of land cover types located beneath a water body.









These maps sometimes include associated bathymetry, or depths, of the water body.

Aerial imagery acquired over water must be corrected for the same atmospheric effects as

spectral data over land; however, benthic mapping poses an additional challenge to

remote sensing due to the attenuation of light by the water column. Several approaches

have been developed to remove these water column effects using only passive data

(Philpot 1989), and the fusion of passive and laser data (Borstad and Vosburg 1993,

Estep et al. 1994, Lyzenga 1985). Most of this research has been centered on developing

models to obtain improved bathymetry. However, these models can also be applied to

benthic classification, which is the focus of our research.

Typical benthic mapping from remotely sensed data exploits the spectral energy

detected by passive sensors. A sensed spectrum is matched against a library of known

spectra to help determine the identity of the reflecting surface. However, active sensors,

such as airborne topographic laser systems, also detect a reflected spectral power.

Normally, the time difference between the transmitted and received laser power is used to

calculate target range. Yet, some researchers have used the variation in this detected

laser power, or "intensity," to successfully map topographic features (Park 2002).

Perhaps bathymetric laser "intensity" could be exploited to map benthic features as well.

In our research, we investigated a new method of benthic mapping that makes use of the

bathymetric laser system's return signal strength to aid in benthic classification. It was

necessary to normalize these "intensity" values, so we used the term pseudoreflectance to

represent the normalized result. Specifically, we adopted a data fusion approach,

combining passive hyperspectral data with laser intensity and depth data, to improve the

benthic mapping accuracy over that obtained by either system separately.









Our research focused on classifying the benthic environment of coastal waters,

which we defined as the water bodies along the seashore with depths up to 40 meters.

These waters are an important natural resource, containing plant and animal species vital

to the overall ecology of our oceans, as well as providing commercial and recreational

uses. Researchers recognize the importance of our coastal waters (Bierwirth et al. 1993,

Stumpf 2002), and the need to monitor the benthic characteristics of these waters, which

can reflect changes due to natural and artificial influences.

Our research involving the benthic mapping of coastal waters was realized through

data fusion. Data fusion is the process of combining data from multiple sources, and

obtaining a better result than what could be obtained from any of the sources

independently. Data fusion research has been applied to military, commercial and

industrial uses for decades (Abidi and Gonzalez 1992). The remainder of this chapter

provides an overview of data fusion, followed by an outline of the dissertation.

Data Fusion

Data fusion can be defined as the process of combining data from multiple sources

in order to obtain better information about an environment than could be obtained

from any of the sources independently. This process is also referred to by other terms,

depending on the area of research, which include sensor fusion, correlation, tracking,

estimation, and data mining (Hall and Llinas 2001). Regardless of the terminology, the

common motivation is that if data from one sensor can improve our ability to interpret the

environment, data from multiple sensors should improve it even more (Abidi and

Gonzalez 1992).

In some areas of research, data fusion is considered a component of a larger

process, known as multisensor integration. Within this concept, multisensor integration









is defined as the use of information from multiple sensors to help a system perform a

task, while fusion is considered a stage in the integration process where the combining of

data occurs (Abidi and Gonzalez 1992). Steinberg et al. (1999) define data fusion in

terms of state estimation, where it is the process of combining data or information to

estimate or predict entity states. This definition lends itself well to Kalman filtering.

Others, within the machine intelligence community, view data fusion as a method of

giving intelligence to systems without complete human interaction (Abidi and Gonzalez

1992).

The concept of data fusion is not new, and can be observed in many areas of nature.

For example, dolphins are equipped with sonar and vision, both of which are used for

locating potential prey, and pit vipers use a combination of vision and infrared sensing to

determine the angle at which to enact a strike (Mitiche and Aggarwal 1986). Humans use

data fusion in everyday life, combining visual, auditory, and tactile stimuli, as well as

other senses, to make some kind of inference about their environment. Each of these

examples provides insight into what can be achieved through the fusion of intelligent

systems (Abidi and Gonzalez 1992).

Many of the potential advantages of data fusion can be categorized as qualitative or

quantitative benefits (Hall 1992). Some qualitative benefits include robust system

performance and reliability, as well as reduced ambiguity, while quantitative benefits

include increased accuracy of the reported information, less time to receive the

information, and less cost to acquire the information (Abidi and Gonzalez 1992, Hall

1992). These benefits are realized through the concepts of redundancy and

complementarity. The use of redundant sensors is inherently beneficial to system










performance and reliability, while redundant data have been shown to improve the

accuracy of the information, as well as lower the time and cost of acquisition (Waltz

1986). Complementary data increase the dimensionality of knowledge about the sensed

environment, which can reduce the ambiguity and increase the accuracy of the

information related to features of interest (Abidi and Gonzalez 1992, Hall 1992).




[Figure 1-1 appears here: sensor response curves plotted against object height for panels (a) four objects of differing reflectance and height, (b) sensor 1, (c) sensor 2, (d) sensors 1 and 2, and (e) sensors 1, 2, and 3.]

Figure 1-1. Using redundant and complementary data to discriminate objects. Adapted from Abidi and Gonzalez (1992).


The concepts of redundancy and complementarity can be better described with the

following example, adapted from Abidi and Gonzalez (1992) and illustrated in Figure 1-1.

Figure 1-1(a) shows four objects, differing in reflectance and height. Three sensors

are used for detecting these objects, with sensors 1 and 2 capable of detecting height, and

sensor 3 capable of detecting reflectance. Figures 1-1(b) and 1-1(c) show sensor

response curves for sensors 1 and 2, respectively, and their capabilities to discriminate

between short objects (objects A and C) and tall objects (objects B and D), with a









measure of object height along the x-axis. The black areas under the curve intersections

indicate situations where height determination is uncertain. The sensor response curve

for the combination of the redundant sensors (1 and 2) is represented in Figure 1-1(d).

Note the increase in certainty (shown by the steeper and taller peaks) and the decrease in

uncertainty (shown by smaller black area under curves). The fusion of redundant sensors

1 and 2 improves the ability to discern between short and tall objects beyond what could be

discerned by either sensor independently.

Figure 1-1(e) shows the addition of a complementary sensor (sensor 3) to the fusion

process. The resulting fusion of sensors 1 and 2 (height sensors) is fused with sensor 3

(reflectance sensor), providing discrimination among all four objects. The black areas

again show areas of uncertainty in the discrimination of the objects. The complementary

information provided by sensor 3 gives the added dimension of knowledge (reflectance)

necessary to discern among all the objects.

Applications of Data Fusion

Many of the applications for data fusion are found in the military. In a battlefield

environment, situation awareness is of vital importance. Data from only one source may

provide information that is ambiguous, uncertain and perhaps inaccurate. However,

fusion can combine relevant information from several sources to create consistent,

accurate, comprehensive and global situation awareness. The application of this concept

improves performance in many military instances, including ocean surveillance, air-to-air

defense, battlefield intelligence, surveillance and target acquisition, and strategic warning

and defense (Hall and Llinas 2001).

Outside of the military uses, many other applications for data fusion have been

developed. In the area of industrial robotics, three-dimensional imaging and tactile









sensors are combined for robotic object manipulation, enabling a robot to handle

materials that are randomly dispersed in a container (Abidi and Gonzalez 1992). This

concept has been applied to develop a robot to grasp randomly oriented connectors and

place them into a printed circuit board (Mochizuki et al. 1985). Researchers in the

medical imaging field use data fusion concepts to combine magnetic resonance (MR) and

computer tomography (CT) imagery into composites that are more useful during surgery

than the individual components (Hill et al. 1994). Data fusion is also used for complex

mechanical equipment monitoring. Several types of sensors within a helicopter

transmission (e.g. temperature, oil debris monitors) provide data that, when combined,

can identify and predict areas of failure, which reduces maintenance costs and improves

safety (Hall and Llinas 2001).

Data fusion methods have also been successfully applied to the mapping of earth

features using remote sensing. In the area of land cover classification, Park (2002)

applied two methods of combining airborne laser intensity data with aerial photography,

each producing improved results over classifying the photography independently. Lei et

al. (2001) merged multitemporal Landsat TM and SPOT images for better land cover

change detection. There have also been advancements in mapping urban areas, where the

fusion of hyperspectral imagery and digital elevation models has enhanced the

delineation of building rooftops (Madhok and Landgrebe 1999). Additional urban

mapping improvements were obtained by merging panchromatic and multispectral

imagery (Fanelli et al. 2001). Data fusion methods have also been applied to mapping

water depths, as Lyzenga (1985) demonstrated by combining airborne laser bathymetry

with hyperspectral imagery.









Levels of Data Fusion

In the process of fusing multiple datasets together, several requirements must be

met. First, the datasets must be in registration. Registration refers to the amount of

spatial or temporal alignment among the multiple datasets, highlighting the importance

that the data about a particular object, from each sensor, refer to the same object in the

environment (Abidi and Gonzalez 1992, Hall and Llinas 2001). Spatial registration of

imagery from multiple sensors is usually determined using a coordinate transformation,

and then implemented by resampling the pixels in the images to a common size, location

and orientation by application of the coordinate transformation parameters. Temporal

registration is usually handled by collecting each dataset at exactly, or nearly, the same

point in time. For datasets not in temporal registration, a correction may be used, if

applicable, to bring the datasets into a common time frame, or an assumption made that

the object space did not change between dataset acquisitions.

Another requirement in the data fusion process is that the datasets are modeled in a

common fashion. A model is a representation of the uncertainty or error in each dataset.

Usually it is assumed that the error in the data from each sensor is best represented using

a Gaussian model (Abidi and Gonzalez 1992).

With the above requirements met, each of the datasets must be brought to a

common level of representation, or data abstraction, before the fusion can proceed. The

research community recognizes different levels of data fusion, which coincide with the

amount of data abstraction present at the time of fusion. One accepted taxonomy used for

data fusion levels consists of signal-, pixel-, feature- and symbol-level data fusion, listed

by increasing data abstraction. Sensors producing data of similar semantic content could

possibly be fused at any of the levels, while sensors with dissimilar modalities may









produce datasets with different semantic content, which would require a higher level of

data abstraction before fusion could take place. The following descriptions of data fusion

levels follow those provided by Abidi and Gonzalez (1992).

Signal-level fusion is the combination of similar signals from one or more sensors

in order to obtain a resultant signal of higher quality. This level of fusion requires the

highest level of registration, both spatial and temporal. Richardson and Marsh (1988)

have shown that redundant data almost always improve signal-level fusion, when based

on optimal estimation. When used in real-time applications, signal-level fusion is usually

considered an additional step in signal processing, and lends itself well to use in a

Kalman filter.

Pixel-level (or data-level) fusion is used to combine multiple images into a

composite image containing pixels with an improved quality of information or an

increased amount of information. The abstraction level of the data in each pixel is low,

with each pixel containing either raw sensor data or the result of some type of image

enhancement. In the fused result, each pixel may contain data from some mathematical

combination of the component datasets, or contain additional dimensions (bands)

corresponding to the component datasets (e.g., merging a radiance image with a height

image to create a two-band radiance-height image). A high level of spatial registration is

necessary in pixel-level fusion, ensuring that corresponding pixels refer to the same area

of the object space.
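To make the band-stacking case concrete, the following is a minimal sketch in Python with NumPy; the raster names and values are invented for illustration, and real data would first be brought into spatial registration as described above.

```python
import numpy as np

# Hypothetical co-registered rasters sampled on the same grid:
# a single-band radiance image and a height (elevation) image.
radiance = np.random.rand(100, 100).astype(np.float32)  # invented values
height = np.random.rand(100, 100).astype(np.float32)    # invented values

# Pixel-level fusion by band stacking: each pixel now carries a
# two-component measurement vector (radiance, height).
fused = np.dstack((radiance, height))
print(fused.shape)  # (100, 100, 2)
```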

Features represent a higher level of data abstraction, some semantic meaning of

interest, and are derived from the processing of data that are from a lower level of

abstraction. Feature-level fusion is the combination of feature information that has been









independently extracted from the datasets of multiple sensors. The types of features

extracted from imagery include edges and areas having a constant data value (e.g.,

reflectance, height). Several methods have been developed for combining feature data,

including the tie statistic (Flachs et al. 1990) and model-based approaches (Hall and

Llinas 2001). The resulting fused information is used to show an increase in the

likelihood of the existence of an extracted feature (based on the redundant reporting of

similar features among the multiple datasets), or to create composite features comprised

of the primary features in the component datasets. The needed level of spatial

registration is not as high as that for pixel-level fusion. It is assumed that high spatial

registration was used in the extraction of features within each component dataset.

Symbol-level fusion (also referred to as decision-level fusion) is the combination of

information that is at the highest level of abstraction. Data from multiple sensors, which

have been independently classified using feature matching, are fused into a composite

dataset. The classified datasets generated from each sensor contain associated measures

of accuracy, which are used as input into some logical or statistical inference in the fusion

process. This type of fusion requires the lowest level of spatial registration. High spatial

registration is usually in place during the generation of the symbols within each dataset.

Due to its high level of data abstraction, symbol-level fusion may be the only option

available for combining information obtained from highly dissimilar sensors.

The various taxonomies for fusion, which have been proposed to date, do not fully

capture the complexity of the issues involved when applying fusion techniques to

mapping problems. It is possible, for example, to use high-level fusion processes, but to

apply them to a basic data structure that is still at the pixel level. This approach may









adopt sophisticated algorithms (e.g., rule-based decision algorithms) but may not

consider the neighboring pixels in the algorithm.

We follow this strategy in our approach. Specifically, we apply a decision-level

fusion algorithm to combine data from a hyperspectral instrument and an airborne laser

bathymetric system, but conduct our resulting mapping at the pixel level as defined by

the raster of the hyperspectral data. We use the Dempster-Shafer algorithm as a fusion

technique. In the next section, we discuss the relationship of the Dempster-Shafer

approach to other decision-level techniques.

Evidence Combination Methods

The information arising from a data-fusion process should be better than what

could be obtained from any of the sensors independently. In the previous section, we

stated that "measures of accuracy" are associated with the information used in decision-

level fusion. Because of the statistical context of this term, many researchers have

adopted the term "evidence" to describe this information, and we adopt it as well. Below

we briefly describe some of the accepted methods of evidence combination, including

rule-based, Bayesian estimation, and Dempster-Shafer.

Rule-based decision-level fusion is a heuristic method of combining evidence

derived from multiple sensors. This method uses production rules that are formed from

the analysis of the information from each dataset. These rules are normally in the form of

a logical implication, as in "if A then B." Each implication could then lead to additional

levels of implications before reaching a decision. Also, additional rules could be added

that are not derived from the sensors, creating an even higher-level decision system, or

expert system (Abidi and Gonzalez 1992).









Bayesian estimation is named after Thomas Bayes, an English clergyman who

lived in the 18th century and helped develop what is known today as Bayes' rule. This

method of evidence combination works by updating an a priori probability of a

hypothesis with evidence provided by observations, resulting in an a posteriori final

probability determination (Hall 1992). It assumes an exhaustive set of hypotheses (all

possible events), with all the hypotheses mutually exclusive.
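A small numerical sketch of this updating step follows; the class priors and likelihoods are invented purely for illustration.

```python
# Illustrative Bayes update over an exhaustive, mutually exclusive
# hypothesis set (e.g., candidate benthic classes). Values are invented.
priors = {"sand": 0.5, "coral": 0.3, "seagrass": 0.2}
# Likelihood of the observed spectrum under each hypothesis.
likelihoods = {"sand": 0.10, "coral": 0.60, "seagrass": 0.30}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)  # a posteriori probabilities summing to 1
```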

Dempster-Shafer evidential reasoning was introduced by Glenn Shafer in 1976 in a

book entitled "A Mathematical Theory of Evidence," in which he reiterated some of the

work in statistical inference performed by Arthur Dempster. This method is a

generalization of Bayesian estimation, allowing for a general level of uncertainty (Hall

1992). It is modeled after Dempster's analysis of the human decision making process, in

which the set of hypotheses does not have to be exhaustive, nor mutually exclusive.

Instead, measures of belief are assigned to propositions, which are combinations of

hypotheses that may overlap or even be in conflict. As with Bayesian, this method will

update a priori information with evidence provided by observations. However, instead of

producing a final probability determination, the result is an a posteriori evidential

interval, with lower and upper bound values representing a measure of belief and a

plausibility, respectively. Additional detail on the Dempster-Shafer method is given in

Appendix C.
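As a minimal sketch of Dempster's rule of combination for two sources, the following Python fragment combines hypothetical probability masses assigned over subsets of an invented three-class frame of discernment (see Appendix C for the formal rules).

```python
from itertools import product

# Invented frame of discernment and basic probability masses.
theta = frozenset({"sand", "coral", "seagrass"})
m1 = {frozenset({"coral"}): 0.6, theta: 0.4}               # source 1
m2 = {frozenset({"coral", "seagrass"}): 0.7, theta: 0.3}   # source 2

combined = {}
conflict = 0.0  # total mass assigned to conflicting (disjoint) pairs
for (a, ma), (b, mb) in product(m1.items(), m2.items()):
    inter = a & b
    if inter:
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    else:
        conflict += ma * mb

# Normalize by (1 - K), where K is the conflict; assumes K < 1.
combined = {s: v / (1.0 - conflict) for s, v in combined.items()}
print(combined)
```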

Organization of the Dissertation

The goal of this research is to examine the potential of combining

pseudoreflectance, derived from the returned power of a bathymetric laser, with other

data for benthic mapping. We begin by examining some of the current research methods

of benthic mapping in Chapter 2. This includes an overview of hyperspectral imaging, as









well as the discipline of imaging spectroscopy, which best exploits the high-

dimensionality of hyperspectral data. We also provide background on airborne laser

bathymetry, with emphasis on the return waveform and its relationship to "intensity" in

topographic systems. We then explore the common radiative transfer methods used for

benthic mapping, and explain how we applied a variant of one of these methods to laser

intensity data to compute estimates of pseudoreflectance.

In Chapter 3, we describe our experiment to test the use of laser intensity data for

benthic mapping. This experiment is realized using data fusion, combining hyperspectral

and ALB data to improve the description of the object space. We discuss the datasets

used in the experiment, and the preprocessing steps performed on the datasets, such as

georegistration and surface wave removal. We then explain the processing involved in

removing the effects of water attenuation from both the hyperspectral and the ALB

datasets. Next, we discuss the supervised classification of three datasets, including the

hyperspectral, ALB, and hyperspectral-plus-depth datasets. These classifications, which

result in three separate benthic maps of our research area, are assessed for accuracy. We

then describe the data fusion of the hyperspectral and ALB classifications, resulting in a

fourth benthic map for which we also assess the accuracy. Lastly, we discuss the

statistical significance of the results from the four accuracy assessments.

In Chapter 4 we discuss the research and the results, and recommend future

research.














CHAPTER 2
BENTHIC MAPPING OF COASTAL WATERS BY REMOTE SENSING

The use of remote sensing for benthic mapping has been researched for more than

40 years. Polcyn et al. (1970) developed a depth extraction algorithm for passive data,

and implied that a pair of wavelength bands could be found whose ratio would not

change for different benthic types within an area. Lyzenga (1978) helped develop

methods to determine accurate depths from multispectral data by adding a deep-water

radiance term to his model. Others have applied sensor fusion techniques using passive

sensors and bathymetric laser systems to improve the accuracy of estimated depths

(Borstad and Vosburg 1993, Lyzenga 1985). Most of the research has focused on

obtaining accurate depths; however, the methods developed can also be applied to

determining benthic types.

Our research is centered on the fusion of data from two types of sensors, a

hyperspectral system and a bathymetric laser system. In the following pages, we discuss

both of these sensors and their associated data processing methods. We then examine

some of the current methods of analyzing data from these sensors to obtain benthic

information, and describe a new method to obtain additional information from the laser

bathymeter.

Hyperspectral Imagery

Hyperspectral sensors provide imagery with high spectral dimensionality, narrow

spectral channel sensitivity, and contiguous band channel acquisition. Figure 2-1

demonstrates the latter two qualities, showing a comparison between the Landsat TM










[Figure 2-1 appears here: a wavelength axis (nm) comparing the sensitivity extents of Landsat TM bands 2 and 3 with AVIRIS bands 17-32.]

Figure 2-1. Comparison of the spectral sensitivities of Landsat TM bands 2 and 3, and AVIRIS bands 17-32. Braces indicate the extent of spectral sensitivities per band.

multispectral sensor and the AVIRIS hyperspectral sensor. Landsat TM has 6 bands

which sense electromagnetic energy from 450 nm to 2350 nm, and 1 band that senses

thermal infrared energy. AVIRIS has 224 bands which sense electromagnetic energy

from 380 nm to 2400 nm. The high spectral dimensionality of a hyperspectral sensor

allows for each pixel to be evaluated as an n-dimensional vector, where n is the number

of bands. The acquisition and analysis of these pixel vectors, or spectra, from

hyperspectral imagery is called imaging spectroscopy.

Object space classification within imaging spectroscopy is performed by matching

hyperspectral spectra against a library of known object space spectra (Tuell 2002a). This

differs from the method typically used for multispectral data, which classifies pixels by

examining the clustering of their values from two or more bands. The type of spectra

used for classification is usually reflectance spectra. However, hyperspectral sensors

measure radiance spectra. Since reflectance is an intrinsic property of the object space,

and radiance is not, it is preferable to convert spectra from at-sensor radiance to object-

space reflectance, then match against known reflectance spectra. Also, spectral matching











assumes that each pixel is pure, containing the same type of material, with the same

reflectance, throughout the pixel area. Obviously this is not always true. Methods such

as Spectral Mixture Analysis (SMA) can be applied to these non-pure, or mixed, pixels to

estimate their composition.

In the following sections we explain radiance and reflectance, and the methods

used for obtaining reflectance, and then address methods of spectral matching for pure

pixels and mixed pixels.

Radiance and Reflectance

Radiance is the amount of spectral flux per unit area per unit solid angle, and is the

measurement provided by a hyperspectral sensor. However, radiance is not an intrinsic

property of the object space. Its value is dependent upon many other factors, as shown in

Equation 2-1 (Tuell 2002a).


L"" = [E t, cos + Eff +E ad P t coss +LPath + La (2-1)


The terms of Equation 2-1 are explained as follows. The lambda (k) subscript

implies that the values are wavelength dependent.

Lmg = upwelling radiance measured at the sensor.
L ath = upwelling path radiance.
Lad = upwelling radiance due to adjacent objects.
Eu"" = downwelling solar irradiance.
t, = atmospheric transmittance.
E' = diffuse irradiance (not directly from sun).
Ed4 = irradiance due to solar reflection off adjacent objects.
p2 = object reflectance.
0 = sun angle.
0 = sensor angle.





[Figure 2-2 appears here: a diagram of the contributions reaching the sensor, labeled $L^{path}$, $L^{adj}$, $L^{tar}$, $E^{sun}$, and $E^{adj}$.]

Figure 2-2. Contributions to at-sensor radiance.












[Figure 2-3 appears here: a plot of radiance (0-10000) versus AVIRIS channel (0-220).]

Figure 2-3. AVIRIS radiance spectra for grass. Radiance units are µW/cm²/sr.


Figure 2-2 illustrates Equation 2-1, with $L_\lambda^{tar}$ defined as the contribution to $L_\lambda^{img}$ from the ground target, as shown in Equation 2-2.

$$L_\lambda^{tar} = \left[ E_\lambda^{sun}\, t_\lambda \cos\theta + E_\lambda^{diff} + E_\lambda^{adj} \right] \rho_\lambda\, t_\lambda \cos\phi \tag{2-2}$$


Reflectance is an intrinsic property of the object space, and is defined as the ratio of

the amount of spectral energy reflected by an object to the spectral energy incident upon

the object (Lillesand and Kiefer 1994). An example of the differences between radiance

and reflectance is shown in Figures 2-3 and 2-4, showing radiance and reflectance spectra

for grass, respectively. The reflectance spectra were obtained by dividing radiance,

measured by a hand-held spectrometer, by a reference radiance measured over a

spectralon panel (near perfect reflector). The resultant spectra were then convolved to the

AVIRIS wavelengths. The radiance spectra are somewhat misleading, showing peaks in

the blue (band 10) and the green (band 20). However, the reflectance spectra show a peak










only in the green among the visible bands, and display a well defined "red edge" in the

near infrared (band 40), typical of healthy vegetation.
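The panel-ratio conversion described above can be sketched as follows; the spectrometer values are invented, and the convolution to AVIRIS wavelengths is omitted.

```python
import numpy as np

# Hypothetical field measurements at the same wavelengths: target
# radiance and the reference radiance over a Spectralon panel
# (treated as a near-perfect reflector).
target_radiance = np.array([520.0, 810.0, 640.0])     # invented values
panel_radiance = np.array([4100.0, 4300.0, 4200.0])   # invented values

# Reflectance as the ratio of reflected to incident spectral energy,
# using the panel measurement as a proxy for the incident term.
reflectance = target_radiance / panel_radiance
print(reflectance)
```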





[Figure 2-4 appears here: a plot of reflectance (0-0.7) versus AVIRIS channel (0-220).]

Figure 2-4. AVIRIS reflectance spectra for grass.


This example shows the need to invert reflectance, denoted $\rho_\lambda$, from Equation 2-1.

One method for obtaining reflectance is by applying a radiative transfer code, such as the

moderate resolution transmission code, or MODTRAN. MODTRAN makes use of the

absorption and scattering properties of the atmosphere, as well as the solar and sensor

angles, to predict radiance above the earth (Tuell 2002a). An estimate of reflectance is

then obtained by comparing the predicted radiance to the measured radiance.

Another well established method for obtaining reflectance is the Empirical Line

Method, or ELM. This procedure assumes a linear relationship between at-sensor

radiance and object-space reflectance. The linear relationship is represented in Equation

2-3, which is a simplification of Equation 2-1, with slope $m_\lambda$ and y-intercept $b_\lambda$ values

defined in Equations 2-4 and 2-5, respectively (Tuell 2002a). Provided known









reflectance data for several image-identifiable points, a least-squares solution for $m_\lambda$ and $b_\lambda$ can be obtained for each wavelength of the image spectra.

$$L_\lambda^{img} = m_\lambda \rho_\lambda + b_\lambda \tag{2-3}$$

$$m_\lambda = \left[ E_\lambda^{sun}\, t_\lambda \cos\theta + E_\lambda^{diff} + E_\lambda^{adj} \right] t_\lambda \cos\phi \tag{2-4}$$

$$b_\lambda = L_\lambda^{path} + L_\lambda^{adj} \tag{2-5}$$

The assumption of linearity for the ELM procedure means that the solved $m_\lambda$ and $b_\lambda$ values are constant across the entire image. For this to be true, it follows that $E_\lambda^{sun}$, $t_\lambda$, $E_\lambda^{diff}$, and $E_\lambda^{adj}$ are constant, and differences in $\theta$ and $\phi$ are small. It also assumes that $L_\lambda^{path}$ is constant and $L_\lambda^{adj}$ is negligible.

Even with these assumptions, ELM is widely used as a simple, reliable technique

for the conversion from radiance to reflectance. The primary difficulty in using the ELM

procedure is that it requires in situ measurements of radiance and irradiance for several

points in the object space, which are then used to calculate reflectance for each measured

point. These measurements should be taken for bright, medium, and dark objects (Tuell

2002a) to avoid problems with the regression. The resulting regression parameters are

then used in Equation 2-3 to solve for reflectance $\rho_\lambda$ for the entire image.
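The per-band regression can be sketched as follows; the in situ reflectances and radiances are invented, and this is an illustrative fragment rather than the processing chain used in this research.

```python
import numpy as np

# Hypothetical ELM inputs for one wavelength: known ground reflectances
# for bright, medium, and dark targets, and the at-sensor radiances
# sampled at those same points.
rho = np.array([0.55, 0.30, 0.04])       # invented ground reflectances
L = np.array([7200.0, 4400.0, 1500.0])   # invented at-sensor radiances

# Fit L = m * rho + b by least squares (Equation 2-3).
m, b = np.polyfit(rho, L, 1)

# Invert the fit to estimate reflectance over the whole band image.
band = np.full((50, 50), 3000.0)         # placeholder radiance image
rho_image = (band - b) / m
print(m, b, rho_image[0, 0])
```

In practice the fit is repeated independently for every band, yielding one $m_\lambda$ and $b_\lambda$ pair per wavelength.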

Spectral Matching

Upon deriving a reflectance image, a matching algorithm must be implemented to

classify the reflectance spectra. However, the type of algorithm used depends on the

nature of the pixels to classify. The pixels may be pure pixels, which contain spectra

reflected from only one type of material, or mixed pixels, which contain spectra made up









of a combination of reflectance spectra from several materials. In the next sections we

discuss the matching algorithms used for both pure pixels and mixed pixels.

Pure Pixel Matching Algorithms

Matching algorithms for pure pixels focus on finding the level of similarity

between a given pixel's spectra and a library of known spectra. Two examples of these

algorithms include the Maximum Likelihood Classifier, and the Spectral Angle Mapper.

The Maximum Likelihood Classifier assigns to a given pixel the class with an

associated spectrum that is most probable to have produced the given pixel (Jensen 1996).

This method is a supervised classifier in that it requires training sets, consisting of groups

of pixels from a known class, in order to calculate a mean vector and covariance matrix

for each class, and assumes the training data are normally distributed. For each class c,

the value $p_c$ is calculated for classifying pixel $X$ using the following equation (Jensen 1996).

$$p_c = -0.5 \ln\left[\det(V_c)\right] - 0.5\,(X - M_c)^T V_c^{-1} (X - M_c) \tag{2-6}$$

In this equation, $M_c$ is the mean vector for class $c$ (derived from the training set), and $V_c$ is its covariance matrix. The class with corresponding $p_c$ greater than that of the other classes is the class assigned to the pixel $X$.
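A sketch of evaluating Equation 2-6 for a single pixel follows; the two-band class statistics are invented placeholders standing in for values estimated from training sets.

```python
import numpy as np

# Maximum Likelihood discriminant of Equation 2-6 for one pixel.
def ml_discriminant(x, mean, cov):
    d = x - mean
    return (-0.5 * np.log(np.linalg.det(cov))
            - 0.5 * d @ np.linalg.inv(cov) @ d)

# Invented per-class means and covariances for a two-band image.
means = {"sand": np.array([0.5, 0.4]), "coral": np.array([0.2, 0.3])}
covs = {c: np.eye(2) * 0.01 for c in means}  # placeholder covariances

pixel = np.array([0.45, 0.38])
scores = {c: ml_discriminant(pixel, means[c], covs[c]) for c in means}
print(max(scores, key=scores.get))  # class with the largest p_c ("sand")
```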

The Spectral Angle Mapper, or SAM, uses the vector dot product to calculate the

angle between two vectors in n-space (Equation 2-7), where n is the number of bands in

the imagery. The two vectors represent two spectra, one unknown to be classified, and

one known from perhaps a spectral library. The assumption is that the smaller the angle









between the two vectors, the more similar the two spectra are. Values for SAM range from 0 to $\pi/2$ radians, with 0 indicating the greatest amount of similarity between spectra.

$$\mathrm{SAM} = \arccos\!\left( \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\|\,\|\vec{b}\|} \right) \tag{2-7}$$
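A sketch of Equation 2-7 in Python follows; the pixel and library spectra are invented four-band examples.

```python
import numpy as np

# Spectral Angle Mapper: the angle between an unknown pixel spectrum
# and a known library spectrum, treated as vectors in n-space.
def spectral_angle(a, b):
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against round-off pushing the cosine past +/-1.
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

pixel = np.array([0.12, 0.34, 0.30, 0.08])    # invented spectrum
library = np.array([0.10, 0.36, 0.28, 0.09])  # invented library entry
print(spectral_angle(pixel, library))  # smaller angle = better match
```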


Mixed Pixel Matching Algorithms

Mixed pixels contain a combination of reflectance spectra from multiple material

sources. The abundance of each source contributing to the mixed pixel spectra can be

determined using Spectral Mixture Analysis, or SMA. SMA assumes that a mixed pixel

spectrum is a linear combination of individual material spectra, each weighted by its

geometric abundance within the pixel (Sabol et al. 1992, Tuell 2002a). The process of

determining the abundances of individual materials within mixed spectra is known as

linear unmixing. This concept is illustrated in Figure 2-5, which shows a mixed pixel

spectrum consisting of a combination of three material spectra.

The equation used for modeling linear unmixing is shown in Equation 2-8, with Y

representing a mixed pixel vector, x the abundance vector containing percentages of each

material in Y, A the matrix containing spectral vectors for individual materials (i.e., the

endmember matrix), and e the measurement error. A least-squares solution for x can be

obtained assuming the number of bands exceeds the number of endmembers in A. It is

assumed that the endmembers in A span the object space. Constraints can be included to

ensure that the abundance vector components sum to unity, and that each component is

positive, although implementing the former is more straightforward than the latter (Tuell

2002a).


$$Y = Ax + e \tag{2-8}$$
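The least-squares solution of Equation 2-8 can be sketched as follows; the endmember matrix is invented, and the sum-to-one constraint is applied here as a simple renormalization rather than a formally constrained solution.

```python
import numpy as np

# Linear unmixing: solve Y = Ax + e by least squares.
A = np.array([[0.50, 0.10, 0.30],   # each column is one endmember
              [0.45, 0.15, 0.25],   # spectrum (rows = bands);
              [0.40, 0.20, 0.20],   # values are invented
              [0.35, 0.25, 0.15]])
Y = A @ np.array([0.6, 0.3, 0.1])   # synthetic noise-free mixed pixel

x, *_ = np.linalg.lstsq(A, Y, rcond=None)
x = x / x.sum()                     # simple sum-to-one renormalization
print(x)                            # estimated abundances (~0.6, 0.3, 0.1)
```

Note that the number of bands (four) exceeds the number of endmembers (three), as the least-squares solution requires.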









[Figure 2-5 appears here: a measured mixed-pixel spectrum Y decomposed into abundance-weighted endmember spectra.]

Figure 2-5. Illustration of linear unmixing (adapted from Columbia University).


Another type of SMA method was developed by Harsanyi (1993) called the

Orthogonal Subspace Projection, or OSP. This method is similar to linear unmixing,

however the endmember matrix is separated into two distinct matrices, one containing a

material of interest, the other containing materials considered to be noise. It then uses a

projection operator to remove the effects of the noise materials from the mixed pixel,

resulting in a linear mixture model for only the endmember of interest (Ren and Chang

2000).
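In the spirit of that formulation, the following sketch builds the standard orthogonal projector that annihilates the undesired-signature subspace and then correlates with the endmember of interest; all spectra are invented placeholders.

```python
import numpy as np

# Orthogonal Subspace Projection sketch: project out undesired
# endmembers U, then detect the endmember of interest d.
d = np.array([0.50, 0.45, 0.40, 0.35])   # invented endmember of interest
U = np.array([[0.10, 0.30],              # invented undesired endmembers,
              [0.15, 0.25],              # one per column
              [0.20, 0.20],
              [0.25, 0.15]])

P = np.eye(4) - U @ np.linalg.pinv(U)    # annihilates span(U)
r = 0.7 * d + 0.3 * U[:, 0]              # synthetic mixed pixel
response = d @ P @ r                     # OSP detector output
print(response / (d @ P @ d))            # ~abundance of d (0.7 here)
```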

Airborne Laser Bathymetry

Airborne laser bathymetry, also referred to as ALB or LIDAR, is a method of

measuring the depths of coastal and inland waters using water penetrating, scanning,

pulsed laser technology emitted from an aircraft. ALB differs from the typical method of

bathymetric measurement, shipboard sonar, which tends to be time consuming,

inefficient in shallow waters, and somewhat dangerous to the ship and crew. ALB can









measure bathymetry and near-shore topography faster than sonar, obtain accurate results

in shallow water, and provide a safer method of data collection (Guenther 2001).

ALB technology was not developed to replace sonar or other depth measurement

methods, but to augment the process of obtaining bathymetry. Limitations of ALB

systems include water clarity and depth, as well as its ineffectiveness in detecting small

objects. Very high-density surveys can be conducted to enhance small object detection,

however these surveys are expensive and minimize the benefits of ALB (Guenther 2001).

To ensure a navigation channel is free from small objects, multibeam and side-scan sonar

are still the technologies of choice (ibid.).

ALB systems have been utilized for various applications. Dense bathymetric data

provide information on the extent of the shoaling of navigation channels. Repeated

surveys over a particular area can help determine sediment transport rates. ALB systems

have also been used for emergency response, including the assessment of hurricane

damage and ship grounding damage to coral reefs (Irish et al. 2000).

Theory

The concept of ALB is based on measuring time differences between different

returns received from a single laser pulse. When an ALB system emits a laser pulse, part

of the pulse energy reflects off the water surface (surface return), but some of the energy

continues downward through the water. Some of the remaining downwelling energy is

reflected upward from the water particles (volume backscatter), but some of it reaches the

bottom and is reflected upward (bottom return). Figure 2-6 illustrates this process. An

avalanche photodiode (APD) or photomultiplier tube (PMT) is typically used in the

aircraft to detect the energy from the returned laser pulse (Guenther 2001). The detector

uses a temporal filter, based on the estimated arrival time for the return pulse, with a large









enough time "window" to detect returns from the water surface as well as the water

bottom. The returns are digitized, usually at 1-nanosecond time intervals, and can be

plotted as return power over time, producing a return waveform (Figure 2-7). Figure 2-7

graphs the return power detected by the SHOALS ALB system from one emitted laser

pulse, logarithmically corrected to enhance the bottom return. The difference in time

between the rising edges of the surface return and the bottom return is used to determine the depth.
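The depth computation itself reduces to scaling this time difference by the speed of light in water. A minimal sketch (in Python, assuming a vertical path and illustrative timing values; operational systems also correct for the off-nadir beam geometry):

C_VACUUM = 0.299792458  # speed of light, meters per nanosecond
N_WATER = 1.33          # refractive index of water

def depth_from_waveform(t_surface_ns, t_bottom_ns):
    """Depth from the time gap between the surface and bottom rising edges,
    assuming a vertical path through the water column."""
    dt = t_bottom_ns - t_surface_ns
    # Halve for the two-way path; divide by n for the slower speed in water.
    return (C_VACUUM / N_WATER) * dt / 2.0

print(depth_from_waveform(100.0, 190.0))  # ~10.1 m for a 90 ns gap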


Figure 2-6. Interaction of ALB laser pulse with water body.


Most ALB systems are dual-wavelength, emitting an infrared 1064 nm pulse and a

collinear green 532 nm pulse. This is achieved using an Nd:YAG laser with 1064 nm

output that is frequency-doubled to produce the simultaneous green pulse. The infrared

and green signals are necessary for detecting the air-water interface (surface) and the









water bottom, respectively. The green signal is necessary for water penetration, although

a portion of it is reflected from the surface. However, this green surface reflection can be

biased by the volume backscatter, making it unreliable for surface detection. Also, in

shallow water, it is difficult to separate surface and bottom returns using the green signal

(Guenther 2001, Guenther and Mesick 1988). Infrared light has very little penetration

into water, providing a much cleaner return for surface detection in most circumstances.

However, infrared surface returns can become weak in calm winds, and produce false

returns above the surface from spray or sea smoke. In these situations, a red detector

(645 nm) can be used to sense the green-excited Raman backscatter from the surface

(Guenther et al. 1994). This occurs from the green signal exciting the surface water

molecules, which absorb some of the signal energy and emit the remainder (Raman

effect). Unlike the infrared signal, the Raman signal does not weaken in calm winds, and

is produced only by interaction with the water surface. However, this return is weaker

than a typical infrared return, and is normally used as a check or backup for the infrared.


[Figure: SHOALS return waveform with labeled surface return, volume backscatter, and bottom return peaks; return power plotted against time in nanoseconds.]


Figure 2-7. Laser pulse return waveform (logarithmic) from SHOALS system.









ALB systems use up to four sensors for detecting return signals, with two sensing the surface return (infrared and Raman) and two sensing the bottom return. For the bottom returns, an APD is employed to detect returns in shallow water, and a PMT for returns in deeper water. A problem

that occurs for these bottom sensors is that the surface returns may be six or seven orders

of magnitude stronger than the bottom returns, due to the exponential attenuation effects

from the water column (Guenther 2001). This change in signal strength can occur in just

tens of nanoseconds. To handle this dynamic range, some ALB systems enforce a minimum nadir angle on the scanned beam, which decreases the dynamic range of the return signals, allowing the sensors to be highly sensitive to weak returns without risk of saturation. Angles between 15 and 20

degrees are typically used. Additional benefits of avoiding small nadir angles include

minimizing the variations in depth biases, and increasing the likelihood of detecting small

objects (Guenther 2001).

ALB systems include a Global Positioning System (GPS) receiver and an Inertial

Navigation System (INS) for precise determination of the aircraft position and orientation

for each laser pulse. Simultaneous data collection from the aircraft GPS receiver and

multiple ground receivers enables the use of kinematic GPS (KGPS) techniques to solve

for aircraft positions with sub-decimeter accuracy. These positions are referenced to the

WGS-84 ellipsoid, which allows for the collection of topographic and bathymetric data referenced to the ellipsoid and eliminates the need for water level data (Guenther 2001).

The INS records the rotations of the aircraft in three dimensions (roll, pitch, and yaw),









necessary to correct the geometric effects of these rotations on the location of each laser

pulse.

Limitations

A measure of the effectiveness of ALB is the maximum surveyable depth (MSD),

which is defined as the maximum measured depth that meets existing accuracy standards.

Several factors, derived from the system and the environment, contribute to limiting the

MSD. System factors include the green laser pulse energy, electronic noise, and flight

altitude. Environmental factors include water clarity and bottom reflectivity (Guenther

2001). Water clarity is generally the most significant factor for limiting the MSD

(Guenther and Goodman 1978) because it has a negative exponential effect, while bottom

reflectivity has a negative linear effect (Guenther 2001).

The MSD can range from 50 meters in clear water to 10 meters in murky waters.

Typical results will be between two and three times the Secchi depth (Guenther 2001).

The Secchi depth refers to the maximum depth at which a black and white Secchi disk is

visible when lowered into the water (Tyler 1968). The properties of the water that

dominate the water attenuation effect will determine which multiplicative factor will

apply. Water attenuation effects are due to absorption and scattering components. If

absorption is the dominant component, the maximum surveyable depth will be closer to

two times the Secchi depth. With scattering the dominant effect, three times the Secchi

depth can be expected (Guenther 2001).
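This rule of thumb can be stated directly; the sketch below simply encodes the 2x and 3x multipliers quoted above and is not a formal water-clarity model.

def estimate_msd(secchi_depth_m, scattering_dominated=False):
    """Rough maximum surveyable depth from the Secchi depth (Guenther 2001):
    about 2x Secchi when absorption dominates, 3x when scattering dominates."""
    return secchi_depth_m * (3.0 if scattering_dominated else 2.0)

print(estimate_msd(12.0))                             # absorption-dominated: 24 m
print(estimate_msd(12.0, scattering_dominated=True))  # scattering-dominated: 36 m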

Another limitation of ALB is its poor ability to detect small objects. Keeping a navigation channel clear is of utmost importance for shipping, and small objects can be a hazard. ALB systems have difficulty detecting objects on the order of a one-meter

cube (Guenther 2001). The problem is the inability to separate small object returns from









bottom returns. Objects with larger surface areas and smaller heights, or with smaller

surface areas and larger heights, are much more easily detected due to the better

separation of the object returns from the bottom returns. This limitation is one reason

why current ALB technology cannot replace sonar (ibid.).

Benthic Mapping Methods

All mapping that uses airborne remote sensing must account for atmospheric effects; however, the biggest challenge in mapping the benthic environment is removing the attenuation effects of the water column. Over the past several decades, many different methods of benthic mapping have been attempted, including band ratios (Polcyn et al. 1970, Stumpf et al. 2002), radiative transfer models (Bierwirth et al. 1993, Lyzenga 1978), and neural networks (Sandidge and Holyer 1998). These methods mostly focus on obtaining depths; however, the related problem of benthic classification can also be investigated using the same techniques. In the following sections, we review these benthic mapping techniques, as well as other approaches, and then introduce a modified radiative transfer model that uses ALB bottom return data.

Neural Networks

A neural network is a parallel-processing architecture that learns to perform non-linear mappings (Lippman 1987). The network consists of layers of processing elements, called neurons, each of which forms a weighted sum of its inputs and generates an output using a non-linear transfer function. Outputs from

one layer of neurons are fed as inputs into the next layer. The weights in each neuron are

determined during a supervised training process, in which inputs and corresponding

known outputs are presented to the system, and the weights are solved for using an









iterative least-squares method (Sandidge and Holyer 1998). Figure 2-8 provides an

illustration of the neural network process.





[Figure: layered network diagram showing an input layer, one or more hidden layers, and an output layer.]


Figure 2-8. A neural network.


This training process was applied by Sandidge and Holyer (1998), using spectral

data as input and measured depth as output to train a neural network to determine depths

from hyperspectral data. Using sonar depth soundings and AVIRIS hyperspectral data

for the training process, the resulting neural network obtained sub-meter RMS accuracy

values for its estimated depths. The network also showed potential for generalizing, or

adapting to conditions different from the training set data.
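To make the mechanics concrete, the sketch below (a toy single-hidden-layer network with random, untrained weights, not the architecture of any cited study) implements the forward pass described above: each neuron forms a weighted sum of its inputs and applies a non-linear transfer function, and one layer's outputs feed the next.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(spectrum, W1, b1, W2, b2):
    """One hidden layer: weighted sums plus a non-linear transfer function,
    followed by a linear output neuron producing a depth estimate."""
    hidden = sigmoid(W1 @ spectrum + b1)
    return W2 @ hidden + b2

rng = np.random.default_rng(0)
n_bands, n_hidden = 24, 8   # e.g., 24 spectral bands as input
W1, b1 = rng.normal(size=(n_hidden, n_bands)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), np.zeros(1)

print(forward(rng.random(n_bands), W1, b1, W2, b2))  # untrained, so meaningless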

Band Ratios

A more deterministic approach to benthic mapping consists of using the ratio of

certain bands for depth determination and bottom classification. Polcyn et al. (1970)

used the model shown in Equation 2-9, the components of which are described below.

L_i = L_si + k_i r_Bi e^{-f K_i z}    (2-9)









L_i = measured upwelling radiance, for band i.
L_si = radiance measured over deep water, due to surface reflection and atmospheric scattering.
k_i = a constant that includes solar irradiance.
r_Bi = bottom reflectance for bottom type B and band i.
K_i = water attenuation coefficient.
f = a geometric factor to account for the path through the water.
z = water depth.

The algorithm developed by Polcyn et al. (1970) assumed that two wavelength

bands existed such that the ratio of the bottom reflectance values in those bands remained

constant, regardless of the changing bottom types. This assumption is shown in Equation

2-10, for bottom types A and B, and bands 1 and 2. Using the model in Equation 2-9 and

the above assumption, the depth could be calculated using Equation 2-11. The value R is

the ratio shown in Equation 2-12.

r_A1 / r_A2 = r_B1 / r_B2 = ... = R_b    (2-10)

z = [-1 / (f (K_1 - K_2))] ln( (k_2 / k_1) (R / R_b) )    (2-11)

R = (L_1 - L_s1) / (L_2 - L_s2)    (2-12)

This algorithm also assumes that the difference between the attenuation coefficients

(K_1 - K_2) is constant. Choosing bands that satisfy this assumption as well as the constant

ratio assumption in Equation 2-10 was found to be difficult. However, this method was

applied to airborne and space-borne multispectral data in shallow, clear water with some

success (Polcyn and Lyzenga 1973). A variant of this method (Stumpf et al. 2002) was

applied using IKONOS satellite data, resulting in depth estimates within 2-3 meters of

ALB values.
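A sketch of the depth computation in Equations 2-11 and 2-12 follows; all input values are synthetic, and in practice L_s, k, K, f, and R_b must be calibrated for the scene.

import numpy as np

def band_ratio_depth(L1, L2, Ls1, Ls2, k1, k2, K1, K2, f, Rb):
    """Depth from a two-band radiance ratio (Equations 2-11 and 2-12)."""
    R = (L1 - Ls1) / (L2 - Ls2)                            # Equation 2-12
    return -np.log(R * k2 / (k1 * Rb)) / (f * (K1 - K2))   # Equation 2-11

# Illustrative values chosen to yield roughly a 10-meter depth.
z = band_ratio_depth(L1=0.047, L2=0.05, Ls1=0.02, Ls2=0.01,
                     k1=1.0, k2=1.0, K1=0.12, K2=0.08, f=2.0, Rb=1.5)
print(z)  # ~10.0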









The model in Equation 2-9 was similarly applied to determine bottom types using

multispectral data, with an assumption that the radiance ratio R (Equation 2-13) would be

independent of water depth as long as the attenuation coefficients are the same in both

bands (Lyzenga 1978, Wezernak and Lyzenga 1975).

R = (k_1 r_B1) / (k_2 r_B2)    (2-13)

This value of R would represent an index of bottom type, assuming the benthic

areas mapped have different ratios for the bands selected. This method was successful

for mapping algae at varying depths along the shore of Lake Ontario (Wezernak and

Lyzenga 1975). However, its success was limited to separating only algae and sand

within the same water type. The addition of multiple water and benthic types caused

difficulty with the algorithm. Also, this method is restricted to using only two bands,

failing to take advantage of the information available in the full spectrum of a given

benthic type (Lyzenga 1978).

Radiative Transfer Model

Due to some of the limitations with the above ratio methods, the development of a

radiative transfer model was a natural progression. The focus is to mathematically

describe the attenuation of light as it passes through a water body. As photons travel

through water, they undergo scattering and absorption processes with the particles in the

water and with the water molecules themselves (Jerlov 1976). These processes attenuate

the energy flux, which is defined as the product of the energy per photon and the number

of photons per unit area per unit time. This downwelling energy flux will continue to

decrease as it continues through the water, and will eventually reach zero at a depth that

is dependent upon the water properties. The modeling of this process can be described









using an explanation of Beer's Law, which is given below and adapted from Bukata et al.

(1995).

The change in downwelling energy flux is proportional to the change in the number

of photons N in the flux, since the energy per photon, hν, is constant (h is Planck's constant, ν the light frequency). Also, the chance of attenuation increases with increasing

thickness of the medium the light passes through. Given N photons incident upon a

medium (e.g., water) of thickness Δr, the reduction in the number of emergent photons, ΔN, would be proportional to the product of N and Δr. This is shown in Equation 2-14, where the constant of proportionality α is the attenuation coefficient.

ΔN = -αNΔr    (2-14)

Taking the limit as both ΔN and Δr approach zero produces Equation 2-15 (Beer's Law). Integrating Equation 2-15 from zero to a thickness r of an absorbing medium produces Equation 2-16. In this description, it is assumed that the attenuation property of the medium is constant, so α is invariant with respect to r.

dN / N = -α dr    (2-15)

N(r) = N e^{-αr}    (2-16)

Because of the proportional relationship between the energy flux and N, Equation 2-16 can be modified for energy flux Φ, producing Equation 2-17, which shows the exponential decrease of energy flux as it passes through the medium. The λ term is added to indicate wavelength dependency, and α(λ) is the attenuation coefficient.

Φ(r, λ) = Φ(0, λ) e^{-α(λ) r}    (2-17)









This model, describing the attenuation of light in a medium, has been applied by

several researchers (Bierwirth et al. 1993, Philpot 1989) in order to develop a model for

light attenuation in water. Most of the motivation for developing these models is for

depth determination. However, they can also be used for benthic classification.

Equations 2-18 and 2-19 are fundamental equations used for modeling radiative transfer

in water. The components of the equations are described below.

L_λ^surface = L_λ^bottom e^{-2k_λ d} + L_λ^water    (2-18)

L_λ^water = L_λ^deep (1 - e^{-2k_λ d})    (2-19)

L_λ^surface = upwelling radiance measured just below the water surface, at wavelength λ.
L_λ^bottom = upwelling radiance due to reflection from the water bottom, measured just above the bottom.
L_λ^water = upwelling radiance due to scattering within the water column.
L_λ^deep = upwelling radiance of optically deep water, measured just below the water surface.
k_λ = diffuse attenuation coefficient of water.
d = depth.

Equation 2-18 describes the upwelling radiance measured just below the water

surface, consisting of additive components from the water bottom and the water column.

The bottom radiance is attenuated exponentially as a function of depth d and a water

attenuation coefficient k_λ. Note the 2k_λd term in the exponent rather than the expected k_λd term. This is necessary because the energy flux is attenuated twice, since it passes through water of depth d from the surface to the bottom, and again from the bottom back to the surface. If the k_λd term is used instead (without the 2), then k_λ would represent a two-way attenuation coefficient (Philpot 1989).

Equation 2-19 describes upwelling radiance of the water column as being a

maximum in optically deep water (water too deep for light to reach the bottom), and









decreasing exponentially as a function of k and d. Substituting Equation 2-19 into 2-18

produces Equation 2-20.

L_λ^surface = (L_λ^bottom - L_λ^deep) e^{-2k_λ d} + L_λ^deep    (2-20)

Equation 2-20 is simply a modification of Equation 2-18 to account for changes in

water column radiance due to changes in depth. Note that all the parameters in Equation

2-20, with the exception of depth d, are wavelength dependent. Also, this model assumes

the water is vertically homogeneous in its optical properties (Philpot 1989).

In order to use this model for benthic classification, we must solve for upwelling

bottom radiance. This requires that some assumptions be made about several of the

parameters in the model. Many researchers have used the assumptions of a constant

bottom type, a constant diffuse attenuation coefficient (both horizontally and vertically)

over an area, and the use of a deep-water radiance measurement for L_λ^deep (Brown et al. 1971, Lyzenga 1978, Philpot 1989). Initial estimates for L_λ^bottom and k_λ could then be obtained using a linearized version of Equation 2-20, given below.

ln(L_λ^surface - L_λ^deep) = ln(L_λ^bottom - L_λ^deep) - 2k_λ d    (2-21)


Using Equation 2-21 and having the above assumptions in place, a minimum of

two surface radiances and corresponding depths (three for a least-squares solution) would

be needed to solve for L_λ^bottom and k_λ. The results could then be used as beginning

estimates for an iterative, non-linear least-squares solution of Equation 2-20. Using a

Taylor series expansion, and truncating the higher order terms, Equation 2-22 (in matrix

form) was produced for determining a non-linear solution.









[L_λ^surface - ((L_λ^bottom - L_λ^deep) e^{-2k_λ d} + L_λ^deep)] = [e^{-2k_λ d}   1 - e^{-2k_λ d}   -2d(L_λ^bottom - L_λ^deep) e^{-2k_λ d}] [ΔL_λ^bottom   ΔL_λ^deep   Δk_λ]^T    (2-22)

For multiple wavelengths (as with multispectral and hyperspectral imagery) there would be an L_λ^bottom and k_λ to solve for at each wavelength. However, there would also be an additional equation with each added wavelength, so the above minimum of two surface radiances and corresponding depths would still hold with the assumptions of constant L_λ^bottom and k_λ over the area.

Note that the above equations are given using spectral radiance L_λ. However,

these equations would also be valid for remote sensing reflectance simply by normalizing

to the downwelling irradiance at the water surface. This has been demonstrated by Lee et

al. (1998) and Bierwirth et al. (1993).
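For a single band, the linearized solution of Equation 2-21 amounts to an ordinary line fit: regressing ln(L_λ^surface - L_λ^deep) against depth gives -2k_λ as the slope and ln(L_λ^bottom - L_λ^deep) as the intercept. A minimal sketch with synthetic values (not the processing code used in this research):

import numpy as np

# Synthetic truth for one band.
L_bottom, L_deep, k = 0.30, 0.02, 0.08

d = np.array([3.0, 5.0, 8.0, 12.0, 15.0])   # depths (m)
L_surface = (L_bottom - L_deep) * np.exp(-2 * k * d) + L_deep

# Equation 2-21: ln(Ls - Ldeep) = ln(Lb - Ldeep) - 2 k d
y = np.log(L_surface - L_deep)
slope, intercept = np.polyfit(d, y, 1)

k_hat = -slope / 2.0
L_bottom_hat = np.exp(intercept) + L_deep
print(k_hat, L_bottom_hat)  # initial estimates for the iterative solution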

Other Techniques

Some researchers have combined passive imagery with lidar bathymetry in order to

interpolate/extrapolate depths for the entire passive image (Kappus et al. 1998, Lyzenga

1985). Lyzenga (1985) used regression analysis between depth and a linear combination

of all the possible band pairs from a multispectral data collection. The band pair with the

largest correlation coefficient was selected as the best choice for depth determination.

Results from that research showed the potential for lidar bathymetry to be used to

calibrate passive imagery for depth extraction.

Eigenspace analysis uses linear algebra techniques to represent the variation of

spectral data. Instead of being represented by band or wavelength, the spectral data are









rotated into a system of eigenvectors and corresponding eigenvalues, with each

eigenvector orthogonal to the others. The eigenvector with the greatest eigenvalue is the

direction of maximum variance in the data. Philpot (1989) used eigenspace analysis for

bathymetric mapping using multispectral imagery. Using an assumption of constant

bottom type and water attenuation, and applying a linearized radiative transfer model (as

in Equation 2-21), multi-band data can be combined such that the first eigenvector

(maximum variance) is correlated to varying depth (Philpot 1989).
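A compact illustration of this rotation with synthetic data: the eigenvectors of the band covariance matrix define the new axes, and the largest-eigenvalue direction carries the maximum variance.

import numpy as np

rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # 500 pixels, 4 bands

cov = np.cov(pixels, rowvar=False)       # band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
rotated = (pixels - pixels.mean(axis=0)) @ eigvecs[:, order]
print(eigvals[order])  # variance along each orthogonal eigenvector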

Modified Radiative Transfer Model

The methods given above describe techniques for water attenuation removal for

passive, remotely sensed data. Included among the methods is a radiative transfer model,

which exploits upwelling radiance, or estimated reflectance, to obtain benthic

information. In this section we introduce a new type of radiative transfer model, which

exploits the ALB waveform return in order to obtain benthic information. However, the

parameter provided by an active system, such as an ALB, for representing a waveform

return is not radiance or reflectance, but returned power, denoted P_r. P_r is a function of many parameters (see Equation 2-23), including system characteristics, atmospheric effects, and reflectance of the target at the laser wavelength (Lee and Tuell 2003). Our interest in P_r is its relationship to reflectance, which is an intrinsic property of the target. Within the topographic laser mapping community, P_r (or a measurement proportional to P_r) is often referred to as intensity. Some researchers have used intensity for scene

classification by draping intensity images over DEMs (Carter et al. 2001), while others

have combined intensity with passive imagery by applying data fusion algorithms (Park

et al. 2001, Tuell 2002b).









We introduce a modified radiative transfer model, designed for use with the

SHOALS ALB system (see Appendix A). Each data point in the SHOALS dataset

consists of a depth, UTM coordinates, flightline, output laser energy, and a bottom return

amplitude measurement from the APD and the PMT. The bottom return amplitude

measurements are peakpower values, representing the height of the highest point of the

bottom return (see Figure 2-7), and are recorded in photoelectrons per nanosecond. Our

goal is to extract benthic information from the bottom return values due to variation of

bottom reflectance, so we need to account for any other factors that could influence these

values.

The bottom return peak power measurement is a function of several environmental

and system factors, and is described in the bathymetric laser radar equation, given in

Equation 2-23 (Guenther 1985). These factors include transmitted power P_T, depth D, aircraft altitude H, bottom reflectance ρ, water attenuation k, the beam nadir angle θ, the refracted nadir angle φ, the refractive index of water n_w, the effective area of the receiver optics A_r, the receiver field-of-view loss factor F_p, a pulse stretching factor n(s, ω_0, θ), where s is the scattering coefficient and ω_0 the single scattering albedo, a combined optical loss factor τ for the transmitter and receiver optics, and an empirical scaling factor m, used to account for air path loss and system detuning.

P_r = m [P_T ρ τ F_p A_r cos²θ / (π (n_w H + D)²)] e^{-2 n(s, ω_0, θ) k D sec φ}    (2-23)


Solving for ρ would be desirable; however, it would require access to all of the above parameters. Therefore, we make use of the P_r measurement in place of ρ, and









assume a constant effect from the atmosphere, the optics, and the receiver. We have

ALB depth measurements, so we can model the water attenuation and solve for k. Since

the output laser energy can vary from pulse to pulse, we can correct for it by normalizing

the bottom return value by the output laser energy measurement. The result is a modified

peak bottom return value, which we refer to as the Normalized Bottom Amplitude

(NBA), and we denote using the symbol v.

Normalizing the P, measurement by the output energy to generate v parallels the

conversion to reflectance for hyperspectral data. However, the v value is not a unitless

reflectance value, since the bottom return is in photoelectrons per nanosecond, and the

output laser energy in millijoules. We could account for this to obtain a unitless value,

however the bottom return is extremely small compared to the outgoing energy, so the

resulting values are very small and difficult to work with. Since this conversion from

photoelectrons to millijoules involves multiplicative constants, the difference between a

unitless normalization and v is simply a matter of scale. Therefore, for convenience, we

use the v values.

We would like to exploit these v values to extract benthic information, however

they are subject to exponential water attenuation just as passive data are in the radiative

transfer model in Equation 2-18. However, this equation includes a term for water

column radiance, L_λ^water, which is necessary for passive data, but does not apply for ALB

bottom return data. The signal that returns to the ALB airborne sensor is digitized at 1-

nanosecond intervals, which provides for the separation of the return into different parts

based on depth, as shown in Figure 2-7. The volume backscatter part of the return

waveform represents the contribution from the water column. In terms of time, the









bottom return is detected by the airborne sensor after the volume backscatter is detected,

so the bottom return measurement does not include any contribution from the water

column. Therefore, an L_λ^water term is not needed in our radiative transfer model for ALB

data.

Additional modifications to Equation 2-18 are necessary as well. The L_λ^surface term is replaced by the NBA value v, and the L_λ^bottom value with pseudoreflectance p. We use the pseudo prefix because p is a function of reflectance, is not unitless, and we have not accounted for all the parameters in Equation 2-23. We also remove the λ subscripts since the v values are the result of only one wavelength (532 nm green). These changes to

Equation 2-18 are reflected in Equation 2-24 below.


v = p e^{-2kd}    (2-24)

Subsequent modifications to the linearized and iterative solutions are needed as well, shown in Equations 2-25 and 2-26.

ln(v) = ln(p) - 2kd    (2-25)

[v - p e^{-2kd}] = [e^{-2kd}   -2dp e^{-2kd}] [Δp   Δk]^T    (2-26)



These modified radiative transfer model equations can then be applied in similar

fashion to the original equations to solve for p and k for the ALB bottom return. As

with the passive model, an assumption of constant p and k is necessary. The application

of these modified equations will be demonstrated in the next chapter.
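Under the constant-p and constant-k assumptions, the linearized form in Equation 2-25 is again a straight-line fit of ln(v) against depth. A sketch with synthetic values (not the processing code used here), including the normalization of each peak bottom return by its output laser energy:

import numpy as np

# Normalized Bottom Amplitude: peak bottom return / output laser energy.
peak_return = np.array([420.0, 300.0, 160.0, 90.0])   # photoelectrons/ns
laser_energy = np.array([5.1, 5.0, 4.9, 5.0])         # millijoules
v = peak_return / laser_energy

d = np.array([4.0, 6.0, 10.0, 14.0])                  # ALB depths (m)

# Equation 2-25: ln(v) = ln(p) - 2 k d
slope, intercept = np.polyfit(d, np.log(v), 1)
k_hat = -slope / 2.0
p_hat = np.exp(intercept)                             # pseudoreflectance
print(k_hat, p_hat)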














CHAPTER 3
EXPERIMENT

In Chapter 2, we provided examples of research into removing the effects of water

attenuation found in passive imagery. The resulting water-corrected imagery, obtained

over coastal waters, could then be used for benthic mapping. We then introduced

pseudoreflectance, a parameter derived from the peak bottom return in ALB data. Our

goal is to combine these data to produce a benthic map. In this chapter, we explain the

experiment performed to test the validity of obtaining benthic information from ALB

bottom waveform returns.

In the experiment, we used georegistered datasets from hyperspectral and ALB

systems. We corrected the hyperspectral data for water attenuation using the radiative

transfer model described previously. We then corrected the ALB bottom return data for

water attenuation using the modified radiative transfer model from Chapter 2. Both

datasets were then classified using a supervised classification scheme. We also

performed a supervised classification on a third dataset, consisting of hyperspectral bands

plus a band containing ALB depths. Next, we merged the hyperspectral and ALB

classifications using a data fusion approach. Finally, we performed an accuracy

assessment of the classifications, including the hyperspectral, ALB, hyperspectral-plus-

depths, and data fusion results, with the expectation that the data fusion classification

would produce a higher mapping accuracy than that from either the hyperspectral or the

ALB classification.









This chapter begins with an overview of the datasets used in this experiment,

including hyperspectral data provided by the AVIRIS system, ALB data from the

SHOALS system, and ground-measured hyperspectral data obtained from an ASD

handheld spectrometer. We then move into the preprocessing steps, which include the

georegistration of the datasets, the conversion to reflectance and pseudoreflectance, and

the removal of wave effects from the water surface. Next, we describe the details of the

water attenuation removal process, showing similarities and differences in this process

between the AVIRIS and SHOALS datasets. We then explain the supervised

classification process, using the Maximum Likelihood classifier on both water-corrected

AVIRIS and SHOALS datasets, and the AVIRIS-plus-depths dataset. Afterward, we

describe the use of the Dempster-Shafer decision-level data fusion of the AVIRIS and

SHOALS classified datasets, providing a fourth classification of the project area. Lastly,

we provide an accuracy assessment of the four classifications, including Overall

accuracies, Kappa coefficients, User accuracies, and Producer accuracies.

Most of the work described in this chapter was implemented using the ENVI/IDL

software suite (Research Systems Incorporated 2002). This package provided the

freedom to write our own computer programs, in the IDL language, to ensure the

algorithms were implemented correctly. Any other software packages used are specified

in the text.

Datasets

The data collected for this research were obtained over Kaneohe Bay, Hawaii, an

area known to contain coral, algae and seagrass within relatively clear waters. The

hyperspectral data were collected using the Airborne Visual/Infrared Imaging

Spectrometer (AVIRIS) system, flown in April 2000. The AVIRIS system obtained









imagery along northwest-southeast flightlines covering all of Kaneohe Bay, as shown in

Figure 3-1. The red box represents the area of focus for our research, which measures

about 4600 meters by 2600 meters. The AVIRIS system collects 20-meter pixels with an

11-kilometer wide swath per flightline. Additional specifications on the AVIRIS system

are provided in Appendix A.




Figure 3-1. Georegistered AVIRIS image of Kaneohe Bay, Hawaii. The red box
indicates the area of research (4600 m x 2600 m).


In addition to the hyperspectral data, ALB data were obtained using the Scanning

Hydrographic Operational Airborne Lidar Survey (SHOALS) system, flown in August

2000. The SHOALS system was flown along northwest-southeast flightlines at an

altitude of 300 meters, collecting bathymetric data for most of Kaneohe Bay, and

topographic data for most of the shoreline areas. Depths from the SHOALS system were

obtained at a nominal spacing of about 4 meters, with a 110-meter wide swath per

flightline. Depths within the outlined research area in Figure 3-1 range from 1 meter in









the lower left, to 35 meters in the upper right. Additional specifications on the SHOALS

system are provided in Appendix A.

Spectral data were also collected at ground level using a FieldSpec Pro hand-held

spectrometer, manufactured by Analytical Spectral Devices (ASD). Measurements were

acquired in June 2000 over several locations surrounding the bay, each with different

types of ground cover, such as asphalt, grass and sand. In addition to each ground cover

measurement, a measurement was acquired over a Spectralon panel at each location,

immediately after each ground measurement. Spectralon is a nearly perfect diffuse

reflector. Therefore, the spectralon measurements provided an estimate of irradiance,

which when divided into the corresponding ground radiance measurement gives an

estimate of reflectance for that ground type. The spectrometer measures in 1-nanometer

channels in the visible and infrared wavelengths (350 nm to 2400 nm). Additional

specifications for the FieldSpec Pro are provided in Appendix A.

Preprocessing

In this section we describe the preprocessing steps applied to each dataset before

attempting to remove water attenuation effects. These steps include georegistration,

conversion to reflectance, and removal of surface wave effects. We first describe these

steps as applied to the AVIRIS data, and then for the SHOALS data.

AVIRIS

Our area of research consists of a significant portion of Kaneohe Bay. This area

resides within one AVIRIS flightline, flown from the southeast to the northwest, and can

be observed from the linear imaging in that direction in Figure 3-1. The original imagery

was not georegistered, which is necessary for merging geospatial data. Georegistration

requires selecting image-identifiable points with known coordinates, called control









points, and performing a transformation to a known coordinate system. Because of the

difficulty with selecting points in water, we decided to georegister the entire bay,

including the topography surrounding it, and use image-identifiable points from the

topography to generate the transformation parameters. Our area of research could then be

clipped from the georegistered image of the bay.

Two different sources were used for providing control points around the bay. One

source was an orthoimage of the southeastern portion of the bay, which was provided by

the Remote Sensing Division of NOAA/NGS. The other source was a USGS quadrangle

map, in digital format, which covered most of the bay. Both sources were referenced to

the NAD83 datum, and common points from the sources compared to within 15 meters,

well within the pixel size (20 meters) of the AVIRIS imagery.

The orthoimage was generated using high quality aerial photography obtained in

April 2000. The photography was scanned into digital format, and imported into a

softcopy photogrammetric software package called Socet Set, produced by LH Systems.

Using image-identifiable points with corresponding three-dimensional coordinates

(obtained from GPS observations and post-processing), the software georegistered and

mosaicked the imagery, generated a digital terrain model, and then produced the

orthographically corrected image (Woolard, J., NOAA/NGS, personal correspondence,

December 2002).

The georegistration of the AVIRIS image was performed using ERDAS Imagine

software (Leica Geosystems 2002). Fifteen control points were selected from the

topography surrounding the bay, and an affine transformation was calculated for

converting the image coordinates to the NAD83 datum. The results from this









transformation produced RMS values of 0.48 pixels in the X direction, and 0.56 pixels in

the Y direction. The image was then resampled to the UTM coordinate system using the

cubic convolution method, producing the image in Figure 3-1. Our area of research was

then clipped from this resultant image for further processing.

Each pixel of AVIRIS imagery contains data in the form of at-sensor measured

radiance. However, as mentioned in the previous chapter, radiance is not an inherent

property of the object space, but is a function of solar irradiance, atmospheric

transmittance, and additive radiance from sources other than the target pixel. Therefore,

it was necessary to convert the AVIRIS radiance image to units of reflectance. Due to

the availability of ground reflectance data obtained with the hand-held spectrometer, we

employed the Empirical Line Method (ELM) to produce a reflectance image.

The ASD FieldSpec Pro hand-held spectrometer records radiance for 2100

channels with bandwidths of about 1 nanometer per channel. However, the AVIRIS

channels have bandwidths of about 10 nanometers. In order to use the ASD ground

spectra for the ELM process, we convolved the 1-nanometer ground spectra to the 10-

nanometer AVIRIS bandpass channels. This was done by appropriately weighting the

contributions from the ASD channels to create new channels at the same bandwidth as

the AVIRIS channels. The weighting was based on the spectral response curves for the

AVIRIS channels (assumed to be Gaussian) and generated using spectral calibration

information (channel center wavelength and full-width half-maximum values) for the

Kaneohe Bay flight (Appendix A).
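A sketch of this convolution step is given below; the center wavelengths and FWHM values shown are illustrative stand-ins for the calibration values tabulated in Appendix A. Each synthesized channel is the Gaussian-weighted average of the 1-nanometer ASD samples.

import numpy as np

def to_sensor_channels(wl_nm, spectrum, centers_nm, fwhm_nm):
    """Resample a 1-nm spectrum to broader channels using Gaussian spectral
    response curves defined by center wavelength and FWHM."""
    out = np.empty(len(centers_nm))
    for i, (c, fwhm) in enumerate(zip(centers_nm, fwhm_nm)):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        w = np.exp(-0.5 * ((wl_nm - c) / sigma) ** 2)
        out[i] = np.sum(w * spectrum) / np.sum(w)   # weighted average
    return out

wl = np.arange(400, 701)                 # 1-nm ASD grid (illustrative)
asd = 0.2 + 0.001 * (wl - 400)           # synthetic reflectance spectrum
centers = np.array([413.0, 510.0])       # e.g., AVIRIS bands 5 and 15
fwhm = np.array([10.0, 10.0])
print(to_sensor_channels(wl, asd, centers, fwhm))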

After converting the ASD ground spectra to the AVIRIS channel format, we used

these reflectance spectra to perform the ELM procedure. We selected points from the









AVIRIS imagery co-located with the recorded ASD ground spectra, and performed a

linear regression of AVIRIS radiance to ASD reflectance for each AVIRIS channel,

following the ELM procedure described in Chapter 2. Since the AVIRIS imagery and the

ground reflectance data were not temporally registered, we selected points where the

reflectance values were likely to be temporally invariant (e.g., concrete, asphalt, beach

sand) to perform the regression. A plot of several of these selected points, showing

reflectance on the x-axis and radiance on the y-axis, is given in Figure 3-2. Note the

linear relationship between the radiance and reflectance values (additional points to help

verify the linear relationship were difficult to locate). Results from the regression for

each band were applied to the AVIRIS radiance image, producing a reflectance image for

the project area. AVIRIS channel 15 (510 nm) of this reflectance image is shown in

Figure 3-3.




Figure 3-2. Plot of ground points with their reflectance values and corresponding
AVIRIS radiance values for band 5 (413 nm).
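A sketch of the per-band ELM regression (synthetic values; in practice the pairs come from image pixels co-located with the ASD targets): the gain and offset from each band's reflectance-to-radiance line fit are inverted and applied to the whole band.

import numpy as np

# Paired values for one band at the calibration targets (illustrative).
asd_reflectance = np.array([0.05, 0.12, 0.21, 0.30])
aviris_radiance = np.array([3400.0, 4300.0, 5500.0, 6700.0])

# Fit radiance = gain * reflectance + offset, then invert for the image.
gain, offset = np.polyfit(asd_reflectance, aviris_radiance, 1)

radiance_band = np.array([[4000.0, 6000.0],
                          [3600.0, 5000.0]])   # toy image band
reflectance_band = (radiance_band - offset) / gain
print(reflectance_band)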




























Figure 3-3. AVIRIS reflectance image (band 15, 510 nm) of the research area.


The last step in the preprocessing phase for the AVIRIS imagery was to remove the

effects caused by reflection off the surface waves. These waves are clearly visible in

Figure 3-3, showing the waves moving mostly from the northeast to the southwest.

Several methods for wave removal were attempted, with the best results obtained from

just two techniques. The first was to subtract the reflectance of an infrared band from

that of the visual bands, similar to the method used by Estep et al. (1994). Since only the

visual bands will penetrate water, any reflectance over water in an infrared band should

be from the surface. The second method was to apply a Fast-Fourier Transform (FFT) to

an infrared band and to each of the visual bands, subtract the infrared FFT image from each of the visible FFT images, and then perform an inverse FFT on each of the difference images. This method is shown in Equation 3-1. Results from

this second method proved to be visually superior to the first one, and the resulting wave-

removed AVIRIS reflectance image, band 15 (510 nm), is shown in Figure 3-4. Note the









significant reduction in surface wave effects in the deeper water (northeast quadrant) area

of the image, and the increase in bottom contrast.


Corrected visible band = IFFT( FFT(Visible) - FFT(Infrared) )    (3-1)


Figure 3-4. AVIRIS image (band 15, 510 nm) corrected for surface waves using FFT
method.
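A sketch of the Equation 3-1 computation using two-dimensional FFTs (the band arrays here are random placeholders; in practice they are the georegistered visible and infrared reflectance bands):

import numpy as np

def remove_waves(visible_band, infrared_band):
    """Equation 3-1: subtract the infrared FFT from the visible-band FFT and
    invert. Surface-wave glint is common to both bands, while the infrared
    band carries no bottom signal, so the difference suppresses the waves."""
    diff = np.fft.fft2(visible_band) - np.fft.fft2(infrared_band)
    return np.real(np.fft.ifft2(diff))

vis = np.random.rand(128, 128)   # placeholder reflectance arrays
nir = np.random.rand(128, 128)
corrected = remove_waves(vis, nir)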


SHOALS

The amount of preprocessing needed with the SHOALS dataset was considerably

less than that with the AVIRIS dataset. The SHOALS data are provided in point format,

with UTM northing and easting coordinates, referenced to the NAD83 datum, given for

each point. The data are georegistered because the SHOALS system includes a GPS

receiver and an INS system, providing accurate position and orientation measurements of

the aircraft for each emitted laser pulse (Chapter 2).

Each data point in the SHOALS dataset consists of a depth, UTM coordinates,

flightline, output laser energy, and a bottom return amplitude measurement from the APD











and the PMT. Following our modified radiative transfer model described in Chapter 2,

we normalized the bottom return amplitude measurements from the APD and the PMT,

for each pulse, by dividing each measurement by its associated output laser energy. This

produced a Normalized Bottom Amplitude value, denoted v, for each pulse.

The AVIRIS imagery contains reflected radiance measurements, with contributions

from the water surface as well as from the water column and bottom. The water surface

effects were removed using an FFT subtraction technique. For the SHOALS system, the

reflected measurement for a laser pulse is temporally digitized, at 1-nanosecond intervals.

This causes returns from objects closer to the aircraft to be detected first, as is the surface

return shown in Figure 2-7. The bottom return is therefore temporally separated from the

surface effects, eliminating the need to correct for this effect in the SHOALS data.

The SHOALS data consist of depths and peak bottom return values for each laser

pulse, and there is roughly a 4-meter spacing between adjacent pulses. This "point

format" is different from that of the AVIRIS data, which is rasterized. Therefore the

SHOALS data points, both depth and v, were "binned" into 20-meter pixels, coincident

with the 20-meter pixel locations from the AVIRIS image, providing an active image

(from the v values) and depth image coincident with the AVIRIS image. The value in

each binned pixel simply consists of the mean of the data points that were located within

the boundaries of that pixel. Figure 3-5 shows the SHOALS mean depth image, which

indicates increasing depths from left to right (west to east). Black pixels indicate areas

where no depth data were acquired.
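A sketch of this binning step with synthetic points (grid origin, size, and values are illustrative): each 20-meter cell receives the mean of the SHOALS points that fall inside it, and cells with no points are left as no-data.

import numpy as np

def bin_points(easting, northing, values, origin_e, origin_n,
               pixel=20.0, ncols=5, nrows=5):
    """Average point values into a raster aligned with the AVIRIS grid."""
    col = ((easting - origin_e) // pixel).astype(int)
    row = ((origin_n - northing) // pixel).astype(int)  # rows increase southward
    total = np.zeros((nrows, ncols))
    count = np.zeros((nrows, ncols))
    inside = (col >= 0) & (col < ncols) & (row >= 0) & (row < nrows)
    np.add.at(total, (row[inside], col[inside]), values[inside])
    np.add.at(count, (row[inside], col[inside]), 1)
    grid = np.full((nrows, ncols), np.nan)   # NaN = no data (black pixels)
    grid[count > 0] = total[count > 0] / count[count > 0]
    return grid

e = np.array([10.0, 15.0, 55.0])
n = np.array([95.0, 92.0, 40.0])
print(bin_points(e, n, np.array([1.0, 3.0, 5.0]), origin_e=0.0, origin_n=100.0))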

Water Attenuation Removal

In the process of extracting benthic information from the datasets, we have

discussed the preprocessing steps, including the georegistration and rasterization of the









data, handling of system variables, and accounting for atmospheric effects and surface

wave effects. The next step is to remove the effects caused by water attenuation. We

apply the radiative transfer equation theory presented in Chapter 2 in order to accomplish

this process. In the next sections we describe the water attenuation removal for the

AVIRIS dataset, followed by that for the SHOALS dataset.




















Figure 3-5. SHOALS mean depth image. Brown < 2.5m, Tan < 5m, Green < 10m, Blue
< 15m, Yellow < 25m, Dark Green < 40m, Black = no data.


AVIRIS

In Chapter 2, we provided radiative transfer equations for hyperspectral data in

terms of radiance, and mentioned that reflectance could be used as well. The AVIRIS

data have been converted to reflectance; therefore, the appropriate radiative transfer equations, given below for convenience, are shown using reflectance ρ.

ρ_λ^surface = (ρ_λ^bottom - ρ_λ^deep) e^{-2k_λ d} + ρ_λ^deep    (3-2)

ln(ρ_λ^surface - ρ_λ^deep) = ln(ρ_λ^bottom - ρ_λ^deep) - 2k_λ d    (3-3)









[ρ_λ^surface - ((ρ_λ^bottom - ρ_λ^deep) e^{-2k_λ d} + ρ_λ^deep)] = [e^{-2k_λ d}   1 - e^{-2k_λ d}   -2d(ρ_λ^bottom - ρ_λ^deep) e^{-2k_λ d}] [Δρ_λ^bottom   Δρ_λ^deep   Δk_λ]^T    (3-4)

As stated in Chapter 2, in order to solve for ρ_λ^bottom and k_λ in Equation 3-2, we must select points for which both of these parameters are constant. To satisfy the constant ρ_λ^bottom requirement, we selected pixels located only in sand areas, which are easily observed on the right side of the image in Figure 3-4, channeling down to the bottom center. These sand pixels are the input points for the regression. To satisfy the constant k_λ requirement, we assumed a constant k_λ for the entire research area. This assumption is not unreasonable, given the usually clear waters found off the coasts of Hawaii and the small research area.

Using the AVIRIS image in Figure 3-4, we selected 117 points in sand areas, with corresponding depths ranging from 6-30 meters. We also chose a pixel in optically deep water (estimated depth > 40 meters) to represent ρ_λ^deep. Using these inputs, we obtained a linearized least-squares solution for Equation 3-3, providing estimates for ρ_λ^bottom and k_λ. Using these estimates as initial values for Equation 3-4, we obtained an iterative least-squares solution for AVIRIS bands 5-28, providing updated estimates for ρ_λ^bottom, k_λ, and ρ_λ^deep. The updated estimates for k_λ and ρ_λ^deep are not particular to just the sand areas, and can be applied to the entire image. So using the new k_λ and ρ_λ^deep values in Equation 3-2, and solving for ρ_λ^bottom for each pixel, we were able to generate a ρ_λ^bottom image (i.e., water attenuation removed) of the research area. AVIRIS channel 15 of this water-corrected

attenuation removed) of the research area. AVIRIS channel 15 of this water-corrected









image is shown in Figure 3-6. The black pixels near the corners represent pixels where

no depth data were available.




















Figure 3-6. AVIRIS image (band 15, 510 nm) corrected for water attenuation.


SHOALS

The procedure used for removing water column attenuation effects from the

SHOALS data was similar to that described for the AVIRIS data, except that Equations

2-25 and 2-26 were used for obtaining the linearized and iterative least-squares solutions,

respectively. However, the application of this procedure was not as straightforward as

with the AVIRIS data. The SHOALS data in our research area were collected on August

12, 2000 and August 26, 2000. Within each collection day there were three flights over

the research area (flights a, b, and c). Figure 3-7 spatially depicts the six flights over our

area, with the number representing the day and the letter the flight. Note that the eastern

data were collected on August 12, and the western side obtained two weeks later on

August 26.













Figure 3-7. Spatial layout of SHOALS datasets collected over project area.

The multiple datasets over the research area provided several challenges for
processing. First, the two-week temporal separation between the eastern and western
data collections necessitated the separate processing of each area for water attenuation
removal. An assumption of constant k is necessary for applying Equations 2-25 and 2-26,
but that assumption cannot be justified for the entire research area given the two-week
time gap among the datasets. Therefore the datasets needed to be separately corrected for
water attenuation, and a method of combining the two datasets examined. Another issue
was with combining data from adjacent flights. A comparison of overlapping data from
adjacent flights indicated differences in the data values between flights, forcing the
development of a strategy for combining these datasets. A third challenge resulted from
working with both APD and PMT waveform data. As described in Chapter 2, the APD
can measure depths from 1-14 meters, and the PMT from 8-40 meters. Our research area
contains depths ranging from 0-40 meters, as shown in the depth image in Figure 3-5. A









method of combining data from these two receivers in their overlapping depth areas (8-14

meters) needed to be developed.




Figure 3-8. Plot of overlapping APD pixels from Areas 26a and 26b.


It was determined that these issues could be resolved by applying linear

regressions. A linear relationship was found between overlapping v data for flights

occurring on the same day. An example is shown in Figure 3-8, which plots APD v

values from overlapping pixels for Datasets 26a and 26b. Successive linear regressions

for Datasets 26a, 26b, and 26c produced a combined v dataset for APD Day 26 (these

areas were too shallow for PMT returns). A similar procedure was used to combine 12a,

12b, and 12c for a combined v dataset for APD Day 12, and also a combined v dataset for

PMT Day 12. A list of the linear regressions and associated r-squared values are shown

in Table 3-1.

The resulting datasets (v for APD Day 26, APD Day 12, and PMT Day 12) were

then corrected for water attenuation using Equations 2-25 and 2-26 to model the radiative









transfer. As with the AVIRIS data, sand points were selected from each dataset, and

results from the least-squares solutions provided estimates for p and k. The k values

were then used in Equation 2-24 to generate p images for APD Day 26, APD Day 12,

and PMT Day 12.

Table 3-1. Linear regressions between overlapping flight data and associated r-squared
values.
Linear Regression R-squared Value

Flight 26c APD to Flight 26b APD 0.85

Flight 26b APD to Flight 26a APD 0.83

Flight 12c APD to Flight 12b APD 0.87

Flight 12b APD to Flight 12a APD 0.88

Flight 12c PMT to Flight 12b PMT 0.85

Flight 12b PMT to Flight 12a PMT 0.78
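A sketch of this successive-regression strategy with synthetic overlap values: the line fit over shared pixels maps one flight onto the radiometric scale of its neighbor, after which the adjusted flight can be merged and the process repeated.

import numpy as np

def harmonize(v_src, v_ref):
    """Fit v_ref ~ a * v_src + b over overlapping pixels; return (a, b)."""
    a, b = np.polyfit(v_src, v_ref, 1)
    return a, b

# Overlapping-pixel v values from two adjacent flights (illustrative).
overlap_26b = np.array([4000.0, 6000.0, 9000.0, 12000.0])
overlap_26a = np.array([4400.0, 6500.0, 9800.0, 12900.0])

a, b = harmonize(overlap_26b, overlap_26a)
v_26b_adjusted = a * np.array([5000.0, 7000.0]) + b   # rescale all of flight 26b
print(a, b, v_26b_adjusted)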



The p images for APD Day 26 and APD Day 12 were then combined using a

linear regression. Figure 3-9 provides a plot of the overlap pixels for these datasets,

indicating a degree of linear relationship. However, the grouping on the right side of the

plot does not linearly fit with the grouping on the left side. The pixels associated with the

right-side grouping are spatially located in the lower (southern) third of the overlap area.

We do not yet understand what caused this behavior. One possibility is that k was not

homogeneous in one of the datasets, which is one of our simplifying assumptions. It is

difficult to determine which grouping is anomalous. Since most of the overlap area is

represented by the left-side grouping, we chose to only use the left-side points in the










regression. The red line in Figure 3-9 indicates the regression result, which had an

associated r-squared value of 0.69.




Figure 3-9. Plot of overlapping pixels from Areas 26 and 12.


The resulting APD dataset was then combined with the PMT Day 12 dataset to

create a p image of the entire research area. Again, a linear regression was employed.

Pixels with corresponding depths from 10-12 meters were selected as overlap pixels for

the regression. This range is well within the 8-14 meter sensitivity overlap of the APD

and PMT receivers. Figure 3-10 shows a plot of the overlap pixels from the APD and

PMT datasets, demonstrating the linear relationship. The associated r-squared value for

this regression was 0.85. Results from the linear regression were then used to regress

PMT pixels of 11 meters or deeper to APD values. The resulting APD regressed p

image is shown in Figure 3-11. Note the seam visible between Areas 26a and 12c,

indicative of the imperfections in combining these datasets. Another seam is visible

between Areas 12a and 12b. However, the image has classification value as can be







discerned from the distinct sand channel between adjacent coral and seagrass. The depth

image of the research area is given in Figure 3-12.




Figure 3-10. Plot of overlap pixels from APD and PMT receivers.



Figure 3-11. APD regressed pseudoreflectance image of research area.




























Figure 3-12. Depth image of research area.


Classification of Datasets

At this point, the AVIRIS and SHOALS datasets have been preprocessed and

corrected for water attenuation. The resulting datasets represent bottom reflectance

values for AVIRIS bands 5-28, and pseudoreflectance values for SHOALS waveform

data. The next step is to use a supervised classification technique to generate benthic

characterizations from these images. We used the Maximum Likelihood classifier

provided in the ENVI software suite. This classifier has been found to work well for data

fusion applications (Park 2002) due to the class probability information produced for

each pixel, which can be used as a priori input to the Dempster-Shafer data fusion

method (see next section).

Supervised classification requires ground truth data in order to train the classifier.

Our ground truth data came from a paper by the Analytical Laboratories of Hawaii (2001)

on the benthic habitats of the Hawaiian islands. This paper contained a benthic map of

Kaneohe Bay derived from aerial photography, scanned at 1-meter pixels, that was









photointerpreted for benthic types. Using ArcView software (Environmental Systems

Research Institute Incorporated 2000), we digitized the map, storing it in an ArcView

shapefile. Using ERDAS Imagine (Leica Geosystems 2002), we converted the shapefile

to raster format, and georegistered it to the same raster as our research area (RMS 1.2

pixels). Using the resulting image, five benthic types were identified in our research

area, including sand, colonized pavement, uncolonized pavement, macroalgae (10-50%)

and macroalgae (50-90%). This image provided ground truth information for the western

two-thirds of our research area. However, the sand in the northeast corner is easily

identifiable from the AVIRIS and SHOALS imagery. The ground truth image is shown

in Figure 3-13, and a class color legend for the image in Figure 3-14.



Figure 3-13. Ground truth image for our research area.


[Legend classes: Unclassified; Sand; Colonized Pavement; Uncolonized Pavement; Macroalgae 50-90%; Macroalgae 10-50%.]


Figure 3-14. Class color legend for ground truth image.








Using the ENVI software, Regions of Interest (ROIs) were selected corresponding

to each of the five benthic types. AVIRIS and SHOALS pixels within these ROIs could

then be used to train the Maximum Likelihood classifier. Figures 3-15 and 3-16 show the

ROIs selected, draped over the ground truth image, and the AVIRIS water-corrected

image (band 15), respectively. Red areas indicate pixels used to train the classifier, and

blue areas are pixels used to assess the accuracy of the classification.










Figure 3-15. Regions of Interest (ROIs) draped over ground truth image.


Figure 3-16. Regions of Interest (ROIs) draped over AVIRIS bottom reflectance image,
band 15.










Before performing the classification, some analysis was done to determine the

spectral separability of the benthic classes among the three datasets to classify (AVIRIS

bottom reflectance bands 5-28, SHOALS pseudoreflectance, and AVIRIS bottom

reflectance-plus-depths). An accepted method of determining class separability is the

Transformed Divergence method (Jensen 1996). This method calculates a metric for

multi-band images between 0.0 and 2.0, for each ROI pair, that indicates the statistical

separability of that pair. Values greater than 1.9 indicate good separability (ibid.). This

method was applied to both the AVIRIS bottom reflectance dataset and the AVIRIS-plus-

depths dataset, using the training ROIs. We determined that each possible pair, for both

datasets, had a Transformed Divergence value greater than 1.9, indicating good

separability among all the identified benthic classes for these images.
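For reference, a sketch of the Transformed Divergence computation between two Gaussian classes, following the standard formulation (e.g., Jensen 1996); the class means and covariances here are synthetic.

import numpy as np

def transformed_divergence(m1, C1, m2, C2):
    """Transformed Divergence between two Gaussian classes, scaled to [0, 2]."""
    C1i, C2i = np.linalg.inv(C1), np.linalg.inv(C2)
    dm = (m1 - m2).reshape(-1, 1)
    div = 0.5 * np.trace((C1 - C2) @ (C2i - C1i)) \
        + 0.5 * np.trace((C1i + C2i) @ (dm @ dm.T))
    return 2.0 * (1.0 - np.exp(-div / 8.0))

# Two well-separated synthetic classes in two bands.
m_sand, C_sand = np.array([0.30, 0.25]), 0.001 * np.eye(2)
m_algae, C_algae = np.array([0.08, 0.12]), 0.001 * np.eye(2)
print(transformed_divergence(m_sand, C_sand, m_algae, C_algae))  # ~2.0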

Since the SHOALS pseudoreflectance dataset has only one band, Transformed

Divergence could not be used to determine spectral separability. Therefore, we

calculated the mean and standard deviation within each ROI for the SHOALS

pseudoreflectance dataset. A plot of the spread of each class, showing the mean +/- 2

standard deviations, is shown in Figure 3-17. This analysis shows that, using only

pseudoreflectance values, it would be difficult to distinguish among the two densities of

macroalgae and uncolonized pavement, or between colonized pavement and sand.

Therefore, we added depth as a second band to the SHOALS dataset, and calculated

Transformed Divergence values for a two-band (pseudoreflectance-plus-depth) SHOALS

dataset. Results for each class pair were greater than 1.9, with the exception of the

uncolonized pavement and macroalgae (10%-50%) pair, which had a value of 1.7. This

indicated good separability among the five classes using the 2-band SHOALS dataset,









with some question of the separability between uncolonized pavement and macroalgae

(10%-50%). Therefore, the SHOALS dataset was classified using a pseudoreflectance

band and a depth band.

After obtaining a satisfactory indication of spectral separability, the Maximum

Likelihood classification was performed separately on the AVIRIS (24-band) dataset, the

SHOALS (2-band) dataset, and the AVIRIS-plus-depth dataset, using the training ROIs

selected earlier. The resulting classifications are shown in Figures 3-18, 3-20, and 3-21,

with a benthic class legend provided in Figure 3-19. Note that the unclassified pixels in

the lower left and upper right quadrants of the images are due to missing depth data.

Other unclassified pixels are located along boundaries between benthic classes, and may

be caused by mixed pixels.


Figure 3-17. Plot of +/- 2 standard deviation spread of pseudoreflectance values for each
class.









We then assessed the accuracy of the three classifications, based on accepted

methods described in Appendix B. Test ROIs were selected for each benthic class

(Figures 3-15 and 3-16), separate from the training ROIs used for producing the

classifications. The ENVI software includes post-classification accuracy utilities, which

used the test ROIs to assess the accuracy of each classification. The utilities generated an

overall accuracy and Kappa coefficient for each classification, which are shown in Table

3-2. Also generated were error matrices (described in Appendix B) for each

classification image, given in Tables 3-3, 3-4, and 3-5.
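
The overall accuracy and Kappa coefficient follow directly from each error matrix. The sketch below applies the standard formulas (Appendix B), squaring the matrix by giving the Unclassified row an all-zero reference column; fed the entries of Tables 3-3 through 3-5, it reproduces the values in Table 3-2:

    import numpy as np

    def accuracy_metrics(matrix):
        """matrix: rows = classified (Unclassified first), cols = reference."""
        x = np.asarray(matrix, dtype=float)
        if x.shape[0] == x.shape[1] + 1:
            # Square the matrix: zero reference column for Unclassified.
            x = np.hstack([np.zeros((x.shape[0], 1)), x])
        n = x.sum()
        observed = np.trace(x)
        # Chance agreement from the row and column marginals.
        chance = (x.sum(axis=1) * x.sum(axis=0)).sum()
        kappa = (n * observed - chance) / (n * n - chance)
        return observed / n, kappa

    # Table 3-3 as input yields approximately (0.802, 0.762).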






















Figure 3-18. Classification of AVIRIS bottom reflectance dataset.

Figure 3-19. Class color legend for classification images.
































Figure 3-20. Classification of SHOALS 2-band (pseudoreflectance and depth) image.


Figure 3-21. Classification of AVIRIS bottom reflectance-plus-depth dataset.









In addition to the error matrices, we also generated difference images between each

classification and the ground truth image (Figure 3-13), shown in Figures 3-22, 3-23, and

3-24. Colored pixels indicate a classification match between the ground truth and the

classification, and black pixels indicate a mismatch. The right side of these images is all

black since there is no ground truth imagery in that area.
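
Producing such a difference image is a simple raster comparison once both classifications share the AVIRIS pixel grid. A sketch, assuming the value 0 marks both unclassified pixels and the area without ground truth coverage:

    import numpy as np

    def difference_image(classified, truth):
        """Keep the class value where the rasters agree; black (0) elsewhere."""
        match = (classified == truth) & (truth != 0)
        return np.where(match, classified, 0)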

Table 3-2. Overall accuracies for the three classifications.
                     AVIRIS   SHOALS   AVIRIS + Depths
Overall Accuracy     80.2%    66.9%    85.3%
Kappa Coefficient    0.762    0.603    0.821


Fusion of Classified Images

The next step was to combine the classifications of the AVIRIS and SHOALS

results using data fusion. As discussed in Chapter 1, data fusion is defined as the process

of combining data from multiple sources in order to obtain better information about

an environment than could be obtained from any of the sources independently. Data

fusion can take place at different levels of data abstraction, including pixel-, feature- and

decision-level fusion (listed in order of increasing data abstraction). The data fusion for

our research takes place at the decision-level, since object classification has already taken

place. Several methods exist for data fusion at the decision-level, including rule-based

methods, Bayesian Estimation, and Dempster-Shafer. We use the Dempster-Shafer

method of evidence combination, applied in a fashion similar to that of Park (2002).

More detail on Dempster-Shafer evidence combination is provided in Appendix C. For

the rest of this section, we assume the reader is familiar with the information in the

appendix.










Table 3-3. Error matrix for AVIRIS classification accuracies.
                                              Reference
                              Colonized  Uncolonized  Macroalgae  Macroalgae
Classified              Sand  Pavement   Pavement     (50-90%)    (10-50%)    Total
Unclassified               3         9           73          43          53     181
Sand                     198         0            0           0           0     198
Colonized Pavement         0       212           17           0           0     229
Uncolonized Pavement       0         1          153           1           3     158
Macroalgae (50-90%)        0         0            3         166           0     169
Macroalgae (10-50%)        0         0           10           3         157     170
Total                    201       222          256         213         213    1105


Figure 3-22. Difference image between AVIRIS classification and ground truth image.










Table 3-4. Error matrix for SHOALS classification accuracies.
                                              Reference
                              Colonized  Uncolonized  Macroalgae  Macroalgae
Classified              Sand  Pavement   Pavement     (50-90%)    (10-50%)    Total
Unclassified              20        29           53          19          75     196
Sand                     149        28            0           0           0     177
Colonized Pavement        29       165            0           0           0     194
Uncolonized Pavement       0         0          156           0          32     188
Macroalgae (50-90%)        0         0            0         185          22     207
Macroalgae (10-50%)        3         0           47           9          84     143
Total                    201       222          256         213         213    1105


Figure 3-23. Difference image between SHOALS classification and ground truth image.










Table 3-5. Error matrix for AVIRIS-plus-depths classification accuracies.
                                              Reference
                              Colonized  Uncolonized  Macroalgae  Macroalgae
Classified              Sand  Pavement   Pavement     (50-90%)    (10-50%)    Total
Unclassified               1         7           59          16          26     109
Sand                     200         0            0           0           0     200
Colonized Pavement         0       215           20           0           0     235
Uncolonized Pavement       0         0          161           0           9     170
Macroalgae (50-90%)        0         0            0         194           5     199
Macroalgae (10-50%)        0         0           16           3         173     192
Total                    201       222          256         213         213    1105


Figure 3-24. Difference image between AVIRIS-plus-depths classification and ground
truth image.









There are five benthic classes to be discerned in the research area, which, in

Dempster-Shafer terminology, can be referred to as basic propositions. The probability

of the occurrence of one of these propositions, for one sensor, is calculated by summing

the probability masses for general propositions that support the occurrence of that basic

proposition. The AVIRIS and SHOALS classification images contain information from

which general propositions can be obtained. By inspecting the error matrices from each

classification (from the perspective of a User, not a Producer), we created class-to-

information tables for each sensor. For a given row in an error matrix, any bottom type

making up more than 10% of the total pixels classified for that row was included as

information for that row's class. Tables 3-6 and 3-7 list the general propositions

represented by the benthic classifications for each sensor. As an example, Table 3-7

implies that pixels labeled as sand in the SHOALS classification image are either sand or

colonized pavement.
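
The 10% rule can be applied mechanically to the rows of an error matrix. The sketch below derives the information entry for each class; run against the SHOALS error matrix (Table 3-4), it reproduces Table 3-7:

    import numpy as np

    def class_to_information(matrix, names, cutoff=0.10):
        """matrix: classified rows x reference columns (Unclassified excluded)."""
        table = {}
        for i, name in enumerate(names):
            row = np.asarray(matrix[i], dtype=float)
            frac = row / row.sum()
            # Every reference type above the cutoff supports this class.
            table[name] = [names[j] for j in np.argsort(-frac)
                           if frac[j] > cutoff]
        return table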

Table 3-6. AVIRIS class-to-information table.
Class Information

Sand Sand

Colonized Pavement Colonized Pavement

Uncolonized Pavement Uncolonized Pavement

Macroalgae (50-90%) Macroalgae (50-90%)

Macroalgae (10-50%) Macroalgae (10-50%)



Using the class-to-information tables, we could construct a matrix representing the

evidence combination for the Dempster-Shafer fusion process. The matrix is based on

defined rules that explain how to combine data from the AVIRIS and SHOALS

classifications to obtain probability masses for both basic and general propositions. This










matrix is provided in Table 3-8. The row and column labeled "Information" correspond

to the entries in the class-to-information tables. The shaded boxes represent the

probability masses, which are the product of the evidence values from their associated

row and column information input.

Table 3-7. SHOALS class-to-information table.
Class Information

Sand Sand, Colonized Pavement

Colonized Pavement Colonized Pavement, Sand

Uncolonized Pavement Uncolonized Pavement,
Macroalgae (10-50%)

Macroalgae (50-90%) Macroalgae (50-90%),
Macroalgae (10-50%)

Macroalgae (10-50%) Macroalgae (10-50%),
Uncolonized Pavement



Table 3-8. Evidence combination matrix for AVIRIS and SHOALS classifications.
           S=Sand, C=Colonized Pavement, U=Uncolonized Pavement, M9=Macroalgae
           (50-90%), M1=Macroalgae (10-50%), K=Conflicting Evidence

                                              SHOALS
                                    Sand    Colonized  Uncolonized  Macroalgae  Macroalgae
AVIRIS                                      Pavement   Pavement     (50-90%)    (10-50%)
                       Information  S v C   C v S      U v M1       M9 v M1     M1 v U
Sand                   S            S       S          K            K           K
Colonized Pavement     C            C       C          K            K           K
Uncolonized Pavement   U            K       K          U            K           U
Macroalgae (50-90%)    M9           K       K          K            M9          K
Macroalgae (10-50%)    M1           K       K          M1           M1          M1










Input to the evidence combination matrix comes from "rule images" for the

AVIRIS and SHOALS classifications. During the classification process, the Maximum

Likelihood classifier in the ENVI software generates a "rule" image for each class. Each

pixel in a rule image contains a statistical estimate of the likelihood that the pixel belongs

to the class associated with the rule image. Values range from 0 to 1, with higher values

representing greater likelihood. For each pixel in our research area, the associated rule

image values from the AVIRIS and SHOALS classifications are entered into the

"Information" column and row, respectively, in the evidence combination matrix.

Associated probability masses are then computed for that pixel by filling in the shaded

area of the evidence combination matrix. Each box in the shaded area is simply the

product of the associated "Information" column and row values for that box. Class

probabilities are then calculated using the formulas in Appendix C.

For each pixel, a probability is determined for each class, representing the

likelihood of that pixel belonging to each given class. The class with the maximum

associated probability is the class assigned to that pixel. For the Dempster-Shafer

classification image we computed, if the maximum class probability was less than 0.9,

the pixel was considered unclassified. Figure 3-25 shows our resulting Dempster-Shafer

(D-S) data fusion image computed using the previously described procedure. As with the

previous classification images, the unclassified pixels in the lower left and upper right

quadrants of the images are due to missing depth data. The other unclassified pixels are

located along boundaries between benthic classes, and may be caused by mixed pixels.
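
One way to express this per-pixel combination in code is sketched below. Propositions are encoded as sets of the basic class labels from Table 3-8, the conflicting mass K is discarded by renormalization as in Dempster's rule, and the 0.9 decision threshold is applied at the end. The exact probability formulas are those of Appendix C, so this should be read as an illustration rather than a transcript of our processing chain:

    # Basic class labels: S, C, U, M9, M1 (abbreviations as in Table 3-8).
    AVIRIS_PROPS = [{'S'}, {'C'}, {'U'}, {'M9'}, {'M1'}]
    SHOALS_PROPS = [{'S', 'C'}, {'C', 'S'}, {'U', 'M1'},
                    {'M9', 'M1'}, {'M1', 'U'}]

    def fuse_pixel(m_aviris, m_shoals, threshold=0.9):
        """m_aviris, m_shoals: length-5 rule values for one pixel, ordered
        sand, colonized pvmt., uncolonized pvmt., macroalgae 50-90%, 10-50%."""
        support, conflict = {}, 0.0
        for a, pa in zip(m_aviris, AVIRIS_PROPS):
            for s, ps in zip(m_shoals, SHOALS_PROPS):
                mass = a * s                     # shaded entries of Table 3-8
                common = frozenset(pa & ps)
                if common:
                    support[common] = support.get(common, 0.0) + mass
                else:
                    conflict += mass             # K: conflicting evidence
        total = sum(support.values())            # all mass less K
        if total == 0.0:
            return None                          # no consistent evidence
        # Probability of each basic proposition: normalized mass of all
        # general propositions supporting it.
        probs = {}
        for classes, mass in support.items():
            for c in classes:
                probs[c] = probs.get(c, 0.0) + mass / total
        best = max(probs, key=probs.get)
        return best if probs[best] >= threshold else None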

We assessed the accuracy of the D-S classification image, using the same

procedure and test ROIs from the previous classifications. The Overall Accuracy and










Kappa coefficient are shown in Table 3-9, and the error matrix is in Table 3-10. We also

generated a difference image between the classification and the ground truth image

(Figure 3-13), shown in Figure 3-26. Colored pixels indicate a classification match

between the ground truth and the classification, and black pixels indicate a mismatch.

The right side of the image is all black since there is no ground truth imagery in that area.



























Figure 3-25. Result of Dempster-Shafer fusion of AVIRIS and SHOALS classifications.


Statistical Analysis

The resulting Kappa coefficients for the classifications were 0.844 for the

Dempster-Shafer, 0.762 for the AVIRIS, 0.603 for the SHOALS, and 0.821 for the

AVIRIS-plus-depths. In order to measure the significance of each classification,

estimates of the Kappa coefficient variances were computed, and test statistics were

calculated, as described in Appendix B. Each Kappa coefficient and associated variance

is shown in Table 3-11. Each test statistic and associated confidence level for each

comparison is given in Table 3-12.










Table 3-9. Accuracies for Dempster-Shafer classification image.
Dempster-Shafer
Overall Accuracy 87.2%
Kappa Coefficient 0.845



Table 3-10. Error matrix for Dempster-Shafer classification accuracies.
                                              Reference
                              Colonized  Uncolonized  Macroalgae  Macroalgae
Classified              Sand  Pavement   Pavement     (50-90%)    (10-50%)    Total
Unclassified               0         1           34          43          39     117
Sand                     201         0            0           0           0     201
Colonized Pavement         0       221            4           0           0     225
Uncolonized Pavement       0         0          208           1           5     214
Macroalgae (50-90%)        0         0            0         165           0     165
Macroalgae (10-50%)        0         0           10           4         169     183
Total                    201       222          256         213         213    1105


Figure 3-26. Difference image between D-S classification and ground truth image.









Table 3-11. Kappa coefficients and variances for each classification.
                             Dempster-Shafer  AVIRIS + Depths  AVIRIS     SHOALS
Kappa Coefficient            0.844            0.821            0.762      0.603
Kappa Coefficient Variance   0.0001416        0.0001593        0.0001895  0.0002614



Table 3-12. Test statistics and confidence levels for each classification comparison.
                              Test Statistic (Z)  Confidence Level
AVIRIS vs. SHOALS             7.48                >99%
AVIRIS vs. AVIRIS + Depths    3.16                >99%
AVIRIS vs. D-S                4.52                >99%
SHOALS vs. AVIRIS + Depths    10.62               >99%
SHOALS vs. D-S                12.01               >99%
D-S vs. AVIRIS + Depths       1.33                82%



Based on the Kappa analysis, we can say with 82% confidence that the Dempster-

Shafer Kappa coefficient is different from the AVIRIS-plus-depths Kappa coefficient.

We can say with more than 99% confidence that all the other classification pairs have

Kappa coefficients that are different.
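
Each test statistic is simply the difference of two Kappa coefficients divided by the square root of their summed variance estimates (Appendix B). A short sketch, reporting a two-sided normal confidence level:

    from math import erf, sqrt

    def kappa_z_test(kappa1, var1, kappa2, var2):
        """Z statistic for two independent Kappa coefficients."""
        z = abs(kappa1 - kappa2) / sqrt(var1 + var2)
        return z, erf(z / sqrt(2.0))  # two-sided standard-normal confidence

    # D-S vs. AVIRIS-plus-depths, using the Table 3-11 values:
    # kappa_z_test(0.844, 0.0001416, 0.821, 0.0001593) -> (1.33, ~0.82),
    # matching the last row of Table 3-12.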

Summary

The goal of this research is to investigate a new method of benthic mapping that

makes use of the airborne laser bathymetry waveform returns to aid in benthic

classification. The method uses a data fusion approach, combining passive hyperspectral

data with ALB waveform return and depth data to attempt to improve the benthic









mapping accuracy over that obtained by either system separately. In this chapter, we

performed an experiment to test this method.

We began by briefly describing the datasets used in the experiment, including the

times and locations of AVIRIS, SHOALS, and hand-held spectrometer dataset

acquisitions. Next, we discussed the preprocessing steps applied to the AVIRIS and

SHOALS datasets. These steps included image georegistration, conversion to

reflectance, normalization, surface wave removal, and rasterization of point data. We

then described the process of water attenuation removal for both datasets, from which we

produced an AVIRIS bottom reflectance dataset and a SHOALS pseudoreflectance

dataset. In the next step we performed a Maximum Likelihood supervised classification

of three datasets, including the AVIRIS bottom reflectance, SHOALS pseudoreflectance,

and AVIRIS-plus-depths. We then assessed the accuracy of these classifications, producing

overall accuracy metrics as well as error matrices. Next, we applied the Dempster-Shafer

decision-level data fusion approach to combine the AVIRIS and SHOALS classification

images, generating a new classification image containing information from both sensors.

We then performed an accuracy assessment of the new data fusion classification image,

using the same procedure as used with the previous classification images.

The resulting overall accuracy of the Dempster-Shafer data fusion image was

87.2%. Overall accuracies for the AVIRIS and SHOALS classification images were

80.2% and 66.9%, respectively. The overall accuracy of the AVIRIS-plus-depths

classification was 85.3%. Statistical analysis of the Kappa coefficients and associated

variances for each classification indicate that the Dempster-Shafer and AVIRIS-plus-








depths Kappa coefficients differ at 82% confidence. The Kappa coefficients of the other

classification pairs differ at greater than 99% confidence.














CHAPTER 4
DISCUSSION AND RECOMMENDATION FOR FURTHER WORK

Airborne Laser Bathymetry (ALB) has been shown to be an efficient, accurate, and

safe method of obtaining depths of coastal waters. Its use of scanning, pulsed laser

technology produces depths with accuracies useful for many hydrographic applications.

Results obtained from this technology have been applied toward mapping navigation

channels, monitoring sediment transport, and assessing storm damage.

The return waveform for each laser pulse is used in ALB systems to obtain depth

measurements. In our research, we investigated a method to exploit the measured power

of the bottom return portion of the waveform in order to discriminate among benthic

types, providing information to aid in generating a benthic map. Specifically, we

introduced a new parameter, pseudoreflectance, which we computed from a partial

inversion of the bathymetric laser radar equation, and by normalizing by the output laser

energy. To the best of our knowledge, our pseudoreflectance image is the first such

image generated using ALB waveforms. Our research combined information from ALB

waveforms and hyperspectral data to generate a benthic map of coastal waters. We

merged the information using data fusion techniques, with results more accurate than those

obtained from either dataset independently.

Two different levels of data fusion were applied in this research. SHOALS ALB

depths were used in a radiative transfer model to correct the AVIRIS hyperspectral data

for water attenuation. This fusion was applied at the pixel-level (i.e., data-level), since

the data were still in a format similar to the original data, having only been through a









preprocessing stage. A similar pixel-level fusion was performed to correct the SHOALS

bottom return data for water attenuation as well. However, this correction is not data

fusion in the strict sense, because the two datasets that were combined (bottom return and

depth) both were obtained from the same sensor (SHOALS). The second type of data

fusion used was a decision-level technique, combining benthic classifications derived

from the water-corrected AVIRIS and SHOALS datasets, although the end result was a

classification in the AVIRIS pixel raster. This decision-level fusion was realized using

the Dempster-Shafer evidence combination approach.

The Dempster-Shafer (D-S) decision-level fusion image of our research area

produced an overall mapping accuracy of 87.2%. The AVIRIS and SHOALS

classification images produced overall mapping accuracies of 80.2% and 66.9%,

respectively. As expected, the D-S image had a higher mapping accuracy than either of

the independent sensor images. For comparison, another dataset was classified using the

AVIRIS water-corrected bands plus depths from the SHOALS dataset. The resulting

classification had an overall accuracy of 85.3%.

We performed an analysis of the Kappa coefficients, calculated from each

classification, to quantitatively compare the results. The calculated Kappa coefficients

for each classification were 0.844, 0.762, 0.603, and 0.821 for the D-S, AVIRIS,

SHOALS, and AVIRIS-plus-depths classifications, respectively. After performing an

analysis of the Kappa values (using estimates of the Kappa variances), we can say with

82% confidence that the D-S Kappa value differs from the AVIRIS-plus-depths Kappa

value. The calculated confidence in comparing the Kappa values of the other

classification pairs (e.g., AVIRIS to SHOALS, AVIRIS to D-S) was greater than 99%.









The results from the statistical analysis indicate that the D-S classification is more

accurate than the other classifications, although the statistical confidence that its Kappa

coefficient is different from that of the AVIRIS-plus-depth classification is only 82%. It

is preferable to obtain a confidence value greater than 90%; however, the result still

indicates that the SHOALS-derived pseudoreflectance values contain significant benthic

information. The accuracy difference between the AVIRIS classification and the

AVIRIS-plus-depths classification shows the significant contribution made by depth

toward benthic classification. It is likely that much of the improvement in the D-S

classification over the AVIRIS classification is also due to the depth component. The D-

S classification included pseudoreflectance information as well; however, the significance

of the pseudoreflectance contribution is not as apparent.

Using only the depth data, we would not have been able to create a reasonable

benthic classification of the SHOALS dataset. Without a SHOALS-derived

classification, we could not have taken advantage of the Dempster-Shafer decision-level

fusion, which, based on previous data fusion research (Park 2002), produces better results

than other decision-level fusion methods. In order to implement this decision-level

method, we had to use two separately classified images. The pseudoreflectance

component added the extra dimension of information necessary to create a separate

benthic classification, and allowed us to take advantage of the Dempster-Shafer method.

The results indicate that the SHOALS bottom return data contain information

beneficial for benthic mapping, and may have implications for further research. For

instance, ALB bottom return data, corrected for water attenuation, could be analyzed to

determine which benthic types, if any, are prone to greater depth errors. Then, if the









SHOALS waveform data could be corrected for water attenuation in real-time, in-flight

operators could observe the results and make decisions about repeated passes over

problem areas, or other areas of interest.

One of the biggest problems experienced during this research was with combining

SHOALS bottom return datasets that were acquired from different flights. We used an

empirical approach to this problem, observing the overlapping data and fitting a line to

the relationship. However, the resulting pseudoreflectance images of our research area

show obvious seams between some datasets, indicating the inadequacy of this approach.

This is likely a result of neglecting some of the parameters in the bathymetric laser radar

equation (Equation 2-23). Further research should be performed to investigate the

physics behind the dataset differences, in order to develop a more rigorous method of

dataset combination. An obvious solution would be to fly the entire area in one flight;

however, this would create added stress for the pilot, operator, and perhaps the system as

well, and could lead to an unsafe flying environment. Collecting data along some cross-

flights (in a direction across all the datasets) would also help with merging the datasets,

providing a common dataset against which the others could be compared. Another approach, albeit tedious, would

be to correct each returned power value for flying height and variations in incidence

angles.
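
As an illustration of that last approach, a flight-geometry normalization of each bottom-return power might look like the following. It assumes, loosely following the form of the bathymetric laser radar equation, that received power scales roughly as cos(theta)/H^2; the reference altitude and nadir angle are nominal SHOALS values, and a rigorous correction would retain the full set of equation parameters:

    import numpy as np

    def normalize_return_power(power, altitude_m, incidence_deg,
                               ref_altitude_m=300.0, ref_incidence_deg=20.0):
        """Rescale returns to a common reference altitude and incidence angle,
        assuming an approximate cos(theta) / H**2 power dependence."""
        theta = np.radians(incidence_deg)
        theta0 = np.radians(ref_incidence_deg)
        scale = ((altitude_m / ref_altitude_m) ** 2
                 * np.cos(theta0) / np.cos(theta))
        return power * scale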

The ground truth image, used for classification and accuracy assessment, was

generated from interpretation of aerial photography (Analytical Laboratories of Hawaii

2001), and covered the western 2/3 of our research area. However, the sand in the

northeast quadrant is easily observed in all the imagery. Therefore, we had ground truth

for about 70% of the research area. Since our research was completed, some new ground









truth imagery has become available, interpreted from aerial photography, hyperspectral

imagery, and space-borne multispectral imagery (Coyne et al. 2003), and covering over

90% of our research area. Future related research in Kaneohe Bay would benefit by

using this new data along with the ground truth image we used.

We removed the effects of waves at the water surface by differencing the Fast-

Fourier Transforms (FFT) of visible and infrared bands, and then applying an inverse

transform to the result. This method removed much of the surface noise; however, recent

research (Wozencraft et al. 2003) has shown another method for removing wave effects.

Using an irradiance curve (obtained from a hand-held spectrometer measurement), a ratio

can be calculated between the irradiance for a visible band and that for a chosen infrared

band. This irradiance ratio can then be used to scale the upwelling radiance measurement

for the infrared band to that of the visible band, and then subtract the scaled infrared

radiance from the visible radiance measurement. Wozencraft et al. (2003) used this

method on hyperspectral data to remove surface effects that varied across the image,

caused by acquiring the data with flightlines oriented obliquely to the solar azimuth. This

method could be applied to the hyperspectral data in our research as well, and compared

with the FFT method.
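
A sketch of the irradiance-ratio correction, assuming a visible and an infrared radiance band from the imagery and the corresponding downwelling irradiance values from the hand-held spectrometer measurement:

    import numpy as np

    def remove_surface_effects(l_visible, l_infrared, e_visible, e_infrared):
        """Scale the infrared radiance (surface reflection only, since water
        strongly absorbs infrared) into the visible band and subtract it."""
        return l_visible - (e_visible / e_infrared) * l_infrared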

Our SHOALS classification image was generated using pseudoreflectance and

depth as a two-band dataset. The depth component, in this instance, contributed added

information for discerning among bottom types. However, depth is not necessarily an

intrinsic property of the benthic object space, so in different areas it may not be

appropriate as classification input. For future research, metrics other than depth (perhaps









texture-based) should be considered (in conjunction with pseudoreflectance) for benthic

classification.

We assumed in this research that the benthic environment did not change between

the AVIRIS and SHOALS flights over the research area. This is not always a safe

assumption, so it would be preferable to collect data from both systems with as little

temporal separation as possible. The best arrangement would be to mount a

hyperspectral and ALB system on the same aircraft, with simultaneous data collection

during flights. Also, ground reflectance measurements should be obtained with minimal

temporal separation from the hyperspectral data collection, allowing for a more accurate

generation of reflectance imagery.

Another simplifying assumption we made in this research was that the water

attenuation coefficient, k, was horizontally and vertically constant throughout the

research area. Given the clarity and consistency of the Hawaiian coastal waters, this is a

fair assumption. However, additional research should be performed to allow for

horizontal and vertical variation in k, which can be significant in other areas. Perhaps the

return waveform itself could provide clues to varying values of k, especially in the

"volume backscatter" portion of the waveform, where changes in slope may indicate

vertical differences in k within the water column.

Another consideration for further research would be to account for internal

reflectance at the air/water interface. Upwelling radiance in a water column, upon

reaching the surface, can have a portion of its energy reflected back downward into the

water. This effect, known as Fresnel reflectance, is dependent upon several factors,

including the angle of the upwelling radiance vector relative to the water surface, and the









polarization of the electromagnetic energy. We did not address this phenomenon since its

effects are minimal except in very shallow water with highly reflective bottom types.
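
For reference, the unpolarized Fresnel reflectance for upwelling light striking the water/air interface from below can be computed as follows, assuming a refractive index of about 1.34 for seawater. At normal incidence only about 2% of the energy is reflected back down, which supports neglecting the effect here; beyond the critical angle (roughly 48 degrees) all of it is:

    import numpy as np

    def fresnel_reflectance(incidence_deg, n_water=1.34, n_air=1.0):
        """Unpolarized reflectance for light hitting the surface from below."""
        t1 = np.radians(incidence_deg)
        sin_t2 = n_water * np.sin(t1) / n_air
        if sin_t2 >= 1.0:
            return 1.0                    # total internal reflection
        t2 = np.arcsin(sin_t2)
        rs = (n_water * np.cos(t1) - n_air * np.cos(t2)) / \
             (n_water * np.cos(t1) + n_air * np.cos(t2))
        rp = (n_water * np.cos(t2) - n_air * np.cos(t1)) / \
             (n_water * np.cos(t2) + n_air * np.cos(t1))
        return 0.5 * (rs ** 2 + rp ** 2)  # average of the two polarizations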

It should be noted that the SHOALS ALB system was not designed with benthic

classification mapping in mind, but strictly depth measurement. Perhaps future design of

ALB systems would include enhancements beneficial for benthic mapping as a secondary

product. These enhancements might include more consistent and accurate measurement

of the outgoing laser energy, as well as easier access to the full waveform for each return

pulse.














APPENDIX A
SPECIFICATIONS OF THE DATA ACQUISITION SYSTEMS

The data used in our research consists of hyperspectral imagery from the AVIRIS

system, hyperspectral data from an ASD portable spectrometer, and Airborne Laser

Bathymetry (ALB) data from the SHOALS system. All datasets were collected over

Kaneohe Bay, Hawaii, with the AVIRIS data obtained in April, 2000, and the ASD and

SHOALS data in June, 2000. This appendix provides a description of the three systems.

AVIRIS

The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) was developed by

the Jet Propulsion Laboratory (JPL) in 1987. It was designed to provide earth remote

sensing data for many areas of scientific research, including botany, geology, hydrology

and oceanography (Vane et al. 1984).

AVIRIS is a whisk-broom sensor, operating at a 12 hertz scanning rate. It is

usually mounted in a NASA ER-2 airplane, and is flown at an altitude of 20 kilometers at

about 730 km/hr. With a 1 milliradian IFOV and a 30 degree total FOV, it produces 614

pixels per scan at a 20 meter pixel size and an 11 kilometer wide swath (Lundeen 2002).

For each pixel, AVIRIS collects 224 channels of upwelling radiance data at 10

nanometers (nm) of spectral bandwidth per channel, for the range of 380 nm to 2500 nm.

The recorded radiance units are µW/cm2/sr. The scanning mirror focuses light onto four

optical fibers, which carry the light to four spectrometers. An aspheric grating is used to

disperse the light across the detectors in the spectrometers. One spectrometer is for

visible light, has 32 silicon detectors, and detects radiance in the range of 410 nm to 700









nm. The other three spectrometers are for infrared light, have 64 indium-antimonide

detectors each, and detect radiance in the range of 680 nm to 1270 nm, 1250 nm to 1860

nm, and 1840 nm to 2450 nm (Goetz 1992).

In order to detect changes in plant health, or differentiate among various minerals,

it is necessary to have an instrument that not only has high spectral resolution, but also is

well calibrated (Vane et al. 1984). Therefore JPL designed the AVIRIS system so it

could operate with a high level of spectral and radiometric calibration. Table A-1 shows the

required and achieved calibration levels for the AVIRIS system. These levels are

obtained using laboratory and in-flight calibration methods (Vane et al. 1984), and

indicate the high level of spectral and radiometric accuracy achievable with AVIRIS. At

the end of this appendix, Table A-3 lists the spectral calibration values generated from

the calibration for the April, 2000 flight. This list gives the center and full-width at half-

maximum (FWHM) wavelength values based on the spectral response curves for AVIRIS

channels 1-50.

Table A-1. AVIRIS calibration information (Vane et al. 1984).
Calibration Parameter Required Achieved

Spectral Calibration 5 nm 2.1 nm
Absolute Radiometry 10% 7.3%
Band-to-Band
0.5% 0.4%
Radiometry


ASD FieldSpec Portable Spectrometer

Analytical Spectral Devices, Incorporated (ASD), located in Boulder, Colorado,

manufactures the FieldSpec portable spectrometer. This device measures ground-level

solar-reflected spectra in the visual and infrared regions of the electromagnetic spectrum.









Its sensitivity covers a spectral wavelength range of 350 nanometers to 2500 nanometers,

recording 2150 channels at 1-nanometer bandwidths. The spectrometer is housed in a

rectangular lightweight case, which is worn by the user with the help of shoulder straps.

It has an attached fiber-optic cable with a pointing device at the end for aiming the

receiver field of view. Measurements are recorded using software running on an attached

notebook computer, which can display the measured spectra in graphic format

immediately after each measurement. The software allows for exporting the recorded

spectra into ASCII textual format, enabling the use of the data in other software

packages.

SHOALS

The SHOALS system (Scanning Hydrographic Operational Airborne Lidar Survey)

was developed by the US Army Corps of Engineers (USACE) in 1994. It was designed

to gather near-shore bathymetric data for USACE coastal projects, and has since evolved

to collect near-shore topographic data as well (Irish et al. 2000).

SHOALS is an airborne pulsed-laser system, operating at a pulse rate of 168 to 900

Hz. Its operating platform is usually a Bell 212 helicopter or a Twin Otter DHC-6

airplane, and is flown at an altitude of 200 to 400 meters at speeds of 175 to 300 km/hr.

Swath widths vary from 110 meters to 220 meters, and nominal point separations are

typically around 4 to 5 meters. It has a constant nadir angle of 20 degrees, so the point

swaths sweep back and forth in front of the aircraft. The maximum depth of

measurement varies with water clarity and bottom type, but typical values are up to three

times the Secchi (visible) depth. In clear waters, SHOALS can be expected to

determine depths as great as 60 meters (Guenther 2001, Irish et al. 2000).









The SHOALS system generates laser pulses at two wavelengths, using one to

detect the water surface (1064 nm infrared), and one to detect the water bottom (532 nm

- green). This is accomplished using a solid-state Nd:YAG laser source producing 1064

nm radiation, which is then frequency doubled to produce the simultaneous 532 nm

radiation (Guenther 2001). The need for the infrared channel for surface detection arises

due to the inconsistency of the green surface return in varying environmental conditions.

The green surface return can sometimes be weaker than the volume backscatter return,

causing a time bias (and subsequent depth bias) in the identification of the water surface

in the green waveform (Guenther 2001).

The laser pulse returns are detected using both photomultiplier tubes (PMTs) and

avalanche photodiodes (APDs). The PMT is more sensitive to detecting bottom returns

in deep water (8-40 meters) and the APD in shallow water (1-14 meters) (Guenther

2002). Two surface returns are measured by detecting the infrared signal, as well as the

green-excited Raman backscatter in the red (645 nm). A GPS receiver and an INS unit are

included in the system, which provide the information necessary to georegister the data

points recorded during a flight (Estep et al. 1994, Irish et al. 2000). Accuracies obtained

from the system are shown in Table A-2.

Table A-2. SHOALS performance values (Irish et al. 2000).
Dimension Accuracy (1-sigma)
Vertical +/- 15 cm
Horizontal using Coast Guard DGPS +/- 2 m
Horizontal using Kinematic GPS +/- 1 m










Table A-3. AVIRIS spectral calibration values for channels 1-50.

Channel Center (nm) FWHM (nm) Center Sigma (nm) FWHM Sigma (nm)
1 374.37 15.45 0.56 0.32
2 384.46 11.53 0.33 0.23
3 394.12 11.38 0.14 0.19
4 403.77 11.23 0.10 0.17
5 413.43 11.09 0.10 0.14
6 423.09 10.96 0.09 0.12
7 432.75 10.83 0.09 0.12
8 442.42 10.71 0.09 0.12
9 452.08 10.59 0.10 0.14
10 461.75 10.49 0.10 0.13
11 471.41 10.38 0.10 0.13
12 481.08 10.29 0.09 0.12
13 490.75 10.20 0.09 0.12
14 500.41 10.12 0.09 0.11
15 510.08 10.04 0.09 0.11
16 519.76 9.97 0.09 0.11
17 529.43 9.91 0.09 0.11
18 539.10 9.85 0.09 0.11
19 548.78 9.80 0.09 0.11
20 558.45 9.76 0.09 0.11
21 568.13 9.72 0.09 0.11
22 577.81 9.69 0.09 0.11
23 587.49 9.66 0.09 0.11
24 597.17 9.64 0.09 0.11
25 606.85 9.63 0.09 0.11
26 616.53 9.63 0.09 0.11
27 626.21 9.63 0.09 0.11
28 635.90 9.64 0.09 0.11
29 645.58 9.65 0.10 0.11
30 655.27 9.67 0.09 0.11
31 664.96 9.70 0.09 0.11
32 676.31 12.67 0.09 0.11
33 655.02 10.88 0.07 0.06
34 664.89 9.56 0.07 0.06
35 674.43 9.52 0.07 0.06
36 683.97 9.50 0.07 0.06
37 693.52 9.48 0.07 0.06
38 703.07 9.47 0.07 0.06
39 712.62 9.47 0.07 0.06
40 722.17 9.47 0.07 0.06
41 731.73 9.49 0.07 0.06
42 741.29 9.51 0.07 0.06
43 750.86 9.54 0.07 0.06