Fusion of Lidar and Aerial Imagery for the Estimation of Downed Tree Volume Using Support Vector Machines Classification and Region Based Object Fitting


Material Information

Title:
Fusion of Lidar and Aerial Imagery for the Estimation of Downed Tree Volume Using Support Vector Machines Classification and Region Based Object Fitting
Physical Description:
1 online resource (108 p.)
Language:
english
Creator:
Selvarajan,Sowmya
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:

Thesis/Dissertation Information

Degree:
Doctorate ( Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Forest Resources and Conservation
Committee Chair:
Mohamed, Ahmed Hassan
Committee Members:
Smith, Scot E
Dewitt, Bon A
Escobedo, Francisco Javier
Beck, Howard W

Subjects

Subjects / Keywords:
aerial -- lidar -- remote -- support
Forest Resources and Conservation -- Dissertations, Academic -- UF
Genre:
Forest Resources and Conservation thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract:
The study classifies 3D small footprint full waveform digitized LiDAR fused with aerial imagery to detect downed trees using the Support Vector Machines (SVM) algorithm. Using small footprint waveform LiDAR, airborne LiDAR systems can provide better canopy penetration and very high spatial resolution. The small footprint waveform scanner system Riegl LMS-Q680, together with an UltraCamX aerial camera, is used to measure and map downed trees in a forest. The various data preprocessing steps helped in the identification of ground points from the dense LiDAR dataset and in segmenting the LiDAR data to reduce the complexity of the algorithm. The haze filtering process helped to differentiate the spectral signatures of the various classes within the aerial image. Such processes helped to better select the features from both sensors' data. Six features are utilized: LiDAR height, LiDAR intensity, LiDAR echo, and three image intensities. LiDAR derived, aerial image derived and fused LiDAR-aerial image derived features are used to organize the data for the SVM hypothesis formulation. Several variations of the SVM algorithm with different kernels and soft margin parameter C are tested. The algorithm is implemented to classify downed trees over a pine tree zone. The LiDAR derived features provided an overall accuracy of 98% for downed trees, but with a no-classification error of 86%. The image derived features provided an overall accuracy of 65%, and the fusion derived features resulted in an overall accuracy of 88%. The results are observed to be stable and robust. The SVM accuracies were accompanied by high false alarm rates, with the LiDAR classification producing 58.45%, the image classification producing 95.74%, and the fused classification producing 93% false alarm rates. The Canny edge correction filter helped control the LiDAR false alarm to 36%, the image false alarm to 48.56%, and the fused false alarm to 37.69%. The implemented classifiers provided a powerful tool for downed tree classification with fused LiDAR and aerial image. The classified tree pixels are utilized in the object based region fitting technique to compute the diameter and height of the downed trees, from which the volume of the trees is estimated.
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility:
by Sowmya Selvarajan.
Thesis:
Thesis (Ph.D.)--University of Florida, 2011.
Local:
Adviser: Mohamed, Ahmed Hassan.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2011
System ID:
UFE0042849:00001




Full Text

PAGE 1

1 FUSION OF LIDAR AND AERIAL IMAGERY FOR THE ESTIMATION OF DOWNED TREE VOLUME USING SUPPORT VECTOR MACHINES CLASSIFICATION AND REGION BASED OBJECT FITTING By SOWMYA SELVARAJAN A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2011

PAGE 2

2 © 2011 Sowmya Selvarajan

PAGE 3

3 To my parents, husband, son and baby

PAGE 4

4 ACKNOWLEDGMENTS

I express my sincere gratitude to my advisor and mentor Dr. Ahmed Mohamed, for his strong interest in my study, continuous encouragement and patience. Most important, he inspires and truly cares for his students and wants them to succeed. I would like to thank him for all the knowledgeable time he spent with my research and the unmatched help with coding. His supervision gave me a lot of opportunities to explore my research interests. I am also grateful to Dr. Scot Smith, Dr. Bon Dewitt, Dr. Francisco Escobedo and Dr. Howard Beck for their valuable time and interest in serving on my supervisory committee, as well as their comments, which helped improve my dissertation. I would also express my sincere appreciation to Dr. Scot Smith for his guidance with my studies and his expertise in remote sensing. I would like to express my gratitude to Dr. Bon Dewitt for providing me the opportunity to be part of such a great program. I am proud to be a Geomatics Gator! I would like to thank Dr. Francisco Escobedo for his inspiration in my study topic and for giving me an opportunity to work on the hurricane project. Furthermore, I am grateful to Dr. Howard Beck for serving on the committee and taking the time to work with me outside his department. I very much enjoyed his guidance and advising and his interest in my study. I would also like to remember the late Dr. Clint Slatton at this moment, who initially supported my study concepts with his expertise in LiDAR. He was a true inspiration and motivated me to work in the area of LiDAR. In addition, I would like to thank Benjamin Wilkinson for all of his guidance and support. Also, I would like to extend my appreciation to Adam Benjamin for his help in the field. Additionally, I am extremely thankful to all my fellow Geomatics lab mates

PAGE 5

5 including Emilly Foster, Siripon Kamontum, and Zoltan Szantoi, both for enlightening discussions and wonderful times shared in and out of the lab. I would like to express my gratitude to Riegl USA, especially the President, Jim Van Rens, and technical staff, Jenniffer Triana, for all the funding and help extended to collect data, time and guidance. I would also like to thank American Cartographics of America (ACA) in Orlando, especially Mr. Beute Edward, for their special interest in my study, funding and data collection. Their interest truly motivated me to complete my study. Finally, I extend my most heartfelt gratitude to my mother, father and mother-in-law, who made achieving this goal possible with their warm hearts and never-ending prayers, especially my mother, who has always made me achieve what I dreamt. At this moment, I would like to remember my late father-in-law, Mr. Athithan, a true space scientist. He has always inspired me. I am always grateful to my husband, Arun, and son, Vikram, who stood by me during all these years of my study, and for their motivation to complete my study. And my cooperative little baby in my tummy. Their enduring love and support kept me pushing forward and believing in myself. For them I am forever grateful.

PAGE 6

6 TABLE OF CONTENTS

Page

ACKNOWLEDGMENTS 4
LIST OF TABLES 8
LIST OF FIGURES 10
ABSTRACT 12

CHAPTER

1 INTRODUCTION 14
   Hurricane and Natural Disasters Assessment to Vegetation 14
   Motivation 16
   Objectives 19
   Organization of this Study and Contribution 20

2 AERIAL MAPPING TECHNOLOGY 21
   Airborne Laser Scanning 21
   Aerial Photography 23

3 METHODOLOGY 26
   Data Processing 26
      Identification of Ground from LiDAR Dataset 26
      Segmentation of LiDAR Points 26
      Haze Filtering on Aerial Image 27
      Feature Selection 27
   Support Vector Machines Classification 28
      Linear Discriminants 28
      Theoretical Foundations 30
      Non-linear Case 31
         Linearly inseparable case 31
         Nonlinear functions through kernels 32
      Training and Classification Datasets 35
         LiDAR training and classification 35
         Image training and classification 35
         Fused LiDAR-image training and classification 36
   Canny Edge Correction Filter 36
   Region Based Object Fitting Technique 37
      Computation of Geometric Features of Downed Tree Trunks 37
      Overview of the Approach 38
      Convex Hull 39

PAGE 7

7     Centroid of the Region 39
      Computation of Major and Minor Axes 39
      Computation of the Corner Edge Points of the Region 40
      Downed Tree Volume 41

4 MATERIALS 46
   Study Site 46
   Data Acquisition 46
      Laser Scanner 48
         Riegl LMS-Q680 48
      Airborne Camera 49
         Microsoft UltraCamX camera 49

5 RESULTS 53
   Data Processing Results 53
      Identification of Ground from LiDAR Dataset 53
      Segmentation of LiDAR Points 53
      Haze Filtering on Aerial Image 53
      Feature Selection 54
   Support Vector Machines Classification Results 54
      Training Results 55
         LiDAR training 55
         Image training 56
         Fused LiDAR-image training 56
      Kappa Coefficient 57
      Classification Results 57
         LiDAR classification result 57
         Image classification result 59
         LiDAR-image fused classification result 60
         LiDAR no-classification result 62
         Image intensity-LiDAR intensity based classification result 63
         Image intensity-LiDAR elevation based classification result 63
         Image intensity-LiDAR return number based classification result 64
   Effect of SVM and Kernel Parameters 64
   Canny Edge Correction Filter Result 68
   Region Based Object Fitting Results 68

6 CONCLUSIONS AND FUTURE DIRECTIONS 99
   Conclusions 99
   Future Work 101

LIST OF REFERENCES 103
BIOGRAPHICAL SKETCH 108

PAGE 8

8 LIST OF TABLES

Table Page

3-1 Feature selection 45
3-2 Standard kernels 45
4-1 Technical data and specification for UltraCamX sensor unit 52
5-1 LiDAR hits with near-ground and ground features 71
5-2 Training samples 71
5-3 Classification accuracy of downed trees using different kernels using LiDAR data 71
5-4 Classification accuracy of downed trees in LiDAR data using different values of C with and without the sigmoid kernel 71
5-5 Confusion matrix of the LiDAR SVM classification 71
5-6 Classification accuracy of downed trees using different kernels using image data 71
5-7 Classification accuracy of downed trees in image data using different values of C with and without the polynomial kernel 72
5-8 Confusion matrix of the image SVM classification 72
5-9 Classification accuracy of downed trees using different kernels using fused LiDAR-image data 72
5-10 Classification accuracy of downed trees in fused LiDAR-image data using different values of C with and without the polynomial kernel 72
5-11 Confusion matrix of the fused LiDAR-image SVM classification 73
5-12 Classification accuracy of downed trees using fused intensity data 73
5-13 Classification accuracy of downed trees using fused image intensity-LiDAR elevation data 73
5-14 Classification accuracy of downed trees using fused image intensity-LiDAR return number data 73
5-15 Data ranges of the features 73
5-16 Parameter estimation: diameter 74

PAGE 9

9 5-17 Parameter estimation: height 74
5-18 Parameter estimation: volume 74

PAGE 10

10 LIST OF FIGURES

Figure Page

2-1 Waveform LiDAR 25
3-1 Supporting planes with support vectors 43
3-2 Many possible minimum width margin planes 43
3-3 One maximum margin plane 44
3-4 Plane selected to maximize margin and minimize error 44
4-1 Study area 51
4-2 Working principle of full waveform LiDAR 51
4-3 UltraCamX sensor 52
5-1 Ground points from LiDAR dataset 75
5-2 Segmentation of LiDAR points 76
5-3 Signature analysis on the (non-haze-filtered) original aerial image 77
5-4 Signature analysis on the haze reduction filtered aerial image 78
5-5 Haze filtering on aerial image 79
5-6 LiDAR range values on a transparent aerial image 80
5-7 LiDAR intensity values on a transparent aerial image 81
5-8 LiDAR echo values on a transparent aerial image 82
5-9 Part of downed tree shadowed (circle) from standing trees 83
5-10 Classification accuracy of downed trees in LiDAR data using different kernels and C values 84
5-11 LiDAR SVM classification visual 85
5-12 Comparison of false alarm of downed trees in LiDAR and image classification 86
5-13 Classification accuracy of downed trees in image data using different kernels and C values 86
5-14 Image SVM classification visual 87

PAGE 11

11 5-15 Classification accuracy of downed trees in LiDAR-image fused data using different kernels and C values 88
5-16 Comparison of false alarm of downed trees in LiDAR, image and fused LiDAR-image classification 88
5-17 Fused LiDAR-image SVM classification visual 89
5-18 Image and individual LiDAR SVM classification 90
5-19 SVM classification with high false alarm rate 91
5-20 Canny edge correction filter 92
5-21 Classification accuracy before and after the Canny edge correction filter 93
5-22 False alarm rate comparison before and after Canny edge correction filter 94
5-23 Region based object fitting parameters 95
5-24 Extracted downed trees 96
5-25 Trees selected for accuracy estimation 97
5-26 Error in estimating tree parameters 98

PAGE 12

12 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

FUSION OF LIDAR AND AERIAL IMAGERY FOR THE ESTIMATION OF DOWNED TREE VOLUME USING SUPPORT VECTOR MACHINES CLASSIFICATION AND REGION BASED OBJECT FITTING

By Sowmya Selvarajan

August 2011

Chair: Ahmed Mohamed
Major: Forest Resources and Conservation

The study classifies 3D small footprint full waveform digitized LiDAR fused with aerial imagery to detect downed trees using the Support Vector Machines (SVM) algorithm. Using small footprint waveform LiDAR, airborne LiDAR systems can provide better canopy penetration and very high spatial resolution. The small footprint waveform scanner system Riegl LMS-Q680, together with an UltraCamX aerial camera, is used to measure and map downed trees in a forest. The various data preprocessing steps helped in the identification of ground points from the dense LiDAR dataset and in segmenting the LiDAR data to reduce the complexity of the algorithm. The haze filtering process helped to differentiate the spectral signatures of the various classes within the aerial image. Such processes helped to better select the features from both sensors' data. Six features are utilized: LiDAR height, LiDAR intensity, LiDAR echo, and three image intensities. LiDAR derived, aerial image derived and fused LiDAR-aerial image derived features are used to organize the data for the SVM hypothesis formulation. Several variations of the SVM algorithm with different kernels and soft margin parameter C are tested. The algorithm is implemented to classify

PAGE 13

13 downed trees over a pine tree zone. The LiDAR derived features provided an overall accuracy of 98% for downed trees, but with a no-classification error of 86%. The image derived features provided an overall accuracy of 65%, and the fusion derived features resulted in an overall accuracy of 88%. The results are observed to be stable and robust. The SVM accuracies were accompanied by high false alarm rates, with the LiDAR classification producing 58.45%, the image classification producing 95.74%, and the fused classification producing 93% false alarm rates. The Canny edge correction filter helped control the LiDAR false alarm to 35.99%, the image false alarm to 48.56%, and the fused false alarm to 37.69%. The implemented classifiers provided a powerful tool for downed tree classification with fused LiDAR and aerial image. The classified tree pixels are utilized in the object based region fitting technique to compute the diameter and height of the downed trees, from which the volume of the trees is estimated.

PAGE 14

14 CHAPTER 1
INTRODUCTION

Hurricane and Natural Disasters Assessment to Vegetation

Research into detecting the impacts of hurricanes and other natural disasters has attracted increasing attention from researchers in carbon cycle studies, hazard relief, and forest management. Hurricanes are major natural disturbances in forest ecosystems in the southeastern United States. Statistically, severe hurricanes (Saffir-Simpson scale 3 and above) make landfall along the western Atlantic and Gulf coastlines in two out of every three years (Smith et al. 1994). Hurricanes and their associated strong winds are a destructive natural phenomenon that occurs about 40 to 50 times worldwide each year (Hoyos et al. 2006). The recent increase in hurricane intensity since the late twentieth century has led ecologists to focus their research on the influence of such natural disasters on vegetation structure (Roth 1992). Catastrophic winds from hurricanes and storms are a major cause of natural disturbance, especially in urban forests. The isolation of an urban forest and the influence of humans hinder the spontaneous re-growth of vegetation (Burley, Robinson, and Lundholm 2008). This is often expressed as defoliation of and structural damage to the trees, which directly relates to the timber/debris volume lost. Increased focus on forest recovery and resource management actions in urban forests after natural disturbances requires estimates of the volume of blown-down trees in urban forest stands, in order to determine whether a significant quantity has been affected and to justify the reimbursement claimed by the communities during the recovery operation. Only 15% of the total carbon in destroyed timber is salvaged following a major hurricane (McNulty 2002), with the remainder not cleared. One of the

PAGE 15

15 primary missions for the Federal Emergency Management Agency (FEMA) would be to clear the debris after a major hurricane. The need to provide such agencies with the extent of the damage is vital. The costs of the removal efforts are shared by FEMA (75%) and the state (25%). Escambia County, Florida reported approximately 90% of the hurricane debris as vegetation debris after hurricane Ivan hit in 2004 (Escambia County). Quantifying the downed coarse woody debris can be important for post-hurricane response. Staudhammer et al. (2009) analyzed the patterns of urban forest debris from the 2004 and 2005 hurricanes (namely, Charley, Jeanne, Frances, and Ivan in 2004 and Dennis, Katrina, and Wilma in 2005). Studies on natural disaster induced forest disturbances (e.g. by hurricanes) follow three general directions. The first direction involves qualitative description and quantitative measurement of forest damage, and efforts to find influencing factors and their relationships to hurricane-induced forest damage using field sampling and statistical analysis (Everham and Brokaw 1996; Foster 1988). This is a traditional procedure followed by most ecologists. The second direction, grounded in intensive ground inventories, involves projecting forest damage using weather models, topographic exposure models and/or ground observations (Boose, Foster, and Fluet 1994; Kovacs, Wang, and Blanco-Correa 2001; Jacobs 2007). The third direction, satellite and airborne remote sensing techniques, has been gradually adopted since the early 2000s by modeling studies (Ramsey et al. 2001; Kupfer et al. 2008; Wang and Xu 2008). Because traditional ground surveys face limited resources such as time, funds, manpower, and the extent of spatial and temporal coverage, the remote sensing direction is currently being developed for the purpose of assessing forest disturbances

PAGE 16

16 with less need of ground survey information. In the past three decades, remote sensing has emerged as a unique technique to monitor forest status. The forest damages induced by wind storms include defoliation, branch loss, stem breakage, uprooting, over-story canopy removal, and mortality. Defoliation is the most common type of damage, followed by branch loss, snapping and uprooting. For example, severe hurricanes cause the mechanical destruction of forest structure (i.e. canopy height, vertical stratification and leaf area), and result in a massive transfer of biomass to the forest floor (Lodge and McDowell 1991). The categorization based on the physical principles employed to detect forest damage is as follows: first, detection based on the change of chlorophyll content (Stueve, Lafon, and Isaacs 2007; Lee et al. 2008); second, detection based on the change of non-photosynthetic vegetation (Chambers et al. 2007, 1107); third, detection based on the change of leaf water content (Aosier and Kaneko 2007); and finally, detection based on the structural changes of damaged forests (Dwyer et al. 1999; Wiesmann et al. 2001; Fransson et al. 2002). The latter categorization will be the focus of this study.

Motivation

The recent hurricane surges along the southeastern Gulf coast of the United States create impacts to vegetation structure (Szantoi et al. 2008). After such a natural disaster, immediate attention is required to identify damaged and downed trees. Furthermore, a best management plan has to be developed for the salvage operation, the cost of which is borne by the local communities unless the event is declared a federal disaster, in which case the costs are covered by reimbursement programs as designated by the Stafford Act (FEMA 2007). In addition, the reduction of annual harvestable timber and its economic ramifications

PAGE 17

17 has to be addressed. Escobedo et al. (2009) developed a hurricane debris assessment protocol to report tree debris and damage after a hurricane in Florida. They used on-ground data from hurricane-impacted communities to assess and characterize the debris and damage at an urban and community forest level. This is a unique opportunity to investigate the use of remote sensing for post-disaster urban vegetation damage assessment, a technology that has the potential for improving the effectiveness of disaster response activities. The motivation for this study evolved from a project for the rapid estimation and monitoring of tree cover change in Florida urban forests due to urbanization and hurricanes. The tree cover change was measured by 53 0.04-hectare (ha) circular plots called UFORE plots. Natural color aerial photograph digital orthophoto quarter quadrangles (DOQQs) acquired in 2004 (pre-hurricane Ivan) and 2007 (post-hurricane Ivan), resampled to a 1 m by 1 m ground cell size, were utilized. The study site was the metropolitan area of Pensacola, Florida. The tree types included were deciduous and coniferous, with shrubs and grass excluded. Individual tree crowns were digitized within the 0.04 ha circular plots for the pre-hurricane and post-hurricane photographs. An average of 0.0007 ha (approximately 2%) of tree cover loss was calculated for the 53 UFORE plots. The implementation of this method to quantify and monitor tree cover by digitization was time consuming, although cost effective. While traditional optical remote sensing has been successful in studying the biophysical characteristics of vegetation, it has been much less useful for quantifying vegetation structure (Waring et al. 1995). One factor that has added to the disadvantage of traditional optical remote sensing technology has been the lack of direct relationships

PAGE 18

18 between remotely sensed reflectance values and the forest metrics of direct interest (e.g. tree density, timber volume, tree heights or mean diameter) (Suárez et al. 2005). Korpela (2007a) concluded that the 3D aerial photography technique derived better estimations because the 2D method is susceptible to systematic errors in crown and stem diameter estimations. However, the whole process is both costly and time consuming. With the 3D mapping inventory technique it is possible to achieve better accuracies than with an inventory process based on the 2D technique (Holopainen and Talvitie 2007). In recent years, the use of airborne LiDAR to measure forest and tree level characteristics has been rapidly increasing. LiDAR data can provide information about the canopy surface and vegetation parameters directly, such as height, tree density, and crown dimensions (Lefsky et al. 2002; Dubayah and Drake 2000; Nilsson 1996). The LiDAR-derived tree attributes were very similar or much closer to those obtained from traditional field inventory derived attributes (Holmgren 2003; Naesset 2004). The study by Holmgren (2003) concluded that airborne LiDAR could obtain attribute information such as tree height and stem volume within 10 m radius plots in a forest area. Yu et al. (2004) examined the applicability of LiDAR data in monitoring harvested trees using datasets with a point density of about 10 pts/m² over a 2-year period. With field inventory data and/or statistical analysis, there was overall 5 cm accuracy at stand level and approximately 10-15 cm accuracy at plot level using object-oriented image vision analysis and post-classification comparison [ibid]. Using small footprint waveform lasers, airborne LiDAR systems can provide both high spatial resolution and canopy penetration. One general problem of most algorithms

PAGE 19

19 is that most features cannot be characterized by geometric properties only. To increase rapidity, information from other sensors has to be used. Spectral information and texture may be valuable sources, especially if the data are acquired synchronously. The integration of laser scanning and aerial imagery can be based on simultaneous data capture. The LiDAR data provide accurate height information, which is missing in non-stereo optical imagery, whereas optical images provide more detail about the spatial geometry and color information usable for downed tree classification. Hence, it makes eminent sense to combine the two methods: we have a classical fusion scenario where the synergism of the two sensory inputs considerably exceeds the information obtained by the individual sensors (Korpela 2007b; Huber et al. 2003). Gougeon et al. (2001) studied the synergy between aerial photography and LiDAR data. They concluded that the LiDAR data, whether used as a filter on the aerial data or on their own, made an extremely obvious distinction between the dominant and understory levels, or regeneration versus ground vegetation, thus permitting separate analyses. This background makes the study of downed tree volume estimation very valuable. Such downed trees need to be identified after a natural disaster, or sometimes even detected as harvested trees in a timber-valuable forest. Very few studies were found that focused on volume estimation after a landscape-altering event, because it is very difficult to obtain planned data after a mishap and during the ensuing chaos. In addition, no literature was found that reports on the use of feature extraction for this specific study using the fusion of LiDAR and aerial imagery.

Objectives

The goal of the study is to develop a method to fuse waveform LiDAR and aerial imagery using support vector machines classification and a region based object fitting

PAGE 20

20 algorithm to better estimate downed tree volume. The specific objectives of this study include:

1. Assessment of whether LiDAR-only, aerial-image-only, or fused LiDAR-image data is better for accurate downed tree classification,
2. Determination of the accuracy of downed tree volume that can be estimated by such a tool.

Organization of this Study and Contribution

Chapter 1 serves as necessary background information for the study in this dissertation; the main contributions are also listed at the end of the chapter. Chapter 2 describes the small footprint full waveform digitized return from the LiDAR sensor and its measurements, along with background on traditional aerial photography. Chapter 3 presents the methodology: the preprocessing approaches for the LiDAR data and aerial image, the Support Vector Machines technique for downed tree classification together with the feature selection approaches, the Canny edge correction filter, and the region based object fitting technique. Chapter 4 describes the study site, the motivation for choosing it, and the acquisition of the LiDAR data, aerial photography, and ground truth. Chapter 5 presents and discusses the results of the classification and object fitting techniques, and Chapter 6 concludes the study with conclusions and future work. The main goal of this work is the development of an approach to detect individual downed trees using LiDAR only, aerial image only, or the fusion of 3D LiDAR data and aerial image, which then allows the subsequent estimation of several important tree structural parameters.

PAGE 21

21 CHAPTER 2
AERIAL MAPPING TECHNOLOGY

Airborne Laser Scanning

Airborne LiDAR (Light Detection And Ranging) is an active remote sensing technique (analogous to radar) that can accurately depict the earth's surface in a three-dimensional format by measuring the distance from the sensor to the ground target. Different configurations of the Airborne LiDAR System (ALS) are currently being used in surveying, geosciences and vegetation assessment. An airborne LiDAR sensor emits laser pulses toward the ground and measures the time difference between pulse generation and pulse return (Wehr and Lohr 1999). Laser ranging in a repetitively pulsed mode in a near-nadir direction is also called laser altimetry. When the sensor is flown over the forest canopy, the laser energy interacts with leaves and branches and reflects back to the instrument. A portion of the initial pulse may continue through the canopy to lower canopy layers, and possibly to the ground. LiDAR systems are composed of a laser sensor, a GPS (Global Positioning System) receiver, and an INS (Inertial Navigation System) or an IMU (Inertial Measurement Unit). By accurately recording the roll, pitch and heading of the aircraft with a time stamp coincident with the laser measurements and the GPS position, the motion of the aircraft can be corrected and precise positions of the laser hits on the ground surface can be calculated. The acronym LASER stands for Light Amplification by Stimulated Emission of Radiation; a powerful, highly directional optical light beam can be generated which is often highly coherent in both space and time. When measuring ranges with lasers, two major ranging principles are applied: pulse ranging, and phase-difference ranging between the transmitted and the received signal. The latter belongs to the continuous wave (CW) lasers.
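For reference, pulse ranging follows the standard two-way travel time relation (a textbook formula, stated here for completeness rather than quoted from the original):

\[ R = \frac{c\,\Delta t}{2} \]

where $R$ is the sensor-to-target range, $c$ the speed of light, and $\Delta t$ the time elapsed between pulse emission and echo reception; the factor of two accounts for the round trip.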

PAGE 22

22 Pulse lasers are usually solid state lasers, which produce high power output. In the past decade, major advances have been made in environmental and other non-military ALS applications. These advances have been driven to a large extent by increasing technological capabilities of ALS systems (sampling density, multiple pulses, positional accuracy, etc.). Remarkably, much of the technological innovation was borne by commercial ALS manufacturers and service providers, who entered the field in the mid-1990s (Flood 2001). The characteristics listed below vary between airborne LiDAR systems built for land surface characterization: (1) the laser footprint, or field of view, which ranges from a few centimeters to tens of meters in diameter (depending on the beam width of the laser); (2) the pulse repetition frequency, or sampling rate, and the scanning pattern used during data acquisition; and (3) the recording of the returned energy, since some sensors record the range to the first and/or last return, while others retrieve multiple (discrete) returns, or fully digitize the return signal (waveform resolving). The advanced laser altimeters, imaging or scanning LiDARs, are capable of scanning the ground surface beneath the airborne platform, resulting in a true three-dimensional data set. Commonly, for such LiDAR sensors the laser beam sampling area, or footprint, is small, usually less than 1 m in diameter, and such systems are known as small footprint LiDAR. For forestry applications, it is critical that small footprint LiDAR systems capture multiple returns or digitize the entire return signal through the canopy. Recent studies have suggested that the optimal experimental design for small footprint systems is to capture three echoes per pulse, since less than 1 percent of

PAGE 23

23 pulses return a fourth echo, and only about 0.1 percent of pulses return a fifth echo (Lim et al. 2003). Large footprint LiDAR systems have been widely used in forestry to determine the vertical distribution of vegetation characteristics, since such systems digitize the entire return signal. Commercial small footprint airborne laser scanner systems with full waveform digitizing capabilities, as depicted in Figure 2-1, have recently become available. The potential of these sensors was demonstrated by Hug et al. (2004) based on waveform data acquired with the LMS-Q560 manufactured by Riegl (www.riegl.com). This opens the possibility of deriving physical observables in addition to the range. Small footprint full waveform airborne LiDAR systems offer large opportunities for improved forest characterization. Chauve et al. (2009) found 40%-60% additional points in the lower part of the canopy and in low vegetation, due to the detection of weak and overlapping return echoes in a small footprint waveform dataset.

Aerial Photography

One of the most common, versatile, and economical forms of remote sensing is aerial photography from optical sensors. Aerial photography has been used for several decades as a tool in forest management and inventory. Increased spatial resolution allows improved characterization of surface features through the separation of small-scale features, such as canopy foliage and gaps. Historically, analog aerial photography has provided a means to manually measure many forest attributes, including stand density, crown diameter, crown closure, and tree height (Hagan and Smith 1986). With the advancement in digital cameras, digital aerial photography has experienced remarkable improvements in radiometry, spatial resolution and accuracy. The UltraCam is a frame camera system. Rosso et al. (2005) studied the applicability of four airborne

PAGE 24

24 sensors (ADS40 1 and 2, UltraCam and DMC) to land surface interpretation. The UltraCam showed a wider range of DN values than the ADS40-1. Cramer (2005) analyzes the status and the future of digital airborne imaging systems and points to the trend toward large-format cameras. The following major trends in digital airborne imaging can be seen: The digital airborne imaging world is heterogeneous, considering the various applications, as compared to analogue imaging. In addition, these systems are often used as part of multi-sensor systems, incorporating GPS/INS sensors and/or laser sensors. Both large-format and small-format digital imaging sensors are available: for traditional photogrammetric applications, the large-format systems such as the ADS40 (Leica Geosystems) and UltraCamD (Microsoft) are of interest; besides these, medium- and small-format imagers such as the DSS (Applanix Corp.) can be found.

PAGE 25

25 Figure 2-1. Waveform LiDAR (Courtesy: Wagner, W., Hollaus, M., Briese, C., and Ducic, V., "3D vegetation mapping using small footprint full waveform airborne laser scanners," 2008)

PAGE 26

26 CHAPTER 3
METHODOLOGY

Data Processing

The raw LiDAR data consist of point clouds of irregularly spaced x, y, z values. The scan density was high enough to discern downed trees. These points must undergo a series of preprocessing steps before the downed trees are extracted. The four main preprocessing steps are (1) identification of ground points from LiDAR data, (2) segmentation of LiDAR points, (3) haze filtering on the aerial image, and (4) selection of features from the fused LiDAR and aerial image.

Identification of Ground from LiDAR Dataset

The effective accuracy and precision for identification of ground points can be significantly degraded if vegetation is present. The ground is only identified when laser pulses manage to pass through the gaps in the canopy and intercept the ground. Many different filters have been proposed to remove non-ground points from LiDAR data, several of which are reviewed in Sithole and Vosselman (2004). The vegetation filter designed by Kampa and Slatton (2004) is used for this process. In this process of data segmentation, the adaptive, multi-scale approach is applied directly on the LiDAR point data. The algorithm initializes with an empty result and adds LiDAR points using progressively smaller search windows and locally interpolated surfaces. The method works well for segmenting heavily vegetated LiDAR datasets, especially forests.

Segmentation of LiDAR Points

Once the ground points were obtained through filtering, the points were classified as ground, near non-ground and non-ground points. The latter two classes comprised shrubs and downed trees, and standing trees, respectively.
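A minimal sketch of the height-based separation into these three point classes is shown below. It is a simplified stand-in, not the adaptive multi-scale filter of Kampa and Slatton (2004); the array names and threshold values are hypothetical and would need tuning.

import numpy as np

def segment_by_height(points, ground_z, low=0.3, high=2.0):
    """Split LiDAR points into ground / near non-ground / non-ground classes.

    points   -- (N, 3) array of x, y, z coordinates
    ground_z -- (N,) interpolated ground elevation beneath each point
    low/high -- hypothetical height thresholds in meters (tuning required)
    """
    height = points[:, 2] - ground_z                  # height above local ground
    ground = points[height < low]                     # ground returns
    near = points[(height >= low) & (height < high)]  # shrubs and downed trees
    standing = points[height >= high]                 # standing trees
    return ground, near, standing

Removing the standing-tree segment shrinks the dataset before classification, which is the efficiency gain described next.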

PAGE 27

27 The LiDAR data are first segmented based on height differences, and larger segments are removed. This works well to remove standing tree data, because in the forest canopy most of the LiDAR returns are from standing trees. Also, this approach reduces the size of the dataset, thereby increasing the efficiency of the algorithm.

Haze Filtering on Aerial Image

Signature analysis was applied on the aerial imagery to visualize the distinctiveness of the classes in the three bands: red, green and blue. The five classes were downed trees, fire lanes in the forest, ground, standing trees and shrubs. The classes of particular interest were the downed trees and fire lanes, since their digital values were approximately the same in all three bands. When the high frequency kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, the low value gets lower. When used with pixels in which a relatively high value is surrounded by lower values, the high value becomes higher. The haze reduction filter computes a convolution filter to blend the image with a first directional derivative high pass filtered version of the image. The haze reduction filter was applied on the aerial image, and the signature analysis was repeated on the haze reduction filtered aerial image.

Feature Selection

Feature selection is defined as a process of combining an optimum subset of features from a huge dataset of features. The end goal of feature selection is to reduce the number of features used in the classification, as a smaller feature set may improve the classification accuracy by eliminating noise. The selected features are categorized in Table 3-1.
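To make the organization of the selected features concrete, the sketch below stacks the six features of Table 3-1 into the three sample matrices used later for SVM training. The arrays are synthetic placeholders; the real values come from the preprocessed LiDAR points and the haze-filtered aerial image, co-registered per sample.

import numpy as np

rng = np.random.default_rng(0)
n = 1000                                      # hypothetical number of samples

z   = rng.uniform(0.0, 2.0, n)                # LiDAR elevation above ground (m)
r   = rng.integers(1, 4, n)                   # LiDAR return (echo) number
i_L = rng.uniform(0, 2047, n)                 # LiDAR intensity (11-bit range)
v_R, v_G, v_B = rng.uniform(0, 255, (3, n))   # image band intensities

features_lidar = np.column_stack([z, r, i_L])                  # LiDAR-only
features_image = np.column_stack([v_R, v_G, v_B])              # image-only
features_fused = np.column_stack([z, r, i_L, v_R, v_G, v_B])   # fused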

PAGE 28

28 Support Vector Machines Classification

Recently there has been an increase in the amount of research on Support Vector Machines (SVM). SVMs have been successful in a number of applications ranging from face recognition and text categorization to bioinformatics, to name a few. The technique is motivated by statistical learning theory. Support Vector Machines is a classification technique based on Vapnik-Chervonenkis (VC) dimension theory and structural risk minimization logic (Vapnik 1995). The central idea underlying Support Vector Machines is separating data into binary classes using optimal separating hyperplanes. In the SVM, the input vector is projected nonlinearly into a high-dimensional feature space. In the new space, the establishment of a linear decision plane leads to a nonlinear decision plane in the original input space. The SVM method generalizes well in the absence of prior knowledge and can cope with sparse sampling. There are four important kernel functions, the Gaussian radial basis function (RBF), linear, polynomial and sigmoid functions, which are computed in the training process.

Linear Discriminants

Consider a supervised binary classification problem. The training data are represented by $\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$. In the first case, the assumption is that the two classes are linearly separable. This means that it is possible to find at least one hyperplane, defined by a vector $\mathbf{w}$ with a bias $b$, which can separate the classes without error. The vector $\mathbf{w}$ determines the orientation of the discriminant plane; the scalar $b$ determines the offset of the plane from the origin.
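The geometric argument behind the margin, stated here for completeness (a standard derivation, not taken verbatim from the original): the distance from a point $\mathbf{x}$ to the plane $\mathbf{w}^{T}\mathbf{x} + b = 0$ is

\[ d(\mathbf{x}) = \frac{|\mathbf{w}^{T}\mathbf{x} + b|}{\lVert\mathbf{w}\rVert}, \]

so the supporting planes $\mathbf{w}^{T}\mathbf{x} + b = \pm 1$ each lie at distance $1/\lVert\mathbf{w}\rVert$ from the separating plane, the margin between them is $2/\lVert\mathbf{w}\rVert$, and maximizing the margin is equivalent to minimizing $\lVert\mathbf{w}\rVert$.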

PAGE 29

29 There are infinitely many possible separating planes that correctly classify the training samples. Geometrically, the optimal hyperplane would be the plane furthest from both classes. The hyperplane is defined as

\[ \mathbf{w}^{T}\mathbf{x} + b = 0 \tag{3-1} \]

The approach is to maximize the margin between the two parallel supporting planes. A plane supports a class if all points in the respective class are on one side of that plane. For the data points with class label $y_i = +1$, there exist $\mathbf{w}$ and $b$ such that $\mathbf{w}^{T}\mathbf{x}_i + b > 0$, and similarly $\mathbf{w}^{T}\mathbf{x}_i + b < 0$ for the other class label. The argument within the decision function is invariant under a positive rescaling; by implicitly fixing the scale so that the smallest value of $|\mathbf{w}^{T}\mathbf{x}_i + b|$ is 1, $\mathbf{w}^{T}\mathbf{x}_i + b \ge 1$ is required for class label $+1$. Similarly, for data points with class label $-1$, $\mathbf{w}^{T}\mathbf{x}_i + b \le -1$. The distance, or margin, between the supporting planes for each class is maximized, as seen in Figure 3-1. The supporting planes are moved apart until they meet a small number of data points (the support vectors) from each class; the support vectors are shown in Figure 3-1. The margin between the parallel supporting planes $\mathbf{w}^{T}\mathbf{x} + b = 1$ and $\mathbf{w}^{T}\mathbf{x} + b = -1$ is given by $2/\lVert\mathbf{w}\rVert$. Thus maximizing the margin is equivalent to minimizing $\lVert\mathbf{w}\rVert^{2}/2$. The optimal hyperplane can be found by solving the following optimization problem:

\[ \min_{\mathbf{w},\,b}\ \frac{1}{2}\lVert\mathbf{w}\rVert^{2} \tag{3-2} \]

subject to $\mathbf{w}^{T}\mathbf{x}_i + b \ge +1$ for $y_i = +1$, and $\mathbf{w}^{T}\mathbf{x}_i + b \le -1$ for $y_i = -1$. The constraints can be simplified to

PAGE 30

30
\[ y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \ge 1 \tag{3-3} \]

It is to be noted that the solution depends only on the support vectors. This is due to the mathematical concept of duality. The Lagrangian dual of the supporting plane quadratic program (QP) yields the following dual QP:

\[ \max_{\boldsymbol{\alpha}}\ \sum_{i}\alpha_{i} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}\,\mathbf{x}_{i}^{T}\mathbf{x}_{j} \tag{3-4} \]

subject to $\sum_{i}\alpha_{i}y_{i} = 0$ and $\alpha_{i} \ge 0$. With this formulation, the optimal hyperplane discriminant function becomes:

\[ f(\mathbf{x}) = \operatorname{sign}\Big(\sum_{i \in S}\alpha_{i}y_{i}\,\mathbf{x}_{i}^{T}\mathbf{x} + b\Big) \tag{3-5} \]

where $S$ is a subset of training samples that correspond to non-zero Lagrange multipliers. These training samples are called support vectors.

Theoretical Foundations

For linear problems, maximizing the margin of separation reduces the function complexity. Thus, by explicitly maximizing the margin, the bounds on the generalization error are minimized. This leads to better generalization with high probability. The width of the margin is not dependent on the dimensionality of the data. Even with a large number of instances or attributes, good performance is expected, thus proving that SVMs can significantly reduce over-fitting of high dimensional data. Classification functions that fit the training data nearly perfectly are more likely to over-fit, resulting in poor generalization. In Figure 3-2, a plane with a smaller margin width can take many possible orientations and still separate all the data.

PAGE 31

31 In Figure 3-3, a plane with a large margin width has limited flexibility to separate the data. Thus, the complexity of a linear discriminant is a function of the margin of separation. A common misconception is that the complexity of the linear function is determined by the number of variables in the problem; but if the margin is wide, the complexity of the function can be low even if the number of variables is large.

Non-linear Case

Linearly inseparable case

In most cases, classes are not linearly separable, and the constraint of Equation (3-3) cannot be satisfied. Thus, the constraints must be relaxed to ensure that each data point is on its respective side of its supporting plane. Any data point lying on the wrong side of its supporting plane is taken to be an error. In conclusion, the margin should be maximized while the error is simultaneously minimized. In order to handle such cases, a cost function can be formulated to combine maximization of the margin and minimization of the error (Figure 3-4), using a set of variables called slack variables $\xi_{i}$, one for each constraint (measuring the error of the hyperplane fit), as follows:

\[ \min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\ \frac{1}{2}\lVert\mathbf{w}\rVert^{2} + C\sum_{i}\xi_{i} \tag{3-6} \]

subject to $y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \ge 1 - \xi_{i}$ and $\xi_{i} \ge 0$. The Lagrangian dual of the QP Equation (3-4) becomes:

\[ \max_{\boldsymbol{\alpha}}\ \sum_{i}\alpha_{i} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}\,\mathbf{x}_{i}^{T}\mathbf{x}_{j} \tag{3-7} \]

subject to $\sum_{i}\alpha_{i}y_{i} = 0$ and $0 \le \alpha_{i} \le C$.

PAGE 32

32 See Cortes and Vapnik (1995) and Vapnik (1998) for the formal derivation of this dual problem. It is seen that the difference in the QP formulation between the separable case, Equation (3-4), and the inseparable case, Equation (3-7), is the addition of upper bounds on $\alpha_{i}$. Until now, the linear discrimination of an inseparable case has been examined. The basic principle of the SVM is to construct the maximum margin separating plane. But if linear discriminants are not appropriate for the existing data set, high training errors arise, and SVM classification does not work well.

Nonlinear functions through kernels

The classification function now is

\[ f(\mathbf{x}) = \operatorname{sign}(\mathbf{w}^{T}\mathbf{x} + b) \tag{3-8} \]

If no simple linear discriminant function works well for the SVM classification, a nonlinear classification algorithm has to be designed. The best way to convert the linear to a nonlinear classification algorithm is to add additional attributes to the data set that are nonlinear functions of the existing data set. The original input space is mapped by a function $\boldsymbol{\phi}$ to a higher-dimensional feature space, and a linear discriminant is constructed in the feature space. Two problems are seen with the nonlinear mapping method due to the high dimensionality of the feature space: (i) over-fitting; SVMs are not prone to this problem, since they depend only on margin maximization, given that an appropriate value of C is decided; (ii) it may not be possible to compute $\boldsymbol{\phi}(\mathbf{x})$ explicitly. This is simplified by the SVM through the use of kernels. When the nonlinear mapping is introduced in QP Equation (3-7):

PAGE 33

33
\[ \max_{\boldsymbol{\alpha}}\ \sum_{i}\alpha_{i} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}\,\boldsymbol{\phi}(\mathbf{x}_{i})^{T}\boldsymbol{\phi}(\mathbf{x}_{j}) \tag{3-9} \]

subject to $\sum_{i}\alpha_{i}y_{i} = 0$ and $0 \le \alpha_{i} \le C$. The mapped data occur only as an inner product in Equation (3-9). By Mercer's theorem (Schölkopf et al. 1998), the inner product of vectors in the mapping space can be expressed as a function of the inner products of the corresponding vectors in the original space. The inner product operation has an equivalent representation:

\[ K(\mathbf{x}_{i}, \mathbf{x}_{j}) = \boldsymbol{\phi}(\mathbf{x}_{i})^{T}\boldsymbol{\phi}(\mathbf{x}_{j}) \tag{3-10} \]

where $K$ is called the kernel function. If a kernel function $K$ can be found, this function can be used for training without knowing the explicit form of $\boldsymbol{\phi}$. The QP formulation now becomes:

\[ \max_{\boldsymbol{\alpha}}\ \sum_{i}\alpha_{i} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}\,K(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{3-11} \]

subject to $\sum_{i}\alpha_{i}y_{i} = 0$ and $0 \le \alpha_{i} \le C$. The resulting classifier becomes:

\[ f(\mathbf{x}) = \operatorname{sign}\Big(\sum_{i \in S}\alpha_{i}y_{i}\,K(\mathbf{x}_{i},\mathbf{x}) + b\Big) \tag{3-12} \]

Some of the standard kernels are given in Table 3-2.
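As an illustration of the standard kernels referenced in Table 3-2, the small sketch below implements the usual forms; the parameter values (gamma, degree d, offset c) are hypothetical tuning constants, not values taken from this study.

import numpy as np

def linear_kernel(x, y):
    return x @ y                                  # plain inner product

def polynomial_kernel(x, y, d=3, c=1.0):
    return (x @ y + c) ** d                       # non-stationary kernel

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))  # Gaussian radial basis

def sigmoid_kernel(x, y, gamma=0.01, c=-1.0):
    return np.tanh(gamma * (x @ y) + c)           # two-layer perceptron analogue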

PAGE 34

34 The linear kernel is the simplest kernel function. The Gaussian kernel is by far one of the most versatile kernels; it is a radial basis function kernel, and is the preferred kernel when not much is known about the data being modeled. This is mainly because of its localized and finite responses across the entire range of the real x-axis. The polynomial kernel is a non-stationary kernel; it is well suited for problems where all the data are normalized. An SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network. Downed tree LiDAR points were identified as true downed tree LiDAR samples; the rest were the true non-downed tree samples in the LiDAR data. These were later used to build the confusion matrix. The correctly identified samples (true positives) from the classification were the total points (in both the downed and non-downed tree classes) belonging to the respective classes in the true downed tree samples. The misclassified samples were points classified as downed trees when they were actually non-downed trees, and vice versa. The false alarm for downed tree samples was calculated as the non-downed tree points classified as downed trees. The overall accuracy is the percentage of correctly classified samples out of the total samples (a sketch of these computations follows the dataset list below). The same procedure is repeated for the image data. As with the fused LiDAR and image data, the truth model was built by visual interpretation of the combined LiDAR-aerial imagery. A few true downed tree points that were not visible in the aerial imagery were picked up from the LiDAR data, and vice versa. The entire testing dataset was divided into three categories: the LiDAR dataset, in which the features derived from the LiDAR data are classified during the experimentation; the image dataset, in which the features derived from the image data are classified; and the image and LiDAR fused dataset, in which the features derived from the fused image and LiDAR are classified.
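A sketch of the accuracy and false alarm computations described above, for a two-class confusion matrix. The exact rate definition used in the study is not fully spelled out; here the false alarm rate is assumed to be the share of points labeled as downed that are actually non-downed, which is consistent with the high rates reported in Chapter 5.

def classification_metrics(tp, fn, fp, tn):
    """tp: downed trees correctly classified; fn: downed trees missed;
    fp: non-downed points classified as downed (false alarms);
    tn: non-downed points correctly classified."""
    overall_accuracy = (tp + tn) / (tp + fn + fp + tn)
    false_alarm_rate = fp / (tp + fp) if (tp + fp) else 0.0
    return overall_accuracy, false_alarm_rate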

PAGE 35

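The accuracy measures above follow directly from the binary confusion matrix. A minimal sketch is given below; the label arrays and the helper confusion_counts are illustrative and not part of the study's MATLAB implementation.

    import numpy as np

    def confusion_counts(y_true, y_pred):
        """Binary confusion counts (1 = downed tree, 0 = non-downed tree)."""
        tp = np.sum((y_true == 1) & (y_pred == 1))  # downed trees correctly labeled
        fn = np.sum((y_true == 1) & (y_pred == 0))  # downed trees missed
        fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
        tn = np.sum((y_true == 0) & (y_pred == 0))
        return tp, fn, fp, tn

    y_true = np.array([1, 1, 0, 0, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1])
    tp, fn, fp, tn = confusion_counts(y_true, y_pred)
    overall_accuracy = (tp + tn) / y_true.size  # correctly classified / total
    false_alarm = fp / (fp + tp)                # non-downed points labeled as downed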
Training and Classification Datasets

The dataset is classified into two classes, downed trees and non-downed trees, using features under the following categories: LiDAR derived only, image derived only, and fused LiDAR and image. As in any supervised classification, the selection of the training data sets is very important: more detailed training sets result in better classification. The first training set constituted only the features of the LiDAR data. The second set constituted only the features of the image data. The final set is a combination of the above two training sets, used to experiment with the fusion of LiDAR and aerial imagery.

LiDAR training and classification

The training samples for the downed tree class were selected manually from the LiDAR dataset based on visual interpretation of the aerial image together with the LiDAR returns, using LiDAR intensity, LiDAR elevation, and LiDAR return number. The LiDAR intensity varied between 0 and 2047 (11 bit). The three features derived from the LiDAR data, namely z (elevation), r (return number), and i_L (LiDAR intensity), were utilized for the SVM classification.

Image training and classification

The training samples for the downed tree class were selected manually from the aerial image dataset based on visual interpretation of the aerial image, using the red, green, and blue intensity values. The three features derived from the aerial image, namely v_R (intensity value in the red band), v_G (intensity value in the green band), and v_B (intensity value in the blue band), were used to classify downed trees in the aerial image data.
Fused LiDAR image training and classification

The training samples for the downed tree class were selected from the six features of the LiDAR and image data (called the fused dataset) after a careful visual interpretation of the LiDAR and aerial imagery, and from the LiDAR and aerial image training datasets created above. The six features derived from the aerial image and LiDAR, namely v_R (intensity value in the red band), v_G (intensity value in the green band), v_B (intensity value in the blue band), z (elevation), r (return number), and i_L (LiDAR intensity), were utilized for the SVM classification.

Canny Edge Correction Filter

The support vector machines classifier led to a high rate of false alarm; the Canny edge correction filter was therefore chosen to rectify this. Canny (1986) determined edges by an optimization process and proposed an approximation to the optimal detector as the maxima of the gradient magnitude of a Gaussian-smoothed image. The Canny edge detector belongs to the class of gradient operators. Among the edge detection methods discussed in the previous section, the Canny edge detector is the most rigorously defined operator and is widely used; it originated in the work John Canny did for his Master's degree at MIT in 1983. The objective function of the Canny edge detector was designed to achieve the following optimization constraints (a sketch of the detector follows the list):

1) Maximize the signal-to-noise ratio to give good detection. This favors the marking of true positives.

2) Achieve good localization to accurately mark edges.

3) Minimize the number of responses to a single edge. This favors the identification of true negatives, that is, non-edges are not marked.
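A minimal sketch of a Gaussian-smoothed Canny detector with hysteresis thresholds, applied to a classified binary map, is shown below. It uses OpenCV rather than the study's MATLAB environment, and the threshold values are placeholders to be tuned to the image noise.

    import cv2
    import numpy as np

    # Hypothetical binary classification map (255 = downed tree pixel).
    svm_map = (np.random.rand(512, 512) > 0.98).astype(np.uint8) * 255

    # Gaussian smoothing before gradient computation, as in Canny (1986).
    smoothed = cv2.GaussianBlur(svm_map, (5, 5), sigmaX=1.4)

    # Hysteresis thresholding: gradients above 'high' are edges; gradients
    # between 'low' and 'high' are kept only if connected to a strong edge.
    low, high = 50, 150  # placeholder thresholds
    edges = cv2.Canny(smoothed, low, high)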
The upper tracking threshold can be set quite high and the lower threshold quite low for good results. Setting the lower threshold too high causes noisy edges to break up; setting the upper threshold too low increases the number of spurious and undesirable edge fragments in the output. The thresholds play an important role in the performance of the edge detector, since both the probability of missing an edge and the probability of marking a false edge depend on them. The thresholds are based on the amount of noise in the image. This thresholding technique has been of much importance in this research.

Region Based Object Fitting Technique

Computation of Geometric Features of Downed Tree Trunks

Methods are needed that more or less automatically detect object features (downed trees) from the classified SVM outputs. The method adopted here is a region based fitting method for a closed region. This region based object fitting algorithm reduces the numerical effort drastically, and the fitting is invariant. In this study, computing the shape of a simple polygon is a significant step, since it is later used to compute the volume of downed trees. Typical classification results are sets of points grouped close to each other, characterized by contiguity and uniformity; hence each group of points is hypothesized to be an individual object or a significant part of a larger parent object.

The convex hull is defined as a convex envelope for a set of points, containing the minimal set of boundary points. The alternative of encompassing the points with a bounding rectangle is not adopted in this study, because the bounding rectangle does not provide the exact structural parameters of the region: it exaggerates the length and width of the actual region. The convex hull is visualized by imagining an elastic band stretched open to encompass the given object; when released, it will assume the shape of the required convex hull.
The focus of this section is to compute the convex hull and the furthest edge points above and below the major and minor axes. The first step is to detect the boundary of the region; the second step is to estimate its parameters based on the boundary points. The approach adopted in this study is a modified version of the approach of Chaudhuri and Samal (2007) for fitting a bounding rectangle to closed regions; here, instead of fitting the bounding rectangle, the convex hull is computed. Most algorithms utilize the internal points of a region in a segmented image, whereas the geometric computation algorithm in this study uses the boundary pixels. It is important to note that the closed boundary is computed without the edge detection approach to determine the boundary pixels of the objects (i.e., downed trees).

Overview of the Approach

There are six steps in this approach. It starts with the classified binary image from the SVM classification result, in which the downed trees are identified. The boundary of each individual tree is computed using convex hulls. Using these boundary points, the centroid of the object is computed. The next step is to find the major and minor axes of the object using the boundary points. Then the upper and lower furthest points with respect to the major and minor axes are found. The next step is to compute the width and the length of the object using the furthest points. Finally, the volume is computed using the width as the dbh (diameter of the downed tree at breast height) and the length as the height of the downed tree.
Convex Hull

This problem starts with identifying the boundary pixels of each region from the previous SVM classification results; the centroid and the orientation of the region can also be determined from the boundary pixels. Let R be a simple region or polygon given by a finite unordered set of points with n boundary points or vertices. Assume that X = {b_1, b_2, ..., b_n} is given by a linked list of boundary points as they are encountered in the counter-clockwise direction along the boundary, where each boundary point b_i is characterized by its x_i and y_i coordinates. The convex polygon whose vertices are the set of extreme points of R is called the convex hull of X. For more reading on the preliminaries and theory of the convex hull, refer to Graham and Yao (1983).

Centroid of the Region

In geometry, the centroid of an object X is the intersection of all straight lines that divide the figure X into parts of equal moment about the line; informally, it is the average of all points of X. Consider a region X in R^2 whose boundary points (x_i, y_i), i = 1, ..., n, are computed using the convex hull. The centroid is defined as the center of the polygon, the point of intersection of its diagonals. Following this, the centroid (\bar{x}, \bar{y}) of the object X is defined as:

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i
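A minimal sketch of the convex hull and centroid computation for one classified pixel cluster, using SciPy in place of the study's MATLAB routines; the array downed_tree_pixels is illustrative.

    import numpy as np
    from scipy.spatial import ConvexHull

    # Hypothetical (x, y) pixel coordinates of one classified downed tree region.
    downed_tree_pixels = np.array(
        [[0, 0], [10, 1], [20, 3], [19, 6], [8, 4], [1, 3]], dtype=float)

    hull = ConvexHull(downed_tree_pixels)
    # hull.vertices indexes the hull points in counter-clockwise order.
    boundary = downed_tree_pixels[hull.vertices]

    # Centroid as the average of the boundary points (equation above).
    centroid = boundary.mean(axis=0)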
Computation of Major and Minor Axes

Let \theta be the angle of the major axis of the polygon to the horizontal axis. The equation of the line passing through the centroid (\bar{x}, \bar{y}) at angle \theta is:

(y - \bar{y}) \cos\theta - (x - \bar{x}) \sin\theta = 0

The above equation is the major axis of the region. The minor axis, perpendicular to the major axis through the centroid, is given by:

(y - \bar{y}) \sin\theta + (x - \bar{x}) \cos\theta = 0

The signed perpendicular distance of a boundary point (x_i, y_i) to the major axis line is:

d_i = (y_i - \bar{y}) \cos\theta - (x_i - \bar{x}) \sin\theta

The sum of squared perpendicular distances is calculated as:

A(\theta) = \sum_{i=1}^{n} d_i^2 = \sum_{i=1}^{n} \left[ (y_i - \bar{y}) \cos\theta - (x_i - \bar{x}) \sin\theta \right]^2

By minimizing A, the angle \theta can be computed: differentiating A with respect to \theta and setting the derivative to zero yields

\tan 2\theta = \frac{2 \sum_{i} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i} \left[ (x_i - \bar{x})^2 - (y_i - \bar{y})^2 \right]}

Chaudhuri and Samal (2007) used this standard formula to compute the directions of the principal axes, with the difference that here the boundary (edge) pixels, rather than the interior pixels, are used to compute the axes. Since there are far fewer boundary pixels, this approach is computationally efficient.
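The closed-form angle above can be evaluated directly; a short sketch, continuing the notation of the previous snippet (boundary from the convex hull), follows.

    import numpy as np

    def axis_angle(boundary):
        """Major-axis orientation theta from boundary points, via tan(2*theta)."""
        u = boundary[:, 0] - boundary[:, 0].mean()  # x_i - x_bar
        v = boundary[:, 1] - boundary[:, 1].mean()  # y_i - y_bar
        return 0.5 * np.arctan2(2.0 * np.sum(u * v), np.sum(u * u - v * v))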
Computation of the Corner Edge Points of the Region

The corner edge points of the region (object X) identify the sides of the polygon of the object X. Such points are found relative to the major and minor axes. Let (x_i, y_i), i = 1, ..., n, be the n boundary points of the object, and recall that the equation of the major axis is:

(y - \bar{y}) \cos\theta - (x - \bar{x}) \sin\theta = 0

Substituting the values x_i and y_i into the left side of the above equation gives:

V = (y_i - \bar{y}) \cos\theta - (x_i - \bar{x}) \sin\theta

If V > 0, then (x_i, y_i) is an upper boundary point with respect to the major axis. If V < 0, then (x_i, y_i) is a lower boundary point with respect to the major axis. If V = 0, then (x_i, y_i) lies on the major axis. Using this property, all the points on the boundary of the object can be classified as upper, lower, or on the axis with respect to the major axis, and the furthest among the upper and lower boundary points can then be easily determined. Similarly, all the upper and lower boundary points of the object with respect to the minor axis can be found, and the furthest boundary points both above and below the minor axis can be determined.

Downed Tree Volume

The volume content of a tree is normally estimated using traditional volume tables or equations, which require the measurement of tree diameter (dbh) and height. Using such measures, the total downed tree volume can be estimated by assuming the tree has a particular form. This study hypothesizes that the tree is cylindrical in shape: the width of the computed object is taken as the diameter and the length as the height of the cylinder. The volume is then expressed as:
V = \pi \left( \frac{d}{2} \right)^2 h

where d is the computed width (diameter) and h is the computed length (height) of the downed tree.
Figure 3-1. Supporting planes with support vectors

Figure 3-2. Many possible minimum width margin planes
Figure 3-3. One maximum margin plane

Figure 3-4. Plane selected to maximize margin and minimize error
Table 3-1. Feature selection

Feature                                                     Symbol           Range
Height value: the height value associated with
  each point in the point cloud                             z                10-18 m
Echo: return number                                         r                0-7
Image intensity: red, green, and blue; the value
  corresponds to the response of the terrain and
  non-terrain surfaces to visible light                     v_R, v_G, v_B    0-255 (8 bit)
LiDAR intensity: along with height values, airborne
  LiDAR records the amplitude of the response
  reflected back to the laser scanner, in the
  near-IR range (1068 nm)                                   i_L              0-2047 (11 bit)

Table 3-2. Standard kernels (the formulas shown follow the common LIBSVM convention; the original table listed the kernel names)

Kernel                         K(x_i, x_j)
Linear                         x_i . x_j
Radial basis function (RBF)    exp(-gamma ||x_i - x_j||^2)
Polynomial, degree d           (gamma (x_i . x_j) + c)^d
Sigmoid                        tanh(gamma (x_i . x_j) + c)
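For reference, the four kernels of Table 3-2 can be written as plain functions. The parameterization shown (gamma and c) follows the common LIBSVM convention and is an assumption, since the original table listed only the kernel names.

    import numpy as np

    def linear(xi, xj):
        return np.dot(xi, xj)

    def rbf(xi, xj, gamma=1.0):
        return np.exp(-gamma * np.sum((xi - xj) ** 2))

    def polynomial(xi, xj, gamma=1.0, c=0.0, d=3):
        return (gamma * np.dot(xi, xj) + c) ** d

    def sigmoid(xi, xj, gamma=1.0, c=0.0):
        return np.tanh(gamma * np.dot(xi, xj) + c)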
CHAPTER 4
MATERIALS

Study Site

The approximately 12 hectare experimental site (depicted in Figure 4-1) is in the Austin Cary Memorial Forest, Florida. The area of study (compartment 17) was occupied by old mixed stands of slash and longleaf pine. The entire experimental area was prescribe-burned during April 2009 to remove the understory of saw palmetto and common shrubby oaks. The objectives of the prescribed burn were to improve light penetration and open gaps between canopies, in order to obtain more LiDAR returns from the understory so that fallen trees could be well detected. As many as 60 dead trees were felled for the study.

Data Acquisition

The objective of the data acquisition mission was to demonstrate for the first time that small footprint waveform digitization laser scanners are capable of detecting single downed trees in a longleaf pine forest zone. Another objective was to show that image vision techniques can be applied to retrieve important downed tree characteristics. For that purpose it was important to collect range data as perpendicular to the ground as possible, to minimize the shadowed areas formed by the standing trees that hinder the visibility of the fallen trees. A high number of laser hits was required in order to detect individual downed trees. Steep scan angles enable a sufficient number of ground laser hits, while scan angles more than 10° off nadir decrease the number of actual laser hits on the ground, and gaps in the point clouds (seen as holes in DEMs) occur more frequently. The profiling capability of laser sensors has advanced greatly in recent years. In this study, unlimited returns (discretized waveform returns) from multiple target echoes, such as occur in a forested environment, are acquired.
A laser scanner with high measurement density and a steep scan angle was selected for the study. The Riegl LMS-Q680 is one such system: its full waveform laser scanner (echo digitization and online waveform analysis) measures the full waveform of the returned signal at pulse rates up to 300 kHz (http://www.rieglusa.com/products/airborne/vq 480/pdf/DataSheet_VQ 480_30 09 2008_PRELIMINARY.pdf).

Aerial Cartographics of America (ACA), Inc., acquired the LiDAR and aerial imagery datasets over the Austin Cary Memorial Forest during January 2010. This resulted in 9.4 cm (3.7 in) aerial imagery acquired from an altitude of 1304 m (4278 ft); the images had red, green, and blue bands. The small footprint full waveform LiDAR was flown at an altitude of 366 m, and the intensity data were also recorded. The absolute accuracy was 2-7 cm. These parameters resulted in a mean ground point density of 10 points/m². The test site was intensively flown from a low altitude (approximately 400 m), with a resulting measurement density of at least 10 points/m². The survey altitude was half of that in normal use, in order to acquire the number of pulses needed to separate individual trees; due to the low flying altitude, the swath width was reduced. Acquired points in the category of first echoes mostly hit canopies, whereas last echoes of the LiDAR are reflections from the ground and near-ground features, which were the main focus of this study. Simultaneously, intensity data reflected by the objects in the target area were recorded with high radiometric resolution in the LiDAR data. This type of data is especially useful to differentiate objects with the same height but different emission characteristics (e.g., grass, roads, and tree trunks).
Together with the LiDAR intensity (reflectance) and the aerial imagery recorded at the same time, various segmentation and classification techniques are exploited extensively, using data fusion.

Laser Scanner Riegl LMS-Q680

In a typical ranging setup, the direct ranging measurement determines the time of flight of a light pulse, i.e., the travelling time between the emitted and the received pulse:

t = \frac{2R}{c}    (4-1)

where R is the distance between the ranging unit and the object surface and c is the speed of light. From Equation (4-1), the range follows as:

R = \frac{ct}{2}    (4-2)

A traditional LiDAR system has limitations concerning the number of recordable pulse reflections. The new full waveform scanners overcome this drawback, since they record the entire laser pulse echo as a function of time. Therefore, detailed information about the geometric and physical characteristics of the tree structure can be derived and used to retrieve more sophisticated and precise vegetation structure.
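As a worked example of Equations (4-1) and (4-2), for the 366 m flying height used in this study (a nadir return, neglecting atmospheric effects):

t = \frac{2R}{c} = \frac{2 \times 366\,\mathrm{m}}{3 \times 10^{8}\,\mathrm{m/s}} \approx 2.44\,\mu\mathrm{s}

Equivalently, a timing resolution of 1 ns corresponds to a range resolution of c \cdot \Delta t / 2 \approx 0.15 m, which illustrates why precise echo digitization matters for separating closely spaced returns.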
Figure 4-2 illustrates a measurement situation where three measurements are taken on different types of targets. The red pulse denotes the laser signal travelling towards the target at the speed of light; when the signal interacts with the reflecting target surface, a fraction of the transmitted signal is reflected back towards the laser instrument (shown as the blue signals). In Case 1, the laser pulse hits the canopy first and causes three distinct echo pulses; a part of the laser pulse then hits the ground, giving rise to another echo pulse. In Case 2, the laser signal is reflected from a flat surface at a small angle of incidence, yielding an extended echo pulse width. In Case 3, the pulse is reflected by a flat surface at perpendicular incidence, resulting in one echo pulse with a shape identical to the transmitted laser pulse. As depicted in Figure 4-2, the first (red) pulse relates to a fraction of the laser transmitted pulse, the next three (blue) pulses correspond to reflections by the branches of the tree, and the last pulse corresponds to the ground reflection.

The long range RIEGL LMS-Q680 airborne laser scanner makes use of a powerful collimated laser source and digital full waveform processing. It gives access to detailed target parameters by digitizing the echo signal, with high laser pulse repetition rates up to 240 kHz and measurement rates up to 160,000 measurements/sec (www.riegl.com). The ranging accuracy is up to 20 mm, with a high scan speed of up to 200 lines/sec and a field of view of up to 60°. The most comprehensive information from the echo signals is extracted from the full waveform LiDAR data.

Airborne Camera Microsoft UltraCamX Camera

The camera used in this study is the UltraCamX (Microsoft Corp.); it was introduced to the commercial market in 2006 as a large format digital aerial camera. It has a radiometric resolution of 12 bits or better and simultaneous infrared acquisition. The camera consists of the sensor unit, the onboard storage and data capture system, the operator's interface panel, and two removable data storage units. Table 4-1 lists the specifications.
The UltraCamX (UCX) employs 7.2 micrometer pixels and thus achieves an even larger format of 14,430 across track × 9,420 along track pixels (approximately 136 million pixels per frame), which means fewer flight lines, with better radiometric performance. The improved optical system maintains image sharpness and high radiometric range well into the corners of each image. The UCX collects pixels at a sustained rate of 3 GBits/sec, which supports automated image analysis.

The concept of the UCX camera is the arrangement of four lenses mounted along the flight direction. These are used in conjunction with multiple CCD area arrays (9 CCDs pan, 4 CCDs color) to produce the pan and color images. The shutter on each lens is triggered in a time sequence so that each is exposed from a single position in the air. The nine individual pan images are then stitched together to form the final composite pan image. The UCX sensor head (Figure 4-3 A) consists of 8 camera heads; four of them contribute to the large format panchromatic image. These four heads are equipped with a total of 9 CCD sensors in their four focal planes. The focal plane of the so-called Master Cone (M) carries 4 CCDs (Figure 4-3 B).
Figure 4-1. Study area

Figure 4-2. Working principle of full waveform LiDAR
Figure 4-3. UltraCamX sensor. A. UCX sensor head. B. Arrangement of CCD sensors

Table 4-1. Technical data and specification for the UltraCamX sensor unit

Panchromatic channel (multi-cone, multi-sensor concept)
  Camera heads                                  4
  Image size in pixels (cross/along track)      14,430 × 9,420
  Physical pixel size                           7.2 micron
  Physical image format (cross/along track)     103.9 mm × 67.8 mm
  Focal length                                  100 mm
  Lens aperture                                 f/5.6
  Angle of view (cross/along track)             55° / 37°

Multispectral channel (four channels: red, green, blue, near infrared)
  Camera heads                                  4
  Image size in pixels (cross/along track)      4,992 × 3,328
  Physical pixel size                           7.2 micron
  Physical image format (cross/along track)     34.7 mm × 23.9 mm
  Focal length                                  33 mm
  Lens aperture                                 f/4

General
  Shutter speed options                         1/500 sec to 1/32 sec
  Forward motion compensation                   TDI controlled, 50 pixels
  Frame rate                                    1 frame per 1.35 sec
  A/DC bandwidth                                14 bit (16,384 levels)
  Radiometric resolution                        > 12 bit/channel
CHAPTER 5
RESULTS

Data Processing Results

The processing was done in the MATLAB and ArcGIS environments.

Identification of Ground from the LiDAR Dataset

The LiDAR dataset is very dense, with a high number of LiDAR hits on ground and near-ground features. Table 5-1 reports the percentage of LiDAR hits on ground and near-ground features. The results show that the near-ground features can be resolved with the given LiDAR dataset. Figure 5-1 shows the ground points from the LiDAR dataset.

Segmentation of LiDAR Points

The segmentation is done based on height differences. This approach reduces the size of the dataset. The three segments are ground points, near-ground points (shrubs and downed trees), and non-ground points. The results are shown visually in Figure 5-2.

Haze Filtering on the Aerial Image

Figure 5-3 depicts the spectral signatures of the five classes: downed trees, fire lanes, shrubs, standing trees, and ground. Downed trees and fire lanes had high reflectance values but overlapping (mixed) signatures. Figure 5-4 depicts the spectral signatures of the same five classes after the haze reduction filtering was applied: the downed tree and fire lane classes then had distinct signatures, and the signature of the downed trees was flat, with very high brightness values in all three bands. Figure 5-5 visually compares the haze filtered aerial image with the non-haze-filtered aerial image.
Feature Selection

The features selected from the LiDAR dataset are derived as follows. The segmented near non-ground (NNG) points class, with a pixel size of 0.10 m, is selected, since this class contains all the downed tree returns. Range (z) is locally interpolated in the NNG LiDAR class using the minimum value of z in each pixel (Figure 5-6); the zoomed portion of Figure 5-6 shows the LiDAR hits on a downed tree with minimum range values in the near-ground LiDAR points class. Echo (r) is locally interpolated using the last return number in each pixel (Figure 5-8). LiDAR intensity (i_L) is locally interpolated using the maximum value in each pixel, since downed trees have high intensity values in LiDAR data (Figure 5-7). The image intensities (v_R, v_G, v_B: the values in the red, green, and blue bands) from the aerial image are drawn from the signature analysis results. The pixel size of the image is 0.10 m.

Support Vector Machines Classification Results

The support vector machines (SVM) classification algorithm is implemented with a modified version of the LIBSVM software (Chang and Lin 2001). All variables are scaled to the range [0, 1] to avoid features in larger numeric ranges dominating. In this classification study, we focus on testing (i) the kernel types and (ii) the penalty parameter of the error (C); see Chapter 3 for a discussion of the importance of the kernel types and the variable C, where C provides the tradeoff between margin maximization and error minimization.
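A minimal sketch of this kernel and C experiment, assuming the feature vectors and labels have already been assembled into arrays; scikit-learn's SVC (a LIBSVM wrapper) stands in here for the study's modified LIBSVM build.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    # Hypothetical training arrays: rows are pixels, columns are the six
    # features (v_R, v_G, v_B, z, r, i_L); labels: 1 = downed tree, 0 = other.
    X_train = np.random.rand(414, 6)
    y_train = (np.random.rand(414) > 0.5).astype(int)

    # Scale every feature to [0, 1], as done in the study.
    X_scaled = MinMaxScaler().fit_transform(X_train)

    # Sweep the kernel types and soft margin values tested in this chapter.
    for kernel in ("linear", "sigmoid", "rbf", "poly"):
        for C in (10, 50, 100):
            clf = SVC(kernel=kernel, C=C, degree=2)  # degree used by 'poly' only
            clf.fit(X_scaled, y_train)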
Training Results

LiDAR training

The training samples for the downed tree class were selected manually from the LiDAR dataset based on visual interpretation of the aerial image together with the LiDAR returns, using LiDAR intensity, LiDAR elevation, and LiDAR return number. The LiDAR intensity varied between 0 and 2047 (11 bit). The downed trees specifically had very high intensities, especially above 400. There were also some returns with low intensity values that nevertheless belonged to the downed tree class; this behavior was exhibited by returns under a closed canopy, whose values were attenuated. Interestingly, however, the LiDAR downed tree returns had high intensity values under a shadowed open canopy (values above 600, Figure 5-9), whereas their image intensity counterparts had lower intensity values.

Previous studies have reported that a training sample of around 1% of the total dataset for SVM classification results in good classification accuracy. Within the overall current dataset of approximately 5.5 million points, 821 points were true downed tree LiDAR points. The total number of LiDAR points in the 5.5 million point dataset was approximately 70,000; it should be noted that these 70,000 points were segmented data from the LiDAR processing stages, thus eliminating further computation and saving time. All datasets were resampled to a 0.10 m pixel size. The point spacing of the LiDAR dataset was 0.25 m. The training dataset constituted approximately 1% of the overall LiDAR points: the non-downed tree training set had 237 samples (237 out of approximately 69,000 true non-downed tree samples: 0.34%) and the downed tree training set had 177 samples (177 out of 821 true downed tree samples: 22%). The results of the classification are discussed in the following sections.
Image training

The training samples for the downed tree class were selected manually from the aerial image dataset based on visual interpretation of the aerial image, using the red, green, and blue intensity values. The downed trees specifically had very high intensities, above 250 in each band. These conclusions were derived from the signature analysis of the haze reduction filtered aerial image (see Chapter 3 for more details). There were also some downed tree samples with low intensity values that nevertheless belonged to the downed tree class; this behavior was exhibited by intensity values under a closed canopy, caused by obstruction by the standing trees and by their shadows, among other reasons. Within the overall current dataset of approximately 5.5 million points, 5227 points were true downed tree image points. The non-downed tree training set had 982 samples and the downed tree training set had 1011 samples (1011 out of 5227 true downed tree samples: 19%). The results of the classification are discussed in the following sections.

Fused LiDAR image training

The training samples for the downed tree class were selected from the six features of the LiDAR and image data (the fused dataset) after a careful visual interpretation of the LiDAR and aerial imagery, and from the LiDAR and aerial image training datasets created above. Within the overall current dataset of approximately 5.5 million points, 5289 points were true downed tree points. The non-downed tree training set had 1031 samples and the downed tree training set had 1052 samples (1052 out of 5289 true downed tree samples: 20%). The results of the classification are discussed in the following sections. Table 5-2 summarizes the training samples for the three categories of SVM training.
Kappa Coefficient

Kappa analysis (Cohen 1960) was applied to evaluate the confusion matrices derived from the classification. The kappa coefficient was computed as:

\kappa = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} x_{i+} x_{+i}}{N^2 - \sum_{i=1}^{r} x_{i+} x_{+i}}    (5-1)

where r is the number of rows in the matrix, x_{ii} is the number of observations in row i and column i, x_{i+} and x_{+i} are the marginal totals of row i and column i, respectively, and N is the total number of observations. In this study, the kappa coefficient was calculated as a measure of classification accuracy: a kappa coefficient was computed for each matrix, measuring how well the classification agrees with the reference data beyond chance agreement.
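Equation (5-1) can be implemented directly; the sketch below uses a generic, illustrative confusion matrix rather than one of the study's matrices.

    import numpy as np

    def kappa(confusion):
        """Cohen's kappa from an r x r confusion matrix (Equation 5-1)."""
        n = confusion.sum()                 # N: total observations
        diag = np.trace(confusion)          # sum of x_ii
        marg = confusion.sum(axis=1) @ confusion.sum(axis=0)  # sum of x_i+ * x_+i
        return (n * diag - marg) / (n ** 2 - marg)

    example = np.array([[40, 10], [5, 45]])
    print(kappa(example))  # 0.7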
Classification Results

The results from this study compare the accuracy assessments of the three categories: the LiDAR, image, and fused LiDAR image datasets. The kernels experimented with for this classification were the linear, sigmoid, radial basis function (RBF), and polynomial kernels (with degrees 1 through 4). A broad range of C values was tested: C = 10, 50, and 100.

LiDAR classification result

The three features derived from the LiDAR data, namely z (elevation), r (return number), and i_L (LiDAR intensity), were utilized for the SVM classification. The training set had a total of 414 pixels and the testing data comprised 5,493,107 pixels. Table 5-3 reports the results of the classification using only LiDAR features. The obtained accuracies were better than 90% for all kernels, even though the downed tree class had a severe data imbalance relative to the non-downed tree class, which made the accuracy assessment a challenge.

The choice of kernel produced only slight variations in the classification results. Inconsistent results were obtained with the sigmoid kernel function at lower values of C (i.e., C = 10), but as the penalty parameter C increased, the sigmoid kernel behaved consistently with the other kernel types. Testing was performed to determine how robust the algorithm is to variations in C, which trades off margin maximization against error minimization. Table 5-4 reports the accuracy of the classification for the three values of C, with and without the sigmoid kernel, and Figure 5-10 shows the classification accuracy using the different kernels and corresponding C values. The classification results were not affected by varying values of C except for the sigmoid kernel, which is seen converging for increasing values of C.

Table 5-5 reorganizes the accuracy results to identify the two types of errors: misclassified points by correct class (Error I) and misclassified points by prediction (Error II). Error I was less than 2% for both classes in the binary SVM classification of the LiDAR data. A false positive is a non-downed tree point classified as a downed tree point, and a true positive is a downed tree point classified as a downed tree point. [Note: total samples = 5,493,107; true downed tree samples = 821; true non-downed tree samples = 5,491,152; correctly classified samples = 5,491,958; overall accuracy = 99.96%; kappa coefficient = 0.99.] Figure 5-11 visually depicts the LiDAR classification of downed trees.
Image classification result

The three features derived from the aerial image, namely v_R (intensity value in the red band), v_G (intensity value in the green band), and v_B (intensity value in the blue band), were used to classify downed trees in the aerial image data. The training set consisted of 1993 pixels and the testing data comprised 5,493,107 pixels. Table 5-6 reports the results of the classification using only image features with the four different kernel functions (linear, sigmoid, RBF, and polynomial). The obtained accuracies were better than 60% for the kernels mentioned, and the choice of kernel produced clear variations in the results. The linear, sigmoid, and RBF kernels resulted in approximately 60% classification accuracy, but the polynomial kernel provided a classification accuracy of about 97% at the cost of a high false alarm rate. This is attributed to the mixing of intensity values among the fire lanes, some soil surfaces, and the downed trees. A second reason for the high false alarm rate is that the classification scheme is pixel based, so each pixel carries the same weight during the classification. Figure 5-12 shows the false alarm rate for downed trees in the image data compared to the LiDAR data. There is clearly a high rate of false alarm for downed trees in the aerial image classification results, for the reasons given above; the false alarm rate of the LiDAR classification, while lower, is not negligible either. The overall high rate of false alarm is explained by the severe data imbalance between the two classes, downed trees and non-downed trees.
Testing was performed to determine how robust the algorithm is to variations in C. Table 5-7 reports the accuracy of the classification for the three values of C, with and without the polynomial kernel. The variations in C did not affect the classification accuracy drastically except for the polynomial kernel: for increasing values of C, the linear, sigmoid, and RBF kernels provided consistent results, and the polynomial kernel is seen converging. The overall classification accuracy of downed trees using aerial image derived features is 69.5%. Figure 5-13 shows the classification accuracy using the different kernels and corresponding C values.

Table 5-8 reorganizes the accuracy results to identify the two types of errors: misclassified points by correct class (Error I) and misclassified points by prediction (Error II). Error I is reported as 35.09% and Error II as 95.72% for the binary SVM classification of the image data. A false positive is a non-downed tree point classified as a downed tree point, and a true positive is a downed tree point classified as a downed tree point. [Note: total samples = 5,495,556; true downed tree samples = 3,393; true non-downed tree samples = 5,411,626; correctly classified samples = 5,415,019; overall accuracy = 64.91%; kappa coefficient = 0.65.] Figure 5-14 visually depicts the image classification of downed trees.

LiDAR image fused classification result

The six features derived from the aerial image and LiDAR, namely v_R (intensity value in the red band), v_G (intensity value in the green band), v_B (intensity value in the blue band), z (elevation), r (return number), and i_L (LiDAR intensity), were utilized for the SVM classification. The training set consisted of 2083 pixels and the testing data comprised 5,493,107 pixels.
Table 5-9 reports the results of the classification using the six LiDAR image fused features with the four different kernel functions (linear, sigmoid, RBF, and polynomial). The obtained accuracies were better than 80% for downed tree classification using the kernels mentioned, and the choice of kernel produced variations in the results. The linear, sigmoid, and RBF kernels resulted in approximately 80% classification accuracy, but the polynomial kernel provided a classification accuracy varying between 87% and 95% at the cost of a high false alarm rate. Table 5-10 reports the accuracy of the classification for the three values of C, with and without the polynomial kernel. The variations in C did not affect the classification accuracy except for the polynomial and RBF kernels: for increasing values of C, the linear and sigmoid kernels provided consistent results, and the polynomial kernel is seen converging when C = 100. The overall classification accuracy of downed trees using the fused features is 87.5%. Figure 5-15 shows the classification accuracy using the different kernels and corresponding C values. The linear and sigmoid kernels provided consistent results; the RBF and polynomial kernels provided higher accuracy for lower values of C (C = 10) at the cost of a higher false alarm rate, but these two kernels are seen converging to similar results for C = 100. Figure 5-16 shows the false alarm rate for downed trees in the three classification categories; there is clearly a high rate of false alarm for downed trees in the fused LiDAR image classification results.

Table 5-11 reorganizes the accuracy results to identify the two types of errors: misclassified points by correct class (Error I) and misclassified points by prediction (Error II).
Error I is reported as 11.87% and Error II as 92.81% for the binary SVM classification of the fused LiDAR image data. An Error I of 11.87%, even with a high false alarm rate, represents a superior classification result compared to the image only classification. The higher accuracy is achieved by fusing the LiDAR features with the image features: the superior classification accuracy of the fused dataset, even on a highly imbalanced dataset, is achievable because of the contribution of the LiDAR derived features. Some of the fire lane and soil pixels, which had the same high intensity values as downed trees in the aerial image, were better differentiated in the fused classification, since the LiDAR intensity values differed among the fire lanes, soil, and downed trees. A false positive is a non-downed tree point classified as a downed tree point, and a true positive is a downed tree point classified as a downed tree point. Figure 5-17 visually compares the LiDAR image classification using the linear and polynomial kernels.

LiDAR no classification result

In addition to the Error I and Error II of the confusion matrices above, a special error category applies to the LiDAR data. Although the LiDAR derived features detected downed trees in the LiDAR data with better than 90% accuracy, there were not sufficient LiDAR hits on all the downed trees in the study area: the total number of true downed tree LiDAR pixels was only 821, compared with 5829 true downed tree pixels in the fused LiDAR aerial imagery. Therefore, in a real scenario, not all downed tree pixels are identified by the LiDAR SVM classification. This result is reported as a no classification error of 86% (5023 pixels out of 5829 true downed tree pixels not classified): the fraction of downed trees not classified by the LiDAR SVM formulation in the study area.
Image intensity LiDAR intensity based classification result

The combination of the intensity features v_R (intensity value in the red band), v_G (intensity value in the green band), v_B (intensity value in the blue band), and i_L (LiDAR intensity) in the fused image and LiDAR data was experimented with. The training sample size and test data are the same as for the fused classification, except that the LiDAR return number and LiDAR elevation are omitted. Table 5-12 reports the classification accuracy, false alarm rate, and kappa coefficient. The classification accuracy for downed trees is 88%, comparable to the fused image LiDAR results. The purpose of this analysis is to interpret the contribution of each of the LiDAR features, in this instance the LiDAR intensity. The false alarm rate is high at 92%, similar to that of the full fused feature set but lower than that of the image features alone. Therefore, the LiDAR intensity controls the false alarm rate better than the image derived feature SVM classification alone. Figure 5-18 A shows the result visually.

Image intensity LiDAR elevation based classification result

The combination of the image intensities v_R, v_G, and v_B with the LiDAR elevation z in the fused image and LiDAR data was experimented with. The training sample size and test data are the same as for the fused classification; the effect of utilizing only the LiDAR elevation along with the image intensities was analyzed. Table 5-13 reports the classification accuracy, false alarm rate, and kappa coefficient. The classification accuracy for downed trees is 85%, slightly lower than the fused image LiDAR results. The purpose of this analysis is to interpret the contribution of the LiDAR elevation to the fused image LiDAR SVM classification.
The false alarm rate is high at 94%, slightly higher than that of the full fused feature set but lower than that of the image features alone. Therefore, the LiDAR elevation controls the false alarm rate better than the image derived feature SVM classification alone, though not as well as the LiDAR intensity contribution to the fused image LiDAR classification of downed trees. Figure 5-18 B shows the result visually.

Image intensity LiDAR return number based classification result

The combination of the image intensities v_R, v_G, and v_B with the LiDAR return number r in the fused image and LiDAR classification was experimented with. The training and test sample sizes are the same as for the fused classification. Table 5-14 reports the classification accuracy, false alarm rate, and kappa coefficient. The classification accuracy for downed trees is 87%, about the same as the fused image LiDAR results. The purpose of this analysis is to interpret the contribution of the LiDAR return number to the fused image LiDAR SVM classification. The false alarm rate is high at 96%, higher than that of the full fused feature set and comparable to that of the image features classification alone. Therefore, the LiDAR return number controls the false alarm rate better than the image derived feature SVM classification alone, but not as well as the LiDAR intensity contribution to the fused image LiDAR classification of downed trees. The results of the image intensity fused with the LiDAR return number are shown visually in Figure 5-18 C.

Effect of SVM and Kernel Parameters

Training an SVM finds the large margin hyperplane, i.e., it sets the parameters w and b (Equation 3-1 in Chapter 3). The SVM has another set of parameters: the soft margin constant C (the penalty of error), and any parameters the kernel function may depend on, for example the width of the RBF kernel or the degree of the polynomial kernel. Certain parameters, such as the width of the RBF kernel, are beyond the scope of this study.
The first consideration is the soft margin constant. The parameter C represents the tradeoff between minimizing the training set error and maximizing the margin. For a large value of C, a large penalty is assigned to errors, resulting in the hyperplane coming close to several other data points. When C is decreased, those points become margin errors, providing a much larger margin for the rest of the data.

Many datasets in practical situations and applications are unbalanced, i.e., one class contains many more samples than the other. Unbalanced datasets can present a challenge when training a classifier, and SVMs are no exception. The usual way to produce a high accuracy classifier on imbalanced data is to assign any sample to the majority class; while highly accurate under the standard measure of accuracy, such a classifier is not very useful. This approach is followed in this study, where the non-downed tree class has a far larger sample size than the downed tree class. An alternative would have been to assign different costs/penalty parameters to each class, as sketched below.
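A minimal sketch of the per-class penalty alternative mentioned above; the weights shown are illustrative and were not used in the study.

    import numpy as np
    from sklearn.svm import SVC

    # Penalize errors on the rare downed tree class (label 1) more heavily
    # than errors on the abundant non-downed tree class (label 0).
    clf = SVC(kernel="rbf", C=10, class_weight={0: 1.0, 1: 50.0})

    X = np.random.rand(200, 6)
    y = np.array([0] * 180 + [1] * 20)  # deterministic imbalanced labels
    clf.fit(X, y)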
Large margin classifiers are known to be sensitive to the way features are scaled. The main advantage of scaling is to avoid attributes in greater numeric ranges dominating those in smaller ranges; a second advantage is to ease the numerical difficulties of the calculation. Because kernel values usually depend on the inner products of feature vectors, large values may cause numerical problems, so it is essential to normalize the data; the accuracy of an SVM can degrade severely if the data are not normalized. Normalization can be performed at the level of the input features. As in many other applications, each feature is measured on a different scale and has a different range of possible values; Table 5-15 reports the ranges of each feature used in this study. The normalization adopted in this study was to normalize each feature to be a unit vector, as sketched below. Normalizing data to unit vectors reduces the dimensionality of the data by one, since the data are projected onto the unit sphere; this procedure suits a high dimensional dataset, as in this study, but is not suitable for low dimensional data.
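A sketch of the unit vector normalization described above. Whether the normalization was applied per sample or per feature column is not specified in the text, so the per-sample form is shown as an assumption.

    import numpy as np

    def to_unit_vectors(X):
        """Scale each row of X to unit Euclidean norm."""
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.where(norms == 0.0, 1.0, norms)  # guard zero-norm rows

    X_unit = to_unit_vectors(np.array([[3.0, 4.0], [1.0, 0.0]]))
    # [[0.6, 0.8], [1.0, 0.0]]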
The RBF kernel nonlinearly maps samples into a higher dimensional space, covering cases where the relationship between the classes and the features is nonlinear. A second reason the RBF kernel is a reasonable choice is its small number of hyperparameters, which influences the complexity of the model; the polynomial kernel, for example, has more hyperparameters than the RBF kernel. If the number of features is large (which is not true in this study, where the maximum number of features is six), there is no need to map the data into a higher dimensional feature space; that is, the nonlinear kernels do not improve the performance. In such instances linear kernels are good enough, and only the value of C is used as the hyperparameter. In this study, the number of samples (approximately 5.5 million points) is much larger than the number of features (six); in such cases, the data are mapped to a higher dimensional feature space using the nonlinear kernels.

In the case of the LiDAR classification, superior results were achieved regardless of the kernel type and C values. The reason is that the LiDAR dataset was well processed and segmented to remove most of the unwanted data points and ease the computation. The shortcomings are the point spacing and the overall number of samples being classified. The point spacing of the LiDAR dataset is 0.25 m, which given present technology is considered high resolution LiDAR data, but compared to the image resolution of 0.10 m it is not superior. The overall number of LiDAR points considered for classification was 70,000, compared to the 5.5 million points for the image classification. Within the 70,000 LiDAR points, the roughly 800 downed tree points were classified very accurately by the LiDAR classification, and the false alarm rate is well controlled with the LiDAR data.

In the case of the image classification, reasonable results with an overall accuracy of 65% were achieved, but with a high false alarm rate of 97%, compared to the LiDAR false alarm rate of 60%. The polynomial kernel with lower values of C produced superior results but converged with the other kernels for increasing values of C. This is attributed to the mixture of intensity values in the red, green, and blue bands across the various classes within the aerial image: although the haze filtering produced distinct signatures, there was still a conflict between the signatures of the downed trees and other bright classes such as the fire lanes, which led to the high false alarm rate.

When all six features from the LiDAR and image datasets were fused, the overall classification accuracy of downed trees was 88%, better than the image based classification. It is therefore concluded that LiDAR derived features, when fused with image derived features, produce better results; however, the false alarm rate remains high. Compared to the LiDAR elevation (z) and the LiDAR return number (r), the LiDAR intensity produced comparatively better results when fused with the image intensities.
Canny Edge Correction Filter Result

The false alarm rate from the fusion results above is very high (Figure 5-19). The Canny edge correction filter was therefore applied to the classification results; Figure 5-20 shows the before and after Canny edge results. The final classification accuracy decreased slightly (Figure 5-21), but at the benefit of a drastic decrease in the false alarm rate (Figure 5-22). These results were carried into the region fitting algorithm.

Region Based Object Fitting Results

The Canny edge corrected, fused LiDAR image classification result is utilized for the object based region fitting technique. Only the downed tree class results are used to fit the objects. Downed trees with continuous points (more than five points in any direction) are considered for fitting, which considerably reduced the inclusion of noise (speckled, misclassified points from the classification result).

The first step is the fitting of the convex hull, which returns the 2-D convex hull of the points (x, y), where x and y are column vectors. The convex hull K is expressed in terms of a vector of point indices arranged in a counter-clockwise cycle around the hull. The fitting of the convex hull to one downed tree is shown in Figure 5-23 A. The centroid of the region is computed along with the major and minor axes, and finally the furthest boundary points with respect to the major and minor axes are determined; these parameters are shown in Figure 5-23. Figure 5-24 shows all the downed trees computed with the region based object fitting technique. Using the furthest boundary points, the width (diameter) and length (height) of the downed trees are calculated: the computed width is the diameter of the downed tree and the computed length is the height of the downed tree. These are substituted to calculate the volume, as sketched below.
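Continuing the earlier sketches (boundary, centroid, theta), the furthest point step and the resulting dimensions and cylinder volume can be written as follows; the pixel size factor is a placeholder.

    import numpy as np
    from scipy.spatial import ConvexHull

    def tree_dimensions(points, pixel_size=0.10):
        """Width (diameter) and length (height) of one region, in metres."""
        hull = ConvexHull(points)
        boundary = points[hull.vertices]
        centroid = boundary.mean(axis=0)
        u = boundary[:, 0] - centroid[0]
        v = boundary[:, 1] - centroid[1]
        theta = 0.5 * np.arctan2(2.0 * np.sum(u * v), np.sum(u * u - v * v))
        # Signed distances to the major axis (V in the text) and minor axis.
        d_major = v * np.cos(theta) - u * np.sin(theta)
        d_minor = v * np.sin(theta) + u * np.cos(theta)
        width = d_major.max() - d_major.min()   # furthest points across the trunk
        length = d_minor.max() - d_minor.min()  # furthest points along the trunk
        return width * pixel_size, length * pixel_size

    def cylinder_volume(d, h):
        """Downed tree volume under the cylindrical assumption."""
        return np.pi * (d / 2.0) ** 2 * h

    pts = np.array([[0, 0], [40, 2], [80, 5], [78, 9], [35, 6], [2, 4]], dtype=float)
    d, h = tree_dimensions(pts)
    volume = cylinder_volume(d, h)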
The region based object fitting technique has been successfully applied to calculate the volume of the downed trees. A total of 64 trees were fitted, with a miss of five trees; an overall detection accuracy of 92.3% of the downed trees was achieved with this technique. Two trees were not fitted well due to noise from the misclassification; one of them is shown in Figure 5-26.

Figure 5-25 shows the five trees selected for assessing the accuracy of the object based region fitting. These trees were measured manually in the field and compared against the parameters estimated by the algorithm. The diameter and height of each tree trunk were measured (d_m, h_m) and estimated (d_c, h_c); the diameter close to the base of the trunk was the one measured in the field. The volume was calculated, and the relative error in the volume was calculated using Equation 5-2:

\frac{\Delta V}{V} = 1 - \left( \frac{d_m}{d_c} \right)^2 \frac{h_m}{h_c}    (5-2)

A method for the fitting of the convex hull was presented above. The approach is based on the boundary points, as opposed to the internal points, of the region of interest, and rests on simple geometry. Since the method depends on the boundary points and the furthest boundary points of the object, it is considered a novel fitting method for binary images. The boundary points are fitted using the convex hull process, and the center of the object is computed by averaging the x and y coordinates of the boundary points. Using these boundary points, the directions of the major and minor axes are determined; using this orientation of the object, the four furthest vertices of the object are computed. These vertices are used to calculate the width and length of the individual downed trees, providing a direct method to compute the diameter and length of the downed tree object.
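As a worked check of Equation (5-2), take tree 1 from Tables 5-16 through 5-18 (d_m = 25 cm, d_c = 40 cm, h_m = 9.2 m, h_c = 7.0 m):

\frac{\Delta V}{V} = 1 - \left( \frac{25}{40} \right)^2 \frac{9.2}{7.0} = 1 - (0.390625)(1.3143) \approx 0.487

i.e., an error of about 48.7%, in agreement with the 48.66% reported in Table 5-18. The cylindrical volumes themselves follow from the cylinder formula: V_m = \pi (0.125)^2 (9.2) \approx 0.45 m³ and V_c = \pi (0.20)^2 (7.0) \approx 0.88 m³.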
The results of the parameter estimation are discussed as follows. Five trees were measured in the field, and the values computed by the algorithm for the same trees were compared. The estimated diameters in Table 5-16 show errors ranging between 3.33% and 37.5%. The highest error, recorded for tree number one, was due to the algorithm computing an incorrect diameter: noise in the classified data caused the width of the tree to be computed incorrectly. The computation error is shown in Figure 5-26. Similarly, the heights of the five downed trees were computed and measured, as shown in Table 5-17; the estimated heights show errors ranging between 3.70% and 23.91%. Finally, the volume of each individual tree was estimated from the measured and computed parameters, as shown in Table 5-18. Tree number one resulted in approximately 50% error, due to the incorrect computations from the algorithm, but the errors for the other trees were lower. Such estimates could be used for disaster management applications by local and federal agencies.
Table 5-1. LiDAR hits on ground and near-ground versus non-ground features

Non-ground LiDAR points       Ground and near-ground LiDAR points
3,718,845 (52.65%)            3,344,592 (47.35%)

Table 5-2. Training samples

Training category                Training samples
1. LiDAR only features           414
2. Image only features           1,993
3. LiDAR image fused features    2,083

Table 5-3. Classification accuracy of downed trees in LiDAR data using different kernels

Kernel              Accuracy (%)
Linear              98.72
Sigmoid             92.33
RBF                 98.05
Polynomial deg 1    98.05
Polynomial deg 2    98.05
Polynomial deg 3    98.05
Polynomial deg 4    98.05

Table 5-4. Classification accuracy of downed trees in LiDAR data for different values of C, with and without the sigmoid kernel

C      With sigmoid kernel    Without sigmoid kernel
10     96.65                  98.09
50     97.96                  98.09
100    98.02                  98.09

Table 5-5. Confusion matrix of the LiDAR SVM classification

Class              Downed tree    Non-downed tree    Error I (%)
Downed tree        806            15                 1.83
Non-downed tree    1,134          5,491,152          0
Error II (%)       58.45          0

Table 5-6. Classification accuracy of downed trees in image data using different kernels

Kernel              Accuracy (%)
Linear              64.92
Sigmoid             62.21
RBF                 65.01
Polynomial deg 1    97.55
Polynomial deg 2    97.55
Polynomial deg 3    97.55
Polynomial deg 4    97.55

Table 5-7. Classification accuracy of downed trees in image data for different values of C, with and without the polynomial kernel

C      With polynomial kernel    Without polynomial kernel
10     72.42                     64.05
50     68.87                     61.86
100    67.25                     61.36

Table 5-8. Confusion matrix of the image SVM classification, with kappa = 0.65

Class              Downed tree    Non-downed tree    Error I (%)
Downed tree        3,393          1,834              35.09
Non-downed tree    76,290         5,411,626          1.39
Error II (%)       95.72          0

Table 5-9. Classification accuracy of downed trees in fused LiDAR image data using different kernels

Kernel              Accuracy (%)
Linear              88.15
Sigmoid             84.63
RBF                 88.32
Polynomial deg 1    95.93
Polynomial deg 2    95.93
Polynomial deg 3    95.93
Polynomial deg 4    95.93

Table 5-10. Classification accuracy of downed trees in fused LiDAR image data for different values of C, with and without the polynomial kernel

C      With polynomial kernel    Without polynomial kernel
10     89.26                     87.03
50     86.60                     86.16
100    86.68                     86.33

Table 5-11. Confusion matrix of the fused LiDAR image SVM classification, with kappa = 0.88

Class              Downed tree    Non-downed tree    Error I (%)
Downed tree        5,138          692                11.87
Non-downed tree    66,367         5,420,911          1.21
Error II (%)       92.81          0

Table 5-12. Classification accuracy of downed trees using fused intensity data

Features              Classification accuracy (%)    False alarm (%)    Misclassification (%)    Kappa
v_R, v_G, v_B, i_L    88.00                          92.54              13.63                    0.88

Table 5-13. Classification accuracy of downed trees using fused image intensity and LiDAR elevation data

Features              Classification accuracy (%)    False alarm (%)    Misclassification (%)    Kappa
v_R, v_G, v_B, z      84.68                          94.10              18.11                    0.85

Table 5-14. Classification accuracy of downed trees using fused image intensity and LiDAR return number data

Features              Classification accuracy (%)    False alarm (%)    Misclassification (%)    Kappa
v_R, v_G, v_B, r      86.91                          95.74              15.16                    0.87

Table 5-15. Data ranges of the features

Feature                 Symbol    Data range
Image red               v_R       0-255 (8 bit)
Image green             v_G       0-255 (8 bit)
Image blue              v_B       0-255 (8 bit)
LiDAR intensity         i_L       0-2047 (11 bit)
LiDAR return number     r         0-7
LiDAR elevation         z         10-18 m

Table 5-16. Parameter estimation: diameter

Tree    Measured (cm)    Computed (cm)    Error (%)
1       25               40               37.50
2       29               30               3.33
3       22               20               10.00
4       32               30               6.67
5       19               20               5.00

Table 5-17. Parameter estimation: height

Tree    Measured (m)     Computed (m)     Error (%)
1       9.2              7.0              23.91
2       9.5              8.6              11.63
3       9.3              8.7              6.32
4       7.1              6.5              8.45
5       8.1              7.8              3.70

Table 5-18. Parameter estimation: volume

Tree    Measured (m³)    Computed (m³)    Error (%)
1       0.45             0.88             48.66
2       0.63             0.61             3.22
3       0.35             0.27             29.34
4       0.57             0.46             24.28
5       0.23             0.25             6.28
75 Figure 5 1 Ground points from LiDAR dataset

PAGE 76

76 GROUND NEAR NON GROUND Figure 5 2. Segmentation of LiDAR points. The legend provides the segments.

PAGE 77

77 Figure 5 3 Signature a nalysis on the (non haze filtered) original aerial image Downed trees Fire lanes Ground Standing trees Shrubs

PAGE 78

78 Figure 5 4 Signature a nalysis on the haze reduction filtered aerial image Downed trees Fire lanes Ground Standing trees Shrubs

PAGE 79

79 A B Figure 5 5. Haze filtering on aerial image. A: Haze filtered aerial image, B: Non haze filtered aerial image

PAGE 80

80 Figure 5 6. LiDAR range values on a transparent aerial image

PAGE 81

81 Figure 5 7. LiDAR intensity values on a transparent aerial image

PAGE 82

82 Figure 5 8. LiDAR echo values on a tr ansparent aerial image

PAGE 83

83 Fig ure 5 9 P art of downed tree shadowed (circle) from standing trees. Red highlight over point of interest with LiDAR intensity = 677 and corresponding image intensity [RGB] = [62 73 73]

PAGE 84

84 Figure 5 10 Classification accuracy of downed trees in LiDAR data using different kernels and C values

PAGE 85

85 Figure 5 1 1 LiDAR SVM classification visual

PAGE 86

86 Figure 5 1 2 Comparison of false alarm of downed trees in LiDAR and image classification Figure 5 1 3 Classification acc uracy of downed trees in image data using different kernels and C values

PAGE 87

87 Figure 5 1 4 Image SVM classification visual

PAGE 88

88 Figure 5 1 5 Classification ac curacy of downed trees in LiDAR image fused data using different kernels and C values Figure 5 1 6 Comparison of false alarm of downed trees in LiDAR, image and fused LiDAR image classification

PAGE 89

89 Figure 5 1 7 Fused LiDAR image SVM classification visual

PAGE 90

90 A B C Figure 5 18 Image and individual LiDAR SVM classification. A: Image intensity LiDAR intensity, B: Image intensity LiDAR elevation, C: Image intensity LiDAR echo

Figure 5-19. SVM classification with a high false alarm rate

Figure 5-20. Canny edge correction filter. A: before filtering; B: after filtering

Figure 5-21. Classification accuracy before and after the Canny edge correction filter

Figure 5-22. False alarm rate comparison before and after the Canny edge correction filter

Figure 5-23. Region-based object fitting parameters

Figure 5-24. Extracted downed trees

Figure 5-25. Trees selected for accuracy estimation

Figure 5-26. Error in estimating tree parameters

CHAPTER 6
CONCLUSIONS AND FUTURE DIRECTIONS

Conclusions

Small footprint full waveform LiDAR technology can provide spatially dense coverage over forest canopies and can penetrate the canopy by illuminating the ground and understory through small gaps in the crown layer. These advantages, relative to other remote sensing technologies such as passive optical imaging, allow the identification of individual downed trees and the estimation of their structural parameters. Because of the sparse LiDAR hits on downed trees, however, fusing aerial imagery improves their detection. An assessment of the fusion of LiDAR and aerial imagery for accurate downed tree classification was carried out using the support vector machines algorithm.

The entire process was initially developed on the raw point cloud data, to avoid the loss of 3D information that comes from interpolating point data to images. Working on point data with high point density requires more computational time, but this was alleviated by dividing the area into smaller patches. The LiDAR segmented data and the aerial imagery were then fused using features derived from both datasets, as sketched below.
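As an illustration of this data organization, the following minimal sketch assembles the six features of Table 5-15 into a single design matrix and trains an SVM classifier. It is not the study's code: the arrays are random placeholders standing in for the real co-registered point attributes, and scikit-learn's SVC (a wrapper around the LIBSVM library cited in the references) stands in for the study's LIBSVM setup.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    # Placeholder data: one row per LiDAR point, with the RGB intensity of
    # the co-registered aerial-image pixel attached (names follow Table 5-15).
    n = 1000
    rng = np.random.default_rng(0)
    v_rgb = rng.integers(0, 256, (n, 3))    # image red/green/blue (8 bit)
    i_l = rng.integers(0, 2049, (n, 1))     # LiDAR intensity (11 bit)
    z_l = rng.uniform(10.0, 18.0, (n, 1))   # LiDAR elevation (m)
    r_l = rng.integers(0, 8, (n, 1))        # LiDAR return number
    y = rng.integers(0, 2, n)               # 1 = downed tree (dummy labels)

    # Fuse the six features into one matrix and rescale, since the raw
    # ranges differ by an order of magnitude (Table 5-15).
    X = MinMaxScaler().fit_transform(np.hstack([v_rgb, i_l, z_l, r_l]))

    clf = SVC(kernel="rbf", C=10).fit(X, y)

Processing the study area in smaller patches, as described above, amounts to running such a fit per patch rather than on the full point cloud at once.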

The proposed algorithm performed very well overall for the classification using LiDAR-derived features only (overall accuracy = 99%), and with a lower false alarm rate than the image-only and fused classifications. The lowest accuracy for downed trees occurred with the aerial-image-only classification (overall accuracy = 65%). However, performance improved by more than 20 percentage points when LiDAR and aerial imagery were fused (overall accuracy = 88%), with intensity being the major contributing feature. The results confirm that the fusion of small footprint full waveform digitized LiDAR and aerial imagery can be utilized for the extraction of downed trees and closely related applications.

The linear, sigmoid, radial basis, and polynomial kernels all provided stable performance in the LiDAR classification. The penalty parameter C, which determines the trade-off between a hard margin and a soft margin classification, had a major effect on the sigmoid kernel in the LiDAR data classification but not on the other kernels tested; as the value of C increased, the sigmoid kernel behaved consistently with the other kernels. In the aerial-image-only classification, the polynomial kernel performed considerably better than the other kernels, but with a very high false alarm rate. With the fused LiDAR-image classification, the results improved on the image-only results, the polynomial kernel again performing best, though still with a high false alarm rate.

The study showed that LiDAR makes a valuable contribution to the extraction of downed trees. In applications that involve only the identification of downed trees, LiDAR data alone can provide accurate locations and extents of such features. To accurately extract downed trees for structural estimation purposes, the fusion of LiDAR and high resolution aerial imagery can yield better results.

The SVM accuracies were accompanied by high false alarm rates: 58.45% for the LiDAR classification, 95.74% for the image classification, and 93% for the fused classification. The Canny edge correction filter reduced the LiDAR false alarm rate to 35.99%, the image false alarm rate to 48.56%, and the fused false alarm rate to 37.69%. The thresholded Canny edge correction lowered the false positive percentage with only a slight decrease in downed tree accuracy: the LiDAR accuracy decreased from 98.17% to 95.74%, the image accuracy decreased by about 2%, dropping to 63.17%, and the fused LiDAR-image downed tree accuracy decreased from 88.15% to 87.30%.
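The exact Canny correction is the one developed earlier in this dissertation; the sketch below is only a guess at its general shape, keeping an SVM-classified pixel only where a dilated Canny edge map of the aerial image supports it. The function name, thresholds, and window radius are assumptions; OpenCV's Canny implements the Canny (1986) detector cited in the references.

    import cv2
    import numpy as np

    def canny_correct(class_mask, gray, lo=50, hi=150, radius=3):
        """Suppress classified pixels that have no nearby image edge.

        class_mask: uint8 SVM output (255 = downed tree, 0 = background)
        gray:       uint8 grayscale aerial image of the same shape
        """
        edges = cv2.Canny(gray, lo, hi)
        # Dilate the edge map so that small misregistration between the
        # classification mask and the image does not reject true detections.
        kernel = np.ones((2 * radius + 1, 2 * radius + 1), np.uint8)
        support = cv2.dilate(edges, kernel)
        return cv2.bitwise_and(class_mask, support)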

Such estimates could be useful inputs for disaster management processes.

Future Work

The proposed tree classification technique did not utilize various other LiDAR and image features, e.g., image texture, image edges, image color filters, the mean vector across all returned LiDAR pulses, the LiDAR standard deviation, and the LiDAR normalized height. A general problem with most algorithms is that many features cannot be characterized by geometric properties alone, as with the LiDAR elevation and LiDAR echo; to go beyond this, information from other sensors will have to be used. The extraction of suitable features from the fused dataset could yield better results. The kernels and their respective parameters could also be studied more thoroughly in future work; for example, the Gaussian kernel width could be tuned. In the present study, only the kernel types and the variable C were tested and analyzed. An optimal value of C can be computed by cross-validating the dataset; this is a time-consuming search, but it can yield values of C customized to the particular dataset (a sketch of such a search follows below).
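The cross-validated search for C mentioned above might look as follows. The C values mirror those tested in Chapter 5 (10, 50, 100) and the kernel list matches the four kernels examined; the gamma grid, fold count, and scoring choice are assumptions (balanced accuracy is suggested here because downed-tree pixels are heavily outnumbered).

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    param_grid = {
        "kernel": ["linear", "sigmoid", "rbf", "poly"],
        "C": [10, 50, 100],
        "gamma": ["scale", 0.01, 0.1, 1.0],   # Gaussian (RBF) width
    }
    search = GridSearchCV(SVC(), param_grid, cv=5,
                          scoring="balanced_accuracy")
    # search.fit(X, y)   # X, y: the fused feature matrix and labels
    # print(search.best_params_, search.best_score_)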

The relationship between the intensities in the aerial image and in the small footprint full waveform LiDAR is another interesting component of the fusion scenario, especially if near-infrared reflectance is available with the aerial image; in the present study, near-infrared data were not available for the aerial image.

For future studies, the fusion of LiDAR and aerial imagery to detect downed trees after a natural disaster will be pursued further, because such information is necessary for timely post-disaster response efforts by federal agencies. Such fusion techniques encompass more information than single-sensor datasets, but they also contain redundant information, which can be filtered out. To achieve optimal results over sites other than those examined here (particularly sites containing tree species other than loblolly and slash pine), the technique should train the algorithm on local ground truth data. It would be interesting to test the robustness of the classification algorithm on natural forests after a disaster and determine how much accuracy is obtained. The performance would depend, however, on the nature of the understory: LiDAR obtains fewer returns from the understory than from the upper canopy and the ground. Finer representations of understory forests through denser spatial and vertical resolution are expected to yield better accuracy in individual downed tree detection and tree parameter estimation. This may also remove the need to fuse passive optical images with the LiDAR data.

LIST OF REFERENCES

Aosier, B., and M. Kaneko. 2007. Evaluation of the forest damage by typhoon using remote sensing technique. Geoscience and Remote Sensing Symposium (IGARSS 2007), Barcelona, Spain.

Boose, E. R., D. R. Foster, and M. Fluet. 1994. Hurricane impacts to tropical and temperate forest landscapes. Ecological Monographs 64, 369-400.

Burley, S., S. L. Robinson, and J. T. Lundholm. 2008. Post-hurricane vegetation recovery in an urban forest. Landscape and Urban Planning 85, no. 2:111-122.

Canny, J. 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8, no. 6:679-698.

Chambers, J. Q., et al. 2007. Hurricane Katrina's carbon footprint on U.S. Gulf Coast forests. Science 318, 1107.

Chaudhuri, D., and A. Samal. 2007. A simple method for fitting of bounding rectangle to closed regions. Pattern Recognition 40, no. 7:1981-1989.

Chauve, A., et al. 2009. Advanced full-waveform lidar data echo detection: assessing quality of derived terrain and tree height models in an alpine coniferous forest. International Journal of Remote Sensing 30, 5211-5228.

Chih-Chung Chang, and Chih-Jen Lin. 2001. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Cortes, C., and V. Vapnik. 1995. Support-vector networks. Machine Learning 20, 273-297.

Cramer, M. 2005. Digital airborne cameras: status and future.

Dubayah, R. O., and J. B. Drake. 2000. Lidar remote sensing for forestry. Journal of Forestry 98, no. 6:44-46.

Dwyer, E., P. Pasquali, F. Holecz, and O. Arino. 1999. Mapping forest damage caused by the 1999 Lothar storm in Jura (France), using SAR interferometry. ESA Earth Observation Quarterly 65, 28-29.

Escambia County. 2004. Hurricane Ivan storm debris removal. Escambia County Board of County Commissioners meeting, October 22, 2004. Accessed 15 June 2008. Available from http://www.co.escambia.fl.us/documents/Hurricane_Ivan_Status_Report_10-22-04.pdf

Escobedo, F. J., et al. 2009. Hurricane debris and damage assessment for Florida urban forests. Arboriculture and Urban Forestry 35, no. 2:100-106.

Everham, E. M., III, and N. V. Brokaw. 1996. Forest damage and recovery from catastrophic wind. Botanical Review 62, 114-185.

FEMA. 2007. Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 93-288, as amended. FEMA Publication 592.

Flood, M. 2001. Laser altimetry: from science to commercial lidar mapping. Photogrammetric Engineering & Remote Sensing 67, no. 11:1209-1211.

Foster, D. R. 1988. Species and stand response to catastrophic wind in central New England, USA. Journal of Ecology 76, 135-151.

Fransson, J., et al. 2002. Detection of storm-damaged forested areas using airborne CARABAS-II VHF SAR image data. IEEE Transactions on Geoscience and Remote Sensing 40, no. 10:2170-2175.

Gougeon, F. A., et al. 2001. Synergy of airborne laser altimetry and digital videography for individual tree crown delineation.

Graham, R. L., and F. F. Yao. 1983. Finding the convex hull of a simple polygon. Journal of Algorithms 4, 324-331.

Hagan, G. F., and J. L. Smith. 1986. Predicting tree groundline diameter from crown measurements made on 35-mm aerial photography. Photogrammetric Engineering and Remote Sensing 52, no. 5:687-690.

Holmgren, J. 2003. Estimation of forest variables using airborne laser scanning. Department of Forest Economics, Swedish University of Agricultural Sciences.

Holopainen, M., and M. Talvitie. 2007. Effect of data acquisition accuracy on timing of stand harvests and expected net present value. Silva Fennica 40, no. 3:531.

Hoyos, C. D., P. A. Agudelo, P. J. Webster, and J. A. Curry. 2006. Deconvolution of the factors contributing to the increase in global hurricane intensity. Science 312, no. 5770:94.

Huber, M., W. Schickler, S. Hinz, and A. Baumgartner. 2003. Fusion of LIDAR data and aerial imagery for automatic reconstruction of building surfaces. The 2nd GRSS/ISPRS Joint Workshop on Data Fusion and Remote Sensing over Urban Areas, Berlin.

Hug, C., A. Ullrich, and A. Grimm. 2004. Litemapper 5600: a waveform-digitizing LiDAR terrain and vegetation mapping system. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 36, no. Part 8:24-29.

Jacobs, D. M. 2007. Forest inventory, catastrophic events and historic geospatial assessment in the South. Proceedings of the ASPRS 2007 Annual Conference: Identifying Geospatial Solutions, Tampa, Florida.

Kampa, K., and K. C. Slatton. 2004. An adaptive multiscale filter for segmenting vegetation in ALSM data. Vol. 6, Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS).

Korpela, I. 2007a. 3D treetop positioning by multiple image matching of aerial images in a 3D search volume bounded by LIDAR surface models. Photogrammetrie Fernerkundung Geoinformation 2007, no. 1:35.

Korpela, I. 2007b. 3D treetop positioning by multiple image matching of aerial images in a 3D search volume bounded by LIDAR surface models. Photogrammetrie Fernerkundung Geoinformation 2007, no. 1:35.

Kovacs, J. M., J. F. Wang, and M. Blanco-Correa. 2001. Mapping disturbances in a mangrove forest using multi-date Landsat TM imagery. Environmental Management 27, 763-776.

Kupfer, J. A., A. T. Myers, S. E. McLane, and G. N. Melton. 2008. Patterns of forest damage in a southern Mississippi landscape caused by Hurricane Katrina. Ecosystems 11, 45-60.

Lee, M., T. Lin, M. A. Vadeboncoeur, and J. Hwong. 2008. Remote sensing assessment of forest damage in relation to the 1996 strong typhoon Herb at Lienhuachi Experimental Forest, Taiwan. Forest Ecology and Management 255, 3297-3306.

Lefsky, M. A., W. B. Cohen, G. G. Parker, and D. J. Harding. 2002. Lidar remote sensing for ecosystem studies. Bioscience 52, no. 1:19-30.

Lim, K., et al. 2003. LiDAR remote sensing of forest structure. Progress in Physical Geography 27, 88-105.

Lodge, D. J., and W. H. McDowell. 1991. Summary of ecosystem-level effects of Caribbean hurricanes. Biotropica 23, 373-378.

McNulty, S. G. 2002. Hurricane impacts on US forest carbon sequestration. Environmental Pollution 116, 17-24.

Naesset, E. 2004. Practical large-scale forest stand inventory using a small-footprint airborne scanning laser. Scandinavian Journal of Forest Research 19, no. 2:164-179.

Nilsson, M. 1996. Estimation of tree heights and stand volume using an airborne lidar system. Remote Sensing of Environment 56, no. 1:1-7.

Ramsey, E. W., M. E. Hodgson, S. K. Sapkota, and G. A. Nelson. 2001. Forest impact estimated with NOAA AVHRR and Landsat TM data related to an empirical hurricane wind-field distribution. Remote Sensing of Environment 77, 279-292.

Rosso, P. H., S. L. Ustin, and A. Hastings. 2005. Mapping marshland vegetation of San Francisco Bay, California, using hyperspectral data. International Journal of Remote Sensing 26, 5169-5191.

Roth, L. C. 1992. Hurricanes and mangrove regeneration: effects of Hurricane Joan, October 1988, on the vegetation of Isla del Venado, Bluefields, Nicaragua. Biotropica 24, no. 3:375-384.

Schölkopf, B., et al. 1998. Support vector methods in learning and feature extraction. Ninth Australian Congress on Neural Networks.

Sithole, G., and G. Vosselman. 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101.

Smith, Thomas J., III, Michael B. Robblee, Harold R. Wanless, and Thomas W. Doyle. 1994. Mangroves, hurricanes, and lightning strikes. BioScience 44, no. 4:256-262.

Staudhammer, Christina L., Francisco Escobedo, Christopher Luley, and Jerry Bond. 2009. Technical note: patterns of urban forest debris from the 2004 and 2005 Florida hurricane seasons. Southern Journal of Applied Forestry 33, no. 4:193-196.

Stueve, K. M., C. W. Lafon, and R. E. Isaacs. 2007. Spatial patterns of ice storm disturbance on a forested landscape in the Appalachian Mountains, Virginia. Area 39, no. 1:20-30.

Suárez, J. C., C. Ontiveros, S. Smith, and S. Snape. 2005. Use of airborne LiDAR and aerial photography in the estimation of individual tree heights in forestry. Computers and Geosciences 31, no. 2:253-262.

Szantoi, Z., et al. 2008. Rapid methods for estimating and monitoring tree cover change in Florida urban forests: the role of hurricanes and urbanization.

Vapnik, V. 1998. Statistical Learning Theory. New York: J. Wiley.

Vapnik, V. 1995. The Nature of Statistical Learning Theory. New York: Springer.

Wang, F., and Y. Xu. 2008. Hurricane Katrina-induced forest damage in relation to ecological factors at landscape scale. Environmental Monitoring and Assessment.

Waring, R. H., et al. 1995. Imaging radar for ecosystem studies. Bioscience, 715-723.

Wehr, A., and U. Lohr. 1999. Airborne laser scanning: an introduction and overview. ISPRS Journal of Photogrammetry and Remote Sensing 54, no. 2-3:68-82.

Wiesmann, A., et al. 2001. Potential and methodology of satellite-based SAR for hazard mapping. Proceedings of the Geoscience and Remote Sensing Symposium, Sydney, Australia.

Yu, X., J. Hyyppä, H. Kaartinen, and M. Maltamo. 2004. Automatic detection of harvested trees and determination of forest growth using airborne laser scanning. Remote Sensing of Environment 90, no. 4:451-462.

BIOGRAPHICAL SKETCH

Sowmya Selvarajan was born in Chennai, India. She earned her B.E. in Geoinformatics Engineering (Civil Engineering) from the College of Engineering, Guindy, Anna University, Chennai, India, in 2000. She earned her M.Eng. in Civil Engineering (research in remote sensing and GIS) from the National University of Singapore in 2004. Upon graduating with her M.Eng. in Civil Engineering, Sowmya joined Spatial Innovision Pte. Ltd., Kingston, Jamaica, as a geomatics consultant, where she undertook various surveying projects and worked with clients in customizing remote sensing and GIS solutions. She later worked as a lecturer for the University of Technology, Kingston, Jamaica, and taught undergraduate and graduate remote sensing courses in the School of Architecture. She received her doctoral degree in Forest Resources and Conservation, with a concentration in geomatics, from the University of Florida in the summer of 2011.