Citation

Material Information

Title:
An Object-Based Image Analysis Method for Enhancing Classification of Land Covers Using Fully Convolutional Networks and Multi-View Images of Small Unmanned Aerial System
Series Title:
Remote Sensing
Creator:
Liu, Tao
Publisher:
MDPI
Publication Date:
2018
Language:
English
Physical Description:
Journal Article

Notes

Abstract:
Fully Convolutional Networks (FCN) have shown better performance than other classifiers such as Random Forest (RF), Support Vector Machine (SVM), and patch-based Deep Convolutional Neural Network (DCNN) for object-based classification using orthoimagery alone in previous studies; however, to further improve deep learning algorithm performance, multi-view data should be considered for training data enrichment, which has not been investigated for FCN. The present study developed a novel OBIA classification using FCN and multi-view data extracted from a small Unmanned Aerial System (UAS) for mapping land covers. Specifically, this study proposed three methods to automatically generate multi-view training samples from orthoimage training datasets to conduct multi-view object-based classification using FCN, and compared their performances with each other and also with RF, SVM, and DCNN classifiers. The first method does not consider object surrounding information, while the other two utilize object context information. We demonstrated that all three versions of FCN multi-view object-based classification outperformed their counterparts utilizing orthoimage data only. Furthermore, the results also showed that when multi-view training samples were prepared with consideration of object surroundings, FCN trained with these samples gave much better accuracy than FCN classification trained without context information. Similar accuracies were achieved from the two methods utilizing object surrounding information, although sample preparation was conducted in two different ways. Comparing FCN with RF, SVM, and DCNN shows that FCN generally produced better accuracy than the other classifiers, regardless of whether orthoimage or multi-view data were used.
Acquisition:
Collected for University of Florida's Institutional Repository by the UFIR Self-Submittal tool. Submitted by Tao Liu.
General Note:
Publication of this article was funded in part by the University of Florida Open Access Publishing Fund.

Record Information

Source Institution:
University of Florida Institutional Repository
Holding Location:
University of Florida
Rights Management:
Copyright Creator/Rights holder. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.

Full Text

Article

An Object-Based Image Analysis Method for Enhancing Classification of Land Covers Using Fully Convolutional Networks and Multi-View Images of Small Unmanned Aerial System

Tao Liu 1,2,* and Amr Abd-Elrahman 1,2

1 School of Forest Resources and Conservation, University of Florida, Gainesville, FL 32611, USA; aamr@ufl.edu
2 Gulf Coast Research Center, University of Florida, Plant City, FL 33563, USA
* Correspondence: taoliu@ufl.edu

Received: 15 February 2018; Accepted: 10 March 2018; Published: 14 March 2018

Remote Sens. 2018, 10, 457; doi:10.3390/rs10030457

Abstract: Fully Convolutional Networks (FCN) have shown better performance than other classifiers such as Random Forest (RF), Support Vector Machine (SVM), and patch-based Deep Convolutional Neural Network (DCNN) for object-based classification using orthoimagery alone in previous studies; however, to further improve deep learning algorithm performance, multi-view data should be considered for training data enrichment, which has not been investigated for FCN. The present study developed a novel OBIA classification using FCN and multi-view data extracted from a small Unmanned Aerial System (UAS) for mapping land covers. Specifically, this study proposed three methods to automatically generate multi-view training samples from orthoimage training datasets to conduct multi-view object-based classification using FCN, and compared their performances with each other and also with RF, SVM, and DCNN classifiers. The first method does not consider object surrounding information, while the other two utilize object context information. We demonstrated that all three versions of FCN multi-view object-based classification outperformed their counterparts utilizing orthoimage data only. Furthermore, the results also showed that when multi-view training samples were prepared with consideration of object surroundings, FCN trained with these samples gave much better accuracy than FCN classification trained without context information. Similar accuracies were achieved from the two methods utilizing object surrounding information, although sample preparation was conducted in two different ways. Comparing FCN with RF, SVM, and DCNN shows that FCN generally produced better accuracy than the other classifiers, regardless of whether orthoimage or multi-view data were used.

Keywords: FCN; deep learning; object-based; OBIA; UAS; multi-view data; wetland

1. Introduction

Small Unmanned Aircraft System (UAS) has become a popular remote sensing platform for providing very high-resolution images targeting small or medium size sites in the past decade, due to its advantages of safety, flexibility, and low cost over other airborne or space-borne platforms. The continuous technical advancements that have improved its payload and endurance over the years significantly contributed to its increased utilization, a trend not expected to slow down soon [1,2]. Object-based Image Analysis (OBIA) has been routinely employed to process UAS images for land cover mapping, with its capability of generating more appealing maps and comparable if not higher classification accuracy when compared with pixel-based methods [3-8]. Analyzing the UAS images using traditional OBIA normally starts with a bundle adjustment procedure to produce an orthoimage from all the UAS images.

Then, an image segmentation algorithm is conducted to segment the orthoimage into groups of homogeneous pixels to form numerous meaningful objects. Spectral, geometrical, textural, and contextual features are extracted from these objects and used as input to different classifiers, such as Random Forest (RF) [9] and Support Vector Machine (SVM) [10], to label the objects. Feature extraction and selection that have to be conducted during traditional OBIA procedures are challenging tasks and can limit classification performance.

Recently, the rise of deep learning techniques provided an alternative to traditional land cover classifiers. Deep learning, brought about around 2006 [11], became well known in the computer vision community around 2012, since one supervised version of deep learning networks (Deep Convolutional Neural Networks, DCNN) made a breakthrough for scene classification tasks [12,13], and has reached out to many industrial applications and other academic areas in recent years as it continues to advance technologies in areas like speech recognition [14], medical diagnosis [15], autonomous driving [16], or even the gaming world [17,18]. When compared with other traditional classifiers, deep learning does not require feature engineering, which attracted many researchers from the remote sensing community to test its usability for land cover mapping [19-23]. Two recent review papers [20,24] on OBIA both also emphasize the need for testing deep learning techniques under the OBIA framework. Deep learning networks normally have a huge number of parameters to be adjusted during the training procedure and may require massive training samples to trigger their power, as shown in one of the latest studies [25], but collecting training samples is expensive for remote sensing applications. To overcome the scarce training samples limitation, several strategies have been proposed, such as augmenting the limited labeled samples with various transformation operations, such as rotation, translation and scaling [26,27], unsupervised pre-training [11,28], transfer learning [29,30], etc. Multi-view data collected by small UAS naturally expands the training dataset, thanks to the bidirectional reflectance effect resulting from the changes in view and illumination angles along the image acquisition mission. Multi-view data has been proved useful for vegetation in several publications [31-34]. Most of the applications relied on bidirectional reflectance distribution function (BRDF) modeling to extract BRDF parameters as part of land cover features to utilize the multi-view information for land cover mapping. However, this type of method is inefficient and inapplicable for the deep learning classifiers to utilize the multi-view information, since DCNN or FCN extract features automatically as part of the classifier training process.

We recognize two types of convolutional neural networks for deep learning techniques that are applicable for land cover mapping tasks: the first one assigns a single class label to the whole input image patch, while the other one assigns class labels to each individual pixel within the input image patch. We refer to the first type as Deep Convolutional Neural Network (DCNN) and the second type as Fully Convolutional Network (FCN) [35]. FCN has been used to deal with various computer vision related problems successfully in recent years since its introduction, such as liver cancer diagnosis via analysis of cancerous tissue pathological images [36], diagnosis of small bowel disease through automatically marking cross-sectional diameters on small bowel images [37], osteosarcoma tumor segmentation on computed tomography (CT) images [38], traffic sign detection [39], etc. Applications using FCN in the remote sensing domain can also be found, even though their number is still small. Most of these studies were conducted using the ISPRS Vaihingen dataset archives [40]. This dataset contains an 8 cm resolution Near Infrared (NIR), Red (R), and Green (G) bands orthoimage, a point cloud, and a 9 cm resolution Digital Surface Model (DSM) of an urban area. This dataset was collected for urban object detection and has been used by several studies comparing FCN with other classifiers such as DCNN and random forest [41-43].

A recent study by Liu et al. [25] conducted a comprehensive comparison among FCN, DCNN, RF, and SVM performances under the OBIA framework, when considering the impact of training sample size. The study concluded that DCNN might produce inferior performance as compared to conventional classifiers when the training sample size is small, but it tends to show substantially higher accuracy when the training sample size increases. Their results also indicated that FCN is more efficient in exploiting the information in the training samples than the other classifiers, achieving higher accuracy in most cases regardless of sample size.

This study extends the study of Liu et al. [25] by developing novel methods via photogrammetric techniques to enable the FCN to utilize the multi-view data extracted from UAS images, to investigate whether the enriched training samples resulting from multi-view data extraction can further improve the FCN performance, and also to compare FCN with other classifiers under this multi-view OBIA framework regarding the multi-view data impacts on their performances, in order to find the best practice of applying FCN for land cover mapping.

2. Study Area and Data Preprocessing

2.1. Study Area

The proposed classification methods were tested on a 677 m x 518 m area, which is part of a 31,000-acre ranch, located in Southern Florida, between Lake Okeechobee and the city of Arcadia. The ranch is comprised of diverse tropical forage grass pastures, palmetto wet and dry prairies, pine flatwoods, and large interconnecting marsh of native grass wetlands [44]. The land also hosts cabbage palm and live oak hammocks scattering along the lengths of copious creeks, gullies, and wetlands. The study area is infested by Cogongrass (Imperata cylindrica), as shown in the lower left corner of Figure 1, scattered across the pasture. In this study, a Cogongrass class is defined due to its harmful effect on the region as an invasive species. Cogongrass is considered one of the top ten worst invasive weeds in the world [45]. The grass is not palatable as a livestock forage, decreases native plant biodiversity and wildlife habitat quality, increases fire hazard, and lowers the value of real estate. Several agencies, including the U.S. Army Corps of Engineers (USACE), are involved in routine monitoring and control operations to limit the spread of Cogongrass in Florida. These efforts will greatly benefit from developing an efficient way to classify Cogongrass from UAS imageries. Having accurate maps of target vegetation would reduce contractor labor costs for most of the species that USACE is targeting. In addition, an accurate map would also enable them to see the impacts that the invasive species is having on the adjacent native plant communities and whether their management efforts (herbicide, mechanical removal, etc.) are having any impacts as well on the native populations. All of the other classes, except the shadow class, were assigned according to the standard of vegetation classification for South Florida natural areas [46]. Our objective is to classify the Cogongrass (species level) and five other community-level classes as well as the shadow class, as listed in Table 1.

Figure 1. Study area: left corner highlights an area seriously impacted by invasive vegetation (Cogon Grass).

Table 1. Land cover classes in the study area.

CG (Cogongrass): Cogongrass (Imperata cylindrica) is an invasive, non-native grass which occurs in Florida and several other Southeastern US states.
IP (Improved Pasture): A sown pasture that includes introduced pasture species, usually grasses in combination with legumes. These are generally more productive than the local native pastures, have higher protein and metabolizable energy and are typically more digestible. In our case, we also assume it is not infested by Cogongrass.
SUs (Saw Palmetto Shrubland): Saw Palmetto (Serenoa repens) dominant shrubland.
MFB (Broadleaf Emergent Marsh): Broadleaf emergent dominated freshwater marsh. It can be found throughout Florida.
MFG (Graminoid Freshwater Marsh): Graminoid dominated freshwater marsh. It can be found throughout Florida.
FHp (Hardwood Hammock-Pine Forest): A co-dominate mix (40/60 to 60/40) of Slash Pine (Pinus elliottii var. densa) with Laural Oak (Quercus laurifolia), Live Oak (Q. virginiana), and/or Cabbage Palm (Sabal palmetto).
Shadow (Shadow): Shadow of all kinds of objects in the study area.

2.2. UAS Image Acquisition and Preprocessing

The images used in this study were captured by the USACE Jacksonville District using the NOVA 2.1 small UAS. A flight mission with 83% forward overlap and 50% side lap was planned and implemented. A Canon EOS REBEL SL1 digital camera was used in this study. The CCD sensor of this camera has 3456 x 5184 pixels. The images are synchronized with an onboard navigation-grade GPS receiver to provide image locations. Five ground control points were established (four near the four corners and one close to the center of the study area) and were used in the photogrammetric solution. More details on the camera and flight mission parameters are listed in Table 2.

Table 2. Summary of sensor and flight procedure.

UAS Type: Light UAS with fixed wing
Sensor Name: Canon EOS REBEL SL1
Sensor Type: CCD
Pixel Dimension: 5184 x 3456
Length of focus: 20 mm
Sensor Size: 22.3 x 14.9 mm
Channels: RGB
Takeoff time: 29/10/2015 16:54:51 EDT (a)
Landing time: 29/10/2015 17:49:33 EDT (a)
Takeoff Latitude: 27.22736549
Takeoff Longitude: -81.51152802
Average Wind Speed: 5.1 m/s
Average Altitude: 302.7 m
Average Pixel Size: 6.5 cm
Forward overlap: 83%
Side overlap: 50%
FOV (b) across-track: 58
FOV (b) along-track: 41
(a) Eastern Daylight Time; (b) Field of view in degrees.

2.3. Orthoimage Creation and Segmentation

The UAS images were pre-processed to correct for the change in sun angle during the acquisition period before the orthoimage is created. Given an original UAS image i with zenith angle theta_i, the original UAS image was corrected as ImgCorrected_i = ImgOriginal_i x cos(theta_i)/cos(75 degrees) [47]. The operation was conducted on all of the UAS images. Once the images were corrected, the Agisoft Photoscan Pro version 1.2.4 software was used to implement the bundle block adjustment on a total of 1397 UAS images of the study area. The software was used to produce and export a 3-band (Red, Green, and Blue) 6 cm resolution orthoimage, a 27 cm Digital Surface Model (DSM), and the camera exterior and interior orientation parameters.

The three-band RGB orthoimage, together with the DSM, was analyzed using object-based analysis techniques. The Trimble eCognition software was used to segment the orthoimage. Segmentation parameters (the scale, shape (0.2), and compactness (0.5) parameters) were carefully and manually selected such that they gave visually appealing segmentation results across the majority of the orthoimage, following the common practice for selecting segmentation parameters for OBIA [48]. This process resulted in 40,239 objects within the study area.

3. Methods

3.1. Multiview Data Generation

Given an orthoimage object, the objective of this section is to show how to generate object instances on the UAS images corresponding to this orthoimage object, to support multi-view object-based classification. This problem can be boiled down to projecting each of the vertices on the orthoimage object boundary onto UAS images. After the vertex projection is done, an object instance on the UAS image can be easily formed by threading together the projected boundary vertices. The technique introduced in this section (i.e., projecting a ground point onto UAS images) will also be used in Section 3.3 to generate multi-view training samples.

Given the real-world coordinates X, Y, and Z of an object boundary vertex (or a point on the ground) on the orthoimage, and the output of the bundle block adjustment results of the UAS images represented by the camera exterior orientation and self-calibration parameters, it is required to find the x and y coordinates (or row and column numbers) in the UAS image pixel coordinate system, if the boundary point exists on that UAS image. This requires converting X, Y, Z from the real-world coordinate system to the camera coordinate system using Equation (1), followed by the conversion from the camera coordinate system to the sensor coordinate system by Equation (2), and then from the camera sensor system to the pixel coordinate system by Equation (3). However, due to the potential error coming from inaccuracies of the DSM used to extract the Z value, camera parameters (e.g., focal length, pixel size) and camera lens distortion, a simple consecutive application of Equations (1)-(3) usually gave larger error. To reduce such error, we developed a two-step optimization method to reduce the projection error. Step one is to apply the Generalized Pattern direct Search (GPS) algorithm [49] to optimize the camera parameters (e.g., focal length, sensor size, and sensor origin). Step two is to apply the random forest algorithm to model the relationship between the error and the point locations causing the error (e.g., distance from the point to the UAS image center, Z value of the point, and relative location of the point to the image center in terms of row distance and column distance). Average errors around 1.6 pixels in the row direction and 1.8 pixels in the column direction were achieved using this method. Given the optimized camera parameters, the procedure to derive the point coordinate on a UAS image is shown in Figure 2.

Figure 2. Procedure to project a ground point (X, Y, Z) to pixel coordinates on Unmanned Aerial System (UAS) images.

$$\begin{bmatrix} X_c & Y_c & Z_c \end{bmatrix} = \begin{bmatrix} X - X_0 & Y - Y_0 & Z - Z_0 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad (1)$$

where X_c, Y_c, Z_c are the output of this conversion, representing the point coordinates in the Camera Coordinate System; X, Y, Z represent the point coordinates in the World Coordinate System; X_0, Y_0, Z_0 represent the camera coordinates in the World Coordinate System; and r_ij is the element in the i-th row and j-th column of the camera rotation matrix R. X, Y, Z were extracted from ArcMap using the segmented orthoimage and DSM. X_0, Y_0, Z_0 and the rotation matrix R were extracted from the bundle adjustment package, such as Agisoft.

$$\begin{bmatrix} x_s \\ y_s \end{bmatrix} = \begin{bmatrix} x_o - f\,X_c/Z_c \\ y_o - f\,Y_c/Z_c \end{bmatrix} \qquad (2)$$

where x_s, y_s are the outputs of this conversion, representing the point coordinates in the Sensor Coordinate System, and f is the focal length of the camera. X_c, Y_c, Z_c come from Equation (1). x_o, y_o are the sensor coordinate offsets in millimeters, and they are about half of the width and length of the sensor dimensions. f, x_o, y_o were also extracted from the bundle adjustment result.

$$\begin{bmatrix} x_p \\ y_p \end{bmatrix} = \begin{bmatrix} \mathrm{round}\left(x_s/p\right) \\ \mathrm{round}\left(H - y_s/p\right) \end{bmatrix} \qquad (3)$$

where x_p, y_p are the outputs of this conversion, representing the column number and row number of the point (i.e., raw pixel coordinates) on the UAV image taken by the camera under consideration. x_s, y_s come from Equation (2). p is the pixel size in millimeters and H is the height in pixels of the UAV image in the case of this study. Since an integer is not guaranteed as a result of the division operation, the rounding operation follows.

In our study, segmentation results that were generated from the eCognition package (see Section 2.3) were imported into ArcGIS to extract the vertices for each object and the X, Y, Z world coordinates of each vertex, after which vertices were then exported from ArcGIS to Matlab to generate the multi-view object instances.
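The chain of Equations (1)-(3) can be sketched as follows; this is a minimal NumPy illustration, not the authors' Matlab implementation, and the camera position, rotation matrix, and calibration values passed in the example are hypothetical placeholders for the bundle adjustment outputs described above (the two-step pattern-search and random forest error correction is not included).

```python
import numpy as np

def project_ground_point(P, P0, R, f, x_o, y_o, p, H):
    """Project a world point onto a UAS image following Equations (1)-(3).

    P  : (X, Y, Z) world coordinates of the ground point (e.g., an object vertex)
    P0 : (X0, Y0, Z0) camera position in world coordinates
    R  : 3x3 camera rotation matrix from the bundle adjustment
    f  : focal length (mm); x_o, y_o : sensor coordinate offsets (mm)
    p  : pixel size (mm); H : image height in pixels
    Returns (x_p, y_p) = (column, row) pixel coordinates.
    """
    # Equation (1): world -> camera coordinate system
    Xc, Yc, Zc = (np.asarray(P, float) - np.asarray(P0, float)) @ R
    # Equation (2): camera -> sensor coordinates (mm)
    x_s = x_o - f * Xc / Zc
    y_s = y_o - f * Yc / Zc
    # Equation (3): sensor -> pixel coordinates (column, row)
    return int(round(x_s / p)), int(round(H - y_s / p))

# Hypothetical example values; real values come from the Photoscan bundle adjustment.
R = np.eye(3)
col, row = project_ground_point((500.0, 300.0, 20.0), (480.0, 290.0, 320.0),
                                R, f=20.0, x_o=11.15, y_o=7.45, p=0.0043, H=3456)
```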

3.2. Fully Convolutional Networks

The building structure of FCN is shown in Figure 3, including the convolutional operation, regularization (dropout) method [50], Rectified Linear Unit (ReLU) activation function [51], summation operation [24], max pooling, and deconvolutional operation [35]. The deconvolutional operation is the key to implementing the FCN and differentiates it from the DCNN. It employs an upsampling method to turn a coarse layer into a dense layer so that the final prediction output has the same row number and column number as the input image, as indicated by the ending illustration of Figure 3.

Figure 3. Building structure of Fully Convolutional Network (FCN) showing the deconvolutional operation implemented to make the output have the same row and column number as the input.

The FCN calculates the cross-entropy for each pixel and sums them up across all of the pixels and all the training samples in a training batch as the cost:

$$C = -\frac{1}{\sum_{i=1}^{n} row_i\,col_i} \sum_{i=1}^{n} \sum_{p=1}^{row_i} \sum_{q=1}^{col_i} \sum_{j=1}^{m} 1_{\{1\}}\big(y_j(p,q)\big)\,\ln a_j(p,q) \qquad (4)$$

where 1_A(x) = 1 if x is in A and 0 if x is not in A, with A a given set; a_j(p,q) = e^{z_j(p,q)} / sum_{k=1}^{m} e^{z_k(p,q)}; n is the total number of training samples in a given training batch; m is the total number of classes, equal to 7 for our study; a_j(p,q) is the softmax output for the pixel location at row p and column q for class j of training sample i (which is omitted in the notation for simplicity); and y_j(p,q) in {0, 1} indicates whether the ground truth class ID for a pixel located in row p and column q is j (1 means true and 0 means false).

Training of FCN is conducted through stochastic gradient descent (SGD) [52]:

$$w_{updated} = w_{current} - \lambda\,\frac{\partial C}{\partial w} \qquad (5)$$

where w_updated is the updated parameter value, w_current is the current value, lambda is the learning rate, and dC/dw is the gradient of w (i.e., the derivative of parameter w when the cost value is C) for a batch of training samples.
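To make Equation (4) concrete, the following sketch computes the softmax outputs and the summed cross-entropy cost for one batch of label patches, together with a single SGD step of Equation (5). It is an illustrative NumPy version under the assumption of equally sized patches, not the network code actually used in the study.

```python
import numpy as np

def softmax_cross_entropy_cost(logits, labels):
    """Cost of Equation (4) for a batch of FCN outputs.

    logits : float array, shape (n, rows, cols, m), raw scores z_j(p, q)
    labels : int array,   shape (n, rows, cols),    ground-truth class IDs 0..m-1
    Returns the mean negative log-likelihood over all pixels in the batch.
    """
    # Softmax a_j(p, q) = exp(z_j) / sum_k exp(z_k), stabilised by subtracting the max
    z = logits - logits.max(axis=-1, keepdims=True)
    a = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Keep ln a_j only where y_j(p, q) = 1, i.e. at the true class of each pixel
    true_class_prob = np.take_along_axis(a, labels[..., None], axis=-1)[..., 0]
    return -np.log(true_class_prob).mean()

# One SGD step of Equation (5), with a hypothetical gradient dC_dw and learning rate lam
w, dC_dw, lam = np.zeros(10), np.full(10, 0.1), 0.01
w_updated = w - lam * dC_dw
```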

The parameter derivatives are obtained by alternately conducting forward propagation (Equation (6)) and backward propagation (Equations (7) and (8)):

$$y^{l} = h\big(y^{l-1}\big) \qquad (6)$$

$$\frac{\partial C}{\partial y^{l-1}} = \sum_{k=1}^{n} \frac{\partial C}{\partial y^{l}_{k}}\,\frac{\partial y^{l}_{k}}{\partial y^{l-1}} \qquad (7)$$

$$\frac{\partial C}{\partial w^{l}} = \sum_{k=1}^{n} \frac{\partial C}{\partial y^{l}_{k}}\,\frac{\partial y^{l}_{k}}{\partial w^{l}} \qquad (8)$$

In Equation (6), y^l and y^{l-1} represent the variable values in layer l and l-1, respectively, connected with the function h(x): y^{l-1} -> y^l. y^l_k is the k-th element in layer y^l, through which an element in y^{l-1} has an impact on the cost C. The function h(x) can be a convolutional operation, ReLU activation, max pooling, dropout, deconvolutional operation, or sum operation, depending on the layer type used in the FCN structure. Equation (8) does not apply to every type of layer, since some layers may not have parameters to learn (e.g., ReLU, max pooling, sum operation, etc.). For those layers, only Equation (7) is used during the backpropagation.

3.3. OBIA Classification Using Orthoimage with FCN

Before introducing the multi-view OBIA using FCN in Section 3.4, OBIA using orthoimage only with FCN as classifier is briefly explained in this section. Readers are referred to [25] for more details about this method. The workflow of traditional object-based image classification, commonly applied to high-resolution orthoimages, as implemented in the Trimble eCognition software [53], can be summarized in three main steps: (1) image segmentation into objects using a predefined set of parameters, such as the segmentation scale and shape weight; (2) extraction of features, such as mean spectral band values and the standard deviation of the band values for each object in the segmented image; and (3) training and implementing a classifier, such as the support vector machine [54], random forest [55], or neural network classifiers [56].

Like traditional OBIA classification, OBIA with FCN starts with orthoimage segmentation. However, different from traditional OBIA classification, a training sample for FCN is composed of an image patch and a corresponding pixel label matrix of the same size, instead of an object feature and its corresponding label in traditional object-based classification. Two options for generating the individual pixel labels of the image patch exist, resulting from different treatments of the pixels surrounding the object under consideration. The first option (Option I) is to disregard the true class types for all of the pixels surrounding the object within the image patch by labeling them simply as background, while the second option (Option II) is to label each pixel with its true class type. Figure 4 illustrates these two options for creating FCN training samples for OBIA. In Figure 4a, the polygon highlighted at the center represents one sample object resulting from the orthoimage segmentation, and Figure 4b,c illustrate Options I and II for preparing FCN training samples, respectively. In Figure 4b, a red rectangle is formed exactly enclosing the object; within this rectangle, only the central object pixels have a true class label, while all the remaining pixels are labeled as background. In contrast, Figure 4c shows an image patch where all of the pixels inside the patch are labeled with their true class types. The Option I and II orthoimage training samples are referred to as Ortho-I and Ortho-II hereinafter. Subsequently, FCN-Ortho-I-OBIA is used to refer to the OBIA classification using the Ortho-I (i.e., Figure 4b) training samples and FCN as classifier, while FCN-Ortho-II-OBIA is the same as FCN-Ortho-I-OBIA except that the Ortho-II (i.e., Figure 4c) sample dataset was used to train the FCN classifier.
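As an illustration of the two labeling options, the sketch below builds the pixel label matrix of an image patch from a rasterized segmentation; the class map, object mask, and background code 0 are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

BACKGROUND = 0  # assumed code for the "background" label used by Option I

def make_label_patch(class_map, object_mask, bbox, option):
    """Build the FCN label patch for one segmented object.

    class_map   : 2-D int array of true class IDs for the whole orthoimage
    object_mask : 2-D bool array, True on pixels of the central object
    bbox        : (row_min, row_max, col_min, col_max) rectangle enclosing the object
    option      : "I"  -> surrounding pixels set to background (Ortho-I)
                  "II" -> surrounding pixels keep their true classes (Ortho-II)
    """
    r0, r1, c0, c1 = bbox
    labels = class_map[r0:r1, c0:c1].copy()
    if option == "I":
        labels[~object_mask[r0:r1, c0:c1]] = BACKGROUND
    return labels
```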

Figure 4. Ortho-I and Ortho-II training samples. The image patch boundary is indicated by the red rectangle: (a) Cogongrass object surrounded by Improved Pasture and Cogon Grass objects; (b) pixels within the patch surrounding the object are labeled as background for the Ortho-I sample; (c) all pixels within the patch are labeled using their true class types for the Ortho-II sample.

After the FCN classifier was trained using either the Ortho-I or II training samples, the procedure illustrated in Figure 5 is used to generate a class label for a given object. In Figure 5, an object is highlighted at the center (Figure 5a) and a rectangle is formed enclosing it to extract the image patch (Figure 5b). Then, the trained FCN classifier is applied to the image patch to get the class labels for all of the pixels within the image patch (Figure 5c). After that, the object boundary is overlaid on the image patch again (Figure 5d) to find the majority of labeled pixels within the object as the final classification result for the object (Figure 5e).

Figure 5. Procedure to get the object label using the trained classifier: (a) an object is overlaid on the orthoimage; (b) a rectangle is formed to extract the image patch; (c) apply the trained FCN classifier to the image patch to label all the pixels within the image patch; (d) overlay the object onto the classification result; (e) find the majority of pixel labels within the object to obtain the final classification result for the object.
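A short sketch of the majority-vote step in Figure 5d,e, assuming the per-pixel prediction patch and the object mask are already available as NumPy arrays (illustrative names, not from the paper):

```python
import numpy as np

def object_label_by_majority(pixel_predictions, object_mask):
    """Return the most frequent predicted class among the pixels of one object.

    pixel_predictions : 2-D int array of per-pixel class IDs from the trained FCN
    object_mask       : 2-D bool array marking the object's pixels inside the patch
    """
    votes = pixel_predictions[object_mask]
    classes, counts = np.unique(votes, return_counts=True)
    return int(classes[np.argmax(counts)])
```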

3.4. OBIA Classification Using Multi-View Data with FCN

The multi-view data derived from the method explained in Section 3.1 is illustrated in Figure 6. At the center of Figure 6a, an object resulting from the orthoimage segmentation procedure is shown. Surrounding the orthoimage object are the UAS images with the boundaries of the object instances highlighted. The figure also shows the location of the sun. The variation of the automatically expanded training dataset not only comes from geometric changes (i.e., shape distortion), but also from the spectral difference resulting from the BRDF properties of the land cover classes (see Figure 6b). It can be seen in Figure 6b that the images closer to the sun tend to have a brighter tone. The phenomenon can be attributed to the hot spot effect of the BRDF [57]. To make this phenomenon appear more obvious, the mean value of the red band for each object instance on the UAS image is calculated and plotted on a concentric diagram in Figure 6b, where a warmer color indicates a higher digital number value and zenith values are represented by circles every 5 degrees from 5 to 35 degrees. The projection, as shown in Figure 6a, was implemented using the techniques introduced in Section 3.1. Subsequently, the object on the orthoimage located at the center of Figure 6a is referred to as the orthoimage object, while the objects on the UAS images surrounding the orthoimage object in Figure 6a are referred to as multi-view object instances.

Figure 6. Multi-view data for an orthoimage object: (a) multi-view object instances corresponding to an orthoimage object; (b) distribution of the mean value of the object's red band for the multi-view object instances.

To implement the OBIA classification using multi-view data with FCN, training was first performed using multi-view training samples, instead of orthoimage training samples (i.e., the Ortho-I and II training samples shown in Figure 4). This way, the training samples were expanded to 10 times the training samples used in the OBIA relying on the orthoimage only, when considering that one orthoimage object may generate 10 object instances on the UAS images, as indicated in Figure 6a. After FCN was trained using the expanded multi-view training samples, the same procedure illustrated in Figure 5 was applied to each of the multi-view object instances. After classification results for all of the multi-view object instances were obtained, voting was conducted to find the majority vote as the final classification result for the orthoimage object. The multi-view training sample versions corresponding to Ortho-I and Ortho-II are referred to as MV-I and MV-II, respectively, hereafter.

The objective is to automatically generate MV-I and MV-II given the label information on the orthoimage, avoiding the laborious work to prepare training samples for multi-view classification. Such an automation procedure is critical for performing multi-view classification using FCN from a practical point of view, since the training samples for the multi-view classification are 10 times the orthoimage objects and preparing such a large number of training samples manually is tedious and time-consuming. The methods proposed in this study to automatically generate MV-I and MV-II training samples are explained in the following sections.

3.4.1. Multi-View Training Samples without Context Information (MV-I Sample Generation)

Given an Ortho-I training sample, the boundary of the object corresponding to the Ortho-I sample was projected onto the UAS image. Then, the MV-I training sample was generated automatically by simply labeling the pixels within the boundary with the object class label and labeling the pixels outside the boundary as background.
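The MV-I generation step can be sketched as follows: rasterize the projected object boundary into a mask, assign the object's class inside it, and assign background outside. The use of matplotlib.path for the point-in-polygon test is an illustrative choice, not the authors' Matlab implementation.

```python
import numpy as np
from matplotlib.path import Path

BACKGROUND = 0  # assumed background code, as for the Ortho-I samples

def make_mv1_label_patch(projected_vertices, patch_shape, object_class):
    """Label an image patch on a UAS image for one MV-I training sample.

    projected_vertices : (k, 2) array of (col, row) vertices of the object boundary
                         after projection onto the UAS image (Equations (1)-(3))
    patch_shape        : (rows, cols) of the extracted image patch
    object_class       : class ID of the orthoimage object
    """
    rows, cols = patch_shape
    cc, rr = np.meshgrid(np.arange(cols), np.arange(rows))
    pixel_centers = np.column_stack([cc.ravel(), rr.ravel()])
    inside = Path(projected_vertices).contains_points(pixel_centers).reshape(rows, cols)
    labels = np.full(patch_shape, BACKGROUND, dtype=int)
    labels[inside] = object_class
    return labels
```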

3.4.2. Multi-View Training Samples with Exact Context Information (MV-IIA Sample Generation)

The situation becomes more complicated when trying to automatically generate the MV-II training sample, since it requires the labelling of all of the pixels using their true class types. This study proposed and compared two methods for generating MV-II samples, and they are referred to as MV-IIA and MV-IIB, respectively. While MV-IIA samples are exact reproductions of the labelling information based on the orthoimage samples, MV-IIB, with an approximate copy of the labelling information, is also introduced due to its simplicity and comparable classification performance compared with MV-IIA. Each of these two methods (MV-IIA and MV-IIB) is explained below.

Given one orthoimage object with the ground truth label information, as shown at the center of Figure 7a, MV-IIA generation starts with projecting to the UAS images some selected vertices (referred to as the VS set, hereafter) surrounding the orthoimage object, after which the training sample on the orthoimage is reconstructed on the UAS images using the label information of these projected vertices in VS. VS should be carefully selected: the number of vertices in set VS should be high enough to allow accurate labelling of each pixel within the image patch used as FCN input, while at the same time, it should be low enough to facilitate fast projection computation. In this study, we propose the method illustrated in Figure 7 to select the VS vertices. The object highlighted in Figure 7a (the orthoimage object) is surrounded with labeled objects of the Improved Pasture class and Cogon Grass class. The black dots in Figure 7a represent the vertices of the object boundary, noting that these vertices are shared by neighboring objects. In Figure 7b, a series of object bounding boxes (enclosing rectangles) were generated by rotating the object's bounding box on the orthoimage around the orthoimage object every 4.5 degrees to account for the possible rotations of the object on the UAS images. In Figure 7c, the area covered by all bounding boxes with all rotations was extracted. In Figure 7d, a two-pixel wide buffer area is created to account for the potential distortion of the bounding box resulting from the distortions expected in aerial imagery, including the effect of the projective projection. Then, the vertices in Figure 7a that were coincident with the shaded area in Figure 7e were extracted. Those selected vertices, shown in Figure 7e, make up all the vertices in VS. Finally, these vertices shown in Figure 7e were projected onto the UAS images to reproduce the MV-IIA sample.

Figure 7. Illustration to show the procedure to select vertices for projection: (a) vertices resulting from segmentation and an object under consideration highlighted at the center; (b) rotated rectangles enclosing the object; (c) area covered by all rectangles in grey; (d) area covered by all rectangles expanded by two pixels; (e) vertices selected (VS) to represent the object under consideration and its neighborhood objects.
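A compact sketch of the VS selection in Figure 7, using shapely as an illustrative geometry library (the paper's own implementation used ArcGIS and Matlab): rotate the object's bounding box in 4.5-degree steps, union the rotated boxes, buffer the union by two pixels, and keep the segmentation vertices that fall inside.

```python
import numpy as np
from shapely.geometry import Point, box
from shapely.affinity import rotate
from shapely.ops import unary_union

def select_vs_vertices(object_polygon, all_vertices, buffer_pixels=2.0, step_deg=4.5):
    """Select the VS vertex set around one orthoimage object (Figure 7).

    object_polygon : shapely Polygon of the orthoimage object (pixel coordinates)
    all_vertices   : (k, 2) array of segmentation vertices shared by neighboring objects
    """
    # Figure 7b: bounding box rotated around the object centroid every 4.5 degrees
    bbox = box(*object_polygon.bounds)
    centroid = object_polygon.centroid
    rotated = [rotate(bbox, ang, origin=centroid)
               for ang in np.arange(0.0, 360.0, step_deg)]
    # Figure 7c,d: area covered by all rotations, expanded by a two-pixel buffer
    coverage = unary_union(rotated).buffer(buffer_pixels)
    # Figure 7e: keep the vertices that fall inside the buffered coverage area
    return np.array([v for v in all_vertices if coverage.contains(Point(v))])
```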

After the vertices in VS were projected from the orthoimage onto the UAS image, they were used for reconstructing the training samples on the UAS images. It should be noted that there is a many-to-many relationship between objects and vertices, so that one vertex may be shared by multiple neighboring objects and one object contains multiple vertices. To take advantage of this relationship for generating the multi-view training samples more efficiently, we built a simple relational database, as shown in Figure 8, so that for any central object or its neighboring objects within an orthoimage patch projected on the UAS images, we can easily determine which vertices it contains and what class label it belongs to, and vice versa. The vertices within the projected image patch were extracted and denoted as VSp. Clearly, VSp is a subset of VS, since VSp corresponds to one fixed orientation, while VS was extracted from virtually 360-degree orientations. We queried the database to find all of the object IDs corresponding to the vertices in VSp, and we denote the found object IDs as set C. For each of the elements in C, we queried the database again to find all the vertices belonging to this element and the class ID corresponding to the object. After that, a closed boundary was created based on the found vertices and the found class ID was assigned to the closed area bounded by these vertices and the patch boundaries. We repeat this procedure for all the elements in C to fill the projected patch with its associated class labels. This labeled image patch makes up one training sample for the multi-view FCN classification.

Figure 8. Relational database used to label the image patch of UAS images.

3.4.3. Multi-View Training Samples with Approximate Context Information (MV-IIB Sample Generation)

As we just showed, MV-IIA requires the implementation of vertex determination (see Figure 7) and a relational database (see Figure 8), not only for the sample object, but also for surrounding objects, to accurately label each pixel within the image patch on each UAS image containing the sample object. This is a complicated process demanding expensive computations. To simplify the procedure, we designed another method that uses the nearest neighborhood method to approximate label information for the MV-II samples. The samples generated using this method are denoted as MV-IIB samples. The method used to prepare the MV-IIB samples is illustrated in Figure 9, which shows an orthoimage training sample on the left (Figure 9a) and one multi-view training sample that is automatically generated using the nearest neighborhood labeling method on the right (Figure 9b). It should be noted that in practice the multi-view sample may be rotated as compared to the orthoimage object, but for illustration purposes, we let the multi-view training sample and the orthoimage sample have the same orientation in Figure 9. In Figure 9a, the two-pixel wide buffer area of the central object is highlighted in yellow. For each pixel within this yellow area, we extracted its label information from the orthoimage training sample. After the pixels within the buffer area are projected onto the UAS images, we assign the nearest neighbor label from the buffer area to the unlabeled pixels between the image patch boundary and the buffer area. For the area surrounded by the buffer area, we simply assign the label of the central object to all of the pixels within this area. While this method is much easier to implement, for objects having a complicated neighborhood setup, it would result in mislabeled pixels. This imperfection is exposed by comparing the shadow area in Figure 9a,b. In the upper right corner of Figure 9b, a patch of improved pasture area is mislabeled as shadow using the nearest neighborhood labeling method, which is a recognized limitation of this method.
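The nearest-neighbor filling step for MV-IIB can be sketched with a distance transform, assuming the projected buffer pixels and their orthoimage labels are already available; scipy's distance_transform_edt is an illustrative substitute for the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

UNLABELED = -1  # assumed placeholder for pixels that are not yet labeled

def fill_mv2b_labels(patch_labels, central_mask, central_class):
    """Approximate MV-IIB labeling of a projected image patch (cf. Figure 9).

    patch_labels  : 2-D int array; the projected two-pixel buffer pixels carry their
                    orthoimage labels, all other pixels are UNLABELED
    central_mask  : 2-D bool array, True inside the projected central object
    central_class : class ID of the central object
    """
    labels = patch_labels.copy()
    # The area enclosed by the buffer simply takes the central object's label
    labels[central_mask] = central_class
    # Remaining pixels take the label of the nearest already-labeled pixel
    missing = labels == UNLABELED
    nearest = distance_transform_edt(missing, return_distances=False, return_indices=True)
    labels[missing] = labels[nearest[0][missing], nearest[1][missing]]
    return labels
```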

Figure 9. Illustration of using the nearest neighbor method to automatically generate the MV-IIB multi-view training samples from a given orthoimage training sample (Ortho-II): (a) an orthoimage training sample (Ortho-II) with the two-pixel wide expansion highlighted in yellow; (b) multi-view training sample (MV-IIB) generated using the nearest neighbor labeling method on a UAS image.

Following the naming conventions in Section 3.3, FCN-MV-I-OBIA is used to denote the OBIA classification utilizing the FCN classifier and MV-I training samples. FCN-MV-IIA-OBIA is the same as FCN-MV-I-OBIA, except that it utilizes the IIA method to prepare the training samples. Similarly, using the MV-IIB samples produces the FCN-MV-IIB-OBIA classification results.

3.5. Benchmark Classification Methods

We also implemented OBIA classification using DCNN for both the orthoimage and multi-view data. The former results are denoted DCNN-Ortho-OBIA, and the latter is referred to as DCNN-MV-OBIA. Like FCN-Ortho-OBIA, DCNN-Ortho-OBIA uses image patches that exactly enclose the objects. Different from FCN-Ortho-OBIA, DCNN-Ortho-OBIA only needs the label information of the central object for training, instead of all of the pixels within the image patch. DCNN-MV-OBIA obtains the final classification result for a given ground object by finding the majority vote of its multi-view object instance classification results, similar to the FCN-MV-OBIA method. The difference between DCNN-MV-OBIA and FCN-MV-OBIA is analogous to that between DCNN-Ortho-OBIA and FCN-Ortho-OBIA in terms of how the training samples are prepared. The DCNN classifier used in this study has similar layer types as the FCN, except that it does not need deconvolutional layers.

Traditional classifiers, such as Support Vector Machine (SVM) and Random Forest (RF), were tested under the OBIA framework using the orthoimage and multi-view data. The classification results utilizing orthoimage data are referred to as RF-Ortho-OBIA, and the ones using multi-view data are denoted RF-MV-OBIA. A similar naming convention was applied to the SVM classification, generating the SVM-Ortho-OBIA and SVM-MV-OBIA results when using the orthoimage and multi-view data, respectively. The RF-Ortho-OBIA and SVM-Ortho-OBIA represented the implementations of traditional OBIA classifiers as mentioned in the beginning of Section 3.3. Mean value, standard deviation, maximum, and minimum of the red, green, and blue bands were extracted and used as object features by the RF and SVM classifiers. Gray-Level Co-Occurrence Matrix (GLCM) texture features were excluded from classification after they were tested and found to have little effect on improving classification accuracy.

Geometric features (e.g., object area, border and shape index features) were not included for classification, since these features were not found to be useful for OBIA classification based on our preliminary experiments and previous studies [58,59]. The same types of features were used in the SVM-MV-OBIA and RF-MV-OBIA, similar to their orthoimage counterparts. However, these features were extracted from the multi-view object instances, and training was conducted using all the object instances of the training samples, as done with the FCN-MV-I-OBIA, FCN-MV-IIA-OBIA, and FCN-MV-IIB-OBIA classifications. Also, for a given orthoimage object, its final classification result was obtained by finding the majority through voting from its object instances.

The RF and SVM classifier parameters were adjusted to make sure that their performances were as good as possible for our dataset. For example, the number of trees for RF was tested from 50 to 150 with a 10-tree interval, with no improvement in classification accuracy when the number of trees was increased. Also, three types of kernels for SVM were tested in our preliminary experiments, and it was found that changing the kernel among Gaussian, linear, and polynomial kernels made little impact on SVM classification accuracy for our dataset. SVM is inherently a binary classifier; we adopted the one-versus-one option rather than the one-versus-all strategy to adapt it for multi-class classification based on previous studies [60]. These tests resulted in using RF with 50 trees and SVM with a Gaussian kernel to generate the classification results in this study.
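A minimal sketch of the benchmark object-based classifiers under the settings described above (12 per-object features: mean, standard deviation, maximum, and minimum of the R, G, B band values; RF with 50 trees; SVM with a Gaussian kernel and one-versus-one multi-class handling), using scikit-learn as an illustrative library rather than the software actually used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def object_features(rgb_pixels):
    """12 features per object: mean, std, max, min of the R, G, B band values.

    rgb_pixels : (n_pixels, 3) array of the object's pixel values
    """
    return np.concatenate([rgb_pixels.mean(axis=0), rgb_pixels.std(axis=0),
                           rgb_pixels.max(axis=0), rgb_pixels.min(axis=0)])

# X: (n_objects, 12) feature matrix, y: (n_objects,) class IDs -- placeholders here
X, y = np.random.rand(100, 12), np.random.randint(0, 7, 100)

rf = RandomForestClassifier(n_estimators=50).fit(X, y)
# SVC uses an RBF (Gaussian) kernel and a one-versus-one multi-class strategy internally
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```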

In summary, we experimented with 11 classification methods, including FCN-Ortho-I-OBIA, FCN-Ortho-II-OBIA, FCN-MV-I-OBIA, FCN-MV-IIA-OBIA, FCN-MV-IIB-OBIA, DCNN-Ortho-OBIA, DCNN-MV-OBIA, RF-Ortho-OBIA, RF-MV-OBIA, SVM-Ortho-OBIA, and SVM-MV-OBIA. All of these classification methods used the same set of orthoimage objects for training and testing. 400 orthoimage objects were randomly selected for each class, generating 2800 samples in total. Among the 2800 samples, 10% (i.e., 280) were randomly selected for testing and the remaining 90% (i.e., 2520) were used for training. The 2520 orthoimage objects were used to train all of the classifiers that utilized orthoimage data. 30,807 training object instances were automatically generated using boundary projection onto the UAS images from the 2520 orthoimage objects and were used to train all of the classifiers that utilized multi-view data. Regardless of the training object types (i.e., orthoimage or multi-view data), all the classifiers were evaluated using the 280 orthoimage objects reserved for testing. Obviously, to evaluate the multi-view classification results, multi-view object instances corresponding to the 280 orthoimage objects were extracted and used in the classification; 3447 object instances on UAS images were extracted for the 280 orthoimage objects. For each testing orthoimage object, its label was generated via voting from its object instances for all of the multi-view classifications. Figure 10 shows a simplified flowchart of the OBIA classification experiments conducted in this study. Given an orthoimage object, vertices on its boundary were projected onto UAS images to generate object instances on UAS images using the DSM data, camera rotation matrices, and boundary projection tool. The orthoimage object and object instances were used, respectively, with the classifiers SVM, RF, FCN, and DCNN, resulting in 11 sets of experiment results, as shown in Figure 10.

Figure 10. Simplified flowchart of this study's experimental design.

4. Results

Figure 11 ranks the overall accuracy for all classification results presented in Figure 10 from the lowest to the highest. Both deep learning (i.e., FCN and DCNN) and traditional classifiers (i.e., SVM and RF) are highlighted. The classification accuracies obtained in this study by conventional classifiers are comparable with previous studies conducting wetland mapping using the SVM classifier [7]. The lowest accuracy of 66.1% was obtained by traditional classification (SVM-Ortho-OBIA), while the highest of 87.1% was achieved by the proposed method (FCN-MV-IIA-OBIA), a 21.0% improvement. Regardless of using orthoimage or multi-view data, deep learning classifiers tend to show higher classification accuracy than traditional classifiers, even though the advantages vary with types of datasets and classifiers. For example, FCN-MV-IIA-OBIA got a 10.0% improvement when compared with RF-MV-OBIA using multi-view data (87.1% for FCN-MV-IIA-OBIA versus 77.1% for RF-MV-OBIA); in contrast, FCN-Ortho-II-OBIA obtained a 16.0% increase when compared with RF-Ortho-OBIA using orthoimage (82.1% for FCN-Ortho-II-OBIA versus 66.1% for RF-Ortho-OBIA). FCN produced much higher accuracy compared to the other three classifiers when all of them used orthoimage information (76.8% for FCN-Ortho-I-OBIA versus 66.1% for SVM-Ortho-OBIA, 66.9% for RF-Ortho-OBIA, and 67.1% for DCNN-Ortho-OBIA). After adding individual pixel information to FCN, it produced even higher accuracy (82.1% for FCN-Ortho-II-OBIA versus 76.8% for FCN-Ortho-I-OBIA). Multi-view data still benefitted the FCN for classification (81.8% for FCN-MV-I-OBIA versus 76.8% for FCN-Ortho-I-OBIA; 87.1% for FCN-MV-II-OBIA versus 82.1% for FCN-Ortho-II-OBIA).

Figure 11. Overall accuracies obtained from the 11 classification methods tested in this study.

Figure 12 shows the producer and user accuracies for all the 11 classification experiments. In Figure 12, deep learning classifiers (i.e., FCN and DCNN) and traditional classifiers (i.e., RF and SVM) are denoted using two different colors. Classification results using the orthoimage and multi-view data are represented by triangles and circles, respectively. It should also be noted that for a given classifier, the results of using the orthoimage and multi-view data are placed together in Figure 12 (see axis notations on the left boundary of Figure 12), with the classification results of the orthoimage data always being placed above the classification using multi-view data. Figure 12 shows that, with only few exceptions, multi-view classifiers tend to give higher classification accuracies than those using orthoimage data only for all classes. Additionally, deep learning classifiers generally tend to show higher accuracies than the traditional classifiers. For the invasive Cogongrass class (CG), DCNN-MV-OBIA obtained the highest producer and user accuracy, implying that this classification method is useful for mapping this invasive vegetation. While RF showed slightly better accuracy than SVM for the CG and FHp classes, these two classifiers presented comparable accuracies for the other classes. FCN-MV-II-OBIA showed higher accuracy than FCN-MV-I-OBIA for all the classes except the producer accuracy of the IP class and the user accuracy of the MFG class, indicating that the object's surrounding information benefitted the FCN classification in general. Figure 12 also indicates that hilly landscapes seem to benefit more from multi-view classification than the relatively flatter landscapes do. For example, FHp consists of various trees, resulting in more elevation variations than other land cover types in our study area, and Figure 12 shows that for most classifiers the accuracy improvements due to the use of multi-view data tend to be higher for the FHp class than for other classes. This conclusion needs further investigation in a topographically rugged landscape, which can be a subject for future studies.

Figure 12. Producer and user accuracies for different classification methods.

Figure 13 presents the classification maps derived from FCN, together with the orthoimage and reference map. When compared with the maps generated by the other classifiers, Figure 13e is closer to the reference map based on visual inspection, which is consistent with what we observed in Figure 11, emphasizing the superiority of the FCN-MV-IIA-OBIA classifier. Figure 13 also indicates that many IP areas are mislabeled as CG in Figure 13a,d, implying the relatively high commission error (i.e., lower user accuracy) of the CG class using FCN-Ortho-I-OBIA and FCN-MV-I-OBIA, which is in line with the results in Figure 12. Figure 14 displays the zoom-in version of Figure 13 to highlight the area impacted by Cogongrass. Figure 14b,c show the FCN-Ortho-II-OBIA and FCN-MV-IIA-OBIA having similar quality for mapping Cogongrass, reflecting the similar accuracy for Cogongrass shown in Figure 12. Notice that in the lower right corner, the IP area is more easily flooded than other areas due to its relatively lower elevation in this wetland setup, making this small patch of IP spectrally similar to the MFG class. Such a phenomenon, where different land covers may exhibit similar spectral response, is not uncommon in a wetland area, as indicated in several wetland studies [61,62]. The multi-view classification approaches seem more sensitive to this subtle change than their counterparts using orthoimage for classification, with more pixels of the IP class in this area being mistakenly classified as MFG by the FCN-MV-IIA-OBIA than by the FCN-Ortho-II-OBIA, which potentially constitutes one of the reasons for the relatively higher producer accuracy for the IP class obtained by FCN-Ortho-II-OBIA shown in Figure 12.

Figure 13. Classification maps: (a) FCN-Ortho-I-OBIA; (b) FCN-Ortho-II-OBIA; (c) orthoimage; (d) FCN-MV-I-OBIA; (e) FCN-MV-IIA-OBIA; (f) Reference Map.

Figure 14. Zoom-in area highlighting the Cogongrass area: (a) FCN-Ortho-I-OBIA; (b) FCN-Ortho-II-OBIA; (c) orthoimage; (d) FCN-MV-I-OBIA; (e) FCN-MV-IIA-OBIA; (f) Reference Map.

5. Discussion

FCN showed higher accuracy than traditional classifiers (i.e., RF, SVM), regardless of whether these classifiers were applied on orthoimage data (76.8% for FCN-Ortho-I-OBIA versus 66.1% for SVM-Ortho-OBIA and 66.9% for RF-Ortho-OBIA in Figure 11) or multi-view data (81.8% for FCN-MV-I-OBIA versus 77.1% for SVM-MV-OBIA and 77.9% for RF-MV-OBIA). The improvement by FCN, when compared to RF, is more obvious than the results of the study presented by [41], which showed FCN producing only a 1.7% improvement (88.0% for FCN versus an accuracy of 86.3% for RF) in an urban environment. This is in contrast with the 9.9% and 3.9% improvements shown by the present study, respectively, for the orthoimage and multi-view data. FCN even obtained comparable accuracy using orthoimage only, without context information, when compared with RF and SVM that used multi-view data (76.8% for FCN-Ortho-I-OBIA versus 77.1% for SVM-MV-OBIA and 77.9% for RF-MV-OBIA), which shows the relatively high efficiency of FCN in utilizing the training data for classification when compared with traditional classifiers. In addition to its superior classification accuracy, FCN does not require feature extraction and selection. These attributes make FCN a preferred classifier over RF and SVM for OBIA classifications from a perspective of accuracy. However, it should be mentioned that training FCN is extremely slow when compared with traditional classifiers, even though applying the trained FCN to testing data is as fast as traditional classifiers. While it only took a few minutes to train SVM and probably a shorter time for RF, training FCN for FCN-Ortho-I-OBIA and FCN-MV-I-OBIA took about 17 and 76 h, respectively, even with a computer equipped with a premium Graphics Processing Unit (GPU), like the NVIDIA GPU Pascal Titan X.

While FCN obtained higher accuracy than DCNN using the Ortho-I samples (76.8% for FCN-Ortho-I-OBIA versus 67.1% for DCNN-Ortho-OBIA in Figure 11), DCNN overtook FCN when the MV-I samples were used (83.9% for DCNN-MV-OBIA versus 81.8% for FCN-MV-I-OBIA), indicating that DCNN is more sensitive than FCN to the richness of training samples and that multi-view extraction provides an effective avenue to enrich the training samples for improving deep learning classifier performance. This observation is consistent with the study by [25], which showed that when the training sample size was increased, DCNN tended to show comparable or even slightly better results when compared to FCN.

Object surrounding information seems very useful for FCN to improve classification accuracy (82.1% for FCN-Ortho-II-OBIA versus 76.8% for FCN-Ortho-I-OBIA in Figure 11), and this advantage resulting from including the context information in the training samples still holds for the multi-view case (87.1% for FCN-MV-IIA-OBIA versus 81.8% for FCN-MV-I-OBIA).

Context information let FCN surpass the DCNN again regarding the classification accuracy (87.1% for FCN-MV-IIA-OBIA versus 83.9% for DCNN-MV-OBIA), implying that both training sample size and context information seem to control the relative performance between DCNN and FCN. The 3.2% classification accuracy increase from 83.9% by DCNN-MV-OBIA to 87.1% by FCN-MV-IIA-OBIA happens to be very close to the result from the study by [42], which compared patch-based DCNN and FCN for pixel-based classification using only orthoimage and found FCN outperformed patch-based DCNN with a 3.7% improvement (87.17% for FCN versus 83.46% for patch-based DCNN).

The approximate training data preparation method traded classification accuracy for implementation simplicity. Adding context information to the training data using the nearest neighborhood (MV-IIB) samples showed a lower but close accuracy when compared with the classification that used the MV-IIA training samples (87.1% for FCN-MV-IIA-OBIA). This observation further confirms the impact of including accurate and rich context information in the training samples on improving the classification accuracy using FCN.

Our results demonstrated that an inexpensive off-the-shelf camera mounted on a UAS can be used to create a decent map for a wetland area when the UAS images are processed using the multi-view classification scheme and deep learning techniques. However, this study only employed RGB images for wetland mapping, while recent studies indicated that multispectral and synthetic aperture radar (SAR) data have the potential to improve wetland classification [61-66]. Therefore, integrating multi-view data from multiple sources into the multi-view classification scheme should be investigated as one direction for future studies. Additionally, even though this study dealt with object-based areas that can be at the square centimeters level, we do not see major obstacles for implementing the methodology developed in this study at the landscape level, as long as the multi-view images can be produced. For such a purpose, it may require the remote sensing platform to operate at a much higher flight elevation.

6. Conclusions

This study proposed methods to utilize multi-view data for OBIA classification with the FCN as the classifier, to investigate whether multi-view data extraction and use can improve FCN performance. It also experimented with two methods for preparing the multi-view training samples, to test whether the object surroundings information would improve FCN performance. This study developed two methods to exactly and approximately label the training samples to explore the best practical methods to implement the multi-view OBIA using FCN. The study also compared the performance of FCN with other classifiers, such as the SVM, RF, and DCNN, using orthoimage and multi-view data. Our results indicated that multi-view data enabled FCN to improve classification accuracy, regardless of the method used for training data preparation. It also showed that multi-view OBIA using FCN trained with samples containing object surrounding information showed a much better performance than classification that used training samples without context information. In addition, our results indicated that training samples generated by an approximate method to label training object surroundings showed lower but comparable classification accuracy to classification that used the exact object surroundings labeling method to generate the multi-view training samples. Finally, this study concludes that FCN is recommended in preference to RF, SVM, and DCNN for OBIA using either orthoimage or multi-view data, if a relatively longer training time is tolerable.

Acknowledgments: Publication of this article was funded in part by the University of Florida Open Access Publishing Fund.

Author Contributions: Tao Liu and Amr Abd-Elrahman conceived and designed the experiments; Tao Liu performed the experiments, analyzed the data and wrote the first version of the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Rango, A.; Laliberte, A.; Steele, C.; Herrick, J.E.; Bestelmeyer, B.; Schmugge, T.; Roanhorse, A.; Jenkins, V. Using unmanned aerial vehicles for rangelands: Current applications and future potentials. Environ. Pract. 2006, 8, 159. [CrossRef]
2. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79. [CrossRef]
3. Im, J.; Jensen, J.; Tullis, J. Object-based change detection using correlation image analysis and image segmentation. Int. J. Remote Sens. 2008, 29, 399. [CrossRef]
4. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141. [CrossRef]
5. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2. [CrossRef]
6. Grybas, H.; Melendy, L.; Congalton, R.G. A comparison of unsupervised segmentation parameter optimization approaches using moderate- and high-resolution imagery. GISci. Remote Sens. 2017, 54, 515. [CrossRef]
7. Pande-Chhetri, R.; Abd-Elrahman, A.; Liu, T.; Morton, J.; Wilhelm, V.L. Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery. Eur. J. Remote Sens. 2017, 50, 564. [CrossRef]
8. Wang, C.; Pavlowsky, R.T.; Huang, Q.; Chang, C. Channel bar feature extraction for a mining-contaminated river using high-spatial multispectral remote-sensing imagery. GISci. Remote Sens. 2016, 53, 283. [CrossRef]
9. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24. [CrossRef]
10. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273. [CrossRef]
11. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527. [CrossRef] [PubMed]
12. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3 December 2012; pp. 1097.
13. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [CrossRef] [PubMed]
14. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.-R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag. 2012, 29, 82. [CrossRef]
15. Suk, H.-I.; Lee, S.-W.; Shen, D.; Alzheimer's Disease Neuroimaging Initiative. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. NeuroImage 2014, 101, 569. [CrossRef] [PubMed]
16. Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpurkar, P.; Migimatsu, T.; Cheng-Yue, R. An empirical evaluation of deep learning on highway driving. arXiv 2015, arXiv:1504.01716.
17. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A. Mastering the game of go without human knowledge. Nature 2017, 550, 354. [CrossRef] [PubMed]
18. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484. [CrossRef] [PubMed]
19. Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Dalla Mura, M. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139. [CrossRef]
20. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277. [CrossRef]

21. Ma, X.; Wang, H.; Wang, J. Semisupervised classification for hyperspectral image based on multi-decision labeling and deep feature learning. ISPRS J. Photogramm. Remote Sens. 2016, 120, 99. [CrossRef]
22. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26 July 2015; pp. 4959.
23. Vetrivel, A.; Gerke, M.; Kerle, N.; Nex, F.; Vosselman, G. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning. ISPRS J. Photogramm. Remote Sens. 2017. [CrossRef]
24. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic Object-based Image Analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159. [CrossRef]
25. Liu, T.; Abd-Elrahman, A.; Jon, M.; Wilhelm, V.L. Comparing Fully Convolutional Networks, Random Forest, Support Vector Machine, and Patch-Based Deep Convolutional Neural Networks for Object-Based Wetland Mapping Using Images from Small Unmanned Aircraft System. GISci. Remote Sens. 2018, 55, 243. [CrossRef]
26. Marcos, D.; Volpi, M.; Tuia, D. Learning rotation invariant convolutional filters for texture classification. arXiv 2016, arXiv:1604.06720.
27. Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. 2016, 113, 155. [CrossRef]
28. Celikyilmaz, A.; Sarikaya, R.; Hakkani-Tur, D.; Liu, X.; Ramesh, N.; Tur, G. A New Pre-training Method for Training Deep Learning Models with Application to Spoken Language Understanding. In Proceedings of the Interspeech 2016, San Francisco, CA, USA, 8 September 2016; pp. 3255.
29. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345. [CrossRef]
30. Xie, M.; Jean, N.; Burke, M.; Lobell, D.; Ermon, S. Transfer learning from deep features for remote sensing and poverty mapping. arXiv 2015, arXiv:1510.00098.
31. Koukal, T.; Atzberger, C.; Schneider, W. Evaluation of semi-empirical BRDF models inverted against multi-angle data from a digital airborne frame camera for enhancing forest type classification. Remote Sens. Environ. 2014, 151, 27. [CrossRef]
32. Su, L.; Chopping, M.J.; Rango, A.; Martonchik, J.V.; Peters, D.P. Support vector machines for recognition of semi-arid vegetation types using MISR multi-angle imagery. Remote Sens. Environ. 2007, 107, 299. [CrossRef]
33. Abuelgasim, A.A.; Gopal, S.; Irons, J.R.; Strahler, A.H. Classification of ASAS multiangle and multispectral measurements using artificial neural networks. Remote Sens. Environ. 1996, 57, 79. [CrossRef]
34. Longbotham, N.; Chaapel, C.; Bleiler, L.; Padwick, C.; Emery, W.J.; Pacifici, F. Very high resolution multiangle urban classification analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1155. [CrossRef]
35. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7 June 2015; pp. 3431.
36. Li, S.; Jiang, H.; Pang, W. Joint Multiple Fully Connected Convolutional Neural Network with Extreme Learning Machine for Hepatocellular Carcinoma Nuclei Grading. Comput. Biol. Med. 2017, 84, 156. [CrossRef] [PubMed]
37. Pei, M.; Wu, X.; Guo, Y.; Fujita, H. Small bowel motility assessment based on fully convolutional networks and long short-term memory. Knowl.-Based Syst. 2017, 121, 163. [CrossRef]
38. Huang, L.; Xia, W.; Zhang, B.; Qiu, B.; Gao, X. MSFCN-multiple supervised fully convolutional networks for the osteosarcoma segmentation of CT images. Comput. Methods Programs Biomed. 2017, 143, 67. [CrossRef] [PubMed]
39. Zhu, Y.; Zhang, C.; Zhou, D.; Wang, X.; Bai, X.; Liu, W. Traffic sign detection and recognition using fully convolutional network guided proposals. Neurocomputing 2016, 214, 758. [CrossRef]
40. ISPRS. 2D Semantic Labeling Vaihingen Data. 2017. Available online: http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-vaihingen.html (accessed on 8 October 2017).
41. Piramanayagam, S.; Schwartzkopf, W.; Koehler, F.; Saber, E. Classification of remote sensed images using random forests and deep learning framework. In Proceedings of the SPIE Remote Sensing, Edinburgh, UK, 26 September 2016; pp. 100040.

42. Sherrah, J. Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery. arXiv 2016, arXiv:1606.02585.
43. Marmanis, D.; Schindler, K.; Wegner, J.D.; Galliani, S.; Datcu, M.; Stilla, U. Classification with an edge: Improving semantic image segmentation with boundary detection. arXiv 2016, arXiv:1612.01337.
44. Grasslands, L. Blue Head Ranch. Available online: https://www.grasslands-llc.com/blue-head-florida (accessed on 1 November 2017).
45. Holm, L.G.; Plucknett, D.L.; Pancho, J.V.; Herberger, J.P. The World's Worst Weeds; University Press: Hong Kong, China, 1977.
46. Rutchey, K.; Schall, T.; Doren, R.; Atkinson, A.; Ross, M.; Jones, D.; Madden, M.; Vilchek, L.; Bradley, K.; Snyder, J. Vegetation Classification for South Florida Natural Areas; US Geological Survey: St. Petersburg, FL, USA, 2006.
47. Koukal, T.; Atzberger, C. Potential of multi-angular data derived from a digital aerial frame camera for forest classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 30. [CrossRef]
48. Im, J.; Quackenbush, L.J.; Li, M.; Fang, F. Optimum Scale in Object-Based Image Analysis. Scale Issues Remote Sens. 2014, 197. [CrossRef]
49. Audet, C.; Dennis, J.E., Jr. Analysis of generalized pattern searches. SIAM J. Optim. 2002, 13, 889. [CrossRef]
50. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580.
51. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21 June 2010; pp. 807.
52. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010; Springer: Berlin, Germany, 2010; pp. 177.
53. eCognition Developer 8.8 User Guide; Trimble Documentation: Munich, Germany, 2012.
54. Scholkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2001.
55. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5. [CrossRef]
56. Yegnanarayana, B. Artificial Neural Networks; PHI Learning Pvt. Ltd.: Delhi, India, 2009.
57. Kuusk, A. The hot spot effect in plant canopy reflectance. In Photon-Vegetation Interactions; Springer: Berlin, Germany, 1991; pp. 139.
58. Gupta, N.; Bhadauria, H. Object based Information Extraction from High Resolution Satellite Imagery using eCognition. Int. J. Comput. Sci. Issues (IJCSI) 2014, 11, 139.
59. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799. [CrossRef]
60. Hsu, C.-W.; Lin, C.-J. A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Netw. 2002, 13, 415. [PubMed]
61. Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote sensing for wetland classification: A comprehensive review. GISci. Remote Sens. 2017. [CrossRef]
62. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.; Brisco, B. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GISci. Remote Sens. 2017, 54, 779. [CrossRef]
63. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.E.; Brisco, B.; Hanson, A. Wetland Classification Using Multi-Source and Multi-Temporal Optical Remote Sensing Data in Newfoundland and Labrador, Canada. Can. J. Remote Sens. 2017, 43, 360. [CrossRef]
64. Rapinel, S.; Hubert-Moy, L.; Clément, B. Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping. Int. J. Appl. Earth Obs. Geoinf. 2015, 37, 56. [CrossRef]

65. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B.; Mahdavi, S.; Amani, M.; Granger, J.E. Fisher Linear Discriminant Analysis of coherency matrix for wetland classification using PolSAR imagery. Remote Sens. Environ. 2018, 206, 300. [CrossRef]
66. Wilusz, D.C.; Zaitchik, B.F.; Anderson, M.C.; Hain, C.R.; Yilmaz, M.T.; Mladenova, I.E. Monthly flooded area classification using low resolution SAR imagery in the Sudd wetland from 2007 to 2011. Remote Sens. Environ. 2017, 194, 205. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).