Citation
Scene Analysis Using the Markov Ground Region Segmentation System (MGRSS)

Material Information

Title:
Scene Analysis Using the Markov Ground Region Segmentation System (MGRSS)
Creator:
Dobbins, Peter J
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
2017
Language:
English
Physical Description:
1 online resource (159 p.)

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Computer Engineering
Computer and Information Science and Engineering
Committee Chair:
Wilson, Joseph N.
Committee Co-Chair:
Gader, Paul D.
Committee Members:
Rangarajan, Anand
Glenn, Alina Zare

Subjects

Subjects / Keywords:
constrained-clustering -- ground-penetrating-radar -- image-segmentation -- markov-random-field -- scene-analysis -- semi-supervised-clustering -- semi-supervised-learning
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Computer Engineering thesis, Ph.D.

Notes

Abstract:
This work performs scene analysis, representing and understanding the elements contained in a defined area under the ground. Elements of interest include: the ground layer, sub-surface layers, explosive hazards, and non-explosive (clutter) objects. The following sections describe my implementation of the Markov Ground Region Segmentation System (MGRSS). MGRSS provides the user with an interactive tool for evaluating segmentation algorithm scenarios and viewing three-dimensional scene models. The results produced may be used to classify buried hazards as well as identify false alarms. My technique employs a multi-stage process: data correlation, over-segmentation, region clustering, and model segmentation. The data representation of the scene is composed of response vectors collected by ground penetrating radar (GPR) devices. MGRSS examines data collected by hand-held and vehicular-mounted GPR detection systems. In order for data from different sensor types to be utilized, the data is formatted into a structured representation. Surrounding a target area of interest, sequences of response vectors are grouped into frames and the sequence of frames is ordered into a three-dimensional collection. However, the collection sequence of hand-held system data may not be uniformly sampled when collected by a human-operator. Non-uniformly sampled data are not compatible with a grid representation. Interpolation techniques create a structured view of such non-uniform data. MGRSS uses the structured scene to segment images into super-voxels representing related regions in the volume. Super-voxel regions serve as the nodes in a Markov Random Field (MRF). Edges in the graph are weights between region neighbors. Inference is drawn from the MRF to connect similar regions and model the scene. The principles of semi-supervised clustering are used to implement the Probability-Based Training Realignment (PBTR) algorithm that assists in the inference process. Different clustering and over-segmentation methods are incorporated into MGRSS. The Jaccard Index and the area under the ROC curve are used to evaluate MGRSS model scenarios. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2017.
Local:
Adviser: Wilson, Joseph N.
Local:
Co-adviser: Gader, Paul D.
Statement of Responsibility:
by Peter J Dobbins.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
LD1780 2017 ( lcc )

Full Text

SCENE ANALYSIS USING THE MARKOV GROUND REGION SEGMENTATION SYSTEM (MGRSS)

By

PETER JONATHAN DOBBINS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2017

© 2017 Peter Jonathan Dobbins

ACKNOWLEDGMENTS

My work would not be possible without the wisdom and insight of Joe Wilson. Joe, I admire the example you set in life and work. You motivate me to be a better researcher and to always create a high quality product by doing the "right" thing. Thank you to my committee members Alina Glenn, Anand Rangarajan, and Paul Gader for your guidance. Your expertise was valuable in enhancing both the breadth and depth of my work. Your feedback not only helped me learn what was missing from my efforts, but also gave me the tools to enhance my final product. Thank you to Adrienne Cook, Joan Crisman, John Bowers, Kristina Sapp, and all of the members of CISE Student Services. You always greeted me with a smile, brought levity to the long days in the office, and helped me stay on track. Thank you to Brandon Smock for being my friend and desk neighbor, setting an example of hard work, and pushing me to get things done.

Thank you to my friends for their encouragement and taking the time to fit my crazy schedule into their lives so that we could connect when I was free. Amy and Jeff gave me running breaks and kept me focused, Andy made time for lunches and modeled perseverance, my roommates Anthony and Edward encouraged me every day, Derek gave me perspective and went on walks around the neighborhood, and Super Sean Regisford helped me stay sane and was, as always, super!

Thank you to all of my family and specifically my parents Ivan and Carol, my brother Tom, and my dear aunt Sally for their constant support and faithful involvement in my life. Your desire to always journey with me is precious and your unwavering devotion has helped me see this through to the end. Vikki, I am grateful for all the encouragement and cheering on you give to me. I appreciate how you make yourself available to help me out and seek out my success at every turn. Thank you.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 LITERATURE REVIEW
  2.1 Scene Analysis
  2.2 Ground Penetrating Radar (GPR) Data
    2.2.1 Vehicle-Mounted Detection Systems
    2.2.2 Hand-Held Detection Systems
      2.2.2.1 Human-operator hand-held distinctions
      2.2.2.2 Recovery procedures
  2.3 Reciprocal Pointer Chains (RPC)
  2.4 A Layer Tracking Approach to Buried Surface Detection
  2.5 Segmentation
    2.5.1 Segmentation Using CIE-Lab and k-means
    2.5.2 Markov Random Fields (MRF)
    2.5.3 Cartoon Model
    2.5.4 Super-Pixel and Super-Voxel Segmentation
      2.5.4.1 Efficient graph-based segmentation (GB)
      2.5.4.2 Efficient hierarchical graph-based segmentation
      2.5.4.3 Simple linear iterative clustering (SLIC)
    2.5.5 Supervoxel-based Segmentation of 3D Volumetric Images
  2.6 Clustering
    2.6.1 Hierarchical Clustering
    2.6.2 Competitive Agglomeration
    2.6.3 Constrained Clustering
  2.7 MRF Tuning
    2.7.1 Graph-Based Semi-Supervised Classification on Very High Resolution Remote Sensing Images
    2.7.2 A Note on Semi-Supervised Learning Using Markov Random Fields
    2.7.3 Parameter Learning
    2.7.4 Semi-Supervised Clustering

3 METHODOLOGY
  3.1 Implementation of MGRSS
    3.1.1 Pre-Processing
    3.1.2 Algorithm Processing
    3.1.3 Post-Processing
  3.2 Modelling Human-Operator Collected Hand-Held Data
  3.3 MRF Tuning
  3.4 Probability-Based Training Realignment
  3.5 MGRSS Model Scenarios

4 RESULTS
  4.1 Process Elements
    4.1.1 Structuring Non-Uniform Samples
    4.1.2 Generating Super-Voxel Regions
    4.1.3 Truth Scenes
    4.1.4 Training Regions
  4.2 Evaluation of Truth and Training Cluster Separability
  4.3 Evaluation of MGRSS Model Scenario Performance
    4.3.1 Jaccard Index Evaluation
    4.3.2 Area Under the Curve Evaluation
      4.3.2.1 Comparing 3DUCM Hierarchy binary confidence to MGRSS probability confidence
      4.3.2.2 Comparing 3DUCM Hierarchy binary confidence to MGRSS binary confidence
  4.4 MGRSS System Components
    4.4.1 MGRSS View Model GUI
    4.4.2 MGRSS 3DUCM Hierarchy GUI
    4.4.3 MGRSS Viewer
    4.4.4 MGRSS Truth Tool
    4.4.5 MGRSS Training Tool

5 CONCLUSIONS
  5.1 MGRSS
  5.2 Future Work

APPENDIX

A MGRSS PBTR ALGORITHM
B TRUTH SCENE VISUALIZATIONS
C MGRSS MODEL SCENARIO RESULTS
D MGRSS AUC RESULTS

REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

1-1 History of probability of detection (PD) and false alarm rates (FAR) per meter squared (m2) in tests with explosive hazards emplaced in prepared lanes.
3-1 Algorithm combinations included in MGRSS algorithm processing, forming twelve of thirteen model scenarios.
4-1 Mean performance values for EHGB and SLIC calculated using scenarios of 11 and 35 scenes to analyze.
4-2 Number of 3D-UCM and SLIC super-voxel regions (SVR) and neighbor edges.
4-3 Number of clusters found by CA and HC for 3D-UCM and SLIC.
4-4 Six types identified in each truth scene.
4-5 Details of the ten user-annotated truth scenes used in testing.
4-6 Details of the fourteen training scenes used to select training regions.
4-7 Scene by scene J3 values.
4-8 Number of voxels assigned to each truth type across all scenes.
4-9 Percent of voxels assigned to each truth type across all scenes.
4-10 Percent Jaccard Index mean values of each model scenario over the set of scenes, including the average.
4-11 Ranking the Jaccard Index value of each model scenario over the set of scenes, including the average rank.
4-12 Table of the most frequent rank for each model scenario.
4-13 Complete scene AUC percentages calculated from ROCs generated using 3DUCM Hierarchy binary confidences and MGRSS binary confidences.
4-14 Complete scene AUC rankings calculated from ROCs generated using 3DUCM Hierarchy binary confidences and MGRSS binary confidences.
4-15 Complete list of label options.
B-1 Details of the ten user-annotated truth scenes used in testing.
C-1 Figure and model scenario map.
D-1 Complete scene AUC percentages.
D-2 Complete scene AUC rank.
D-3 Ground AUC percentages.
D-4 Ground AUC rank.
D-5 Background AUC percentages.
D-6 Background AUC rank.
D-7 Low energy object AUC percentages.
D-8 Low energy object AUC rank.
D-9 High energy object AUC percentages.
D-10 High energy object AUC rank.
D-11 Low energy layer AUC percentages.
D-12 Low energy layer AUC rank.
D-13 High energy layer AUC percentages.
D-14 High energy layer AUC rank.

LIST OF FIGURES

1-1 View of Paris from the Eiffel Tower, November 9, 2010. Courtesy of Peter Jonathan Dobbins.
1-2 Paris landmarks, November 9, 2010. Courtesy of Peter Jonathan Dobbins.
1-3 Layer transitions in a scene.
1-4 Layers and object in a two-dimensional frame.
1-5 Object of interest in three-dimensional space.
1-6 GUI showing MGRSS results.
2-1 All scan types.
2-2 Structure of data in a three-dimensional view.
2-3 Collection sample of vehicular data.
2-4 Hand-held sweep collected by a robot arm.
2-5 One-dimensional layers found by RPC.
2-6 Example dissimilarity graph.
2-7 Finding reciprocation in layer transitions.
2-8 Wireframe layer tracking.
2-9 Sample GPR voltage displayed as an image.
2-10 CIE-Lab and k-means segmentation label result.
3-1 MGRSS system model.
3-2 MGRSS pre-processing module.
3-3 MGRSS algorithm processing module.
3-4 Sample EHGB segmentation.
3-5 Sample SLIC segmentation.
3-6 Sample 3D-UCM segmentation generated from frame 31 of scene #8.
3-7 Sample SLIC segmentation generated from frame 31 of scene #8.
3-8 MGRSS post-processing module.
3-9 PBTR sample training regions and types.
3-10 PBTR training cluster re-alignment iteration #1.
3-11 PBTR assigning labels example I.
3-12 PBTR assigning labels example II.
3-13 PBTR cluster pruning.
3-14 PBTR training cluster re-alignment iteration #2.
4-1 Plot of the mean squared error for nearest neighbor resampling using B-scan (black), Grid (blue), and UTM (orange) coordinates.
4-2 Sample B-scan frames from non-uniform re-sampling interpolation.
4-3 Number of 3D-UCM and SLIC super-voxel regions (SVR) and neighbor edges.
4-4 Number of clusters found by CA and HC for 3D-UCM and SLIC.
4-5 Histogram of voxel sizes observed in 3D-UCM hierarchy 0 super-voxels.
4-6 Histogram of voxel sizes observed in SLIC super-voxels.
4-7 Bleed effect merging two distinct types into one super-voxel region.
4-8 Disconnected types observed in truth scene.
4-9 J3 value measuring truth and training separability.
4-10 Jaccard Index results for all model scenarios over all scenes emphasizing the differences with and without PBTR.
4-11 Jaccard Index results for all model scenarios over all scenes emphasizing the different super-voxel segmentations.
4-12 3DUCM Hierarchy binary confidence ROCs.
4-13 MGRSS probability confidence ROCs.
4-14 MGRSS binary confidence ROCs.
4-15 Average truth type percentage AUC for all model scenarios over all scenes generated from 3DUCM Hierarchy binary confidence ROCs and MGRSS probability confidence ROCs.
4-16 Average truth type rankings for all scenarios over all scenes generated from 3DUCM Hierarchy binary confidence ROCs and MGRSS probability confidence ROCs.
4-17 Average truth type percentage AUC for all model scenarios over all scenes generated from 3DUCM Hierarchy binary confidence ROCs and MGRSS binary confidence ROCs.
4-18 Average truth type rankings for all scenarios over all scenes generated from 3DUCM Hierarchy binary confidence ROCs and MGRSS binary confidence ROCs.
4-19 MGRSS View Model GUI.
4-20 MGRSS 3DUCM Hierarchy GUI of scene #8 from hierarchy #17.
4-21 MGRSS 3DUCM Hierarchy GUI of scene #8 from hierarchy #21.
4-22 MGRSS Viewer.
4-23 Default 3D element view.
4-24 Zoomed 3D element view.
4-25 MGRSS Truth Tool.
4-26 Selecting regions on the Truth Tool.
4-27 Pop-up providing label selection.
4-28 All regions labeled.
4-29 Choosing a region for detail viewing.
4-30 Details of the selected region.
4-31 MGRSS Training Tool.
A-1 PBTR training regions.
A-2 PBTR unknown (new) regions.
A-3 PBTR initial setup: assign labels.
A-4 PBTR initial setup: estimate parameters θ.
A-5 PBTR re-align training regions, forming the hard constraint.
A-6 PBTR estimate parameters θ′.
A-7 PBTR assigning labels part 1.
A-8 PBTR assigning labels part 2.
A-9 PBTR assigning labels part 3.
A-10 PBTR assigning labels part 4.
A-11 PBTR assigning labels part 5.
A-12 PBTR assigning labels part 6.
A-13 PBTR pruning cluster.
A-14 PBTR parameter estimation θ″.
A-15 PBTR until convergence, return to step 2.
A-16 PBTR re-align training regions, forming the hard constraint.
B-1 Scene #1 truth at frame 31.
B-2 Scene #2 truth at frame 31.
B-3 Scene #3 truth at frame 31.
B-4 Scene #4 truth at frame 31.
B-5 Scene #5 truth at frame 31.
B-6 Scene #6 truth at frame 31.
B-7 Scene #7 truth at frame 31.
B-8 Scene #8 truth at frame 31.
B-9 Scene #9 truth at frame 31.
B-10 Scene #10 truth at frame 31.
C-1 MGRSS View Model GUI with interface details.
C-2 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 21.
C-3 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 21.
C-4 3DUCM Hierarchy scene #8 frame 21.
C-5 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 22.
C-6 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 22.
C-7 3DUCM Hierarchy scene #8 frame 22.
C-8 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 23.
C-9 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 23.
C-10 3DUCM Hierarchy scene #8 frame 23.
C-11 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 24.
C-12 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 24.
C-13 3DUCM Hierarchy scene #8 frame 24.
C-14 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 25.
C-15 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 25.
C-16 3DUCM Hierarchy scene #8 frame 25.
C-17 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 26.
C-18 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 26.
C-19 3DUCM Hierarchy scene #8 frame 26.
C-20 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 27.
C-21 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 27.
C-22 3DUCM Hierarchy scene #8 frame 27.
C-23 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 28.
C-24 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 28.
C-25 3DUCM Hierarchy scene #8 frame 28.
C-26 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 29.
C-27 MGRSS 3DUCM CA and 3DUCM CA PBTR scene #8 frame 29.
C-28 3DUCM Hierarchy scene #8 frame 29.
C-29 MGRSS SLIC CA and SLIC CA PBTR scene #8 frame 30.
C-30 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 30.
C-31 3DUCM Hierarchy scene #8 frame 30.
C-32 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 31.
C-33 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 31.
C-34 3DUCM Hierarchy scene #8 frame 31.
C-35 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 32.
C-36 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 32.
C-37 3DUCM Hierarchy scene #8 frame 32.
C-38 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 33.
C-39 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 33.
C-40 3DUCM Hierarchy scene #8 frame 33.
C-41 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 34.
C-42 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 34.
C-43 3DUCM Hierarchy scene #8 frame 34.
C-44 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 35.
C-45 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 35.
C-46 3DUCM Hierarchy scene #8 frame 35.
C-47 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 36.
C-48 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 36.
C-49 3DUCM Hierarchy scene #8 frame 36.
C-50 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 37.
C-51 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 37.
C-52 3DUCM Hierarchy scene #8 frame 37.
C-53 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 38.
C-54 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 38.
C-55 3DUCM Hierarchy scene #8 frame 38.
C-56 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 39.
C-57 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 39.
C-58 3DUCM Hierarchy scene #8 frame 39.
C-59 MGRSS SLIC CA and MGRSS SLIC CA PBTR scene #8 frame 40.
C-60 MGRSS 3DUCM CA and MGRSS 3DUCM CA PBTR scene #8 frame 40.
C-61 3DUCM Hierarchy scene #8 frame 40.

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

SCENE ANALYSIS USING THE MARKOV GROUND REGION SEGMENTATION SYSTEM (MGRSS)

By Peter Jonathan Dobbins

December 2017
Chair: Joseph N. Wilson
Major: Computer Engineering

This work performs scene analysis, representing and understanding the elements contained in a defined area under the ground. Elements of interest include: the ground layer, sub-surface layers, explosive hazards, and non-explosive (clutter) objects. The following sections describe my implementation of the Markov Ground Region Segmentation System (MGRSS). MGRSS provides the user with an interactive tool for evaluating segmentation algorithm scenarios and viewing three-dimensional scene models. The results produced may be used to classify buried hazards as well as identify false alarms. My technique employs a multi-stage process: data correlation, over-segmentation, region clustering, and model segmentation. The data representation of the scene is composed of response vectors collected by ground penetrating radar (GPR) devices. MGRSS examines data collected by hand-held and vehicular-mounted GPR detection systems. In order for data from different sensor types to be utilized, the data is formatted into a structured representation. Surrounding a target area of interest, sequences of response vectors are grouped into frames and the sequence of frames is ordered into a three-dimensional collection. However, the collection sequence of hand-held system data may not be uniformly sampled when collected by a human operator. Non-uniformly sampled data are not compatible with a grid representation. Interpolation techniques create a structured view of such non-uniform data.

MGRSS uses the structured scene to segment images into super-voxels representing related regions in the volume. Super-voxel regions serve as the nodes in a Markov Random Field (MRF). Edges in the graph are weights between region neighbors. Inference is drawn from the MRF to connect similar regions and model the scene. The principles of semi-supervised clustering are used to implement the Probability-Based Training Realignment (PBTR) algorithm that assists in the inference process. Different clustering and over-segmentation methods are incorporated into MGRSS. The Jaccard Index and the area under the ROC curve are used to evaluate MGRSS model scenarios.
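The Jaccard Index comparison mentioned above can be illustrated with a short sketch. This is a minimal example under my own assumptions (array names, label encoding, and the premise that truth and predicted labelings are stored as equally sized integer volumes); it is not the MGRSS implementation.

    import numpy as np

    def jaccard_index(truth, predicted, label):
        """Jaccard Index for one label: |intersection| / |union| of the voxel sets."""
        t = (truth == label)
        p = (predicted == label)
        union = np.logical_or(t, p).sum()
        if union == 0:
            return np.nan  # label absent from both volumes
        return np.logical_and(t, p).sum() / union

    # Hypothetical volumes: depth x channel x frame arrays of integer type labels.
    truth = np.random.randint(0, 6, size=(415, 24, 61))
    predicted = np.random.randint(0, 6, size=(415, 24, 61))
    scores = {int(lbl): jaccard_index(truth, predicted, lbl) for lbl in np.unique(truth)}
    print(scores)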

CHAPTER 1
INTRODUCTION

One day when he was out walking, he came to an open place in the middle of the forest, and in the middle of this place was a large oak-tree, and from the top of the tree, there came a loud buzzing-noise.
A. A. Milne, Winnie the Pooh

Objects found in a given space contribute to the representation of the surrounding space. In the same way the space surrounding objects helps to define the objects the space contains. Understanding a scene is easier when the scene is constrained to a defined region and when each object in the scene can be more clearly identified. Consider a few examples. First, the scene described above by Milne creates a setting that is easy to imagine; a forest, a large tree stands out, and a common noise, buzzing. Knowing the buzzing noise is a beehive helps lead to the entire context, specifically that this is the introduction to Winnie-the-Pooh [1]. Another example is a sentence. Individual words are objects. Each word contributes to the comprehension of the sentence. Having more knowledge about a word definition, colloquial context, and grammatical usage increases the reader's ability to parse and then comprehend the entire sentence. Also consider the visual scene provided in Figure 1-1. In this scene, identifying famous landmarks will help identify the city being displayed. In this case, the scene is a photograph taken from the Eiffel Tower, looking across Paris, France. Observing the Sacré-Coeur Basilica, the Grand Palais, and the Seine River indicate to the viewer that this is, in fact, Paris. Before viewing the picture, if it was revealed that the scene would be Paris, an observer would begin to anticipate seeing famous Paris landmarks like the ones listed and highlighted in Figure 1-2. As well, identifying an individual object in the scene,

like the Grand Palais, provides the viewer with the anticipation of observing other Paris landmarks, such as the ones noted.

Figure 1-1. View of Paris from the Eiffel Tower, November 9, 2010. Courtesy of Peter Jonathan Dobbins.

As these examples show, when more knowledge is discovered about a given scene, a better understanding of the elements in the scene is developed and vice versa. While there are many ways to categorize scenes and many different types of scenes that can be examined, my work focuses on ground scenes in post-war and active conflict regions around the world. These arenas contain many explosive hazards in undocumented locations even though the regional conflict may have ended [2]. This creates a need to find such objects and safely remove or detonate them before they cause harm to the inhabitants of the region. When searching such a scene, many elements might be found in the substructure of the ground, including: clutter objects, explosive objects, and the layers within the soil itself. The ability to classify different types of anomalies observed within

Figure 1-2. Paris landmarks, November 9, 2010. Courtesy of Peter Jonathan Dobbins.

the ground assists in representing the entire ground scene and subsequently identifying objects of special significance. In 2011 Glenn, Gader, and Wilson reported that from 1997 to 2004 many advances were made in the ability to not only detect explosive hazards, but also reduce false alarm rates [3]. Table 1-1 highlights the progress made during that time. These efforts focused on finding anomalies in the ground at a specific point in the detection sequence [4]. Hazard classification techniques are now examining the same data using new paradigms in order to increase detection and reduce false alarms by another order of magnitude. While background noise removal was used to better isolate anomalies, prior work has not considered the progression through the entire scene when classifying elements within the scene. Examining the entire scene is one way to help move the detection process into its next evolution. My work extends the efforts in explosive hazard detection by using scene models to enhance the accurate classification of elements within the detection scene.

Table 1-1. History of probability of detection (PD) and false alarm rates (FAR) per meter squared (m2) in tests with explosive hazards emplaced in prepared lanes.

Year  PD  FAR/m2
1997  83  0.45
1998  95  0.04
2003  90  0.02
2004  90  0.0002
2004  99  0.002

When moved over a defined area of the Earth, ground penetrating radar (GPR) devices collect a sequence of sensed values. Hand-held and vehicular-mounted detection devices both use radar transmitters and receivers to collect data. When a transmitter sends out a signal, each receiver collects a response vector with voltage responses at a sequence of depths in the ground below the receiver position. The structure of the two devices differ and the manner in which each device is used also differs. Vehicular-mounted detection devices obtain response vectors in a naturally framed structure because the physical detection device is made up of a series of receivers ordered one after the next in a linear arrangement. Combining individual response vectors in the receiver array creates a two-dimensional frame. As the device is moved forward, a three-dimensional scene of collected data develops. Common hand-held detection devices have between one and three receivers on board the unit. The hand-held device is swung back and forth when collecting data. Data from each swing, or sweep, of the device can be represented as a two-dimensional frame. Subsequent sweeps can be viewed as the subsequent frames and solutions implemented for vehicular-mounted systems can potentially be utilized. However, there are differences in the collection process implemented by these systems. Typically, hand-held systems are held close to the ground and exhibit radar ground coupling. Vehicular systems must have more stand-off between the sensor and the ground, thus ground coupling is less likely. In addition, a hand-held device might not always be used in an ordered fashion like the vehicular system. In 2006, Ngan et al. examined how a specific area might be interrogated

repeatedly by a hand-held device when an object of interest is believed to be located within that area [5]. As different objects of interest are found, the sweep pattern collected over the region of interest will not be structured. Therefore, individual response vectors must be re-constructed into neighborhoods of related points so that a three-dimensional volume can be examined and appropriate transitions between scans can be made. Figure 1-3 shows a series of frames and highlights how one of the layers in the scene is observed throughout. Note that the frames are two-dimensional and contain one-dimensional layers. When combined together, the sequence of images forms a three-dimensional volume which has two-dimensional layers running through it. Figure 1-4 is an example of a two-dimensional scene that has an observed ground layer, a reflected signal of the ground layer, and an object of interest. Figure 1-5 highlights an object of interest below the ground layer in three-dimensional space.

Figure 1-3. Layer transitions in a scene.

Figure 1-4. Layers and object in a two-dimensional frame.

Figure 1-5. Object of interest in three-dimensional space.

The pixels in a two-dimensional image and the voxels in the volume of the three-dimensional scene naturally fit into the structure of a Markov Random Field (MRF) [6, 7]. The simplest way to define a node neighborhood is by the nearest two- and three-dimensional voxel positions in every direction. In this case, each voxel is a node in the graph. Two complications arise in this implementation. First, graph sizes are on the order of hundreds of thousands of nodes making it computationally infeasible to draw inference from the MRF. Second, while this graph framework naturally fits with vehicular data, the hand-held collection process does not always produce such a nicely structured graph of connections. To overcome these limitations, MGRSS defines neighbors by regions of pixels (super-pixels) or voxels (super-voxels). Each super-pixel or super-voxel is a node in the MRF. In order to perform inference in an MRF, an initial label set is required. Clustering is used to group super-voxels into an initial label set. After estimating the parameters of the MRF, labels are reassigned. At each iteration of the MRF, training parameters are used to influence the learned structure into a more representative model. During this process, any labels that are no longer used are pruned from the label set. Once the change between label sets has converged, the current value of the original GPR data, the user-annotated truth labeling, and the MGRSS generated labeling with and

without training parameters are displayed in a graphical environment allowing the user to navigate through the frames of the scene. Figure 1-6 is a sample frame from this GUI.

Figure 1-6. GUI showing MGRSS results.

In the following chapters, I describe the implementation of the Markov Ground Region Segmentation System (MGRSS). Chapter 2 provides background to this work and how data collected by different collection systems is displayed in a scene grid. The methodology for tracking notable elements within the scene is given in Chapter 3. The results produced by different modeling scenarios, the MGRSS system architecture, and the graphical interface MGRSS provides are evaluated in Chapter 4. Finally, Chapter 5 summarizes my efforts and discusses ongoing initiatives related to this project.

CHAPTER 2
LITERATURE REVIEW

In this chapter, I review how previous work influences and assists in my pursuits. First, in Section 2.1 I review scene analysis and how it relates to radar data. The details related to GPR devices, how they collect data, and the distinctions between vehicle-mounted and hand-held collection systems are covered in Section 2.2. Reciprocal Pointer Chains (RPC) find potential one-dimensional layers in two-dimensional frames and are discussed in Section 2.3. Previous work I implemented uses RPC and the Viterbi algorithm to create a wireframe model of layers in three dimensions and is analyzed in Section 2.4. Sections 2.5 through 2.7 discuss the background necessary for each of the steps in the process described within Chapter 3; specifically, segmenting images, optimizing an MRF, clustering methodologies, and employing MRF tuning techniques.

2.1 Scene Analysis

As the examples in Chapter 1 demonstrate, there are diverse ways to record a scene: auditory descriptions, textual notes, and visual imagery. In this analysis, I explore visual scenes found in the format of a sequence of two-dimensional images. I refer to these images as GPR images, since they are generated from the voltage response vectors obtained when using a GPR collection device. The sequence of GPR images composes the representation of the three-dimensional GPR scene. While the sequential set of GPR images is similar to a sequence of video frames, there are noticeable distinctions. First, the GPR images are not subject to interpretation in the same way as other visual scenes; there is no perspective through which to differentiate foreground and background elements in the image. Next, GPR image elements are static. While elements might grow or shrink as the scene progresses, they do not move dynamically within the scene. Consider video recordings. A video scene and its elements may be perceived differently depending on the distance between the camera and each scene element. As well, the scene will appear different if the camera is stationary as opposed to if it is in motion. In contrast, each frame

of the set of GPR images denotes a movement of the collection device. Any lack of change between two frames shows that the elements within the combined scene exist in the same way at the subsequent geographic location of the ground, no matter how close the frames are in distance. Gestalt feature analysis for human visual perception is a common methodology used to identify elements in visual images [8]. Using the features a human perceives in visual observation helps identify regions of note within a given image. Identifying the elements of an image performs a segmentation of the image. Many algorithms look for salient regions using Gestalt principles, for example Wu and Zhang did this in 2013 [9]. Saliency in the image emphasizes looking for a distinction between foreground and background elements in the image. In 2011, Kootstra, Bergstrom, and Kragic reference the human fixations in an image as the points in the image that draw the human eye and the center point of the salient elements [10]. GPR images do not have such a distinction, since the content at each frame is all at the same level and individual frames do not have any three-dimensional depth. As well, when searching for the salient regions, there is an emphasis on simply labeling the foreground information as a single entity. However, within a GPR scene MGRSS must find multiple elements of interest, such as different ground layers, clutter objects, and explosive objects, that all need to be distinguished from each other.

2.2 Ground Penetrating Radar (GPR) Data

Two types of GPR sensor platforms are vehicular-mounted detection systems and hand-held detection systems [11]. Both types collect a series of one-dimensional representations, or A-scans, of the interrogated soil. Each A-scan is a vector of response values collected by a GPR device. When combined together into a set, a sequence of A-scans is referred to as a B-scan. Each B-scan can be thought of as a frame (vehicular) or sweep (hand-held) of data. The specifics of the data collected by each sensor format must be examined in order to understand and model the scene in question. Figure 2-1 (A)

is a sample A-scan response vector. Figure 2-1 (B) is a B-scan. The progression of layers through a C-scan, or sequence of frames, is shown in Figure 2-1 (C).

Figure 2-1. (A) A-scan. (B) B-scan. (C) C-scan with layer transitions.

Relevant features of the collected data are: channel, depth, frame, and voltage.
- Channel: the receiver channel on the collection device.
- Depth: the time series resolved depth at which the voltage is found.
- Frame: the frame in which the voltage is found.
- Voltage: the energy signature collected by the receiver on the GPR device.

In my representation of the scene volume, the depth is the first dimension, the channel is the second dimension, and the frame is the third dimension. The value of the pixel at position depth by channel by frame is based upon the voltage response. Figure 2-2 is a three-dimensional diagram of the representation I use.

Figure 2-2. Structure of data in a three-dimensional view.
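As a concrete illustration of this depth-by-channel-by-frame layout, the sketch below stacks a list of B-scan frames into a single volume. It is a minimal NumPy example under my own assumptions (array names and example dimensions); it is not taken from the MGRSS code.

    import numpy as np

    n_depths, n_channels, n_frames = 415, 24, 61   # example dimensions only

    # Each frame is a B-scan: a (depth x channel) array of voltage responses.
    frames = [np.random.randn(n_depths, n_channels) for _ in range(n_frames)]

    # Stack the frames so that scene[d, c, f] is the voltage at depth d,
    # channel c, frame f, matching the representation described above.
    scene = np.stack(frames, axis=2)
    assert scene.shape == (n_depths, n_channels, n_frames)

    a_scan = scene[:, 10, 30]    # one response vector (A-scan)
    b_scan = scene[:, :, 30]     # one frame (B-scan)
    c_scan = scene               # the full three-dimensional collection (C-scan)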

2.2.1 Vehicle-Mounted Detection Systems

The vehicular-mounted system has a linear array of radar sensors that provide data at evenly spaced cross-track positions. Even spacing provides uniform spatial positioning of the sensors as they collect each A-scan. Each radar sensor corresponds to a channel. The array of sensors comprises a cross-track B-scan. A complete frame of data is collected by interrogating the ground with each sensor in the array at roughly the same time. The A-scan recorded at each channel captures voltage responses at fixed time intervals at a single location corresponding approximately to depths in the ground at that location. Thus, each frame is a two-dimensional collection of depth by channel. The number of depths is fixed across all channels. The number of channels is also fixed across all frames. Fixed dimensions of the data collected provide a natural framing of the scene and natural connections from frame to frame when discriminating layer transitions between frames. Figure 2-3 plots the (x, y)-coordinate points of sample data collected by a vehicular-mounted system.

Figure 2-3. Collection sample of vehicular data.

2.2.2 Hand-Held Detection Systems

Common hand-held systems have one, two, or three radar sensors. The system collects A-scan data evenly spaced in time, but not with spatial uniformity. When the device is moved quickly, it travels a greater distance between samples than if it is moved slowly. However, the time between responses is constant. In order to analyze the data, I

require knowledge of the relative spatial positioning of the device when each response is collected. One way to achieve this is to include a six degrees of freedom positioning device on the hand-held system, providing (x, y, z)-coordinates as well as pitch, roll, and yaw for each response collected. A hand-held device must be swung back and forth as data is collected. Each swing is also referred to as a sweep. In this work, no distinction is made between left-to-right and right-to-left sweeping motions. Intuitively, analysis of this data may be improved if the beginning and ends of the sweep are known (sweep detection) and if a correspondence map between sweeps over the same location is known (sweep alignment). Figure 2-4 plots the (x, y)-coordinate points of sample data collected by a hand-held system. Each sweep is a collection of sweep points or uniformly spaced virtual channels. Successive sweeps are logically similar to the sequence of vehicular frames, forming the volume of the scene grid. Each sweep point has a depth vector, the A-scan of voltage returns sensed by the receiver. The number of depths is fixed across all sweeps. However, the number of sweep points in each sweep is not fixed, presenting the problem of how to connect the points in successive sweeps and then identify contiguous layers. The uniform collection process provided by hand-held systems operated by a robot-controller collects data in a structured enough format that the data can be transformed into a representation that resembles vehicular data. Individual sweeps are the two-dimensional frames and successive sweeps make up the three-dimensional volume. Dobbins, Wilson, and Smock presented a solution to this problem [12]. An example of the positional information recorded when a robot is using a hand-held device is shown in Figure 2-4.

2.2.2.1 Human-operator hand-held distinctions

Ngan et al. describe the general procedure a human operator of a hand-held device performs [5]. The device operates in two modes, sweep and investigation. While in sweep mode, the device is swept back-and-forth, cycling through the sequence of a left-to-right

Figure 2-4. Hand-held sweep with three receivers collected by a robot arm.

sweep, followed by advancing the device forward, and then a right-to-left sweep. This sequence ends when the operator finds an object of interest. The goal during sweep mode is to cover as much area as possible until the object of interest is identified. Once an object of interest is identified, the operator begins to use investigation mode, allowing for region processing over a more extensively investigated region. During investigation mode, the device is swept directly over the object itself and the area surrounding the object of interest. This area is referred to as the region of interest and is centered at the object of interest. The device is swept over the region of interest many times and includes sweeps moving in both cross-track and down-track orientations. My analysis focuses on data collected during investigation mode. However, building a graph of points from this data is difficult because sweeps cross over each other, sweep intervals do not form a consistent pattern, and there is no standard Euclidean distance between collection points. Since this data is non-uniformly sampled, an approach that associates neighborhoods based upon Euclidean distance in a grid graph is not effective. In order to implement a solution, I must modify the grid graph representation of the network or create a grid-like structure from the data that are available. Using the nearest neighbor principles presented by Felzenszwalb and Huttenlocher [13], a collection of associated regions can be formed. If the data are to be a connected grid, then a recovery process can be used to interpolate values that fill in gaps within and between these regions.
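One way to picture the grid-like structure just described is to snap each non-uniformly sampled A-scan to the nearest cell of a regular (x, y) grid and record which cells remain empty for later recovery. This is only an illustrative sketch under my own assumptions (uniform cell size, simple nearest-cell averaging, and invented names); the actual MGRSS structuring procedure is described in Section 3.2.

    import numpy as np

    def snap_to_grid(xy, scans, cell_size):
        """Assign scattered A-scans (rows of `scans`) to a regular grid of cells.

        xy: (N, 2) array of collection coordinates; scans: (N, D) array of A-scans.
        Returns the gridded volume and a boolean mask of cells still missing data.
        """
        origin = xy.min(axis=0)
        idx = np.floor((xy - origin) / cell_size).astype(int)
        shape = idx.max(axis=0) + 1
        grid = np.zeros((shape[0], shape[1], scans.shape[1]))
        counts = np.zeros((shape[0], shape[1]))
        for (i, j), scan in zip(idx, scans):
            grid[i, j] += scan          # average scans that fall in the same cell
            counts[i, j] += 1
        filled = counts > 0
        grid[filled] /= counts[filled, None]
        return grid, ~filled            # missing cells can be interpolated later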

2.2.2.2 Recovery procedures

In 2009, Cassidy noted that recovering missing scans by interpolation is a common practice to fill in gaps and replace corrupted scans in the data collected [14]. Spline interpolation fits a function to a set of existing data points. That function is then used to interpolate values at intermediate (missing) points. Safont, Salazar, Rodriguez, and Vergara use a mixture model in their work from 2014 to recover missing GPR signals in B-scans [15]. Their technique generates mixtures by separating the B-scan into evenly spaced patches. From the mixture distribution, replacement patch points are generated. While their findings demonstrated their work outperformed spline interpolation in some cases, any differences were minimal compared to other statistical interpolation methods such as Kriging [16] and Wiener structures [17]. In addition, their work is applicable to two-dimensional data B-scans but is not currently implemented for the three-dimensional data MGRSS will need to recover. Finally, the data collected by a human operator may be spaced in such a way that identifying a representative patch size is not possible without leaving some patches densely populated and others so sparse that an accurate mixture model cannot be identified. Given these results, Section 3.2 discusses how interpolation techniques, such as splines, are evaluated as the MGRSS solution to recovering missing samples.
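As a small illustration of the spline-based recovery idea, the sketch below fills missing samples along one trace with SciPy's cubic spline interpolation. It is a hedged example of the general technique, not the dissertation's implementation; the function name and the choice of a cubic spline are my assumptions.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def fill_missing(positions, values):
        """Fit a cubic spline to the known samples and evaluate it at the gaps.

        positions: 1-D sample locations; values: samples with NaN marking gaps.
        """
        known = ~np.isnan(values)
        spline = CubicSpline(positions[known], values[known])
        filled = values.copy()
        filled[~known] = spline(positions[~known])
        return filled

    x = np.arange(0.0, 10.0, 0.5)
    v = np.sin(x)
    v[[4, 5, 11]] = np.nan          # simulate corrupted or missing scans
    print(fill_missing(x, v))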

2.3 Reciprocal Pointer Chains (RPC)

In order to begin distinguishing scene elements, I considered how to track surface and sub-surface layers in and across frames. In 2012, Smock and Wilson implemented Reciprocal Pointer Chains (RPC) to find layers in frames [18, 19]. The RPC algorithm uses the Viterbi algorithm [20] to find potential one-dimensional layers within an individual frame. The elements of the depth vector within a frame are thought of as individual nodes in a graph. Edges connect each node to its neighbors at the same depth level and also to the neighboring nodes in the preceding and following depth levels. This forms a lattice of interconnected nodes within the graph. Edges define the similarity

between nodes. Source and target nodes are added to the beginning and end of the lattice. Then, the shortest cost paths from the source to target and from the target to the source are calculated. Wherever the edges of two shortest cost paths overlap there are reciprocal pointers. All paths of reciprocal pointers are taken to be potential layers within the frame. On the left side of Figure 2-5 is a mid-process display of potential layers being considered within a single frame of vehicular data. The right side of Figure 2-5 shows the final results where the RPC algorithm yields layers of interest within the frame.

Figure 2-5. One-dimensional layers found by RPC.

In order to track layers across the scene, the potential layers found in each frame need to be associated with the layers in the preceding and following frames. Layers are associated by comparing the depth value where the layer is found to all of the other layers in the neighboring frames. Depths are compared channel by channel. Working with Smock and Wilson in 2015, I implemented A Layer Tracking Approach to Buried Surface Detection [12] as described in Section 2.4.

2.4 A Layer Tracking Approach to Buried Surface Detection

The process used to compare channel and depth values of potential layers between frames is similar to the RPC solution. First, a dissimilarity graph connecting all layers from one frame to the layers in the next frame is built. Dissimilarity is measured by the depth difference between every pair of points in corresponding positions of the two layers. A Gaussian weight is applied to the depth difference with μ = 0 and σ = 3. The mean of the Gaussian weighted values associated with corresponding layer points yields a similarity

value. The sigmoid, $\frac{1}{1 - e^{-\text{similarity}}}$, is used to perform the conversion between similarity and dissimilarity. Figure 2-6 is an example of the dissimilarity graph. Notice that frames #1, #2, and #3 have 5, 3, and 4 potential layers, respectively. Each of the five layers in frame #1 has a dissimilarity edge to every one of the three layers in frame #2, producing a total of fifteen edges.

Figure 2-6. Example dissimilarity graph.

The following procedure is used to identify the set of RPCs. Since the dissimilarity measure is the same moving in either direction, the dissimilarity graph is undirected. The source and target nodes are placed at the beginning and end of the graph. A dissimilarity of zero is associated with the source and target nodes and their respective neighboring layers. The shortest paths from source-to-target and then from target-to-source are found. Any reciprocating pairs are classified as a layer transition between frames. Each sequence of uninterrupted reciprocal chains is reported as a different layer. A chain threshold minimum of ten elements has been set to prune noise and better isolate the primary layers found. Figure 2-7 shows there are reciprocating layer connections between frames #1, #2, #3, and #4 at layers #3, #3, #2, and #2 respectively. The shortest paths do not reciprocate between frames #4 and #5, so no layer transition exists for the layer ending at layer #2 of frame #4.

Figure 2-7. Finding reciprocation in layer transitions.

Figure 2-8 demonstrates the results produced by this approach. A two-dimensional perspective of layers in the three-dimensional scene is displayed and layers appear as partially connected wire-frames within the scene. This approach is limited to a wire-frame observation of layers and does not locate layer boundaries. It is also necessary to define a trellis of connected nodes, which does not work with the data format of hand-held systems as described in Section 2.2.2.1.

Figure 2-8. Wireframe layer tracking.

The intermediate results of the RPC algorithm display how the voltage of B-scan frames in GPR data can be viewed as an image. Many pixels of approximately the same value are seen, fitting into equivalence classes. Notice in Figure 2-9 there are large sections of similar colors near the top and bottom of the frame. These regions look like background information which, if isolated, could be removed in order to focus on the remaining regions. Each region must be found in order to identify which regions indicate elements

of interest. Image segmentation performs the process of labeling similar image element regions.

Figure 2-9. Sample GPR voltage displayed as an image.

2.5 Segmentation

Image segmentation is the process for defining a set of labels that represent the different regions in the image. Similarity measures are used to associate pixels in an image. These pixel sets make up identified regions. Common segmentation methods are to perform clustering over a color space as described in Section 2.5.1 and optimize an MRF as described in Section 2.5.2. More recent research efforts identify super-pixel and super-voxel regions as useful steps in the initial segmentation process. Super-pixels and super-voxels are described in Section 2.5.4. I describe four over-segmentation algorithms for super-voxel labeling: Efficient Graph-Based Segmentation (GB) in Section 2.5.4.1, Efficient Hierarchical Graph-Based Video Segmentation (EHGB) in Section 2.5.4.2, Simple Linear Iterative Clustering (SLIC) in Section 2.5.4.3, and Supervoxel-based Segmentation of 3D Volumetric Images (3D-UCM) in Section 2.5.5.

2.5.1 Segmentation Using CIE-Lab and k-means

This segmentation process defines the luminance, a, and b color space values as features [21]. Each image pixel is an observation having the three feature dimensions. Then, k-means clustering places the observations into one of k different clusters [20]. The algorithm is given a static label size, called k. The final cluster labels are the segmentation

of the image. The k-means algorithm implements a two-stage process where each image pixel is an observation to be clustered.
- Cluster centers are calculated by taking the mean value of all observations assigned to a cluster.
- Observations are re-assigned to clusters based upon Euclidean proximity to the nearest cluster center in the feature space.

Both steps are repeated until convergence. Convergence is observed when no points move to a different cluster assignment. In order to prevent local minima, an iteration bound can be set to ensure the algorithm completes. A common initial labeling is to randomly select k center points from the set of observation points. As Figure 2-10 shows, this algorithm does identify key regions within the GPR frame. Unfortunately, three limitations make it an incomplete solution path for my purposes. First, the execution time is slow compared to other segmentation algorithms. Second, the algorithm does not isolate elements as distinctly as needed. For example, the hyperbola in the center of the figure is a potential explosive hazard. However, the region assigned to the hyperbola can also be observed on the far right in the middle of the image. Finally, this is only a solution for a single frame. As other frames are introduced, not only would the execution time increase, but elements would also need to be tracked between frames. The labeling provided here does not have any association with a potential labeling of the subsequent frame. In order to consider each pixel in the context of the entire model, Section 2.5.2 examines how an MRF can be optimized to segment images.

2.5.2 Markov Random Fields (MRF)

A common technique for performing image segmentation is to model the image with an MRF and then use an inference technique such as Expectation Maximization (EM) to infer the segmented labels [7, 22]. An MRF is an undirected graph where each node is conditionally independent of nodes that are not neighbors to the given node. By the Hammersley-Clifford theorem, this is equivalent to a Gibbs distribution of positive functions over cliques covering all the nodes and edges in the graph [23, 24]. Conditional

independence is defined in Equation 2-1. In this equation, $x_{G \setminus i}$ denotes all graph nodes except $x_i$ and $x_{N_i}$ denotes all the neighbors of $x_i$. By the Hammersley-Clifford theorem, $P(X)$ follows a Gibbs distribution and is given in Equation 2-2.

    P(x_i \mid x_{G \setminus i}) = P(x_i \mid x_{N_i})    (2-1)

    P(X) = \frac{1}{Z} \prod_{c \in C_G} \Phi_c(X_c)    (2-2)

    Z = \sum_{x} \prod_{c \in C_G} \Phi_c(X_c)    (2-3)

The partition function $Z$ is defined in Equation 2-3, normalizing the observation probability: $\sum_x P(x) = 1$. $c$ is a clique in the set of maximal cliques $C_G$ of the graph $G$ and $\Phi_c$ is the clique potential of $X_c$. $P(X)$ can be expressed in terms of a proportional energy or cost function. A sample energy function, $E(x)$, is shown in Equation 2-4. This equation has the clique potential function $\psi_c$.

    P(X) \propto \exp(-E(x)), \text{ where } E(x) = \sum_{c \in C_G} \psi_c(X_c)    (2-4)

2.5.3 Cartoon Model

In 2006, Kato and Pong proposed the cartoon model for segmenting images with an MRF [6]. The algorithm notes that while a scene has a set of regions, observed low

level features may change slowly. At the same time, cross-region boundaries might change abruptly. They use this analysis to identify the regions in the scene. The model defines the features of the MRF to be color and texture. The CIE-Lab color space provides a metric for evaluating pixel color. In 1991, Jain and Farrokhnia used Gabor filters to identify image texture [25]. This technique convolves Gabor filters at different orientations over each pixel to accentuate texture and edge boundaries. In the model, neighbors are defined by the pixels (or regions) along the boundary of the given pixel (or region) and nodes are associated within a labeled neighborhood. A straightforward application of this technique to GPR imagery is to use pseudo-color when segmenting the images.

The algorithm is performed by the following steps. A cartoon $\omega$, or abstract version of the image with labeled regions, is inferred. The discontinuity between regions forms a curve. The pair of the cartoon and this curve specifies a segmentation. The highest probability $\omega$ is found and used to determine the curve. In the implementation the feature observations are observed variables, $F \in Y$. The cartoon labeling, $\omega \in X$, is a hidden random variable. Maximizing Equation 2-5 with respect to $\omega$ and given the features $F$, optimizes the result ($\hat{\omega}$).

    \hat{\omega} = \arg\max_{\omega \in \Omega} P(F \mid \omega) P(\omega) = \arg\max_{\omega \in \Omega} P(\omega \mid F)    (2-5)

The likelihood in Equation 2-5, or imaging model, is $P(F \mid \omega)$ and the prior, $P(\omega)$, is the probability a given $\omega$ satisfies the properties any segmentation must possess. The prior defines a normalized clique potential. The likelihood estimate follows a normal distribution, where the pixel class $\lambda \in \Lambda = \{1, 2, \ldots, L\}$ has mean $\vec{\mu}_\lambda$ and covariance $\Sigma_\lambda$. Instead of examining these functions further, I focus on the posterior probability. By inverting the posterior probability, this becomes a minimization problem and is more naturally evaluated. Equation 2-6 specifies the proportionality between the posterior probability and the energy function for this scenario as described in Section 2.5.2. Equation 2-7 is the conversion into a minimization problem.

    P(\omega \mid F) \propto \exp(-U(\omega, F))    (2-6)

    \arg\max_{\omega \in \Omega} \exp(-U(\omega, F)) = \arg\min_{\omega \in \Omega} \exp(U(\omega, F))    (2-7)

Recall, the cartoon $\omega$ labels an input image $G$. For each pixel $s$, the region the pixel belongs to is specified by a class label $\omega_s$, which is modeled as a discrete random variable taking values from $\Lambda = \{1, 2, \ldots, L\}$. $S$ is the set of all pixels in the image. The complete model is $\omega = \{\omega_s, s \in S\}$ and the set of features over each pixel is $F = \{\vec{f}_s \mid s \in S\}$. The potential energy, $U(\omega, F)$, is given in Equation 2-8. In this equation, $C$ is the set of second order cliques. The energy function utilizes two sub-functions. The first, referred to as the singleton function $V_s(\omega_s, \vec{f}_s)$, estimates the influence of features from a given observation as defined in Equation 2-9 and is based upon the likelihood estimate $P(F \mid \omega)$. The second is the doubleton function and determines influence by the sum of the label similarity of all observation neighbors. The doubleton function, $\delta(\omega_s, \omega_r)$, is given in Equation 2-10. The doubleton neighborhood is defined by the eight pixels surrounding the observation node in the standard grid graph representation.

    \exp(U(\omega, F)) \propto \exp\left( \sum_{s \in S} V_s(\omega_s, \vec{f}_s) + \sum_{\{s,r\} \in C} \delta(\omega_s, \omega_r) \right)    (2-8)

    V_s(\omega_s, \vec{f}_s) = \log\left( \sqrt{(2\pi)^n |\Sigma_{\omega_s}|} \right) + \frac{1}{2} (\vec{f}_s - \vec{\mu}_{\omega_s}) \Sigma_{\omega_s}^{-1} (\vec{f}_s - \vec{\mu}_{\omega_s})^T    (2-9)

    \delta(\omega_s, \omega_r) = \begin{cases} +1 & \text{if } \omega_s \neq \omega_r \\ -1 & \text{otherwise} \end{cases}    (2-10)

Minimization is performed on the log-likelihood of the energy function as shown in Equation 2-11. When the weight of the doubleton term is higher, results trend toward homogeneous segmentations. Experiments found a weight of 2 was satisfactory. While the number of pixel classes $L$ is determined by the composition of the image, the authors set this value manually. Parameters $\vec{\mu}_\lambda$ and $\Sigma_\lambda$ are computed from the image using maximum likelihood. Given a labeled data set, $\vec{x}_i$, $i = 1, 2, \ldots, N$, denotes feature vectors assigned to
process not only maintains detail in low-variability regions, but also ignores detail in high-variability regions. This allows for a long and gradual change in variation to be viewed as the same region. While in a high-variability region, frequent changes near to each other are not always split into separate regions. Instead, the local group of high-variability regions is collected into one super-pixel. This consideration of not only local, but also global characteristics, captures the perceptual groupings in the image. The algorithm is efficient. The authors show that it approaches linear time in the number of pixels. The approach can be applied to not only a two-dimensional image, but also to sequences of two-dimensional images. One application is to video processing, discussed by Efficient Hierarchical Graph-Based Video Segmentation [28], and reviewed in Section 2.5.4.2. Felzenszwalb and Huttenlocher incorporate locality by using clustering and extending Shi and Malik's 2000 work with Normalized Cuts and Image Segmentation [29]. The process involves cutting connections between dissimilar regions. Intensity variation alone does not determine region separation. The algorithm considers element difference across the region boundary relative to the element difference within the regions. A key observation is when cross-boundary intensity is relatively different than the within-region intensity of at least one of the two neighboring regions being compared. The graph G = (V, E) is formed where the nodes V are the individual pixels in the image. A dissimilarity weight w(v_i, v_j) > 0 is defined over the edges (v_i, v_j) ∈ E. A sample weight function measures the difference in intensity. I(p_i) is the intensity of pixel p_i and the weight function is given in Equation 2-20. A segmentation S of the image is a partitioning of V into components (pixel regions) C ∈ S, corresponding to a graph G' = (V, E') where E' ⊆ E. Edges within the same component have lower weight. Comparatively, edges between different components have higher weight. A predicate D is defined to determine whether there is a boundary between two components. D considers the element dissimilarity along the boundary between the two components (cross-region difference) relative to the dissimilarity among neighboring elements within each of the two
components (within-region difference).

w((v_i, v_j)) = |I(p_i) - I(p_j)|   (2-20)

Given C_1, C_2 ⊆ V the minimum weight edge connecting C_1 and C_2 is the cross-region difference and is shown in Equation 2-21. In order to avoid the problem becoming NP-Hard, only the smallest edge between the two regions is considered. In practice the authors found this heuristic was satisfactory for segmenting the image while also increasing performance. Given C ⊆ V and a minimum spanning tree, MST(C, E), of the component C, the within-region difference is measured by Equation 2-22.

Dif(C_1, C_2) = \begin{cases} \min_{v_i\in C_1, v_j\in C_2, (v_i,v_j)\in E} w((v_i,v_j)) & \text{for neighbors } v_i, v_j \\ \infty & \text{otherwise} \end{cases}   (2-21)

Int(C) = \max_{e\in MST(C,E)} w(e)   (2-22)

Equation 2-23 is the minimum internal difference within two components C_1 and C_2. To address the case when |C| = 1 and Int(C) = 0, the function provides a minimum threshold where τ(C) = k/|C|. k is a constant and provides scaling of component size. As the value of k grows, larger components are preferred. As component sizes increase, k becomes less influential and results adapt to the characteristics of the data. The region comparison predicate D is defined in Equation 2-24. The predicate determines if there is a boundary between the two regions. A boundary is observed when the difference, or minimum edge, between the two regions is greater than the internal distance, or maximum internal edge, of both clusters.

MInt(C_1, C_2) = \min(Int(C_1) + \tau(C_1),\; Int(C_2) + \tau(C_2))   (2-23)

D(C_1, C_2) = \begin{cases} \text{true} & \text{if } Dif(C_1, C_2) > MInt(C_1, C_2) \\ \text{false} & \text{otherwise} \end{cases}   (2-24)
A segmentation T is a refinement of a segmentation S when ∀i C_i ∈ T and ∃j C_j ∈ S such that C_i ⊆ C_j. When T ≠ S, T is a proper refinement of S. A segmentation S is too fine when there are too many components, for example, if C_1, C_2 ∈ S and there is not a boundary between C_1 and C_2. A segmentation S is too coarse when there are too few components. This is observed when a segmentation T can be found such that T is a proper refinement of S and T is not too fine. The segmentation algorithm receives the graph G = (V, E) as input, where |V| = n and |E| = m. The output is a segmentation S of V into components S = (C_1, ..., C_r). Then, the algorithm follows these steps.
0. Sort E into π = (o_1, ..., o_m) by non-decreasing weight where o_q = (v_i, v_j).
1. Initialize segmentation S^0 where C_i = v_i.
2. Repeat step 3 for q = 1, ..., m.
3. Given segmentation S^{q-1}, if C_i ≠ C_j and w((v_i, v_j)) ≤ MInt(C_i, C_j), obtain S^q by merging C_i and C_j; otherwise S^q = S^{q-1}. When performing C_k = C_i ∪ C_j, update the MST of C_k. This step ensures that if D(C_i, C_j) does not hold, then C_i and C_j will be merged.
4. Return S = S^m.
Felzenszwalb and Huttenlocher propose two ways to determine neighbors, and thus edges, in the graph. First, the grid graph approach uses the two-dimensional frame of pixels to initialize the neighbors of each pixel to the eight-neighborhood that surrounds the pixel. Another approach is to determine the nearest neighbors through a feature space other than the (x, y)-coordinates of the graph. Once the feature space is defined, a spatial bound b or a neighborhood size z is selected. Using the spatial bound, all nodes whose feature points are within b comprise the neighborhood of a given node. Instead, given the neighborhood size z, the nodes corresponding to the closest z feature points are the neighbors of each node.
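The merge loop in the steps above can be expressed compactly with a union-find structure. The following is a minimal sketch, not the authors' implementation; the one-dimensional chain of pixels, the weight function from Equation 2-20, and the choice of the constant k are assumptions used only for illustration. A grid-graph or nearest-neighbor edge set would replace the chain for real image data.

    def gb_merge_sketch(intensity, k=300.0):
        """Sketch of the GB merge loop on a 1-D chain of pixels.

        Edges connect adjacent pixels with w(e) = |I(p_i) - I(p_j)| (Equation 2-20).
        Components merge when the cross-region difference does not exceed
        MInt (Equations 2-21 through 2-24)."""
        n = len(intensity)
        parent = list(range(n))      # union-find forest
        internal = [0.0] * n         # Int(C): largest MST edge inside each component
        size = [1] * n               # |C|

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        # Step 0: sort edges by non-decreasing weight.
        edges = sorted((abs(intensity[i] - intensity[i + 1]), i, i + 1)
                       for i in range(n - 1))
        # Steps 1-3: scan edges and merge whenever the predicate D does not hold.
        for w, i, j in edges:
            ci, cj = find(i), find(j)
            if ci == cj:
                continue
            mint = min(internal[ci] + k / size[ci], internal[cj] + k / size[cj])
            if w <= mint:            # D(C_i, C_j) is false, so merge
                parent[cj] = ci
                size[ci] += size[cj]
                internal[ci] = max(internal[ci], internal[cj], w)
        return [find(i) for i in range(n)]   # Step 4: component label per pixel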
The grid graph approach matches the natural structure of vehicle- and robot-collected hand-held data because these collections are formed in a very ordered pattern. The data is collected over consistent intervals and Euclidean distances. However, it does not fit the dynamic nature of human-operator collected hand-held data. The potential diversity of the intervals between points in human-operator collections lends itself to the nearest neighbor methodology.

2.5.4.2 Efficient hierarchical graph-based segmentation
In 2010 Grundmann, Kwatra, Han, and Essa extended GB into three-dimensional space in order to segment video frames [28]. They removed the need for the parameter that defines the minimum cluster size in GB. The calculation of internal variation, Int(C), is based upon a measure over the clustered region, instead of over pixels. This allows for scaling as region sizes grow. Region descriptors form the basis for comparison and the regions are connected using the nearest neighbor methodology of GB. This process is repeated, creating hierarchies. Each hierarchy forms a different, potentially more refined super-voxel segmentation. In 2012, Xu and Corso first presented LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing [26, 30]. Their analysis found the Efficient hierarchical graph-based segmentation algorithm performed well when identifying boundaries, a key element in scene analysis and object discrimination.

2.5.4.3 Simple linear iterative clustering (SLIC)
Radhakrishna, et al. implemented the SLIC system for super-pixel and super-voxel segmentation [27, 31]. This system is an adaptation of k-means clustering, where a constrained search space limits the potential cluster points to be examined. Only the points within a defined area surrounding the cluster center are considered. The reduced search space creates a time complexity of O(N), where N is the number of pixels. The complexity is also independent of the number of clusters, k. Note, k-means executes in O(kNI) time, where k, N, and I are the number of clusters, pixels, and iterations respectively. Compared to k-means, SLIC solves a specialized problem, namely super-pixel
clustering. Specializing facilitates the performance enhancement. When applied to medical imagery [27, 32], SLIC has been shown to reduce the space complexity required to segment images. I leverage this same reduction in the MRF representation of the GPR scene. The SLIC algorithm uses a weighted distance measure, D, to control the size and compactness of super-voxels. CIE-Lab color space and spatial proximity values are used to create D. SLIC requires a single input parameter, k, defining the anticipated number of clusters. The constant m influences the compactness of super-voxels. The latest version of SLIC, called SLICO [33], now calculates the value for m, thus removing the requirement that it be provided to the function. Points are represented by C_i = [l_i a_i b_i x_i y_i z_i]^T, where l, a, b are the luminosity, a, and b components of the CIE-Lab color space and x, y, and z are the spatial positions of the point in the three-dimensional volume. The cluster size is calculated from the number of super-voxels: S = ∛(N/k). Cluster centers are taken from the lowest gradient position in a 3×3×3 neighborhood. The spatial extent of the super-voxel regions is S×S×S and 2S×2S×2S is the search region for potential cluster points. There are two steps in the process. First, points are assigned to clusters by finding the nearest cluster center. Then, cluster centers are updated to the mean vector [l a b x y z]^T of the cluster points. E, the residual error from one cluster center to the next, is calculated using the L2 norm. The assignment and update steps are repeated until the error converges. In practice, ten iterations were found to be satisfactory. Post-processing assigns any disjoint pixels to the nearest super-voxel. The distance measure D is defined by combining the color space [l a b]^T and voxel position [x y z]^T. The range of the color space is defined, while the voxel positioning varies based upon image size. Placing greater weight on spatial distance creates larger super-voxels. This creates more compact super-voxels and may not adhere as well to image boundaries. Placing greater weight on color emphasizes non-spatial features and creates smaller super-voxels.
In order for them to be combined into D, the color and spatial proximities must be normalized. Equations 2-25 and 2-26 are the calculated color and spatial distances, respectively. N_c and N_s are the maximum color and spatial distances. Equation 2-27 defines the normalized distance measure.

d_c = \sqrt{(l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2}   (2-25)

d_s = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2 + (z_j - z_i)^2}   (2-26)

D' = \sqrt{\Big(\frac{d_c}{N_c}\Big)^2 + \Big(\frac{d_s}{N_s}\Big)^2}   (2-27)

While the maximum spatial distance is known to be N_s = S = ∛(N/k), the maximum color is more variable. Different images and clusters have varying maximum color values. Therefore, the maximum is fixed to a constant, N_c = m. As m increases, spatial priority increases and clusters are more compact. Decreasing m enhances tighter boundaries, i.e., the color distance has greater emphasis, and clusters have less regularized size and shape. Equation 2-28 shows the final representation; the two forms differ only by the constant factor m, which does not affect the minimization.

D' = \sqrt{\Big(\frac{d_c}{m}\Big)^2 + \Big(\frac{d_s}{S}\Big)^2} = \sqrt{d_c^2 + \Big(\frac{d_s}{S}\Big)^2 m^2}   (2-28)

2.5.5 Supervoxel-based Segmentation of 3D Volumetric Images
In 2016, Yang, Sethi, Rangarajan, and Ranka used the principles of the gPb-UCM's two-dimensional segmentation [34] and implemented the 3D-UCM algorithm to segment three-dimensional image volumes [35]. 3D-UCM is implemented in three stages. First, local features are identified and used to create a super-voxel representation of the image data. Then, descriptors are extracted from the super-voxel regions and used to develop a global view of the image volume. Finally, both the global and local features are provided to the EHGB algorithm reviewed in Section 2.5.4.2 to produce hierarchical segmentations of the image volume.
Stage #1 performs gradient feature detection on the image voxels. These descriptors are used by a watershed transform to identify super-voxel regions. A local gradient magnitude is calculated for each voxel in the following manner. A sphere around each voxel defines the voxel's neighborhood. Spheres with radius r ∈ {2, 4, 6} voxels were sampled. The sphere is split in half, creating two hemispheres. The vectors g and h store the histogram of the intensity from each hemisphere. The orientation of the separating plane in the sphere is defined by the normal vector t(θ, φ), where θ ∈ {0, π/4, π/2, 3π/4}, φ ∈ {−π/4, 0, π/4}, and one case where φ = π/2. The gradient magnitude of the χ² distance between g and h in direction t(θ, φ) is shown in Equation 2-29. Equation 2-30 is the oriented gradient operator G weighted by α_r over the sampled radii. The maximum response from the directions tested and the local feature value for each voxel are calculated in Equation 2-31.

\chi^2(g, h) = \frac{1}{2}\sum_i \frac{(g(i) - h(i))^2}{g(i) + h(i)}   (2-29)

G(x, y, z, \theta, \varphi) = \sum_r \alpha_r\, G(x, y, z, \theta, \varphi, r)   (2-30)

mPb(x, y, z) = \max_{\theta,\varphi} G(x, y, z, \theta, \varphi)   (2-31)

Stage #2 finds global cues. These cues are combined with the local cues to form a globalization descriptor of the image volume. The process is performed in the four steps described here.
1. Yang et al. have defined the oriented intervening contour cue as the affinity between voxels i and j in the affinity matrix given in Equation 2-32. In the equation, d is the unit direction vector between i and j, while ρ is a constant. Points within 1 voxel of the line segment between i and j are placed in the set P. Then, for p ∈ P, the gradient feature found by mPb(p) has the unit direction vector n. As Equation 2-32 reflects, the point p that maximizes the result is selected and calculates the affinity between i and j.

W_{ij} = \exp\big(-\max_{p}\{mPb(p)\,|\langle d, n\rangle|\}/\rho\big)   (2-32)

2. The eigenspace of the image volume is used as a global representation. However, calculating the eigenvectors of the entire voxel set is time consuming. To reduce this calculation, the eigenspace of the super-voxel regions is calculated instead [36]. The super-voxel based eigensystem is shown in Equation 2-33. In the equation, L maps
the voxels to their super-voxel regions, D is a diagonal matrix containing the sum of the affinity weights associated with each voxel, W is the affinity matrix, and x is the eigenvector representation of the super-voxel region.

(L^T(D - W)L)\,x = \lambda\, L^T D L\, x   (2-33)

3. Equation 2-34 finds spectral cues from the data by applying the eigenvector images to the localized gradient of mPb. Experiments used K = 16 eigenvectors in the global representation.

sPb(x, y, z) = \sum_{i=1}^{K}\frac{1}{\sqrt{\lambda_i}}\, mPb_{v_i}(x, y, z)   (2-34)

4. Finally, a globalized view of both the local and global cues is created by combining them together in Equation 2-35. Yang et al. weighted the terms mPb(x, y, z) and sPb(x, y, z) equally.

gPb(x, y, z) = w[mPb(x, y, z)] + (1 - w)[sPb(x, y, z)]   (2-35)

Stage #3 diverges from the gPb-UCM two-dimensional super-pixel process. Instead, the principles of EHGB, detailed in Section 2.5.4.2, create a hierarchy of super-voxels and at defined threshold levels the segmentation is formalized. Section 4.1.2 reports the results found when testing the performance of MGRSS while using the super-voxel regions generated by the initial 3D-UCM hierarchy. Also, results produced by testing GPR data with the hierarchy levels 3D-UCM produces are discussed in Section 4.3.

2.6 Clustering
An MRF requires an initial labeling of data. I have examined three different initial labeling techniques. First, I test using each of the super-voxel regions as its own label in the MRF. While this solution is simple to pass along to the MRF, it requires the MRF to reduce the label size significantly. In the same way SLIC has identified super-voxel regions, any algorithm that clusters data would provide the MRF with an initial labeling of the super-voxel regions. Therefore, I also consider Hierarchical Clustering and Competitive Agglomeration, discussed in Sections 2.6.1 and 2.6.2 respectively. Finally, Section 2.6.3 introduces the principles of Constrained Clustering implemented in MGRSS that allow training region assignments influence over the segmentation of a new scene model.
2.6.1 Hierarchical Clustering
Hierarchical Clustering (HC) is formed by grouping elements together at different levels in a tree-based hierarchy [37]. Each level of the tree identifies conditions for grouping elements together. When moving up the hierarchy to the root of the tree, group sizes increase and the number of clusters decreases until only one cluster remains, the entire data set. Leaf nodes at the bottom of the tree are the individual elements within the data set and form the lowest level hierarchy where every element is its own cluster. Each level of the hierarchy contains a series of sub-trees. When two elements are grouped together at a given level, they may split into separate sub-groups when moving down the tree. However, when moving up the tree, once two elements are paired into the same group, they must always remain together.

2.6.2 Competitive Agglomeration
Competitive Agglomeration (CA), proposed by Frigui and Krishnapuram in 1997, can also provide to the MRF an initial labeling of the super-voxel regions [38]. CA employs a combination of hierarchical and partitional clustering. The algorithm receives an initial label size, partitions elements into that number of clusters, and then refines the cluster assignments. During refinement, clusters can be reduced to zero elements and removed. Partitioning assists in reducing the execution time of the algorithm and helps to avoid local minima. The hierarchical portion of the solution facilitates cluster reduction. CA uses an approach similar to Fuzzy C-means by defining an objective function, introducing Lagrange parameters, and providing the corresponding constraints [20]. Equation 2-36 is the objective function and 2-37 is the corresponding constraint. In these equations, β_i is the prototype of cluster C_i, U is a C×N matrix of cluster membership for each observation, and X is the set of observations. The first half of Equation 2-36 is the sum of the squared distances to cluster centers weighted by the membership percentage and drawn from the Fuzzy C-Means objective function [39]. The second term in Equation 2-36 uses α to adjust membership influence. As α changes, so will the emphasis on
merging clusters together.

J(B, U, X) = \sum_{i=1}^{C}\sum_{j=1}^{N}(u_{ij})^2 d^2(x_j, \beta_i) - \alpha\sum_{i=1}^{C}\Big[\sum_{j=1}^{N} u_{ij}\Big]^2   (2-36)

\sum_{i=1}^{C} u_{ij} = 1, \quad \text{for } j \in \{1, ..., N\}   (2-37)

Equation 2-38 incorporates the constraint shown in Equation 2-37 into the objective function by introducing a Lagrange parameter λ [20], and membership is determined by minimizing with respect to U. Holding β fixed, the derivative with respect to U is given in Equation 2-39. Then, solving for u_st, Equation 2-39 reduces to Equation 2-40, where the cardinality of clusters is given in Equation 2-41. The solution is simplified by calculating membership cardinality using the memberships of the previous iteration and assuming the memberships do not change significantly between iterations.

\min_U J(B, U, X) = \sum_{i=1}^{C}\sum_{j=1}^{N}(u_{ij})^2 d^2(x_j, \beta_i) - \alpha\sum_{i=1}^{C}\Big[\sum_{j=1}^{N} u_{ij}\Big]^2 - \sum_{j=1}^{N}\lambda_j\Big(\sum_{i=1}^{C} u_{ij} - 1\Big)   (2-38)

\frac{\partial J}{\partial u_{st}} = 2 u_{st}\, d^2(x_t, \beta_s) - 2\alpha\sum_{j=1}^{N} u_{sj} - \lambda_t = 0, \quad \text{for } s \in \{1, ..., C\},\; t \in \{1, ..., N\}   (2-39)

u_{st} = \frac{2\alpha N_s + \lambda_t}{2 d^2(x_t, \beta_s)}   (2-40)

N_s = \sum_{j=1}^{N} u_{sj}   (2-41)

Using the constraint \sum_{i=1}^{C} u_{ij} = 1 from Equation 2-37, Equation 2-40 can be transformed into Equation 2-42 or Equation 2-43.

\sum_{k=1}^{C}\frac{2\alpha N_k + \lambda_t}{2 d^2(x_t, \beta_k)} = 1   (2-42)

\alpha\sum_{k=1}^{C}\frac{N_k}{d^2(x_t, \beta_k)} + \lambda_t\sum_{k=1}^{C}\frac{1}{2 d^2(x_t, \beta_k)} = 1   (2-43)

Equation 2-44 solves for λ_t in Equation 2-43. The membership, u_st, of observation x_t in cluster s is found by inserting Equation 2-44 into Equation 2-40 and is given in
Equation 2-45.

\lambda_t = \frac{2\big(1 - \alpha\sum_{k=1}^{C}[N_k / d^2(x_t, \beta_k)]\big)}{\sum_{k=1}^{C}[1 / d^2(x_t, \beta_k)]}   (2-44)

u_{st} = \frac{\alpha N_s}{d^2(x_t, \beta_s)} + \frac{1 - \alpha\sum_{k=1}^{C}[N_k / d^2(x_t, \beta_k)]}{\sum_{k=1}^{C}[d^2(x_t, \beta_s) / d^2(x_t, \beta_k)]}   (2-45)

The membership u_st is split into the two sub-functions shown in 2-46. Equation 2-47 defines u^FCM_st, and u^Bias_st is given in Equation 2-48. Functionally, u^FCM_st contributes to the membership weight by determining how close an observation is to a given cluster, while u^Bias_st serves as a bias toward or against clusters based upon the cluster's cardinality. Also, the bias is amplified by the inverse proximity to the observation in question.

u_{st} = u^{FCM}_{st} + u^{Bias}_{st}   (2-46)

u^{FCM}_{st} = \frac{[1 / d^2(x_t, \beta_s)]}{\sum_{k=1}^{C}[1 / d^2(x_t, \beta_k)]}   (2-47)

u^{Bias}_{st} = \frac{\alpha}{d^2(x_t, \beta_s)}\big(N_s - \bar{N}_t\big)   (2-48)

Equation 2-48 is simplified by introducing \bar{N}_t as it is defined in Equation 2-49. \bar{N}_t operates as a mean subtraction. It is the weighted average of the clusters with respect to the observation x_t.

\bar{N}_t = \frac{\sum_{k=1}^{C}[1 / d^2(x_t, \beta_k)]\, N_k}{\sum_{k=1}^{C}[1 / d^2(x_t, \beta_k)]}   (2-49)

If α is proportional to the ratio of the feature contribution to the size of the clusters, as shown in Equation 2-50, it is independent of the feature dimensionality. As α changes over time, so does its influence on the functional result. Frigui and Krishnapuram recommend a larger initial α so that the second term in Equation 2-36 will have greater emphasis in the initial stages. This will influence clusters to merge and an early-in-process reduction in the total number of clusters. In their example scenario, Frigui and Krishnapuram choose η as an inverse exponential of the iteration number k. As iterations increase, α(k) is scaled by η(k) so that the first term in the objective function has higher
priority and there is less emphasis on reducing the total number of clusters. In Equation 2-51, η(k) is defined by Equation 2-52.

\alpha \propto \frac{\sum_{i=1}^{C}\sum_{j=1}^{N}(u_{ij})^2 d^2(x_j, \beta_i)}{\sum_{i=1}^{C}[\sum_{j=1}^{N} u_{ij}]^2}   (2-50)

\alpha(k) \propto \eta(k)\,\frac{\sum_{i=1}^{C}\sum_{j=1}^{N}(u_{ij})^2 d^2(x_j, \beta_i)}{\sum_{i=1}^{C}[\sum_{j=1}^{N} u_{ij}]^2}   (2-51)

\eta(k) = \eta_0 \exp(-k/\tau)   (2-52)

The CA algorithm follows these steps:
1. Initialize the system, including the first cluster assignment distribution.
2. Compute the points-to-clusters distances.
3. Update parameters, including α, the distribution matrix U, and the cluster representations β and N.
4. Repeat steps 2 and 3 until reaching parameter stabilization.

2.6.3 Constrained Clustering
In 2000, Wagstaff and Cardie examined how constrained clustering could incorporate the knowledge of an experienced user to define must-link and cannot-link sets for specific observations in the complete data set [40]. The information in these sets is then used to affect how subsequent labeling will be performed. The process identifies pairs of observations with some feature values that indicate they should belong in the same cluster and therefore must be linked together, and conversely, pairs of observations that cannot be linked. When new labelings are assigned, they cannot violate the identified must-link and cannot-link pairings. These principles allow MGRSS the ability to define specific super-voxel regions that must either be linked together or be distinct. The feature space of such linkings is then used to influence the segmentation of new scenes.

2.7 MRF Tuning
My goal to discriminate objects of interest is furthered by more accurately labeling regions in the scene. I use the training tool discussed in Section 3.1.3 to supplement
the labeling process. Using human annotation, this tool creates region labels from training scenes. One way to increase label accuracy using trained regions is to perform semi-supervised learning. I discuss two methodologies for semi-supervised learning in Sections 2.7.1 and 2.7.2. Another option is to use parameter learning to find feature weights. The learned feature weights are used to parameterize future labelings. Parameter learning is reviewed in Section 2.7.3. The final process for using training labels that I analyze is semi-supervised clustering. Section 2.7.4 discusses different semi-supervised clustering algorithms. By extending semi-supervised clustering, MGRSS strengthens and then relaxes must-link and cannot-link constraints to create an augmented segmentation. Section 3.4 details the semi-supervised clustering methodology I have implemented, called Probability-Based Training Realignment (PBTR).

2.7.1 Graph-Based Semi-Supervised Classification on Very High Resolution Remote Sensing Images
Recent work by Yan, Sethi, Rangarajan, Vatsavai, and Ranka in 2017 segments VHR images using semi-supervised learning [41]. The first step is to over-segment the image into super-pixel regions. Then, features are extracted from the super-pixel regions. A graph is formed where the regions are the nodes and the edges connect each node to its neighbors. Finally, semi-supervised learning is incorporated into the objective function and the segmentation result is maximized. First, local and global feature information is extracted to develop the super-pixel representation of the image. Here are the steps:
1. A multi-scale oriented signal is constructed using Equation 2-53. A circular region around each pixel is split into two hemispheres at different angles. Histograms from each hemisphere are calculated from brightness, color, and texton features. G(x, θ) calculates a multiscale oriented gradient signal and w represents the weights for features and scales.

I_{local}(x, \theta) = \sum_s\sum_i w_{i,s}\, G_{i,s}(x, \theta)   (2-53)
2. Formulate a weighted graph, using edge weights W(x, y) defined in Equation 2-54.

W(x, y) = \exp\big(-\max_{z(x,y)}\max_{\theta} I_{local}(z(x, y), \theta)\big)   (2-54)

3. The set {e_k(x)} contains the top k eigenvectors with the smallest eigenvalues and is calculated from the weighted graph W(x, y).
4. The k eigenvectors are used in Equation 2-55 to extract spectral information from the image. The function O evaluates the gradient at different orientations.

I_{spectral}(x, \theta) = \sum_k O_\theta(e_k(x))   (2-55)

5. Now, the local feature information provided by Equation 2-53 is linearly combined with the spectral information of Equation 2-55 to create the contour probability.
6. To verify closed contours are produced, the Oriented Watershed Transform (OWT) [34] is performed over the feature set provided by the contour map.
7. Region dissimilarity forms an ultrametric contour map with boundary weights representing the strength of a region boundary. Using the principles of UCM [34], regions are hierarchically merged until a threshold criterion is met. This merging defines the region makeup of the super-pixels used in the following stages.
Now, feature descriptors are taken from the super-pixel regions and a broader view of the image is given by binary features taken from region groupings higher in the UCM hierarchy. Specifically, three features are emphasized at the super-pixel regions: intensity histograms, textons, and corner density. Binary features are learned by the algorithm and describe under-segmented regions in the UCM hierarchy. These features are propagated down to the lower level super-pixel regions contained underneath the corresponding higher level in the UCM. Finally, ground truth labeled super-pixel regions s_L assist in assigning labels to the unlabeled set of super-pixel regions s_U. The number of class labels k is derived from s_L. The objective function to maximize takes the form of the common Support Vector Machine (SVM) objective function and is shown in 2-56. The first term, (1/2) w_k^T w_k, is taken directly from the SVM. The other two terms are constraints introduced with the Lagrange parameters λ_H and λ_S. The function of the second term, f_H^{(i)}(w_k, b_k), is summed over the
known labels, i ∈ s_L, using the SVM hinge loss function for the kth label. The function in the final term, f_S(w_k, b_k), incorporates the graph Laplacian with weights from adjacent neighbors into the objective function analysis.

L(w_k, b_k) = \frac{1}{2} w_k^T w_k + \lambda_H\sum_{i\in s_L} f_H^{(i)}(w_k, b_k) + \lambda_S f_S(w_k, b_k)   (2-56)

2.7.2 A Note on Semi-Supervised Learning Using Markov Random Fields
In 2004, Li and McCallum proposed using the principles of MRFs and supervised learning to implement a process for semi-supervised learning [42]. This method is based upon prior work by McCallum and Minka in 1999 that utilized Polya trees to allow for distinct variances in different dimensions [43]. Li and McCallum report that discriminative models perform better than generative models; however, generative principles are still leveraged. The solution requires a good distance metric, but parameters need to be adjusted to perform task specific classification. Thus, a discriminative model is used, but the model learns the distance metric during training. Learned parameters influence the classification and label propagation. The initial model is supervised and based upon a Maximum Entropy classifier. The observations are x ∈ X and the corresponding labels are y ∈ Y. Features are found by a set of functions; f_k(x, y) is the kth feature function. The conditional likelihood is given in Equation 2-57, where the partition function is Z_x = \sum_{y'\in Y}\exp(\sum_k \lambda_k f_k(x, y')). The trained parameters are Λ = λ_1, ..., λ_k, ..., maximizing the label set D_L = (x_1, y_1), ..., (x_l, y_l). X_L are all the observations that have labels and Y_L are the corresponding label assignments. A Gaussian prior, \sum_k \lambda_k^2/(2\sigma^2), is used to handle sparsity. The penalized log-likelihood function is given in Equation 2-58.

P(y|x) = \frac{1}{Z_x}\exp\Big(\sum_k \lambda_k f_k(x, y)\Big)   (2-57)

L = \log(P(y|x)) = \sum_{i=1}^{l}\log(P(y_i|x_i)) - \sum_k \frac{\lambda_k^2}{2\sigma^2}   (2-58)
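Equation 2-57 is straightforward to evaluate once the feature functions and weights are fixed. The following is a minimal sketch of that computation only; the two labels, the indicator-style feature functions, and the weight values are hypothetical and are not drawn from Li and McCallum's work.

    import math

    def maxent_conditional(x, labels, feature_fns, lam):
        """P(y|x) = exp(sum_k lam_k * f_k(x, y)) / Z_x  (Equation 2-57)."""
        scores = {y: sum(l * f(x, y) for l, f in zip(lam, feature_fns)) for y in labels}
        z = sum(math.exp(s) for s in scores.values())   # partition function Z_x
        return {y: math.exp(s) / z for y, s in scores.items()}

    # Hypothetical example: two labels and two indicator-style feature functions.
    feature_fns = [lambda x, y: float(x > 0 and y == 1),
                   lambda x, y: float(x <= 0 and y == 0)]
    print(maxent_conditional(0.7, [0, 1], feature_fns, lam=[1.5, 2.0]))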
This model corresponds to an MRF where the observations x_i and labels y_i are cliques in the graph with potential φ_i = exp(\sum_k \lambda_k f_k(x_i, y_i)). In the model, k indexes the features and λ_k is the weight given to the kth feature.

P(Y_L, Y_U | X_L, X_U) = \frac{1}{Z_X}\exp\Big(\sum_i\sum_k \lambda_k f_k(x_i, y_i) + \sum_{i<j}\sum_{k'} \lambda_{k'} f_{k'}(x_i, x_j, y_{ij})\Big)   (2-59)
The edge between (x_i, x_j) is the probability of a shared label between the nodes x_i and x_j. When y_{ij} = {0, 1}, the shared label probability is \exp(\sum_{k'}\lambda_{k'}(f_{k'}(x_i, x_j, 1) - f_{k'}(x_i, x_j, 0))/2). Agglomerative clustering is used to merge clusters. Here are the steps for a greedy algorithm approach:
1. Initialize each cluster in C_{|Y|}, where each C_i has one and only one class label, the node y_i.
2. Distribute the observations in X_L into their labeled cluster C_i.
3. Initialize D = X_L.
4. While D is not empty do:
   (a) Find the largest weight pairing an x ∈ D and a class node.
   (b) Find the cluster C_i that has the mean edge closest to x.
   (c) Then, C_i = C_i + x and D = D − x.
5. End while.
Maximizing the incomplete penalized log-likelihood L = \log\sum_{Y_U} P(Y_L, Y_U | X_L, X_U) estimates parameters for the model. However, it is necessary to avoid the potential for exponential addends from Z_X. One solution to this problem is to use local-joint training to construct an approximating function.

2.7.3 Parameter Learning
In order to optimize the label assignments, feature values need to be weighted appropriately for the context of image segmentation within GPR image sequences. I create a truthed label set of GPR images using the Truth Tool described in Section 3.1.3. In 2005 Anguelov et al. discussed an algorithm for learning parameter weights of an MRF [44]. They use an integer program with a linear relaxation to perform the learning procedure. Equation 2-62 is the linear program and seeks the optimal assignment of the label memberships in y. Labels are integers between 1 and K. The implementation discusses complete and partial membership. Their results found the weights learned performed well in practice. Equation 2-62 has two parts. The first term evaluates individual observations
stored in X. The second term includes the edges, appended to the observations in X, between nodes in the graph. w_n and w_e are the learned weights for the observations and the edges. The function is constrained by Equation 2-63. A standard membership is maintained by y being no less than 0 and the sum of the membership values for each observation equaling 1. The quadratic term y_i^k y_j^k, used when combining two nodes, is replaced by a single edge membership value y_{ij}^k. The membership reflected by an edge cannot be greater than either of the two nodes the edge connects. Common ways to enforce this constraint are to calculate the product of the two memberships y_{ij}^k = y_i^k y_j^k or take the minimum of the two node memberships y_{ij}^k = \min(y_i^k, y_j^k).

\max_y \sum_{i=1}^{N}\sum_{k=1}^{K}(w_n^k \cdot x_i)\,y_i^k + \sum_{ij\in E}\sum_{k=1}^{K}(w_e^k \cdot x_{ij})\,y_{ij}^k   (2-62)

y_i^k \ge 0, \forall i, k; \quad \sum_k y_i^k = 1, \forall i; \quad y_{ij}^k \le y_i^k,\; y_{ij}^k \le y_j^k, \forall ij \in E, k.   (2-63)

The soft margin SVM is introduced in Equation 2-64 to maximize the margin of the weights w given a labeled training set (x, \hat{y}). The method proposed by Taskar, Guestrin, and Koller in 2004 is used to learn w [45]. The goal is to maximize the confidence margin between \hat{y} and the other possible label assignments of y. The number of incorrect labelings in y can be expressed as N - \hat{y}_n^T y_n, where N is the total number of observations to label. Equation 2-65 constrains the soft margin SVM by enforcing the gain of the true labelings to be greater than or equal to the total observations, N, minus the misclassifications and any slack for non-separable data, ξ. Equation 2-66 rearranges the function so that the maximization of y is solely contained on the right side of the inequality.

\min_w \frac{1}{2}\|w\|^2 + C\xi   (2-64)

wX(\hat{y} - y) \ge N - \hat{y}_n^T y_n - \xi, \quad \forall y \in Y.   (2-65)

wX\hat{y} \ge N - \xi + \max_{y\in Y}\big(wXy - \hat{y}_n^T y_n\big).   (2-66)
The Lagrangian of Equation 2-62 is formed with the Lagrange parameter α. The standard SVM solution is used by taking the derivative of the Lagrangian with respect to y, setting the result equal to 0, and solving for y. This functional value of y is inserted into Equation 2-66. Equation 2-67 shows that the soft margin SVM has not changed. The newly calculated constraints are given in Equation 2-68. The first constraint is derived by inserting the calculated value of y = \sum_{i=1}^{N}\alpha_i as the maximum y in Equation 2-66. The constraints on w_e, α_{ij}^k, and α_{ji}^k all ensure positivity. The two remaining constraints in Equation 2-68 are found in the coefficients of the observation and edge labels, respectively y_i and y_{ij}.

\min_w \frac{1}{2}\|w\|^2 + C\xi   (2-67)

wX\hat{y} \ge N - \xi + \sum_{i=1}^{N}\alpha_i; \quad w_e \ge 0; \quad \alpha_i - \sum_{ij,ji\in E}\alpha_{ij}^k \ge w_n^k x_i - \hat{y}_i^k, \forall i, k; \quad \alpha_{ij}^k + \alpha_{ji}^k \ge w_e^k x_{ij}; \quad \alpha_{ij}^k, \alpha_{ji}^k \ge 0, \forall ij \in E, k.   (2-68)

The process of introducing constraints with Lagrange parameters into the function, taking the derivative, and solving is repeated on Equation 2-67 to solve for w. Equation 2-69 is the result with the constraints defined in Equation 2-70. Equations 2-71 and 2-72 are the resulting functions for calculating the observation and edge weights, respectively w_n and w_e.

\max_\alpha \sum_{i=1}^{N}\sum_{k=1}^{K}(1 - \hat{y}_i^k)\,\alpha_i^k - \frac{1}{2}\sum_{k=1}^{K}\Big(\sum_{i=1}^{N} x_i(C\hat{y}_i^k - \alpha_i^k)\Big)^2 - \frac{1}{2}\sum_{k=1}^{K}\Big(\lambda^k + \sum_{ij\in E} x_{ij}(C\hat{y}_{ij}^k - \alpha_{ij}^k)\Big)^2   (2-69)

\alpha_i^k \ge 0, \forall i, k; \quad \sum_k \alpha_i^k = C, \forall i; \quad \alpha_{ij}^k \ge 0,\; \alpha_{ij}^k \le \alpha_i^k,\; \alpha_{ij}^k \le \alpha_j^k, \forall ij \in E, k; \quad \lambda^k \ge 0, \forall k.   (2-70)
w_n^k = \sum_{i=1}^{N} x_i (C\hat{y}_i^k - \alpha_i^k)   (2-71)

w_e^k = \lambda^k + \sum_{ij\in E} x_{ij}(C\hat{y}_{ij}^k - \alpha_{ij}^k)   (2-72)

2.7.4 Semi-Supervised Clustering
Similar to semi-supervised and parameter learning, semi-supervised clustering uses a truth subset provided by an expert to influence labeling the remainder of the data. Tuples in the must-link set contain samples of observations that should be linked together. Tuples in the cannot-link set represent observations that should not be given the same label. Some algorithms do not utilize a complete mapping between all truthed elements. If a labeling constraint is broken, either the assignment of the current observation to a given label is not allowed (a hard constraint) or a misclassification penalty is assessed (a soft constraint). In 2001, Wagstaff, Cardie, Rogers, and Schroedl incorporated must-link and cannot-link truth data as hard constraints into the k-means algorithm, implementing COP-KMeans. The k-means algorithm is modified to include hard must-link and cannot-link constraints, meaning the algorithm fails instead of breaking the constraints. When k-means reaches the step of identifying the nearest cluster to assign an observation, the next nearest cluster is examined instead if the current nearest breaks one of the constraints. If no valid cluster assignment is found, the algorithm returns an empty partition of the data. Initial work by Basu, Bilenko, and Mooney in 2004 was expanded in 2006 by Basu, Bilenko, Banerjee, and Mooney as they implemented HMRF-KMeans [46, 47]. HMRF-KMeans incorporates constraint-based clustering from labeled must-link and cannot-link pair sets with a distortion measure into a probabilistic model and an optimization function. The distortion measure computes the distance between points and cluster means in the MRF. The distortion measures sampled include Euclidean
distance and Bregman divergences [48]. The distortion measure serves as a penalty term for soft constraint violation by a cluster labeling. Inference is drawn from the MRF using the EM algorithm while minimizing the objective function. In addition, the distortion measure is re-estimated during each iteration in order to adapt to constraints and variance in the data. Huang, Cheng, and Zhao created MLC-KMeans in 2008 by extending COP-KMeans [49]. Huang et al. identified that the label transitivity shown in Equation 2-73 can be used to create the complete instance mapping (NewCon_=) from a COP-KMeans must-link set (Con_=). The new cannot-link set (NewCon_≠) is formed by finding the instances in the original cannot-link set (Con_≠) that do not violate any NewCon_= constraints. While this creates a complete version of the known data's must-link set and the cannot-link set becomes more accurate, a complete cannot-link network is not created. It is likely that the addition of more cannot-link pairings would enhance the representation provided. The new must-link and cannot-link sets are used as hard constraints in k-means as in COP-KMeans.

\{\forall x, y, z \mid (x, y) \in Con_= \wedge (y, z) \in Con_=\} \Rightarrow (x, z) \in NewCon_=   (2-73)

The MGRSS algorithm PBTR frames the problem differently, allowing for both hard and soft incorporation of the labeled data. Since the nodes in the MRF are super-voxel regions, each observation is the set of features associated with the specific super-voxel region. I refer to the expert labeled regions as training regions. Instead of creating a must-link and cannot-link set of region pairs, each observation is given one of a defined set of labels by an expert. The label sets define the entire group of must-links. Regions assigned with one cluster label are inherently cannot-linked from every region in every other cluster. As the algorithm executes, both new regions and training regions are given labels based upon the probability of their association with the label priors, similar to other semi-supervised constrained clustering algorithms. Parameters are calculated
using the soft constraint assignments. However, before the algorithm moves to the next iteration, each cluster of training observations is re-grouped into the new cluster region having the highest probability association. Therefore, before the next iteration of the algorithm all hard constraints are re-instituted. Section 3.4 describes the implementation of the PBTR algorithm.
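The transitive closure of a must-link pair set from Equation 2-73, and the implicit cannot-link structure that PBTR derives from expert label sets, can both be expressed with a few lines of set manipulation. The sketch below is illustrative only and is neither the MLC-KMeans nor the PBTR implementation; the region names and cluster labels are hypothetical.

    from itertools import combinations

    def must_link_closure(pairs):
        """Grow must-link pairs into complete connected groups (Equation 2-73)."""
        groups = []
        for a, b in pairs:
            hit = [g for g in groups if a in g or b in g]
            merged = set([a, b]).union(*hit) if hit else {a, b}
            groups = [g for g in groups if g not in hit] + [merged]
        return groups

    def implicit_cannot_links(labeled_regions):
        """PBTR-style view: regions carrying different expert labels cannot be linked."""
        return {(r1, r2) for (r1, l1), (r2, l2) in combinations(labeled_regions.items(), 2)
                if l1 != l2}

    groups = must_link_closure([("r1", "r2"), ("r2", "r3"), ("r7", "r8")])
    cannot = implicit_cannot_links({"r1": "ground", "r7": "object", "r9": "background"})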
CHAPTER 3
METHODOLOGY
This chapter examines the implementation details of the Markov Ground Region Segmentation System (MGRSS). Briefly, MGRSS creates a scene model from the samples collected by a GPR detection device. This system is a prototype implementation using data collected with a GPR hand-held device and a GPR array mounted to a vehicle. MGRSS results can be used to identify and isolate objects of interest found within the scene. MGRSS enhancements and future work are described in Chapter 5. The MGRSS process represents a GPR data collection as a sequence of frames, creating a three-dimensional scene or volume. A set of features is extracted from the data and used to segment the scene. Over-segmentation is performed to identify super-voxel regions. Region labels are refined using an MRF and semi-supervised clustering to form the model representation of the scene. Section 3.1 details each step in the process I use. During development of the system, different techniques for segmentation and clustering were researched and are discussed in Chapter 2. The knowledge learned from this research influenced the development of the core MGRSS process and is discussed in Section 3.1.2. Section 3.2 analyzes how data collected using the hand-held device is being formulated into a structured grid. Section 3.3 discusses the implementation of Probability-Based Training Realignment (PBTR). PBTR uses semi-supervised clustering and trained parameters to refine the label model produced for new data collections. Multiple algorithms were tested during the same stage of the MGRSS process. The mixing and matching between algorithms has created thirteen different model scenarios for the execution of MGRSS. These scenarios are discussed in Section 3.5. Analysis and examples of MGRSS results are given in Chapter 4.

3.1 Implementation of MGRSS
Figure 3-1 shows the transition of data between the three system modules that comprise MGRSS. The pre-processing module, which places data into an initial MGRSS
representation structure, is discussed in Section 3.1.1. Section 3.1.2 describes the majority of the processing work performed by MGRSS to produce scene labels. Finally, Section 3.1.3 discusses the post-processing procedures.

Figure 3-1. MGRSS system model.

3.1.1 Pre-Processing
The components of the pre-processing module are shown in Figure 3-2. First, data is collected by a hand-held or vehicle-mounted GPR detection device. The voltage returns are structured as a three-dimensional volume. Each volume is converted into a sequence of GPR image frames, and initial tests used the entire CIE-Lab color space. However, the luminance property directly correlates to the voltage value and is now the only color space component included when examining the GPR image. In addition, the absolute value of the voltages is taken as a surrogate for measuring current. The current is used as the dimension in the feature space MGRSS examines. Refer to Section 2.2 to review the specifics of the data representation MGRSS uses and the format of hand-held and vehicular systems. Finally, the GPR data and GPR image frames are passed to the algorithm processing module covered in Section 3.1.2.
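The conversion described above, from raw voltage returns to the single current feature, amounts to stacking the frames into a depth by channel by frame volume and taking the absolute value of each response. A minimal sketch is shown below; the array shapes are hypothetical and only illustrate the ordering of the dimensions, not the actual MGRSS data format.

    import numpy as np

    def preprocess_volume(voltage_frames):
        """Stack GPR frames into a depth x channel x frame volume and use the
        absolute voltage as the 'current' feature, the only retained component."""
        volume = np.stack(voltage_frames, axis=-1)   # (depth, channel, frame)
        return np.abs(volume)

    # Hypothetical collection: 61 frames of 415 depths x 24 channels.
    frames = [np.random.randn(415, 24) for _ in range(61)]
    current = preprocess_volume(frames)              # shape (415, 24, 61)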
Figure 3-2. MGRSS pre-processing module.

3.1.2 Algorithm Processing
The majority of computation is performed in the algorithm processing module. The components implemented in this module are shown in Figure 3-3. Steps 1a and 1b indicate the module expects to receive GPR data and GPR images in the system representation. Steps 7a, 7b, 7c, and 7d show that this module produces label sets corresponding to the super-voxel regions, initial pre-clustered labels, MRF labels, and unconnected MRF labels identified by module components during execution. Step 2 performs over-segmentation. Initial testing evaluated how EHGB- and SLIC-generated super-voxel regions performed when provided to the rest of the MGRSS system. Sample EHGB and SLIC over-segmentations generated from the same over-sampled test frame are shown in Figure 3-4 and Figure 3-5. Chapter 4 and Section 3.2 discuss how the results SLIC produces and possible enhancements to the SLIC algorithm make it the better choice to be used in my work. The final comparison between over-segmentation algorithms evaluated the super-voxel regions generated by 3D-UCM and SLIC. Analysis of these two algorithms as they are utilized in MGRSS is given in Section 4.3. Figure 3-6 is the initial over-segmentation hierarchy generated by 3D-UCM from frame 31 of scene #8. The corresponding over-segmentation created by SLIC is given in Figure 3-7. Once super-voxel regions are determined,
Figure 3-3. MGRSS algorithm processing module.
Figure 3-4. Sample EHGB segmentation.
Figure 3-5. Sample SLIC segmentation.
neighbors are identified by finding the voxel Euclidean boundary surrounding each region. Each neighbor R_i to the region R_j is saved in a list associated with R_j.

Figure 3-6. Sample 3D-UCM segmentation generated from frame 31 of scene #8.
Figure 3-7. Sample SLIC segmentation generated from frame 31 of scene #8.

Step 3 extracts features from individual voxel positions within the GPR volume. Features include the current value measuring the voxel intensity; the depth, channel, and frame position in the GPR volume; as well as Gabor filters at different orientations and perspectives. A common indicator of region similarity, especially when examining an object of interest, is the presence of a hyperbolic shape. This contour can be identified by convolving the voxels from every frame with a multi-dimensional Gabor filter. Such a hyperbola is observed in the depth by channel (cross-track) and depth by
frame (down-track) directions, but not when looking at the data in a channel by frame (plan-view) perspective. This proved true in testing; removing Gabor filters collected in the plan-view perspective did not affect label accuracy. Two sets of Gabor filter orientations are considered. The first scenario uses eight orientations collected from both the cross-track and down-track perspectives for a total of sixteen Gabor features. The orientations start from 0° and are collected every 22.5°. Orientations every 22.5° starting at 180° were included by taking the absolute value of the filter. The second scenario collects three orientations from both the cross-track and down-track perspectives for a total of six Gabor features. The three orientations are at 0°, 15°, and 165°. Neither of these two Gabor feature sets produced better labeled results than the other. The MRF tuning discussed in Section 3.3 may provide better results than attempting to refine the Gabor filter orientations collected. Performing inference over an MRF requires an initial labeling. MGRSS evaluates the performance of three different clustering methods: CA, HC, and using each individual super-voxel region as an initial cluster. I have reviewed the CA and HC algorithms in Section 2.6. The pre-clustering is implemented at Step 4 and provides the initial label set to the MRF. The mean voxel values of each region's current, depth, channel, and frame number are calculated and used as the feature set by CA and HC. The clusters produced define the label set. CA provides the benefit of cluster pruning. When the number of regions in a cluster drops to a threshold of 10, the cluster is pruned and the regions are distributed to the nearest of the remaining clusters. This threshold was chosen experimentally to provide a minimum number of regions per cluster that exceeded the number of feature dimensions. The HC implementation calculates the Euclidean distance between all the features and links them into the hierarchy tree using the unweighted average distance between the nodes. A cutoff of ten percent of the maximum linked distance is used to identify the hierarchy level of the final clustering assignment.
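The HC variant described here, unweighted average linkage with a cut at ten percent of the maximum linkage distance, maps directly onto SciPy's hierarchical clustering routines. The following is a hedged sketch of that pre-clustering step only, assuming the region features are rows of mean current, depth, channel, and frame; the feature matrix is randomly generated for illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def hc_initial_labels(region_features, cutoff_fraction=0.1):
        """Pre-cluster super-voxel regions with unweighted average linkage and cut
        the tree at a fraction of the maximum linkage distance."""
        z = linkage(region_features, method="average", metric="euclidean")
        threshold = cutoff_fraction * z[:, 2].max()   # ten percent of the largest distance
        return fcluster(z, t=threshold, criterion="distance")

    # Hypothetical feature matrix: one row per region (mean current, depth, channel, frame).
    features = np.random.rand(200, 4)
    labels = hc_initial_labels(features)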
The pattern of depth by channel by frame in the GPR volume naturally lends itself to a grid and can easily be structured as an MRF. However, standard scene sizes will be on the order of 415 depths by 24 channels by 61 frames or 607,560 individual nodes in the graph. Drawing inference over a network this size is computationally infeasible. Therefore, the MRF in Step 5a is set up as a probability distribution over the super-voxel regions and not the individual voxels. The reduced set of nodes provided by the super-voxel regions creates a manageable graph, while not losing information since the super-voxel regions are composed of the simplest voxels to cluster. The MRF would likely have clustered these same voxels together as well, but with greater inefficiency. Inference is drawn over the MRF using the Cartoon Model as described in Section 2.5.3. The features used are the current and the set of Gabor filters discussed earlier. In order to influence a more precise labeling, I implemented the Probability-Based Training Realignment (PBTR) algorithm. PBTR is introduced in Step 5b and directly influences the MRF during each iteration. Section 3.4 discusses tuning the MRF with PBTR. It is possible for the MRF to assign the same label to regions that do not have sequentially connected voxels. While such assignments appropriately label the features of the corresponding regions, they do not model the scene completely since these regions are not contiguously connected in the ground. Therefore, the Region Splitter component highlighted by Step 6 re-labels these regions. The new labeling structure follows a similar pattern. For example, if three unconnected regions are given the label 1, the new split re-labeling will be: 1_1, 1_2, and 1_3.

3.1.3 Post-Processing
The post-processing module provides user interface support for evaluation and truthing of a scene model. The algorithm processing module identifies label sets for the scene model corresponding to the super-voxel regions, Competitive Agglomeration labels, MRF labels, and unconnected MRF labels. As Steps 1a, 1b, 1c, and 1d indicate in Figure
3-8, the post-processing module expects to be given all four of these label sets. Then, all label sets are passed to the MGRSS Viewer (Step 2), Training Tool (Step 3), and Truth Tool (Step 4) subsystems.

Figure 3-8. MGRSS post-processing module.

The MGRSS Viewer displays one GPR frame of the current as a pseudo-color image as well as the label set of the corresponding frame for the super-voxel segmentation and the final result labeled by the MRF. A scroll bar provides the user the ability to navigate through each of the frames, one frame at a time. An individual region can be selected in the final results frame and a new popup figure displays the three-dimensional volume of the selected region. The display allows the user to adjust the viewing perspective, rotating around, zooming into, and zooming out from the scene element. Sample results produced by the MGRSS Viewer are shown in Section 4.4.3. The interface for the MGRSS Truth Tool and MGRSS Training Tool is similar. Both display two B-scan frames. One is a cross-track view and the other is a down-track view of the three-dimensional volume. The GPR data and the corresponding super-voxel regions have been stretched by linear interpolation so that the regions and region boundaries are easier to identify. A scroll bar below each frame allows the user the ability to navigate from one frame to the next. The starting view shows the GPR current as a pseudo-color
image with borders around each super-voxel region. A radio button on the GUI turns the GPR images on and off as well as allowing user-labeled regions and clusters to be displayed in their assigned colors. A toggle button shows and hides unlabeled regions. The Link button links together all of the selected regions. When no previously labeled user region is in the to-be-linked set, the Training Tool provides a selection between a set of six pre-determined cluster types: the ground layer, the background, an object of interest separated into low and high energy representations, a light hued (low energy) layer, and a dark hued (high energy) layer. The Truth Tool also provides a popup; however, this popup allows the user to specify the desired color for the newly clustered set of super-voxel regions without any contextual type indicated. The UnLink button unlinks selected super-voxel regions from the clustered region to which they are assigned. Structural information, such as the currently selected cluster region and the list of super-voxel region key values, is displayed on the GUI. In addition, the Training Tool displays the number of training regions assigned to each cluster type. Finally, the Save Model button saves the training and truthed model. Sample results produced by the MGRSS Truth Tool and MGRSS Training Tool are shown in Sections 4.4.4 and 4.4.5.

3.2 Modelling Human-Operator Collected Hand-Held Data
As noted in Section 2.2.2.1, when a human operator collects data with a hand-held device, the data is likely to be non-uniformly sampled. Differences may be observed in the swept height and the interval distance between response vectors. As well, there will not be a regular framing of vectors. Of course, this also means there will not be a balanced sequence of frames such as the natural framing found in vehicular collections. There will still be a defined area, the region of interest, collected during investigation mode. This area serves as the bounds of the three-dimensional volume of the scene model. At the time of this analysis, I did not have access to human-operator collected hand-held data. The only available hand-held data was collected using a data collection cart with a robot arm. This device simulates the operation of the hand-held system being
operated by a human user, but differs from a human in its high precision and reliability of motion [12]. Testing human-operator data was simulated with robot data collected by randomly removing A-scans from the collected sweeps. I tested reconstruction of the grid volume by using cubic, linear, and nearest neighbor interpolation to fill in the missing gaps between A-scans within frames of the robot arm collected data. The performance of this solution is discussed in Section 4.1.1. However, two complications were observed. First, the GPR system itself did not collect high quality data, making analysis of my reconstruction into a grid volume difficult. Second, the distance between sweeps was large because of the sensor head construction. The sensor included three receivers collecting data at different frequencies. Therefore, the readings collected by each sensor could not be correlated. While it is possible that human-operator collected data may observe similar distances as those between robot collected frames, it is also anticipated that this distance will not be as structured throughout the collection. The randomness of the human-operator data may fill in more gaps throughout the collection sequence. Fewer large blocks of separation will allow for a better reconstruction process to be implemented. Once the grid volume is reconstructed, I am able to combine nearest neighbor principles and SLIC to adjust to the difference in the scale of the data. Felzenszwalb and Huttenlocher [13] use a nearest neighbor approach that associates voxel neighborhoods without relying on Euclidean distance in a grid graph. I implement a variation on this method within the SLIC algorithm. Recall, SLIC determines potential voxel regions by examining a 2S×2S×2S area surrounding each cluster center, where S is the projected super-voxel size. I modify the search area to be 2S_channel × 2S_depth × 2S_frame, where S_channel, S_depth, and S_frame are the projected size distributions of the super-voxel regions in the channel, depth, and frame dimensions. Using these dimensions, a dynamic examination space can be defined. At this stage, the SLIC algorithm can be used as before.
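The per-dimension search window described above can be sketched as follows. The exact sizing of S_channel, S_depth, and S_frame is given by Equations 3-5 through 3-7 below, so their values are treated here as inputs rather than recomputed; the volume shape and extents are hypothetical and only illustrate how the anisotropic window is clipped to the volume.

    def search_window(center, s_dims, volume_shape):
        """Return slice bounds for the 2S_depth x 2S_channel x 2S_frame region
        around a cluster center, clipped to the volume; a sketch of the modified
        SLIC search area with s_dims = (S_depth, S_channel, S_frame)."""
        lo = [max(int(c - s), 0) for c, s in zip(center, s_dims)]
        hi = [min(int(c + s) + 1, n) for c, s, n in zip(center, s_dims, volume_shape)]
        return tuple(slice(l, h) for l, h in zip(lo, hi))

    # Hypothetical 415 x 24 x 61 volume with anisotropic super-voxel extents.
    window = search_window(center=(200, 12, 30), s_dims=(40, 6, 10),
                           volume_shape=(415, 24, 61))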
Recall, the combined distance measure for SLIC and the calculation of S are reviewed in Equation 3-1 and Equation 3-2. Splitting S, and how it is used, into the separate dimensions is the only change necessary to the algorithm. To do this, I define the distance equation \hat{D}' given in Equation 3-3. Also, I introduce the parameter \hat{d}_s, and its definition in Equation 3-4 shows the distribution of S_channel, S_depth, and S_frame to the corresponding spatial dimension feature. Finally, Equations 3-5, 3-6, and 3-7 provide the size of S_channel, S_depth, and S_frame.

D' = \sqrt{d_c^2 + \Big(\frac{d_s}{S}\Big)^2 m^2}   (3-1)

S = \sqrt[3]{\frac{N}{k}} = \sqrt[3]{\frac{channels \cdot depths \cdot frames}{k}}   (3-2)

\hat{D}' = \sqrt{d_c^2 + (\hat{d}_s)^2\, m^2}   (3-3)

\hat{d}_s = \sqrt{\frac{(x_j - x_i)^2}{S_{channel}^2} + \frac{(y_j - y_i)^2}{S_{depth}^2} + \frac{(z_j - z_i)^2}{S_{frame}^2}}   (3-4)

S_{channel} = \sqrt{\frac{(channels \cdot depths) + (channels \cdot frames)}{k}}   (3-5)

S_{depth} = \sqrt{\frac{(depths \cdot channels) + (depths \cdot frames)}{k}}   (3-6)

S_{frame} = \sqrt{\frac{(frames \cdot channels) + (frames \cdot depths)}{k}}   (3-7)

3.3 MRF Tuning
By itself, the MRF does not label all elements within the scene appropriately. I have incorporated the principles of constrained clustering to enhance the accuracy of MGRSS labelings. In addition, the parameter and semi-supervised learning algorithms discussed in Section 2.7 were potential options for refining label accuracy within a scene. Recall,
parameter learning calculates weights to associate with specific features. Weighted features are used to make a more appropriate contribution to label assignments. The Truth Tool provides the ability to generate label sets that can be used for this type of training. Performing semi-supervised learning, truthed label sets also provide a base of known labels. When labeling new scenes, an existing truthed scene can be paired with the new scene in order to perform the semi-supervised learning of the new label set. Distributing additional label sets within the system incurs greater overhead than simply including the feature weights of parameter learning. However, being able to train learned truth sets for different scenarios may increase the ability to accurately label scenes from related scenarios. As well, a semi-supervised learning process typically has a known label set smaller than the fifty percent provided by a truthed scene model over the same scenario.

3.4 Probability-Based Training Realignment
I have created the Probability-Based Training Realignment (PBTR) algorithm to incorporate semi-supervised constrained clustering into MGRSS. PBTR enhances the label model MGRSS produces. A label set for specific super-voxel regions is created using the Training Tool discussed in Sections 3.1.3 and 4.4.5. The training label set focuses on identifying key label clusters within each training scene examined. Training labels are introduced in Step 5b of the algorithm processing module discussed in Section 3.1.2. These labels identify must-link constraints. Since every training region is given a label, any observations not included in the same must-link set cannot be linked and implicitly form the representation of the cannot-link set. The algorithm and the training labels are incorporated into the cartoon model, Section 2.5.3, in the following manner. Each super-voxel region is described by a feature vector F = {f_s | s ∈ S = S_U ∪ S_T}, where S_U is the set of unknown region labels and S_T is the set of previously known training region labels. Recall, ω = {ω_s, s ∈ S} defines a specific labeling ω ∈ Ω where Λ = {1, 2, ..., L}. The model converges to the most likely arrangement of labels to produce ω̂. Equation 2-8 remains the same for the unknown
labels S_U and is shown in Equation 3-8. Equation 3-9 is the energy function used to evaluate labels for the training regions S_T. This equation looks similar, but it is evaluated over only S_T. However, there is also no concept of neighborhoods influencing the training region probability. Therefore, only the singleton function V_s(ω_s, f_s) is evaluated.

\exp(U(\omega_{S_U}, F_{S_U})) \propto \exp\Big(\sum_{s\in S_U} V_s(\omega_s, \vec{f}_s) + \sum_{\{s,r\}\in C_{S_U}} \beta(\omega_s, \omega_r)\Big)   (3-8)

\exp(U(\omega_{S_T}, F_{S_T})) \propto \exp\Big(\sum_{s\in S_T} V_s(\omega_s, \vec{f}_s)\Big)   (3-9)

Extra steps are included in the processing of the EM algorithm in order to incorporate the training regions. The μ and Σ model parameters are calculated over the entire data set, including the features and labels of the training regions, as shown in Equations 3-10 and 3-11. Using the current label assignments, x_i, i = 1, 2, ..., N_ω, denotes feature vectors assigned to the class ω ∈ Λ and N_ω is the number of unlabeled and training regions assigned the class.

\vec{\mu}_\omega = \frac{1}{N_\omega}\sum_{i=1}^{N_\omega}\vec{x}_i   (3-10)

\Sigma_\omega = \frac{1}{N_\omega - 1}\sum_{i=1}^{N_\omega}(\vec{x}_i - \vec{\mu}_\omega)^T(\vec{x}_i - \vec{\mu}_\omega)   (3-11)

In PBTR, parameters are evaluated at two separate points. First, training clusters are matched to the nearest probability unknown cluster using the current labels given to the unknown regions. Using the probability of a label to a cluster, each training label set is exclusively aligned with the highest probability of the unknown clusters' current distribution, establishing a one-to-one association. Then, parameters are estimated, using the entire training set to influence the strength of matching clusters. Next, new label assignments are made individually for every unknown and training region. Thus, the training region must-link and cannot-link constraints are softened until after the next parameter estimation step. At this time, it is possible that a cluster label will no longer be in use, and so it is pruned. Now, parameters are re-evaluated. Before returning to re-label
all elements, the training labels must be re-aligned to the highest probability unknown cluster. Once training clusters are re-aligned again, parameters are re-estimated with the hard constraints having been re-established. Finally, the process repeats until convergence. Here is an outline of the step by step process PBTR uses:
1. Set up the unknown (new) regions to be labeled.
   (a) Assign initial cluster labels to unknown regions.
   (b) Estimate parameters for unknown regions.
2. Re-align training clusters to the highest probability unknown cluster (hard constraint).
   (a) Re-establishes hard constraints.
   (b) Imposes a one-to-one relationship between training and unknown clusters.
3. Estimate parameters for all regions.
4. Assign cluster labels to all regions individually, relaxing to a soft constraint.
5. Prune empty clusters.
6. Estimate parameters for all regions.
7. Return to Step #2.
Figures 3-9 through 3-14 are excerpts from the entire diagrammed sequence of PBTR steps found in Appendix A. Specifically, Figure 3-9 is a sample set of training regions, separated into labeled clusters. This example includes four of the six label types identified in the truth and training scenes described in Sections 4.1.3 and 4.1.4. The re-aligning of each training cluster into a specific unknown cluster of regions is shown in Figure 3-10. Figure 3-11 is an example of individual regions being assigned a new label; in the case shown, the label happens to be the same for each region. In Figure 3-12, the new labeling shown places regions into different clusters. It is observed in Figure 3-13 that one unknown cluster no longer contains any regions, therefore it is pruned. Finally, the process repeats in Figure 3-14 by returning to the re-alignment of training clusters into unknown label clusters.

3.5 MGRSS Model Scenarios
MGRSS examines thirteen different model scenarios. Each model scenario is executed over the set of truth scenes described in Section 4.1.3. Twelve of the scenarios are different combinations of algorithm components detailed in Chapter 2 and incorporated into the
Figure 3-9. PBTR sample training regions and types.
Figure 3-10. PBTR training cluster re-alignment iteration #1.
Figure 3-11. PBTR assigning labels example I.
Figure 3-12. PBTR assigning labels example II.
Figure 3-13. PBTR cluster pruning.
Figure 3-14. PBTR training cluster re-alignment iteration #2.
algorithm processing module described in Section 3.1.2. The super-voxel over-segmentation component has two possible algorithms, 3D-UCM and SLIC. The pre-clustering stage has three options: Competitive Agglomeration (CA), Hierarchical Clustering (HC), and using the super-voxel regions (SVR) themselves. The last component defines whether or not the training from the PBTR algorithm was included in the MRF processing. These scenarios are listed in Table 3-1. The final scenario, Scenario 13, applies the entire sequence of hierarchies generated by 3D-UCM to the GPR truth scenes.

Table 3-1. Algorithm combinations included in MGRSS algorithm processing, forming twelve of thirteen model scenarios.

Scenario | Over-Segmentation | Pre-Clustering | PBTR
1        | 3D-UCM            | CA             | No
2        | 3D-UCM            | CA             | Yes
3        | 3D-UCM            | HC             | No
4        | 3D-UCM            | HC             | Yes
5        | 3D-UCM            | SVR            | No
6        | 3D-UCM            | SVR            | Yes
7        | SLIC              | CA             | No
8        | SLIC              | CA             | Yes
9        | SLIC              | HC             | No
10       | SLIC              | HC             | Yes
11       | SLIC              | SVR            | No
12       | SLIC              | SVR            | Yes
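The twelve combinations in Table 3-1 are simply the Cartesian product of the component choices. The small sketch below enumerates them in the same order as the table; it is illustrative only, and Scenario 13, the full 3D-UCM hierarchy run, is handled separately.

    from itertools import product

    over_segmentation = ["3D-UCM", "SLIC"]
    pre_clustering = ["CA", "HC", "SVR"]
    use_pbtr = ["No", "Yes"]

    scenarios = [
        {"scenario": i + 1, "over_segmentation": seg, "pre_clustering": pre, "pbtr": pbtr}
        for i, (seg, pre, pbtr) in enumerate(product(over_segmentation, pre_clustering, use_pbtr))
    ]
    # scenarios[0] -> {'scenario': 1, 'over_segmentation': '3D-UCM', 'pre_clustering': 'CA', 'pbtr': 'No'}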
CHAPTER 4
RESULTS
In this chapter, I evaluate the results MGRSS produces and the performance of the system. Incorporating the PBTR algorithm into MGRSS, as described in Section 3.4, has increased the accuracy of cluster labels. The majority of the execution time occurs within module #2, algorithm processing, discussed in Section 3.1.2. Specifically, the slowest execution occurs while drawing inference from the MRF. This is expected, since the MRF will need to cycle through the EM steps and redistribute cluster assignments in every iteration. As the number of regions identified during over-segmentation increases, the number of iterations increases. As expected, the converse is also true; decreasing the number of regions decreases the number of iterations. In Section 4.1 I analyze the process elements MGRSS employs. Section 4.2 evaluates the separability of the truth and training clustered regions. Section 4.3 examines the results produced by the model scenarios described in Section 3.5. Model scenarios are evaluated on a scene by scene basis using the Jaccard Index and the area under the curve (AUC) produced by Receiver Operating Characteristic (ROC) curves. Finally, Section 4.4 discusses the system components that make up the MGRSS architecture.

4.1 Process Elements
Different process elements were tested and evaluated within the MGRSS structure. First, Section 4.1.1 analyzes the progress made in resolving non-uniformly sampled hand-held data to the extent possible with the limited data available. Section 4.1.2 examines the system performance during the super-voxel generation stage of the process. The processes for annotating truth scenes and training regions are given in Sections 4.1.3 and 4.1.4.

4.1.1 Structuring Non-Uniform Samples
Due to not having human-operator collected data available, testing of non-uniform samples was limited to a simulation using hand-held data collected with a robot arm.
A-scans were randomly removed from this data and performance evaluated by comparing the mean squared error between the reconstructed and original samples. Missing samples were reconstructed using cubic, linear, and nearest neighbor interpolation. Each of these three techniques was tested by using the sequence of two-dimensional B-scans of the data volume, mapping A-scans to evenly spaced grid coordinates, and the UTM coordinates where the data was collected in actual space. Using the UTM coordinates most closely follows the anticipated structure of hand-held user-operator data since samples will be collected over an undetermined interval and potentially at random. I anticipate that, when hand-held data collected by a human operator is examined, the distance between A-scans will vary, leading to the reconstruction of the entire B-scan frame as well. At the same time, the multi-stage process for hand-held interrogation as described in Section 2.2.2.1 will provide a greater number of samples close together at the most significant coordinates, above a possible object. The large area coverage provided by sweep mode combined with the more densely collected investigation mode will form a useful set of points to perform reconstruction. As the analysis here shows, the reduction in proximity will aid in the process of reconstructing the scene. The improvement in results indicates that having a greater number of intermediate samples will increase the ability to accurately reconstruct the three-dimensional volume when new data becomes available. Figure 4-1 shows the mean squared error of the three methods. Note, the black curve is the nearest neighbor interpolation. A nearest neighbor methodology fits well with the anticipated hand-held data since there will be varying degrees of separation between readings in the geographic space, allowing closer groups of samples to provide more influence on each other. Figure 4-2 is a view of one B-scan frame from the resampled data using each of the algorithms described. Equation 4-1 is the formula for mean squared error. In this equation, N is the total number of observations being considered, x_i is the original A-scan


MSE = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2    (4-1)

Figure 4-1. Plot of the mean squared error for nearest neighbor resampling using B-scan (black), Grid (blue), and UTM (orange) coordinates.

Figure 4-2. Sample B-scan frames from non-uniform re-sampling interpolation.
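As an illustration of the reconstruction experiment, the sketch below drops a random subset of A-scans from a single B-scan frame, rebuilds them with nearest, linear, and cubic interpolation over scan position, and reports the mean squared error of Equation 4-1 at the held-out positions. It is a minimal sketch assuming the frame is available as a NumPy array indexed by (depth, scan position); the synthetic frame and the variable names are illustrative only and are not part of the MGRSS code.

import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)

# Illustrative B-scan frame: rows are depth bins, columns are A-scan positions.
depth_bins, n_scans = 128, 60
frame = rng.normal(size=(depth_bins, n_scans))

# Randomly hold out 20% of the A-scan positions to simulate non-uniform sampling.
missing = rng.choice(n_scans, size=n_scans // 5, replace=False)
kept = np.setdiff1d(np.arange(n_scans), missing)

for method in ("nearest", "linear", "cubic"):
    # Interpolate each depth row over the kept scan positions.
    f = interp1d(kept, frame[:, kept], kind=method, axis=1,
                 bounds_error=False, fill_value="extrapolate")
    reconstructed = f(missing)
    # Equation 4-1: mean squared error at the held-out positions.
    mse = np.mean((frame[:, missing] - reconstructed) ** 2)
    print(f"{method:8s} MSE = {mse:.4f}")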


4.1.2 Generating Super-Voxel Regions

Initial testing focused on comparing the results observed using EHGB and SLIC to over-segment into super-voxels. Subsequently, I have also incorporated the super-voxels generated by 3D-UCM into the MGRSS structure. EHGB and SLIC testing focused on time performance and the usefulness of the super-voxels formed within the context of MGRSS. Comparatively, the execution time of the SLIC algorithm is faster and more consistent than EHGB when creating the over-segmentation MGRSS requires. Both algorithms provide parameters that manage the minimum super-voxel size. SLIC also provides a parameter for the total number of regions. This total serves as a baseline, but is just a target value; the final number of region labels is defined by what the algorithm determines segments the scene. The total regions parameter allows for more user-controlled scaling of SLIC. Whether searching for a larger or smaller number of regions, the execution time of SLIC is nearly identical. On the other hand, EHGB reduces regions as each hierarchy is clustered. However, in order to reach hierarchy N, the other (N - 1) hierarchies must be calculated.
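For reference, the sketch below shows how a SLIC-style super-voxel over-segmentation of a GPR volume might be produced with scikit-image. The target region count, the compactness value, the synthetic volume, and the use of scikit-image itself are assumptions made for illustration; they are not the parameters or implementation used inside MGRSS.

import numpy as np
from skimage.segmentation import slic

# Illustrative GPR volume indexed by (depth, channel, frame).
rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 40, 60)).astype(np.float32)

# SLIC with a target of 200 super-voxels; the final label count is only
# approximately 200, since the parameter is a target value.
labels = slic(volume, n_segments=200, compactness=0.1,
              channel_axis=None, start_label=1)

print("super-voxel regions found:", labels.max())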


Table 4-1 shows mean statistics calculated from two scenarios using EHGB and SLIC to perform over-segmentation. The first scenario has 11 scenes to analyze and the second includes 35 scenes. The table lists the mean value of the following statistics: the super-voxel segmentation time, the CA execution time, the MRF execution time, the number of MRF iterations, the number of super-voxel regions identified, and the total time to perform all MGRSS processing. As the table shows, SLIC is consistently more efficient than EHGB. The anticipated number of super-voxels for SLIC was set to 200; however, this can be adjusted to modify the target number of regions SLIC identifies.

The MRF execution time increases as the number of nodes in the graph increases. This occurs because, during the EM step, any or all of the nodes in the graph can be re-labeled. Instead of only moving regions that have been given a new label, for this prototype system, all nodes are assigned the newly calculated label. A natural optimization is to use a more efficient storage process for nodes and their cluster labelings. In spite of the increased number of both super-voxel regions and edges between nodes in the graph, SLIC uses a smaller number of iterations to optimize the MRF objective function. I conjecture this is because the SLIC regions are more uniformly structured around the segmentation boundaries in the data. This creates less variation in the MRF labeling inference and yields fewer iterations. Future implementations of MGRSS will prefer the SLIC algorithm rather than EHGB when generating super-voxel regions because of these benefits.

Table 4-1. Mean performance values for EHGB and SLIC calculated using scenarios of 11 and 35 scenes to analyze.

Algorithm               EHGB     EHGB     SLIC     SLIC
Number of Scene Models  11       35       11       35
Super-Voxel Time        13.12    13.45    0.25     0.25
CA Time                 0.02     0.03     0.03     0.03
MRF Time                6.50     13.64    9.91     16.92
MRF Iterations          9.45     18.06    7.36     12.14
Super-Voxel Regions     155.18   188.80   234.00   242.46
Number of Edges         2235.82  3146.40  3606.73  3829.48
Total Time              24.58    32.47    14.53    21.65

The initial set of 3D-UCM super-voxels produced at the level 0 hierarchy has also been used as the super-voxel region over-segmentation the MGRSS process requires. Testing using 3D-UCM super-voxels has focused on the final set of labeled results, since the current version of 3D-UCM is not implemented in compiled code; comparing the 3D-UCM prototype version's execution time against the compiled code of EHGB and SLIC would not be a reasonable basis for comparison. Section 4.3 analyzes the label model results produced using 3D-UCM super-voxels, SLIC super-voxels, and the complete 3D-UCM hierarchy. Table 4-2 compares the number of super-voxel regions (SVR) and neighbor edges for 3D-UCM and SLIC. Figure 4-3 shows the corresponding line plot of this data. These statistics show that the level 0 hierarchy of 3D-UCM identifies


manymoreregionsthanSLIC.Thelevel0hierarchywaschosenbecauseitwouldcontainthebasicsuper-voxelregionsegmentationthat3D-UCMproduces.Bychoosingthemostbasichierarchy,theMRFwillbeablemergesuper-voxelregionstogetherandformregionsandregionboundarieswiththemostexibilitypossible.Thisprovidesthesolutionwiththemostlikelyprobabilitytocreateboundariesatthebestlocationsandthereforeidentifythelabeledclusters. Table4-2. Numberof3D-UCMandSLICsuper-voxelregions(SVR)andneighboredges. Scene3D-UCMSVR3D-UCMEdgesSLICSVRSLICEdges 163168568426329882592883130230263035919802762693456463528635225832025369748630229260465694781482313058760288418423030008555174146226276895755774822272790103860510162262516Avg5510749052392901 Figure4-3. Numberof3D-UCMandSLICsuper-voxelregions(SVR)andneighboredges. 85


Table 4-3 showsthenumberofclustersfoundbyCAandHCfor3D-UCMandSLICduringthepre-clusteringstageofthealgorithmprocessing.Figure 4-4 showsthecorrespondinglineplottothisdata. Table4-3. NumberofclustersfoundbyCAandHCfor3D-UCMandSLIC. MGRSS3DUCMMGRSSSLICSceneCACAPBTRHCHCPBTRCACAPBTRHCHCPBTR 112123636101036362121212412489393931212585810113838412123333121139395121298989103838612128787664141712129090119434381212747471047479121283831112414110121254549113636Avg121274749104040 Figure4-4. NumberofclustersfoundbyCAandHCfor3D-UCMandSLIC. Thelargenumberofsuper-voxelregionsdoescreateaproblemfortheMGRSS3DUCMSVRandMGRSS3DUCMSVRPBTRscenarios.Bothofthesescenariosusethesuper-voxelregionsastheinitiallabelingfortheMRF.Duringparameterestimation,the 86


MRF calculates the covariance matrix of the features of the voxels in each super-voxel region. Due to the large number of super-voxel regions, the number of voxels in each region is often too small to properly condition the matrix for covariance evaluation. This leads to the MRF not being able to perform useful analysis. I attempted to resolve this issue by constructing an artificial diagonal covariance matrix for the insufficiently conditioned regions. This matrix was produced by taking the variance of the points available and using them as the diagonal of the matrix. Unfortunately, this did not resolve the problem, as the MRF could not find enough similarity to merge super-voxel regions and reached the convergence threshold after only one iteration. The resulting label model was no better than the super-voxel regions themselves. This behavior was not observed in the other model scenarios because the reduced number of pre-clustered regions leads to a sufficient number of voxels for each cluster. Therefore, the model scenarios of MGRSS 3DUCM SVR and MGRSS 3DUCM SVR PBTR were left out of my final analysis. Figure 4-5 is a histogram of the 3D-UCM super-voxel region sizes, showing the large number of small initial regions. Figure 4-6 shows the comparative voxel size in the super-voxel regions identified by SLIC.

Figure 4-5. Histogram of voxel sizes observed in 3D-UCM hierarchy 0 super-voxels.
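The diagonal fallback described above can be written compactly. The sketch below is a minimal illustration, not the MGRSS code: it assumes each region's voxel features are stacked in a NumPy array, uses the matrix condition number as the test for an unusable covariance estimate, and substitutes a diagonal matrix built from the per-feature variances when that test fails. The threshold value is an arbitrary assumption.

import numpy as np

def region_covariance(features, cond_threshold=1e6, eps=1e-6):
    """Covariance of a region's voxel features (rows = voxels, cols = features),
    falling back to an artificial diagonal covariance when the sample
    covariance is too poorly conditioned to be useful."""
    n_voxels, n_features = features.shape
    if n_voxels > n_features:
        cov = np.cov(features, rowvar=False)
        if np.linalg.cond(cov) < cond_threshold:
            return cov
    # Too few voxels or ill-conditioned: use per-feature variances on the diagonal.
    var = np.var(features, axis=0)
    return np.diag(np.maximum(var, eps))

# Example: a tiny region with only three voxels and five features.
rng = np.random.default_rng(0)
print(region_covariance(rng.normal(size=(3, 5))))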


Figure 4-6. Histogram of voxel sizes observed in SLIC super-voxels.

The truthing process is discussed in Section 4.1.3, and SLIC super-voxels were used to label the truth regions. In situations when a SLIC super-voxel did not define its boundary sufficiently, two regions that should have been distinct would merge together. A bleed effect was observed where two regions would overlap and therefore must be given the same label when truthing. An example is shown in Figure 4-7. In the diagram, the image on the left side shows the area of a super-voxel region found by SLIC. On the right side is an image showing the GPR current of this region. The two arrows point to locations in the highlighted region that should not be contained in the same super-voxel. The connected line points to a background portion of the region and the dashed line points to what is clearly a piece of a layer. This produced a sub-optimal ability to precisely label each voxel in the scene, as a single label must be chosen for the entire area, affecting the truth model and the resulting evaluations. I anticipate that using the 3D-UCM super-voxel regions in place of SLIC would prevent some of this label merging. However, the increase in the number of super-voxel regions to label would make the truthing exercise prohibitive at this time. My future work, discussed in Chapter 5, considers options for addressing the issue of a better super-voxel representation.


Figure 4-7. Bleed effect merging two distinct types into one super-voxel region.

4.1.3 Truth Scenes

The process for defining truth scenes required a user to select label clusters for the super-voxel regions of each scene analyzed. A similar process to the initial steps of MGRSS is used. The absolute value of the GPR is taken to identify the current. In order to allow the user a better view of the scene when labeling truth, the data is then over-sampled and super-voxel region boundaries are represented. First, the scene is over-sampled by a factor of four in both the channel and frame dimensions using three-dimensional linear interpolation. The first round of over-sampled data is over-segmented using SLIC, creating the super-voxel regions of the scene model. SLIC was chosen based upon the analysis provided in Section 4.1.2. However, this selection may cause the truth labeling to be more accurate when SLIC is a part of the model scenario being evaluated. Future work considers how another algorithm such as 3D-UCM may be used to assist in labeling the truth scene. A second round of over-sampling, again using three-dimensional linear interpolation, is performed to allow space for region boundaries. This over-sampling doubles the depth, channel, and frame dimensions. Instead of directly using each newly interpolated data value, only the values contained within a super-voxel region are kept. Any voxel positions on the boundary between regions are annotated as a boundary, allowing the truth tool the ability to clearly show the separation between super-voxel regions in the scene being truthed.
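A compact way to picture the two over-sampling rounds is sketched below. It is illustrative only and assumes SciPy and scikit-image stand in for whatever interpolation and boundary-marking code MGRSS actually uses: the first zoom enlarges the channel and frame dimensions by four with linear interpolation, SLIC then produces super-voxel regions, and a second zoom doubles all three dimensions before voxels that sit on a region boundary are flagged.

import numpy as np
from scipy.ndimage import zoom
from skimage.segmentation import slic, find_boundaries

rng = np.random.default_rng(0)
# Illustrative scene volume indexed by (depth, channel, frame).
scene = np.abs(rng.normal(size=(32, 20, 30))).astype(np.float32)

# Round 1: over-sample the channel and frame dimensions by 4 (linear interpolation).
upsampled = zoom(scene, zoom=(1, 4, 4), order=1)

# Over-segment the up-sampled volume into super-voxel regions.
regions = slic(upsampled, n_segments=200, compactness=0.1,
               channel_axis=None, start_label=1)

# Round 2: double depth, channel, and frame to leave room for boundary voxels.
# order=0 keeps region labels as integers instead of blending them.
regions2 = zoom(regions, zoom=(2, 2, 2), order=0)

# Flag voxels on the boundary between regions so the truth tool can draw them.
boundary = find_boundaries(regions2, mode='thick')
print("boundary voxels:", int(boundary.sum()), "of", boundary.size)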


The MGRSS Truth Tool shown in Figure 4-25 and discussed in Section 4.4.4 provides this display and is used to select labels for each super-voxel region. In the data examined, six different layer and object types were identified. Each label number and the corresponding type of ground or object structure is shown in Table 4-4. Once each region has been assigned one of the six labels, the model is saved. Saving the model stores the over-sampled region labeling and also under-samples back to the original dimensions, providing truth labels for all the voxels found in the three-dimensional volume of the original scene.

Table 4-4. Six types identified in each truth scene.

Label  Type
1      Ground Layer
2      Background
3      Low Energy Object
4      High Energy Object
5      Low Energy Layer
6      High Energy Layer

Ten truth scenes were annotated and used to evaluate the performance of MGRSS. The truth scenes include five metal anti-tank (AT) objects, two false alarms (FA), two holes, and one pressure plate (PP) object. The object information found in each truth scene, including the object type and the buried depth, as well as the ground structure of the scene, is shown in Table 4-5. Appendix B contains images of the GPR current value and the truth labeling for each truth scene.

When implementing the truth process for MGRSS, I assigned the same truth type label to disconnected regions with similar feature space properties. In future implementations, including depth or connectedness as a feature may necessitate re-labeling the truth set. An example of disconnected types is shown in Figure 4-8. In the figure, disconnected background elements, low energy object regions, and low energy layer regions are highlighted. Note that background elements are found above the ground layer and under the ground layer or near the bottom of the data collection scene.


Both of these areas have very similar features and are truthed as background. However, because they are separated in space by the ground layer, they could be viewed as distinct label clusters. The same principle holds for any separation between elements within the scene.

Table 4-5. Details of the ten user-annotated truth scenes used in testing.

Scene  Ground    Object    Depth
1      Gravel    Metal AT  1
2      Dirt      Metal AT  9
3      Asphalt   Metal AT  6
4      Concrete  Hole      6
5      Concrete  Hole      6
6      Gravel    FA        NA
7      Gravel    FA        NA
8      Gravel    Metal AT  3
9      Gravel    Metal AT  1
10     Dirt      PP        2

Figure 4-8. Disconnected types observed in truth scene.

4.1.4 Training Regions

The same setup for truth described in Section 4.1.3 is implemented by the MGRSS Training Tool discussed in Section 4.4.4. Training regions were collected by selecting super-voxel regions from fourteen training scenes. The set of training scenes is distinct from the set of truth scenes.


Across the training scenes, 60 training regions were selected for each of the six different layer and object types shown in Table 4-4, giving a total of 360 training regions. The training scenes include ten metal AT objects, one low-metal AT object, two non-metal AT objects, and one PP object. The object information found in each training scene, including the object type and the buried depth, as well as the ground structure of the scene, is shown in Table 4-6.

Table 4-6. Details of the fourteen training scenes used to select training regions.

Scene  Ground  Object        Depth
1      Gravel  Metal AT      3
2      Gravel  Metal AT      1
3      Gravel  Metal AT      5
4      Gravel  Metal AT      1
5      Gravel  Metal AT      1
6      Gravel  Metal AT      1
7      Gravel  Metal AT      5
8      Gravel  Metal AT      5
9      Gravel  Low-Metal AT  2
10     Gravel  Metal AT      1
11     Dirt    PP            2
12     Dirt    Metal AT      6
13     Dirt    Non-Metal AT  8
14     Dirt    Non-Metal AT  8

4.2 Evaluation of Truth and Training Cluster Separability

In order to evaluate the ability of PBTR to influence the different model scenarios to create a better labeling of the scene, I have analyzed the truth and training labeled regions using the J3 value. The J3 value, shown in Equation 4-6, is discussed by Theodoridis and Koutroumbas in 2006 [50]. J3 represents the cluster separability observed in data. In Equations 4-2 through 4-4, P_i is the prior probability and μ_i is the mean of label i. S_w is the within-class scatter matrix given in Equation 4-2 and is calculated by summing the prior-weighted covariance matrix, Σ_i, of each label i. The between-class scatter matrix is calculated in Equation 4-3. Here, the prior-weighted outer product of the difference between each label mean, μ_i, and the global mean, μ_0, is summed. The global mean is given in Equation 4-4.


Each label mean is weighted by its prior probability and summed. Equation 4-5 shows that the within-class and between-class scatter matrices can be summed to form the mixture matrix of the data set.

S_w = \sum_{i=1}^{N} P_i \Sigma_i    (4-2)

S_b = \sum_{i=1}^{N} P_i (\mu_i - \mu_0)(\mu_i - \mu_0)^T    (4-3)

\mu_0 = \sum_{i=1}^{N} P_i \mu_i    (4-4)

S_m = S_w + S_b    (4-5)

J_3 = \mathrm{trace}\{S_w^{-1} S_m\}    (4-6)

Intuitively, J3 measures the variance within each label and compares it against the variance within the entire mixture of the scene. The higher the J3 value, the more separable the data is. Figure 4-9 is a plot of the J3 values of the truth scenes (blue curve) compared to the J3 value of the training data (orange line). Table 4-7 shows the scene-by-scene J3 values. As both reflect, the training data has a higher J3 value, indicating greater separability and capturing the difference between cluster labels better than the truth. This analysis holds true, as Section 4.3 shows that algorithms using PBTR consistently outperform the same algorithms without PBTR.

Figure 4-9. J3 measuring truth and training separability.

Table 4-7. Scene by scene J3 values.

Scene  Truth     Training
1      132.0196  148.9347
2      77.2932   148.9347
3      39.6103   148.9347
4      86.8947   148.9347
5      147.2657  148.9347
6      73.8969   148.9347
7      80.0790   148.9347
8      49.7213   148.9347
9      33.7013   148.9347
10     61.3425   148.9347
Avg    78.1825   148.9347
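Equations 4-2 through 4-6 translate directly into a few lines of linear algebra. The sketch below is a minimal, self-contained illustration of the J3 computation on labeled feature vectors; the random data and the variable names are placeholders, not the MGRSS feature set.

import numpy as np

def j3_separability(features, labels):
    """J3 cluster separability (Equations 4-2 through 4-6).
    features: (n_samples, n_features) array; labels: (n_samples,) array."""
    classes = np.unique(labels)
    n, d = features.shape
    Sw = np.zeros((d, d))            # within-class scatter, Eq. 4-2
    mu0 = np.zeros(d)                # global mean, Eq. 4-4
    priors, means = [], []
    for c in classes:
        X = features[labels == c]
        P = len(X) / n               # prior probability of label c
        mu = X.mean(axis=0)
        Sw += P * np.cov(X, rowvar=False)
        mu0 += P * mu
        priors.append(P)
        means.append(mu)
    Sb = np.zeros((d, d))            # between-class scatter, Eq. 4-3
    for P, mu in zip(priors, means):
        diff = (mu - mu0).reshape(-1, 1)
        Sb += P * (diff @ diff.T)
    Sm = Sw + Sb                     # mixture matrix, Eq. 4-5
    return np.trace(np.linalg.solve(Sw, Sm))   # Eq. 4-6

# Two well-separated synthetic clusters give a larger J3 than overlapping ones.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
y = np.repeat([0, 1], 100)
print(j3_separability(X, y))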


4.3 Evaluation of MGRSS Model Scenario Performance

As a system, MGRSS uses thirteen different model scenarios, described in Section 3.5, to create a label model of a GPR scene volume. Examples of the frame-by-frame graphical results produced when using MGRSS SLIC CA PBTR, MGRSS 3DUCM CA PBTR, and the 3DUCM Hierarchy are shown in Appendix C. In addition to interactively displaying the results of the scene model, MGRSS evaluates the performance of each model scenario over ten truth scenes to determine how the scenario models perform with respect to each other.

Each model scenario is executed over the ten truth scenes, then the Jaccard Index and AUC values are calculated. In the figures and tables that follow, higher numbers indicate a better result. The exception is the usage of rankings, where rank 1 is the best result, rank 2 is second best, and so on. All figures follow the pattern of orienting the y-axis such that the best result is reflected by the points closest to the top of the figure. I use the Jaccard Index and the area under the curve (AUC) to evaluate the label results produced by each model scenario.


4.3.1 Jaccard Index Evaluation

The Jaccard Index is defined by Equation 4-7. In this equation, T represents the set of truth labels and U represents the set of unknown (new) labels. The cardinality of the intersection between T and U is divided by the cardinality of the union of T and U to create the Jaccard Index value. Intuitively, the Jaccard Index represents the number of correctly labeled regions over the number of regions associated with that label by either the truth or the algorithm. The Jaccard Index is used because it provides a context within the scope of each label itself.

Jaccard = \frac{|T \cap U|}{|T \cup U|}    (4-7)

As other performance evaluation techniques were considered, I observed that the multi-class labeling of this problem creates the need to isolate the evaluation of each label type individually. Otherwise, the performance assessment would be skewed because of the diversity in the data. The Rand Index, shown in Equation 4-8, is a common formula used in evaluating cluster performance. However, it provides an example of the complications arising in the scene analysis problem MGRSS solves. In Equation 4-8, TP is the set of true positives, FP is the set of false positives, FN is the set of false negatives, and TN is the set of true negatives. GPR scene analysis is a multi-class problem. By a wide margin, the most common label is the background. Whenever another label does not include the background in its set, disproportionate weight is given to the TN value. All of the non-background labels increase the weight of the numerator, whether or not the background elements have been correctly labeled. The overwhelming number of background-labeled truth voxels is shown in Tables 4-8 and 4-9. Table 4-8 shows the number of voxels assigned to each truth type for every scene, including the total voxels in the final column and the average assignments in the final row of the table. Table 4-9 is the corresponding percentage of each truth type rounded to the nearest whole percent. Values of zero indicate less than 1% of the voxels are assigned the given label. Note that in the


averagecase,thebackgroundvoxelsmakeup87%ofthelabeledscene. Rand=TP+TN TP+FP+FN+TN(4{8) Table4-8. Numberofvoxelsassignedtoeachtruthtypeacrossallscenes. ObjectLayerSceneGroundBackgroundLEHELEHETotal 135477685615712214201491850167495682216336037422265530236604042655872330170631835772523283479942711749568432493667609988026653079861237495685196516164232611797120994291655872635328487181669946636731976360756073579249323131841046542502005760756083733350555874153353384441545760756093819049243980754474359752840760756010199256158394696574115433295655872Avg30599579947536418133201514916664656 Table4-9. Percentofvoxelsassignedtoeachtruthtypeacrossallscenes. ObjectLayerSceneGroundBackgroundLEHELEHETotal 1591102110023920041100348410561004489104110053940021100668000103100768110931008683116310096811165100103941021100Avg5871052100 Figures 4-10 and 4-11 summarizethetablesthatfollow,showingtheJaccardIndexresultsforeachmodelscenarioovereverysceneasapercentvalue.Figure 4-10 displayseachmodelscenariowiththesamecolorasitscorrespondingPBTRresult.Thebasemodelscenariosareplottedusingasolidline,whilethemodelscenarioswithPBTRhaveadashedline.Similarly,Figure 4-11 displaysthesamelineplots,excepteachmodel 96


scenariosubsetisplottedadierentway.TheMGRSS3DUCMmodelscenarioshaveadashedanddottedlinewhennotusingPBTRandadashedlinewhenusingPBTR.TheSLICmodelscenarioshaveasolidlinewhennotusingPBTRandadashedlinewhenusingPBTR.Finally,the3DUCMHierarchymodelisdisplayedusingadottedline.Recall,the3D-UCMalgorithm,describedinSection 2.5.5 ,doesnotusePBTR. Figure4-10. JaccardIndexresultsforallmodelscenariosoverallscenesemphasizingthedierenceswithandwithoutPBTR. Table 4-10 showsthepercentJaccardIndexofeachmodelscenarioovereveryscene.ThisJaccardvalueiscalculatedbyaveragingtheJaccardIndexofthesixlabeltypesidentiedbythetruthdiscussedinSection 4.1.3 .Table 4-11 showstheJaccardIndexrankofeachmodelscenarioovereveryscene.Asthegureplotsandtablesshow,MGRSSSLICCAPBTRreceivesthehighestJaccardIndexandrank1.Inaddition,Table 4-12 showsthemoderankingvalueforeachmodelscenariooverthe10scenesandmean.Again,MGRSSSLICCAPBTRisthewinner,receivingrank1insevenoutofelevencases. 4.3.2AreaUndertheCurveEvaluationTheareaunderthecurve(AUC)iscalculatedfromROCcurvesthatareproducedbyexaminingthelabelaccuracyforeachtruthtypelabelinggeneratedbyeverymodel 97


Figure4-11. JaccardIndexresultsforallmodelscenariosoverallscenesemphasizingthedierentsuper-voxelsegmentations. Table4-10. PercentJaccardIndexmeanvaluesofeachmodelscenariooverthesetofscenes,includingtheaverage. SceneMeanJaccardIndexasaPercentModel12345678910Avg MGRSS3DUCMCA2733403027272828293230MGRSS3DUCMCAPBTR3938333939302928333534MGRSS3DUCMHC3829263116242425271626MGRSS3DUCMHCPBTR3635323940272828303433MGRSSSLICCA3836423534293232383535MGRSSSLICCAPBTR4033423741433239334839MGRSSSLICHC3836382834303027373533MGRSSSLICHCPBTR3837403832433036354838MGRSSSLICSVR3128312816263027363929MGRSSSLICSVRPBTR30334337323131383347363DUCMHierarchy3240353032353131343633 scenarioacrossthesetofscenes.Therearesixtruthtypes:ground,background,lowenergyobject,highenergyobject,lowenergylayer,andhighenergylayer,tenmodelscenariosimplementedusingtheMGRSSprocess,andtenscenes,creatingatotalof600possibleROCcurves.Inaddition,thirtyhierarchieswereproducedforeachscenewhenusingthe3DUCMHierarchymodelscenario,creating30hierarchiesby10scenesby6truthtypesforanother1800ROCcurves.Reviewingandinterpretingresultsfromthis 98


Table4-11. RankingtheJaccardIndexvalueofeachmodelscenariooverthesetofscenes,includingtheaveragerank. SceneJaccardIndexRankingsModel12345678910Avg MGRSS3DUCMCA11749999610109MGRSS3DUCMCAPBTR22813588885MGRSS3DUCMHC51011710111111111111MGRSS3DUCMHCPBTR769228107998MGRSSSLICCA34264724164MGRSSSLICCAPBTR19341111611MGRSSSLICHC4561056610277MGRSSSLICHCPBTR63537253422MGRSSSLICSVR91110111110793410MGRSSSLICSVRPBTR1081584327333DUCMHierarchy81786345556 Table4-12. Tableofthemostfrequentrankforeachmodelscenario. ModelModeFrequency MGRSS3DUCMCA95MGRSS3DUCMCAPBTR85MGRSS3DUCMHC117MGRSS3DUCMHCPBTR93MGRSSSLICCA44MGRSSSLICCAPBTR17MGRSSSLICHC63MGRSSSLICHCPBTR23MGRSSSLICSVR103MGRSSSLICSVRPBTR333DUCMHierarchy53 numberofROCsischallenging.Instead,IhaveincludedthefollowingsummarytablesoftheAUCvaluesfromtheROCsgenerated.AnothercomplicationintheevaluationoftheROCresultsisthatthe3DUCMHierarchydoesnotprovideacondencecorrespondingtoeachlabel.Therefore,onlybinarycondencesf0;1garepossible.MGRSSprovidesthenalprobabilityassociatedwitheachlabelassignmentasthecondencevalue.Figure 4-12 showsthesixROCsforeachofthetruthtypesforthe3DUCMHierarchylevel#11labelingofscene#8.Asthegureshows,eachoftheROCsonlyhastwopointsandthereisnocurve,onlyastraightlineduetothebinarylabeling.Figure 4-13 displaysthesixROCsofeachtruth 99


typefortheMGRSSprobabilitycondence.Note,eachROCdisplaysaseriesofpointsalongatruecurve.Comparatively,Figure 4-14 showsanincreaseintheoverallAUCwhentheMGRSSbinarycondenceisused.Section 4.3.2.1 evaluatestheresultswhencomparingthebinarycondencesof3D-UCMtotheprobabilitycondencesofMGRSS.Section 4.3.2.2 comparesthebinarycondencesof3D-UCMtothebinarycondenceoftheMGRSSlabelassignments. Figure4-12. 3DUCMHierarchybinarycondenceROCs. 4.3.2.1Comparing3DUCMHierarchybinarycondencetoMGRSSprobabil-itycondenceTakingthemeanAUCofeverytruthtypeineachscene,Figure 4-15 plotstheaverageAUCvaluescalculatedbythesetofmodelscenarios.Thevaluesarecalculatedanddisplayedasapercentage.Similarly,Figure 4-16 summarizestherankingoftheAUCsfromeachmodelscenarioacrossthetruthtypes.Astheguresshow,the3DUCMHierarchywiththemaximumJaccardIndexrankedrstintheaveragecaseandforthemosttruthtypes.MGRSSSLICCAPBTRrankedsecond.Foreverytruthtype,anMGRSSmodelscenariousingPBTRrankedrstorsecond.Inthecaseswherethe 100


Figure 4-13. MGRSS probability confidence ROCs.

Figure 4-14. MGRSS binary confidence ROCs.
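The two evaluation measures used throughout this chapter are easy to reproduce for a single scene. The sketch below is an illustrative stand-in, not the MGRSS evaluation code: it computes the per-label Jaccard Index of Equation 4-7 between a truth label volume and a predicted label volume, and a one-vs-rest AUC per truth type from per-voxel confidences, assuming scikit-learn's roc_auc_score. With binary {0, 1} confidences, as produced for the 3D-UCM hierarchy labelings, the same call yields the two-point ROCs described above.

import numpy as np
from sklearn.metrics import roc_auc_score

def per_label_jaccard(truth, predicted, labels):
    """Equation 4-7 evaluated separately for each label, then averaged."""
    scores = []
    for lab in labels:
        t, u = (truth == lab), (predicted == lab)
        union = np.logical_or(t, u).sum()
        if union:
            scores.append(np.logical_and(t, u).sum() / union)
    return float(np.mean(scores))

def per_type_auc(truth, confidences, labels):
    """One-vs-rest AUC for each truth type from per-voxel confidences.
    confidences[lab] holds a volume of scores for label lab (probabilities
    for MGRSS, or 0/1 indicators for a binary-confidence labeling)."""
    return {lab: roc_auc_score((truth == lab).ravel(),
                               confidences[lab].ravel())
            for lab in labels}

# Tiny synthetic example with the six truth types of Table 4-4.
rng = np.random.default_rng(0)
labels = list(range(1, 7))
truth = rng.integers(1, 7, size=(8, 8, 8))
predicted = np.where(rng.random((8, 8, 8)) < 0.7, truth,
                     rng.integers(1, 7, size=(8, 8, 8)))
conf = {lab: (predicted == lab).astype(float) for lab in labels}  # binary confidences

print("mean Jaccard:", per_label_jaccard(truth, predicted, labels))
print("per-type AUC:", per_type_auc(truth, conf, labels))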


PBTRinuencedscenarionishedsecond,itwasbehindthe3DUCMHierarchy.ThisdemonstratesthePBTRconsistentlyimprovestheperformanceofthemodelscenarios.TablesdetailingthecorrespondingresultsfortheindividualtruthtypesareprovidedinAppendix D Figure4-15. AveragetruthtypepercentageAUCforallmodelscenariosoverallscenesgeneratedfrom3DUCMHierarchybinarycondenceROCsandMGRSSprobabilitycondenceROCs. 4.3.2.2Comparing3DUCMHierarchybinarycondencetoMGRSSbinarycondenceAsinSection 4.3.2.1 ,themeanAUCofeverytruthtypeiscalculatedasapercentageandrank.However,inthiscomparison,theAUCsoftheMGRSSROCsaregeneratedusingbinarycondences,justlikethe3D-UCMAUCs.Figure 4-17 showsthepercentresultsandFigure 4-18 summarizestherankingswhenusingbinarycondencesforboththe3D-UCMandMGRSSlabels.Intheseresults,the3DUCMHierarchywiththemaximumJaccardIndexrankedfourth,insteadofrst.JustasIobservedintheevaluationoftheJaccardIndexresults,theMGRSSSLICCAPBTRscenariorankedrstintheaveragecaseandforthemosttruthtypes,whileMGRSSSLICHCPBTRrankedsecondandMGRSSSLICSVRPBTRrankedthird.Again,thisdemonstratesthatPBTR 102


Figure4-16. Averagetruthtyperankingsforallscenariosoverallscenesgeneratedfrom3DUCMHierarchybinarycondenceROCsandMGRSSprobabilitycondenceROCs. consistentlyimprovestheperformanceofthemodelscenarios.Tables 4-13 and 4-14 givethecompletesetofpercentagesandrankingsforeverymodelscenarioandtruthscenecombination. Figure4-17. AveragetruthtypepercentageAUCforallmodelscenariosoverallscenesgeneratedfrom3DUCMHierarchybinarycondenceROCsandMGRSSbinarycondenceROCs. 103


Figure4-18. Averagetruthtyperankingsforallscenariosoverallscenesgeneratedfrom3DUCMHierarchybinarycondenceROCsandMGRSSbinarycondenceROCs. Table4-13. CompletesceneAUCpercentagescalculatedfromROCsgeneratedusing3DUCMHierarchybinarycondencesandMGRSSbinarycondences. CompleteSceneAUCasaPercentModel12345678910Avg 3DUCMCA17222218211617171826193DUCMCAPBTR23272124231816171923213DUCMHC241914198151414178153DUCMHCPBTR2021172431161717182521SLICCA2220241924171818222521SLICCAPBTR2525262427312023193225SLICHC2220211624181716212520SLICHCPBTR2222252427291921203224SLICSVR181618158151716212917SLICSVRPBTR17233124271719221930233DUCMHierarchy2024221822252024222422 104


Table4-14. CompletesceneAUCrankingscalculatedfromROCsgeneratedusing3DUCMHierarchybinarycondencesandMGRSSbinarycondences. CompleteSceneAUCRankingsModel12345678910Avg 3DUCMCA1055998989593DUCMCAPBTR31847410761053DUCMHC210116101111111111113DUCMHCPBTR7710519861067SLICCA48475755276SLICCAPBTR12212122711SLICHC5971065610388SLICHCPBTR66334244522SLICSVR9119111110794410SLICSVRPBTR1141236338333DUCMHierarchy83688311194 4.4MGRSSSystemComponentsTheMarkovGroundRegionSegmentationSystem(MGRSS)providesanecientandsimpleinterfaceforcreatingascenemodel.TheonlyuserinputparameterrequiredtomodelanewsceneistheGPRdata.Resultsarethelabelsetsfromthesuper-voxelregions,pre-clusteringlabels,MRFlabels,andunconnectedMRFlabels.ResultsaredisplayedvisuallyusingtheMGRSSViewModelGUI,MGRSS3DUCMHierarchyGUI,MGRSSViewer,MGRSSTruthTool,andMGRSSTrainingToolgraphicalinterfaces.InsteaddisplayingtheresultsfromanMGRSSmodelscenario,theMGRSS3DUCMHierarchyGUIshowsallofthehierarchiesgeneratedbythe3D-UCMalgorithm.Thesectionsthatfollow, 4.4.1 through 4.4.5 ,provideanalysisandexamplesofeachinterface. 4.4.1MGRSSViewModelGUITheMGRSSViewModelGUIprovidestheuserwithaninteractiveenvironmentforchoosingbetweenthedierentmodelscenariosandscenesdescribedinSections 3.5 and 4.1.3 .Theinterfaceallowstheusertoscrollthrougheachoftheframesofthescenebeingviewed,showingthecross-trackpersectiveoftheGPRcurrentdisplayedasanimage,thetruthlabeling,andtheresultsgeneratedbythechosenmodelscenario.IfthePBTRalgorithmcanbeappliedtothemodel,thelabelingproducedbyincorporatingPBTRis 105


alsodisplayed.Figure 4-19 showsscene#8withresultsgeneratedbytheMGRSSSLICCAandMGRSSSLICCAPBTRmodels. Figure4-19. MGRSSViewModelGUI. 4.4.2MGRSS3DUCMHierarchyGUIWhilethehighestJaccardIndexvalueofthe3D-UCMhierarchiesgeneratedforasceneareavailabletobeviewedusingtheMGRSSViewModelGUI,theentiresetofhierarchiesgeneratedby3D-UCMcanbeviewedusingtheMGRSS3DUCMHierarchyGUI.Thisinterfaceallowstheusertostepselectthedesiredsceneaswellasstepthrougheachofthesegmentationhierarchiesoneatatime.SimilartotheMGRSSViewModelGUI,theGPRcurrent,truthlabeling,and3DUCMHierarchyresultsaredisplayedframebyframe.Figures 4-20 and 4-21 showscene#8withtheresultsgeneratedfrom3DUCMHierarchylevel#17and#21. 4.4.3MGRSSViewerTheMGRSSViewerprovidesaccesstoexamininganindivualmodelandscenefromadierentperspective.Figure 4-22 showsthegraphicalinterface.Listedfromlefttorightalongthetoprowandthenlefttorightalongthebottomrowoftheframe,thesearethekeyelementsontheinterface: 106


Figure4-20. MGRSS3DUCMHierarchyGUIofscene#8fromhierarchy#17. Figure4-21. MGRSS3DUCMHierarchyGUIofscene#8fromhierarchy#21. 107


VoltagedisplayoftheB-scanframe Super-voxelsidentiedbyover-segmentation UnconnectedMRFclustersidentiedbythescenario Three-dimensionalviewofaselectedcluster Thecurrentframenumber Framenavigationthroughtheframesofthescene Figure4-22. MGRSSViewer. The3DelementviewisdisplayedbyselectingaclusterintheFinalResultframeoftheMGRSSViewer.sectionoftheGUIcontainsalistoftheUnconnectedMRFRegionsandaViewbutton.Choosingalabelandthenselectingviewwilldisplayaninteractivethree-dimensionalgridoftheMRFregionchosen.Figure 4-23 showsthedefaultviewofthepopupwiththeobjectofinterestdisplayedfromtheexamplescene.Figure 4-24 showsazoomedviewofthesameobject. Figure4-23. Default3Delementview. 108


Figure4-24. Zoomed3Delementview. 4.4.4MGRSSTruthToolFigure 4-25 istheinitialMGRSSTruthToolandFigure 4-26 showstheTruthToolbeingusedtolinkregionstogetherintothesameclusterlabel.Figure 4-27 showsthepopupformakingalabel(re)-selectionofasuper-voxelregion.Thetooldisplaysthecross-trackviewofcurrentontheleftandthedowntrackviewofthecurrentontheright.Navigationbuttonsprovidetheuserwiththeabilitytoscrollthroughtheframesofthegivensceneinbothdirections.WhentheLinkbuttonisselected,thepopupshowninFigure 4-27 allowstheusertheoptionto:choosealabeltoassociatewiththeregionsselectedorcanceltheoperation.ThelabelchoicesarethesameasthetypeslistedinTable 4-4 withtheadditionoftwomiscellaneousoptionsintheeventadierenttypeofelementisobservedwithinanewscene.ThecompletelistoflabeloptionsisprovidedinTable 4-15 .Whenincludingapreviouslylabeledregionwithintheselectedsetofregionstolabel,allunlabeledregionsareassignedtheexistinglabel.Figure 4-28 showsthescenewithallregionslabeled.Inordertoprovidetheuserwithanunderstandingofspecicregioninformation,detailsrelatedtoaregioncanbeviewedbyselectingaregionkeyfromtheSelectedRegionslistbox.AllregionsalreadyselectedaredisplayedinwhiteontheGUI.Theregionwhosedetailsarebeingviewedisdisplayedinacheckerpatternsothatitisdistinguishedfromtheotherregionsinthelist.Figure 4-29 highlightsaregionkeybeingselectedand 109


Figure4-25. MGRSSTruthTool. Figure4-26. SelectingregionsontheTruthTool. 110


Figure4-27. Pop-upprovidinglabelselection. Figure4-28. Allregionslabeled. 111


Table4-15. Completelistoflabeloptions. LabelType 1GroundLayer2Background3LowEnergyObject4HighEnergyObject5LowEnergyLayer6HighEnergyLayer7MiscellaneousI8MiscellaneousII thecorrespondinggraphicalchangetotheregioninthecross-trackanddown-trackframes.AsFigure 4-30 shows,thedetailsrelatedtotheregionare: Keyvalueoftheselectedregion Currentoftheselectedregion Currentnearesttotheselectedregion Maximumcurrentvaluefromallregionneighbors Minimumcurrentvaluefromallregionneighbors Maximumcurrentvaluefromallunassignedregions Minimumcurrentvaluefromallunassignedregion Figure4-29. Choosingaregionfordetailviewing. 112


Figure4-30. Detailsoftheselectedregion. 4.4.5MGRSSTrainingToolTheMGRSSTrainingTool,showninFigure 4-31 ,issimilarinappearanceandfunctionalitytotheMGRSSTruthTool.RegionscanbeselectedandgivenalabelinthesamemannerastheMGRSSTruthTool.Theprimaryadditionisacountofthecurrentlabelstrainedforeachofthesixlabeltypesidentiedinthetruthscenes.Futureworkinvolvesisolatingspecicsceneelementsandweightingthetrainedparameterstoprovidegreaterinuenceoverspecifctypesofscenes.Chapter 5 discussesthisconceptinfurtherdetail. Figure4-31. MGRSSTrainingTool. 113


CHAPTER 5
CONCLUSIONS

Undocumented explosive hazards continue to be found in post-conflict areas of the world and new hazards continue to be emplaced into regions of conflict [2]. While many such hazards exist, even a single one would be too many. Increasing the probability of accurately detecting explosive hazards by improving hazard classification and reducing false alarm rates is necessary to prevent danger to the world population. Many improvements have been made by examining the individual response vectors collected by GPR devices; however, such efforts have focused on discriminating individual readings in single or small groups [3]. By considering a sequence of blocked responses, scene analysis moves explosive hazard detection into a new era. The Markov Ground Region Segmentation System (MGRSS) labels such a series of response samples. Examination of the labeled scene provides system users with the ability to track different types of key elements found in the scene, including ground layers, clutter objects, and explosive hazards.

5.1 MGRSS

MGRSS implements a computationally and structurally efficient solution to creating a scene model from GPR data. The only input required is the sequence of GPR samples. The output produced effectively models the scene. The scene model is returned by the system as well as displayed in an interactive graphical environment. The GUI environment provides users the ability to label a truth scene or select training examples. The PBTR algorithm's use of the training model has been shown to improve the accuracy of the labeled scene. Specifically, the MGRSS SLIC CA PBTR algorithm has been shown to create the best labeled scene to date, ranking first by the Jaccard Index, ranking second by the area under the ROC curve generated using probability confidences, and ranking first by the area under the ROC curve generated using binary confidences.


5.2 Future Work

While MGRSS provides many new options for scene analysis of data collected by a hazard detection system, there are still many things left to do. Perhaps the most important aspect of this work is to enable better explosive hazard detection. This can be pursued by utilizing the scene models MGRSS produces as features in a classification method. In addition, there are improvements that can be made to the MGRSS process so that it creates better scene models in a more computationally effective way.

First, the performance of the 3DUCM Hierarchy warrants further examination to see if a specific hierarchy level provides a more optimal super-voxel segmentation. Identifying a 3DUCM Hierarchy level with a small enough number of super-voxel regions may provide a more accurate model for the user when annotating a truth scene. As well, an update to the SLIC algorithm, called SNIC by Achanta and Susstrunk, has recently been implemented and may improve upon the accuracy of the super-voxel regions SLIC identifies [51]. The performance of CA as a pre-clustering step may be enhanced by implementing Semi-Supervised Competitive Agglomeration. Using the same principles as PBTR, a feedback loop of trained cluster labels would be incorporated into the CA algorithm.

Also, enhancements to the feature set MGRSS extracts may provide a better representation of the GPR data and increase the accuracy of the scene model. This work has not examined signal attenuation differences observed at varying depth levels of the GPR. One possibility would be to use Gamma correction on the luminance of the voltage returns. Also, adjustments to the Gabor filter sets may better represent the contour. An example adjustment would be to modify the filters applied at lower depths to resolve the flattening observed in hyperbola reflections at these depths. Finally, training models created from directed scene content could be used by PBTR to provide specialized influence over the labeling of new scenes having a similar structure. For example, training models constructed from a grass ground layer are inherently different than collections sampled over an asphalt ground layer.


I anticipate that allowing a system operator the ability to dynamically select the scene training model that fits the current environment will enhance the labeling accuracy PBTR provides.


APPENDIX A
MGRSS PBTR ALGORITHM

Figures A-1 through A-16 model the Probability-Based Training Realignment (PBTR) algorithm. Figure A-1 represents training region clusters for the ground layer (red), background (blue), object (purple), and sub-surface layer (orange). Figure A-2 shows the unknown or new regions (black) to label. The initial label assignment identifies six clusters in Figure A-3 and their parameters are estimated in Figure A-4. Figure A-5 shows the training region clusters being aligned one-to-one with unknown clusters to form the hard constraint. Parameter estimation θ′ is then performed on the updated clusters in Figure A-6. Figures A-7 through A-12 are examples of the individual regions being re-labeled based upon their nearest probability cluster. Note this step relaxes the hard constraint imposed in step 2 to a soft constraint, allowing training regions of the same type to be assigned into different clusters. The transition of regions between clusters is highlighted one at a time, while regions remaining in the same clusters are left unchanged. A cluster with no regions has developed and is pruned in Figure A-13. Parameters θ″ are estimated using the region distribution shown in Figure A-14. If the MRF has not converged, these parameters will be used to determine the probability of re-alignment when returning to step 2, as shown in Figure A-15. Figure A-16 repeats the process by re-forming the hard constraint and re-aligning training region clusters one-to-one with their highest probability unknown cluster.

Figure A-1. PBTR training regions.
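To make the sequence of figures easier to follow, the sketch below walks through one iteration of the realignment loop in code. It is a schematic illustration built only from the description above, not the MGRSS implementation: regions are reduced to Gaussian cluster models, the one-to-one hard constraint is formed with a Hungarian assignment over log-likelihoods (an assumed mechanism), and the soft re-labeling step then moves every region, training or unknown, to its most probable cluster before empty clusters are pruned.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import multivariate_normal

def estimate_params(features, labels):
    """Gaussian (mean, covariance) for every cluster label present."""
    params = {}
    for c in np.unique(labels):
        X = features[labels == c]
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        params[c] = (X.mean(axis=0), cov)
    return params

def log_likelihoods(x, params):
    return {c: multivariate_normal.logpdf(x, mean=m, cov=S)
            for c, (m, S) in params.items()}

def pbtr_iteration(train_feats, train_types, feats, labels):
    # Estimate parameters of the current unknown-region clusters.
    params = estimate_params(feats, labels)
    clusters = sorted(params)
    types = sorted(np.unique(train_types))
    # Hard constraint: align each training type one-to-one with the cluster
    # whose model gives the training-type mean the highest log-likelihood.
    cost = np.array([[-log_likelihoods(train_feats[train_types == t].mean(axis=0),
                                       params)[c] for c in clusters] for t in types])
    rows, cols = linear_sum_assignment(cost)
    aligned = {types[r]: clusters[c] for r, c in zip(rows, cols)}
    # Merge the training regions into their aligned clusters and estimate theta'.
    all_feats = np.vstack([feats, train_feats])
    all_labels = np.concatenate([labels, [aligned[t] for t in train_types]])
    params = estimate_params(all_feats, all_labels)
    # Soft constraint: re-label every region by its most probable cluster.
    new_labels = []
    for x in all_feats:
        ll = log_likelihoods(x, params)
        new_labels.append(max(ll, key=ll.get))
    new_labels = np.array(new_labels)
    # Empty clusters drop out here, which prunes them before theta'' is estimated.
    return new_labels, estimate_params(all_feats, new_labels)

# Tiny illustrative run: 4 training types, 30 unknown regions, 6 initial clusters.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(40, 3)) + np.repeat(np.arange(4), 10)[:, None] * 3
train_types = np.repeat(np.arange(4), 10)
region_feats = rng.normal(size=(30, 3)) + rng.integers(0, 4, 30)[:, None] * 3
initial_labels = np.repeat(np.arange(6), 5)
labels, theta = pbtr_iteration(train_feats, train_types, region_feats, initial_labels)
print("clusters remaining after one iteration:", len(theta))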


Figure A-2. PBTR unknown (new) regions.

Figure A-3. PBTR initial setup: assign labels.

Figure A-4. PBTR initial setup: estimate parameters.


Figure A-5. PBTR re-aligning training regions and forming the hard constraint.

Figure A-6. PBTR estimate parameters θ′.

Figure A-7. PBTR assigning labels part 1.


Figure A-8. PBTR assigning labels part 2.

Figure A-9. PBTR assigning labels part 3.

Figure A-10. PBTR assigning labels part 4.


Figure A-11. PBTR assigning labels part 5.

Figure A-12. PBTR assigning labels part 6.

Figure A-13. PBTR pruning cluster.


Figure A-14. PBTR parameter estimation θ″.

Figure A-15. PBTR until convergence, return to step 2.

Figure A-16. PBTR re-align training regions, forming the hard constraint.


APPENDIXBTRUTHSCENEVISUALIZATIONSFigures B-1 through B-10 displayframe31fromeachoftheMGRSStruthscenes.Ineachgurethereisapairofimages.TheimageontheleftdisplaystheGPRcurrent.Theimageontherightdisplaystheuser-annotatedtruthforthescene.Table B-1 reviewstheattributesofeachscene.CompleteanalysisofthetruthscenesisgiveninSection 4.1.3 TableB-1. Detailsofthetenuser-annotatedtruthscenesusedintesting. SceneGroundObjectDepth 1GravelMetalAT12DirtMetalAT93AsphaltMetalAT64ConcreteHole65ConcreteHole66GravelFANA7GravelFANA8GravelMetalAT39GravelMetalAT110DirtPP2 FigureB-1. Scene#1truthatframe31. 123


FigureB-2. Scene#2truthatframe31. FigureB-3. Scene#3truthatframe31. FigureB-4. Scene#4truthatframe31. 124


FigureB-5. Scene#5truthatframe31. FigureB-6. Scene#6truthatframe31. FigureB-7. Scene#7truthatframe31. 125


FigureB-8. Scene#8truthatframe31. FigureB-9. Scene#9truthatframe31. FigureB-10. Scene#10truthatframe31. 126


APPENDIXCMGRSSMODELSCENARIORESULTSFigures C-2 through C-61 showthesequenceofresultsMGRSS3DUCMCA,MGRSSSLICCA,andthe3DUCMHierarchyproduceforframes21through40ofscene#8.TheMGRSSViewModelGUIinFigure C-1 reviewstheplacementofmodelsshownineachgure.Fromlefttoright,fourscenemodelsareshown.TherstmodelistheGPRcurrentdisplayedasanpseudo-colorimage.Next,thetruthmodelisdisplayed.Then,theresultproducedbythespecicmodelscenarioisshown.Finally,ifPBTRcanbeappliedtothemodelscenario,theresultitproducesisgiveninthefourthposition.ThestructureofeachguresetisdescribedinTable C-1 ,wherethecellpositionsofthetablereecttheplacementofmodelsonthepage. FigureC-1. MGRSSViewModelGUIwithinterfacedetails. TableC-1. Figureandmodelscenariomap. Model1Model2Model3Model4 CurrentTruthMGRSSSLICCAPBTRCurrentTruthMGRSS3DUCMCAPBTRCurrentTruth3DUCMHierarchy(PBTRNA) 127


FigureC-2. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame21. FigureC-3. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame21. FigureC-4. 3DUCMHierarchyscene#8frame21. 128


FigureC-5. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame22. FigureC-6. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame22. FigureC-7. 3DUCMHierarchyscene#8frame22. 129


FigureC-8. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame23. FigureC-9. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame23. FigureC-10. 3DUCMHierarchyscene#8frame23. 130


FigureC-11. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame24. FigureC-12. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame24. FigureC-13. 3DUCMHierarchyscene#8frame24. 131


FigureC-14. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame25. FigureC-15. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame25. FigureC-16. 3DUCMHierarchyscene#8frame25. 132


FigureC-17. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame26. FigureC-18. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame26. FigureC-19. 3DUCMHierarchyscene#8frame26. 133


FigureC-20. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame27. FigureC-21. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame27. FigureC-22. 3DUCMHierarchyscene#8frame27. 134


FigureC-23. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame28. FigureC-24. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame28. FigureC-25. 3DUCMHierarchyscene#8frame28. 135


FigureC-26. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame29. FigureC-27. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame29. FigureC-28. 3DUCMHierarchyscene#8frame29. 136


FigureC-29. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame30. FigureC-30. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame30. FigureC-31. 3DUCMHierarchyscene#8frame30. 137


FigureC-32. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame31. FigureC-33. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame31. FigureC-34. 3DUCMHierarchyscene#8frame31. 138


FigureC-35. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame32. FigureC-36. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame32. FigureC-37. 3DUCMHierarchyscene#8frame32. 139


FigureC-38. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame33. FigureC-39. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame33. FigureC-40. 3DUCMHierarchyscene#8frame33. 140


FigureC-41. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame34. FigureC-42. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame34. FigureC-43. 3DUCMHierarchyscene#8frame34. 141


FigureC-44. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame35. FigureC-45. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame35. FigureC-46. 3DUCMHierarchyscene#8frame35. 142


FigureC-47. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame36. FigureC-48. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame36. FigureC-49. 3DUCMHierarchyscene#8frame36. 143


FigureC-50. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame37. FigureC-51. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame37. FigureC-52. 3DUCMHierarchyscene#8frame37. 144


FigureC-53. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame38. FigureC-54. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame38. FigureC-55. 3DUCMHierarchyscene#8frame38. 145


FigureC-56. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame39. FigureC-57. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame39. FigureC-58. 3DUCMHierarchyscene#8frame39. 146


FigureC-59. MGRSSSLICCAandMGRSSSLICCAPBTRscene#8frame40. FigureC-60. MGRSS3DUCMCAandMGRSS3DUCMCAPBTRscene#8frame40. FigureC-61. 3DUCMHierarchyscene#8frame40. 147


APPENDIXDMGRSSAUCRESULTSThestatisticsinthisappendixarebasedupontheROCsgeneratedfrom3DUCMHierarchybinarycondencesandMGRSSprobabilitycondences.ThemeanAUCofthesixtruthtypesidentiedinSection 4.1.3 iscalculatedforeverymodelscenarioandscenecombination.Table D-1 showsallthemeanAUCvaluesasapercentage.Table D-2 showseachmodelscenario'sAUCrankcomparedtotheothermodels.AsdiscussedinSection 4.3 ,the3DUCMHierarchyhasthehighestaveragerank,followedbyMGRSSSLICCAPBTR. TableD-1. CompletesceneAUCpercentages. CompleteSceneAUCasaPercentModel12345678910Avg MGRSS3DUCMCA11121610910111091812MGRSS3DUCMCAPBTR1825131912121210131915MGRSS3DUCMHC17981089899810MGRSS3DUCMHCPBTR1618121921111210112115MGRSSSLICCA12101411191189131913MGRSSSLICCAPBTR2528181924211413102520MGRSSSLICHC12101710191288131913MGRSSSLICHCPBTR2018191921241312102518MGRSSSLICSVR1079981088132611MGRSSSLICSVRPBTR18223119201214131223183DUCMHierarchy2024221822252024222422 TableD-2. CompletesceneAUCrank. CompleteSceneAUCRankModel12345678910Avg MGRSS3DUCMCA107699107510109MGRSS3DUCMCAPBTR52838567575MGRSS3DUCMHC6101181011118111111MGRSS3DUCMHCPBTR75913856766MGRSSSLICCA88776789388MGRSSSLICCAPBTR11441322922MGRSSSLICHC9951074911297MGRSSSLICHCPBTR26324244834MGRSSSLICSVR1111101111910104110MGRSSSLICSVRPBTR441556336533DUCMHierarchy33262111141 148


Similarly,Tables D-3 through D-14 providetablepairsthatdetailthemeanAUCandscenariorankingforeachoftheindividualtypes:ground,background,loweneryobject,highenergyobject,lowenergylayer,andhighenergylayer.Specically,Tables D-3 and D-4 reviewtheaverageAUCevaluationofthegroundlabeling.Here,the3DUCMHierarchyrankedfourth,whileSLICSVRPBTRandSLICCAPBTRwererstandsecondrespectively. TableD-3. GroundAUCpercentages. GroundAUCasaPercentModel12345678910Avg MGRSS3DUCMCA2327502315212727152325MGRSS3DUCMCAPBTR4346144836152015133929MGRSS3DUCMHC3491725012144011016MGRSS3DUCMHCPBTR222625493713171294125MGRSSSLICCA3843414264283437453040MGRSSSLICCAPBTR7981415688583937114053MGRSSSLICHC3843522664313438453040MGRSSSLICHCPBTR7354385894313932114047MGRSSSLICSVR21128220223438452322MGRSSSLICSVRPBTR78817856921939372240543DUCMHierarchy4547494946434746453845 TableD-4. GroundAUCrank. GroundAUCRankModel12345678910Avg MGRSS3DUCMCA983109789698MGRSS3DUCMCAPBTR5510689910757MGRSS3DUCMHC81199101111281111MGRSS3DUCMHCPBTR1098471010111119MGRSSSLICCA66574555275MGRSSSLICCAPBTR11623126922MGRSSSLICHC77285463386MGRSSSLICHCPBTR337113381033MGRSSSLICSVR111011111167441010MGRSSSLICSVRPBTR221328475413DUCMHierarchy44456211164 149


Thenexttwotables, D-5 and D-6 providethestatisticsforthebackgroundlabeling.Inthiscase,MGRSS3DUCMHCPBTRrankedrst.MGRSSSLICCAPBTRwasthirdandthe3DUCMHierarchywasfth. TableD-5. BackgroundAUCpercentages. BackgroundAUCasaPercentModel12345678910Avg MGRSS3DUCMCA116108219161515911MGRSS3DUCMCAPBTR293219201212019223221MGRSS3DUCMHC116985119169154719MGRSS3DUCMHCPBTR303219201201919214623MGRSSSLICCA146793129783511MGRSSSLICCAPBTR343320207171313113220MGRSSSLICHC146793148993511MGRSSSLICHCPBTR28202020713912153218MGRSSSLICSVR14679511381083516MGRSSSLICSVRPBTR3219202021312121432173DUCMHierarchy1525121521143114182219 TableD-6. BackgroundAUCrank. BackgroundAUCRankModel12345678910Avg MGRSS3DUCMCA1110710845341110MGRSS3DUCMCAPBTR424411122162MGRSS3DUCMHC101181113410514MGRSS3DUCMHCPBTR335510231221MGRSSSLICCA779761191110311MGRSSSLICCAPBTR11115565873MGRSSSLICHC8810877109949MGRSSSLICHCPBTR55224886686MGRSSSLICSVR991192101181158MGRSSSLICSVRPBTR263399777973DUCMHierarchy646636143105 150


Tables D-7 and D-8 providethestatisticsforthelowenergyobjectlabeling.Correctlylabelinganobjectisoneofthekeyaspectstothisworkandalsooneofthemostchallenging.Asthesestatisticsshow,manyofthealgorithmsdidnotperformverywellwhenlabelingthelowenergyobject.The3DUCMHierarchyrankedrstandtheMGRSSSLICCAPBTRwassecond. TableD-7. LowenergyobjectAUCpercentages. LowEnergyObjectAUCasaPercentModel12345678910Avg MGRSS3DUCMCA000020000002MGRSS3DUCMCAPBTR10861000042MGRSS3DUCMHC00010000000MGRSS3DUCMHCPBTR003610000132MGRSSSLICCA00000000000MGRSSSLICCAPBTR018321000033MGRSSSLICHC00000000000MGRSSSLICHCPBTR061421000033MGRSSSLICSVR00000000000MGRSSSLICSVRPBTR0014210000323DUCMHierarchy1230131479024111113 TableD-8. LowenergyobjectAUCrank. LowEnergyObjectAUCRankModel12345678910Avg MGRSS3DUCMCA47881314476MGRSS3DUCMCAPBTR24436423337MGRSS3DUCMHC56778535588MGRSS3DUCMHCPBTR35527242214MGRSSSLICCA68999656699MGRSSSLICCAPBTR72643767742MGRSSSLICHC8910101087881010MGRSSSLICHCPBTR93154989953MGRSSSLICSVR101011111110910101111MGRSSSLICSVRPBTR111126511101111653DUCMHierarchy113121111121 151


Tables D-9 and D-10 providethestatisticsforthehighenergyobjectlabeling.Thisisalsoakeyelementtolabelineverysceneandthealgorithmsperformedbettercomparedtotheidenticationofthelowenergyobject.The3DUCMHierarchyrankedrst,whiletheMGRSSSLICHCPBTRwassecondandMGRSSSLICCAPBTRwasfourth. TableD-9. HighenergyobjectAUCpercentages. HighEnergyObjectAUCasaPercentModel12345678910Avg MGRSS3DUCMCA0974110000367MGRSS3DUCMCAPBTR1143871330222011MGRSS3DUCMHC2517540000005MGRSS3DUCMHCPBTR19175813672138MGRSSSLICCA00000000000MGRSSSLICCAPBTR010130158114811MGRSSSLICHC00120000000MGRSSSLICHCPBTR001310628114814MGRSSSLICSVR000100000485MGRSSSLICSVRPBTR01331008114893DUCMHierarchy153310437028234617 TableD-10. HighenergyobjectAUCrank. HighEnergyObjectAUCRankModel12345678910Avg MGRSS3DUCMCA54444667867MGRSS3DUCMCAPBTR41332552273MGRSS3DUCMHC12558778798MGRSS3DUCMHCPBTR23623443586MGRSSSLICCA681011988991011MGRSSSLICCAPBTR761171314314MGRSSSLICHC8986109910101110MGRSSSLICHCPBTR910286125422MGRSSSLICSVR1011910111010111139MGRSSSLICSVRPBTR11719711366453DUCMHierarchy357152111151 152


Tables D-11 and D-12 providethestatisticsforthelowenergylayerlabeling.Inthiscase,MGRSS3DUCMHCPBTRrankedrst.The3DUCMHierarchywassecondandMGRSSSLICCAPBTRwasseventh. TableD-11. LowenergylayerAUCpercentages. LowEnergyLayerAUCasaPercentModel12345678910Avg MGRSS3DUCMCA2627920300003612MGRSS3DUCMCAPBTR152710244181916171616MGRSS3DUCMHC26224230000008MGRSS3DUCMHCPBTR152782437151816141619MGRSSSLICCA1691104700003912MGRSSSLICCAPBTR1568161314121216712MGRSSSLICHC1698134700003913MGRSSSLICHCPBTR4813161016121215711MGRSSSLICSVR16911300000398MGRSSSLICSVRPBTR06121614181212137113DUCMHierarchy272528101815619171918 TableD-12. LowenergylayerAUCrank. LowEnergyLayerAUCRankModel12345678910Avg MGRSS3DUCMCA31549787846MGRSS3DUCMCAPBTR72428213273MGRSS3DUCMHC25931097971111MGRSS3DUCMHCPBTR83613422561MGRSSSLICCA46101018910915MGRSSSLICCAPBTR910756634387MGRSSSLICHC578821010111024MGRSSSLICHCPBTR109267345498MGRSSSLICSVR68119111111811310MGRSSSLICSVRPBTR111137515661093DUCMHierarchy141114561152 153


Finally,Tables D-13 and D-14 providethestatisticsforthehighenergylayerlabeling.Inthiscase,MGRSSSLICCAPBTRrankedrst.The3DUCMHierarchywassecond. TableD-13. HighenergylayerAUCpercentages. HighEnergyLayerAUCasaPercentModel12345678910Avg MGRSS3DUCMCA34204722201825213MGRSS3DUCMCAPBTR94217191511822312MGRSS3DUCMHC431130211962509MGRSS3DUCMHCPBTR11711938971021413MGRSSSLICCA6036302879261013MGRSSSLICCAPBTR253036162231314192120MGRSSSLICHC6033902870261012MGRSSSLICHCPBTR1818171613231314192117MGRSSSLICSVR61436902870261014MGRSSSLICSVRPBTR0263016132113141910163DUCMHierarchy8142783730361018619 TableD-14. HighenergylayerAUCrank. HighEnergyLayerAUCRankModel12345678910Avg MGRSS3DUCMCA1078967214108MGRSS3DUCMCAPBTR4878310786910MGRSS3DUCMHC991010883951111MGRSS3DUCMHCPBTR3611611185786MGRSSSLICCA6101119297137MGRSSSLICCAPBTR11317542811MGRSSSLICHC711441031010249MGRSSSLICHCPBTR23924653923MGRSSSLICSVR84251141111355MGRSSSLICSVRPBTR11253596410643DUCMHierarchy556721161172 154


REFERENCES [1] A.A.Milne,WinniethePooh.Methuen&Co.Ltd.,1926. [2] CenterforInternationalStabilizationandRecovery,\Towalktheearthinsafety,"http://www.state.gov/t/pm/rls/rpt/walkearth/2012/index.htm,2012. [3] T.Glenn,P.Gader,andJ.Wilson,\Non-visualsceneanalysisforimprovinggroundpenetratingradarbaseddetectionsystems,"ARO2011TechnicalReportPresentation. [4] P.A.Torrione,L.M.Collins,F.Clodfelter,S.Frasier,andI.Starnes,\Applicationofthelmcalgorithmtoanomalydetectionusingthewichmann/niitekground-penetratingradar,"inAeroSense2003.InternationalSocietyforOpticsandPhotonics,2003,pp.1127{1136. [5] P.Ngan,S.Burke,R.Cresci,J.N.Wilson,P.Gader,andD.K.Ho,\Regionprocessingalgorithmforhstamids,"inDefenseandSecuritySymposium.InternationalSocietyforOpticsandPhotonics,2006,pp.621732{621732. [6] Z.KatoandT.-C.Pong,\Amarkovrandomeldimagesegmentationmodelforcolortexturedimages,"ImageandVisionComputing,vol.24,no.10,pp.1103{1114,2006. [7] S.Z.Li,Markovrandomeldmodelingincomputervision.SpringerScience&BusinessMedia,2012. [8] M.Wertheimer,Lawsoforganizationinperceptualforms(partialtranslation),W.B.Ellised.Harcourt,Brace,andCompany,1938. [9] J.WuandL.Zhang,\Gestaltsaliency:Salientregiondetectionbasedongestaltprinciples,"in2013IEEEInternationalConferenceonImageProcessing.IEEE,2013,pp.181{185. [10] G.Kootstra,N.Bergstrom,andD.Kragic,\Gestaltprinciplesforattentionandsegmentationinnaturalandarticialvisionsystems,"inICRA2011WorkshoponSemanticPerception,MappingandExploration(SPME),Shanghai,China.eSMCs,2011. [11] D.J.Daniels,GroundPenetratingRadar,1sted.InstitutionofElectricalEngineers,2004,iSBN0863413609. [12] P.J.Dobbins,J.N.Wilson,andB.Smock,\Alayertrackingapproachtoburiedsurfacedetection,"inSPIEDefense+Security.InternationalSocietyforOpticsandPhotonics,2015,pp.945415{945415. [13] P.F.FelzenszwalbandD.P.Huttenlocher,\Ecientgraph-basedimagesegmentation,"InternationalJournalofComputerVision,vol.59,no.2,pp.167{181,2004. 155


[14] N.J.Cassidy,\Groundpenetratingradardataprocessing,modellingandanalysis,"Groundpenetratingradar:theoryandapplications,pp.141{176,2009. [15] G.Safont,A.Salazar,A.Rodriguez,andL.Vergara,\Onrecoveringmissinggroundpenetratingradartracesbystatisticalinterpolationmethods,"RemoteSensing,vol.6,no.8,pp.7546{7565,2014. [16] L.M.Surhone,M.T.Timpledon,andS.F.Marseken,SpatialDescriptiveStatistics:DescriptiveStatistics,GIS,Geostatistics,Variogram,Correlogram,Kriging,CuzickEdwardsTest.VDMPublishing:Saarbrucken,Germany,2010. [17] J.-S.WangandY.-L.Hsu,\Dynamicnonlinearsystemidenticationusingawiener-typerecurrentnetworkwithokidalgorithm."JournalofInformationSci-ence&Engineering,vol.24,no.3,2008. [18] B.SmockandJ.Wilson,\Reciprocalpointerchainsforidentifyinglayerboundariesinground-penetratingradardata,"inGeoscienceandRemoteSensingSymposium(IGARSS),2012IEEEInternational.IEEE,2012,pp.602{605. [19] B.SmockandW.Joseph,\Ecientmultiplelayerboundarydetectioninground-penetratingradardatausinganextendedviterbialgorithm,"inSPIEDe-fense,Security,andSensing.InternationalSocietyforOpticsandPhotonics,2012,pp.83571X{83571X. [20] C.M.Bishop,Patternrecognitionandmachinelearning.Springer,2006. [21] MathWorks,Matlab,andR2016a,\Color-basedsegmentationusingk-meansclustering,"http://www.mathworks.com/help/images/examples/. [22] A.Blake,P.Kohli,andC.Rother,Markovrandomeldsforvisionandimageprocessing.MitPress,2011. [23] S.GemanandD.Geman,\Stochasticrelaxation,gibbsdistributions,andthebayesianrestorationofimages,"IEEETransactionsonpatternanalysisandmachineintelligence,no.6,pp.721{741,1984. [24] S.Cheung,\Proofofhammersley-cliordtheorem,"http://www.vis.uky.edu/cheung/courses/ee639/Hammersley-Cliord Theorem.pdf,2008. [25] A.K.JainandF.Farrokhnia,\Unsupervisedtexturesegmentationusinggaborlters,"Patternrecognition,vol.24,no.12,pp.1167{1186,1991. [26] C.XuandJ.J.Corso,\Evaluationofsuper-voxelmethodsforearlyvideoprocessing,"inComputerVisionandPatternRecognition(CVPR),2012IEEEConferenceon.IEEE,2012,pp.1202{1209. 156


[27] R.Achanta,A.Shaji,K.Smith,A.Lucchi,P.Fua,andS.Susstrunk,\Slicsuperpixelscomparedtostate-of-the-artsuperpixelmethods,"IEEEtransactionsonpatternanalysisandmachineintelligence,vol.34,no.11,pp.2274{2282,2012. [28] M.Grundmann,V.Kwatra,M.Han,andI.Essa,\Ecienthierarchicalgraphbasedvideosegmentation,"IEEECVPR,2010. [29] J.ShiandJ.Malik,\Normalizedcutsandimagesegmentation,"IEEETransactionsonpatternanalysisandmachineintelligence,vol.22,no.8,pp.888{905,2000. [30] C.XuandJ.J.Corso,\Libsvx:Asupervoxellibraryandbenchmarkforearlyvideoprocessing,"arXivpreprintarXiv:1512.09049,2015. [31] A.Radhakrishna,A.Shaji,K.Smith,A.Lucchi,P.Fua,andS.Susstrunk,\Slicsuperpixels,"Dept.SchoolComput.Commun.Sci.,EPFL,Lausanne,Switzerland,Tech.Rep,vol.149300,2010. [32] A.Lucchi,K.Smith,R.Achanta,V.Lepetit,andP.Fua,\Afullyautomatedapproachtosegmentationofirregularlyshapedcellularstructuresinemimages,"inInternationalConferenceonMedicalImageComputingandComputer-AssistedIntervention.Springer,2010,pp.463{471. [33] R.Achanta,A.Shaji,K.Smith,A.Lucchi,P.Fua,andS.Susstrunk,\Slico,zeroparameterversionofslics,"http://ivrl.ep.ch/research/superpixels#SLICO. [34] P.Arbelaez,M.Maire,C.Fowlkes,andJ.Malik,\Contourdetectionandhierarchicalimagesegmentation,"IEEEtransactionsonpatternanalysisandmachineintelligence,vol.33,no.5,pp.898{916,2011. [35] C.Yang,M.Sethi,A.Rangarajan,andS.Ranka,\Supervoxel-basedsegmentationof3dvolumetricimages,"inAsianConferenceonComputerVision.Springer,2016,pp.37{53. [36] C.J.Taylor,\Towardsfastandaccuratesegmentation,"inProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,2013,pp.1916{1922. [37] R.O.Duda,P.E.Hart,andD.G.Stork,Patternclassication.JohnWiley&Sons,2001. [38] H.FriguiandR.Krishnapuram,\Clusteringbycompetitiveagglomeration,"PatternRecognition,vol.30,no.7,pp.1109{1119,1997. [39] J.C.Bezdek,Patternrecognitionwithfuzzyobjectivefunctionalgorithms.SpringerScience&BusinessMedia,2013. [40] K.WagstaandC.Cardie,\Clusteringwithinstance-levelconstraints,"inProceed-ingsoftheSeventeenthInternationalConferenceonMachineLearning,2000,pp.1103{1110. 157


[41] Y.Yan,M.Sethi,A.Rangarajan,R.R.Vatsavai,andS.Ranka,\Graph-basedsemi-supervisedclassicationonveryhighresolutionremotesensingimages,"InternationalJournalofBigDataIntelligence,vol.4,no.2,pp.108{122,2017. [42] W.LiandA.McCallum,\Anoteonsemi-supervisedlearningusingmarkovrandomelds,"TechnicalReport,2004. [43] A.McCallumandT.Minka,\Semi-supervisedlearningusingdistancemetricslearnedviadirichlettrees,"unpublishedwork,1999. [44] D.Anguelov,B.Taskarf,V.Chatalbashev,D.Koller,D.Gupta,G.Heitz,andA.Ng,\Discriminativelearningofmarkovrandomeldsforsegmentationof3dscandata,"in2005IEEEComputerSocietyConferenceonComputerVisionandPatternRecognition(CVPR'05),vol.2.IEEE,2005,pp.169{176. [45] B.T.C.G.D.Roller,\Max-marginmarkovnetworks,"Advancesinneuralinforma-tionprocessingsystems,vol.16,p.25,2004. [46] S.Basu,M.Bilenko,andR.J.Mooney,\Aprobabilisticframeworkforsemi-supervisedclustering,"inProceedingsofthetenthACMSIGKDDinterna-tionalconferenceonKnowledgediscoveryanddatamining.ACM,2004,pp.59{68. [47] S.Basu,M.Bilenko,A.Banerjee,andR.J.Mooney,\Probabilisticsemi-supervisedclusteringwithconstraints,"Semi-supervisedlearning,pp.71{98,2006. [48] A.Banerjee,S.Merugu,I.S.Dhillon,andJ.Ghosh,\Clusteringwithbregmandivergences,"Journalofmachinelearningresearch,vol.6,no.Oct,pp.1705{1749,2005. [49] H.Huang,Y.Cheng,andR.Zhao,\Asemi-supervisedclusteringalgorithmbasedonmust-linkset,"AdvancedDataMiningandApplications,pp.492{499,2008. [50] S.TheodoridisandK.Koutroumbas,Patternrecognition,3rded.AcademicPress,2006. [51] R.AchantaandS.Susstrunk,\Superpixelsandpolygonsusingsimplenon-iterativeclustering,"inIEEEConferenceonComputerVisionandPatternRecognition(CVPR),2017. 158


BIOGRAPHICAL SKETCH

Pete Dobbins is a Florida native. As an undergraduate student he earned a Bachelor of Arts in classical studies and discovered a love for Ancient Greece, including the culture, language, and literature. He found putting together the puzzle pieces of ancient texts is similar to computer programming and so completed a master's in computer engineering. After working as a programmer/analyst and assisting in the development of a digital media consulting firm for the entertainment industry, he returned to the University of Florida as a lecturer in Computer and Information Sciences and Engineering. Shaped by the community approach to learning in the Classical Studies department, Pete again found joy in the interaction between students and faculty. Of course, this time he was leading the instruction process. This path has brought him to where he is now, completing a doctorate in computer engineering and allowing him to further his ability to participate in student/teacher dynamics. While completing his doctorate, he has worked in the Computational Science and Intelligence (CSI) lab at the University of Florida and enjoyed weekly taco Tuesday lunches with lab-mates at the affectionately named "head sweats". He has had the privilege of publishing Java lab manuals and been involved in the creation of multiple startup companies.