Citation
Volumetric Data Reconstruction from Irregular Samples and Compressively Sensed Measurements

Material Information

Title:
Volumetric Data Reconstruction from Irregular Samples and Compressively Sensed Measurements
Creator:
Xu, Xie
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
Language:
English
Physical Description:
1 online resource (115 p.)

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Computer Engineering
Computer and Information Science and Engineering
Committee Chair:
ENTEZARI, ALIREZA
Committee Co-Chair:
VEMURI, BABA C
Committee Members:
RANGARAJAN, ANAND
BANERJEE, ARUNAVA
PAUL, ANAND ABRAHAM
Graduation Date:
5/3/2014

Subjects

Subjects / Keywords:
Approximation ( jstor )
Boxes ( jstor )
Conceptual lattices ( jstor )
Datasets ( jstor )
Face centered cubic lattices ( jstor )
Interpolation ( jstor )
Mathematical lattices ( jstor )
Sampling rates ( jstor )
Signals ( jstor )
Supernova remnants ( jstor )
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
box-splines -- compressed-sensing -- reconstruction -- sampling -- sparse-approximation -- sparse-representation -- volumetric-data
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Computer Engineering thesis, Ph.D.

Notes

Abstract:
Sampling and reconstruction of volumetric data are ubiquitous throughout biomedical imaging, scientific simulation, and visualization applications. In this dissertation, we focus on the reconstruction of volumetric data from irregular samples as well as compressively sensed measurements. We examine different sampling lattices and their respective shift-invariant spaces for the reconstruction of irregularly sampled volumetric data. Given an irregularly sampled dataset, we demonstrate that the non-tensor-product approximations corresponding to the Body Centered Cubic (BCC) lattice and the Face Centered Cubic (FCC) lattice provide more accurate reconstructions than the tensor-product approximations associated with the commonly-used Cartesian lattice. Our study is motivated by the sampling-theoretic advantages of the BCC lattice and the FCC lattice over the Cartesian lattice. Our practical algorithm utilizes multidimensional box spline functions and sinc functions that are tailored to these lattices. We also present a regularization scheme that provides a variational reconstruction framework for efficient implementation. The improvements in accuracy are quantified numerically and visualized in our experiments with synthetic as well as real biomedical datasets. We also examine compressed sensing principles for the sparse approximation of volumetric datasets. We propose that the compressed sensing framework can be used for a refinable and reusable data reduction framework for the in-situ processing of volumetric datasets. Instead of saving a high resolution dataset, only a few partial Fourier measurements of the original dataset are kept for data reduction. These measurements are sensed without any prior knowledge of specific feature domains for the dataset. The original dataset can be recovered from the saved measurements. We demonstrate the superiority of the analysis recovery model along with surfacelets for efficient representation of volumetric data. We establish that the accuracy of reconstruction can further improve when the basis for sparser representations of data becomes available. To facilitate our study, we also construct a novel non-separable 3-D tight wavelet frame decomposition using a seven direction box spline for sparse representation of the data. Our studies and experiment results motivate future research on the study of custom-designed sparse representations for large-scale volumetric data. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2014.
Local:
Adviser: ENTEZARI, ALIREZA.
Local:
Co-adviser: VEMURI, BABA C.
Electronic Access:
RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2016-05-31
Statement of Responsibility:
by Xie Xu.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Embargo Date:
5/31/2016
Resource Identifier:
907295128 ( OCLC )
Classification:
LD1780 2014 ( lcc )

Full Text


VOLUMETRIC DATA RECONSTRUCTION FROM IRREGULAR SAMPLES AND COMPRESSIVELY SENSED MEASUREMENTS

By

XIE XU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2014

© 2014 Xie Xu

To my family

ACKNOWLEDGMENTS

I appreciate Dr. Alireza Entezari, Dr. Arunava Banerjee, and Dr. Stephen Thebaut kindly writing the recommendation letters for my PhD application. Their support started my wonderful journey of PhD study at the University of Florida.

I thank my adviser, Dr. Alireza Entezari, for offering me the opportunity and the funds to pursue my PhD degree. He has played a fundamental role in cultivating my research. Without his encouragement, guidance, and support, my PhD journey would never have been so enjoyable and meaningful. I admire his professional knowledge, his work attitude, and his personality.

I want to thank my committee members: Dr. Anand Paul, Dr. Anand Rangarajan, Dr. Arunava Banerjee, and Dr. Baba Vemuri. I am deeply inspired by their passion and seriousness toward teaching and research. Their valuable feedback has greatly improved this dissertation. In addition, I thank Dr. Anand Rangarajan, Dr. Arunava Banerjee, and Dr. Baba Vemuri for giving me insightful lectures on machine learning, algorithms, and medical image processing, respectively.

I also thank many friends and colleagues who have helped me through the years. I specially thank Alexander Singh Alvarado, Mahsa Mirzargar, and Wenxing Ye for their help and guidance on my research. I am pleased to work with the other group members: Bo Ma, Elham Sakhaee, Susanne Suter, and Tushar Athawale.

Lastly, I am indebted to my loving and devoted family. I thank my parents, Fengfeng Lu and Kaiying Xu, who are always supportive and always there for me. I am thankful to my wife, Qijing (Angela) Yu, who constantly paves the way with love and understanding.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 NONUNIFORM RECONSTRUCTION AND SAMPLING LATTICES
  2.1 Reconstruction from Irregular Samples
  2.2 Optimal Sampling Lattices
  2.3 Box Splines
  2.4 Multivariate sinc Functions

3 VOLUMETRIC DATA RECONSTRUCTION FROM IRREGULAR SAMPLES
  3.1 Motivation
  3.2 Nonuniform Reconstruction Framework
      3.2.1 Regularized Least Squares
      3.2.2 Implementation and Optimization
      3.2.3 Contributions
  3.3 Experiments and Discussion
      3.3.1 Experiment Setting
      3.3.2 Nonuniform Reconstruction Using Box Splines
            3.3.2.1 Regularization in Box Spline Spaces
            3.3.2.2 Synthetic Dataset
            3.3.2.3 Real Datasets
            3.3.2.4 Resilience to Jitter Noise
            3.3.2.5 Computation Cost
      3.3.3 Nonuniform Reconstruction Using Multivariate sinc Functions
            3.3.3.1 Lanczos Window
            3.3.3.2 Synthetic Dataset
            3.3.3.3 Real Datasets
            3.3.3.4 Resilience to Jitter Noise
            3.3.3.5 Lanczos Windowed sinc Parameters
            3.3.3.6 Computational Cost

4 COMPRESSED SENSING AND SPARSE APPROXIMATION
  4.1 Compressed Sensing Theory

      4.1.1 Restricted Isometry Property
      4.1.2 Transform Domain Sparsity
  4.2 Sparse Approximation
      4.2.1 Convex Relaxation
      4.2.2 Greedy Methods
  4.3 Box Spline Tight Wavelet Frames
      4.3.1 Tight Wavelet Frame
      4.3.2 Sub-QMF Condition
      4.3.3 Construction of Tight Wavelet Frames
  4.4 Multiscale Geometric Representations

5 VOLUME REDUCTION IN A COMPRESSED SENSING FRAMEWORK
  5.1 In-situ Data Reduction
  5.2 Motivation and Contributions
  5.3 3-D Box Spline Framelets
  5.4 Experiments and Discussion
      5.4.1 Implementation Details
      5.4.2 Sparse Datasets
            5.4.2.1 Fuel
            5.4.2.2 Aneurysm
      5.4.3 Datasets Sparse in Transform Domains
            5.4.3.1 Effectiveness of Sparse Representations
            5.4.3.2 Fuel: Revisited
            5.4.3.3 Hydrogen
            5.4.3.4 Supernova
      5.4.4 Noisy Measurements
      5.4.5 Large-scale Volumetric Datasets

6 CONCLUSION

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Notations for Chapter 3
4-1 Notations for Chapters 4 and 5
5-1 Sparse approximation of the Supernova datasets
5-2 Sparse approximation of the Hydrogen dataset from noisy measurements

LIST OF FIGURES

2-1 Scattered data approximation of a 1-D signal from irregular samples
2-2 Sampling lattices
2-3 A general 2-D lattice
2-4 Voronoi cells of the CC, BCC, and FCC lattices
2-5 Construction of box splines via directional convolution
2-6 The support of the four-direction box spline
3-1 The ML dataset
3-2 Nonuniform reconstruction of the ML dataset
3-3 The BCC reconstruction vs the CC reconstruction on the ML dataset
3-4 Nonuniform reconstruction of the Carp Fish dataset
3-5 Reconstruction of the ML dataset from jittered sampling lattices
3-6 Reconstruction of the Aneurysm dataset from jittered CC and BCC samples
3-7 Computation time for the BCC and the CC reconstruction frameworks
3-8 Reconstruction of the ML dataset from random samples
3-9 The ML dataset reconstructed from irregular samples
3-10 The Carp Fish dataset reconstructed from irregular samples
3-11 Rendering of the Carp Fish dataset
3-12 Reconstruction of the ML dataset from jittered samples
3-13 Reconstruction of the Aneurysm dataset from jittered samples
3-14 Rendering of the Aneurysm dataset
3-15 Reconstruction of the ML dataset from irregular samples
5-1 The sensing and recovery pipeline
5-2 Sparse approximation of the Fuel dataset
5-3 Reconstruction of the Fuel dataset
5-4 Sparse approximation of the Aneurysm dataset

5-5 Reconstruction of the Aneurysm dataset
5-6 Approximation accuracy of the Fuel dataset
5-7 Approximation accuracy of the Hydrogen dataset
5-8 Approximation accuracy of the Supernova dataset
5-9 Sparse approximation of the Fuel dataset in transform domains
5-10 Ground truth of the Hydrogen and Supernova datasets
5-11 Sparse approximation of the Hydrogen dataset
5-12 Sparse approximation and interpolation of the Hydrogen dataset
5-13 Sparse approximation and interpolation of the Supernova dataset
5-14 Sparse approximation vs cubic interpolation of the Hydrogen dataset
5-15 Sparse approximation vs linear interpolation of the Head Aneurysm dataset

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

VOLUMETRIC DATA RECONSTRUCTION FROM IRREGULAR SAMPLES AND COMPRESSIVELY SENSED MEASUREMENTS

By

Xie Xu

May 2014

Chair: Alireza Entezari
Major: Computer Engineering

Sampling and reconstruction of volumetric data are ubiquitous throughout biomedical imaging, scientific simulation, and visualization applications. In this dissertation, we focus on the reconstruction of volumetric data from irregular samples as well as compressively sensed measurements.

We examine different sampling lattices and their respective shift-invariant spaces for the reconstruction of irregularly sampled volumetric data. Given an irregularly sampled dataset, we demonstrate that the non-tensor-product approximations corresponding to the Body Centered Cubic (BCC) lattice and the Face Centered Cubic (FCC) lattice provide more accurate reconstructions than the tensor-product approximations associated with the commonly-used Cartesian lattice. Our study is motivated by the sampling-theoretic advantages of the BCC lattice and the FCC lattice over the Cartesian lattice. Our practical algorithm utilizes multidimensional box spline functions and sinc functions that are tailored to these lattices. We also present a regularization scheme that provides a variational reconstruction framework for efficient implementation. The improvements in accuracy are quantified numerically and visualized in our experiments with synthetic as well as real biomedical datasets.

We also examine compressed sensing principles for the sparse approximation of volumetric datasets. We propose that the compressed sensing framework can be used for a refinable and reusable data reduction framework for the in-situ processing of

volumetric datasets. Instead of saving a high resolution dataset, only a few partial Fourier measurements of the original dataset are kept for data reduction. These measurements are sensed without any prior knowledge of specific feature domains for the dataset. The original dataset can be recovered from the saved measurements. We demonstrate the superiority of the analysis recovery model along with surfacelets for efficient representation of volumetric data. We establish that the accuracy of reconstruction can further improve when the basis for sparser representations of data becomes available. To facilitate our study, we also construct a novel non-separable 3-D tight wavelet frame decomposition using a seven direction box spline for sparse representation of the data. Our studies and experiment results motivate future research on the study of custom-designed sparse representations for large-scale volumetric data.

CHAPTER 1
INTRODUCTION

Sampling and reconstruction are pervasive in biomedical imaging, scientific simulation and visualization applications where various models are studied to relate the discrete to continuous-domain representation of functions. Common to most applications is an underlying function f that models a physical quantity or a phenomenon of interest that is often defined on the continuous domain. The data collected by the simulation or the acquisition device is a discretized version of the signal f sampled on the sampling set, X = {x_1, x_2, ..., x_N}, a discrete set of points over the domain. The associated function values, f_X = {f_1, f_2, ..., f_N}, are usually point-evaluations (or other linear measurements) of the function f on the sampling set. The reconstruction of the original f from the given data {(x_n, f_n)} is achieved by suitable interpolation or approximation algorithms. The reconstructed signal is, then, passed to the visualization pipeline for imaging.

A key development that has been foundational to the discrete-continuous model for processing bandlimited signals is the so-called Whittaker-Shannon-Kotel'nikov theorem [94]. The 1-D bandlimited reconstruction formula involves a cardinal series expansion:

    f(x) = \sum_{k \in \mathbb{Z}} c_k \, \mathrm{sinc}(x - k)    (1-1)

where sinc(x) := sin(\pi x)/(\pi x), whose Fourier transform is the indicator function, \chi_{(-\pi,\pi)}, of the interval bounded by the Nyquist frequency (i.e., the unit cell of the periodic spectrum of the sampling lattice Z). When f is a bandlimited function, the simple choice of the function's samples on regular intervals, c_k = f(k), allows us to perfectly reconstruct f. In the practical setting, the sinc function is often truncated with the aid of windowing techniques (e.g., Lanczos, Hamming, Parzen, Blackman) [76] such that a finite domain reconstruction is feasible.

When it comes to the multivariate setting, Equation (1-1) can be simply extended with a Cartesian product: the function is sampled on a Cartesian lattice (i.e., c_k = f(k), where k \in \mathbb{Z}^d) and the multivariate sinc is constructed by the tensor-product of the sinc function.
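The cardinal series in Equation (1-1) and its windowed truncation can be illustrated with a short sketch. The following Python example is my own illustration (not part of the dissertation); it reconstructs a bandlimited 1-D signal from its integer samples with a Lanczos-windowed sinc kernel, and the window half-width a = 3 is an arbitrary illustrative choice, so the residual error reflects the truncation of the sinc.

```python
import numpy as np

def lanczos_sinc(t, a=3):
    """Lanczos-windowed sinc kernel: sinc(t) * sinc(t/a) for |t| < a, else 0."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) < a, np.sinc(t) * np.sinc(t / a), 0.0)

def reconstruct(samples, x, a=3):
    """Evaluate f(x) = sum_k c_k * kernel(x - k), with c_k taken as the samples."""
    k = np.arange(len(samples))
    # Outer difference of shape (len(x), len(samples)); sum over the sample index.
    return lanczos_sinc(x[:, None] - k[None, :], a) @ samples

# A bandlimited test signal sampled at the integers (unit sampling rate).
f = lambda x: np.cos(0.3 * np.pi * x) + 0.5 * np.sin(0.1 * np.pi * x)
k = np.arange(64)
x = np.linspace(8, 56, 500)            # evaluate away from the boundary
err = np.max(np.abs(reconstruct(f(k), x) - f(x)))
print(f"max reconstruction error (due to window truncation): {err:.2e}")
```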

The Fourier transform of such a tensor-product sinc is the indicator function of a (hyper-)cube that runs over (-\pi, \pi) on each axis of the frequency domain.

Generalizations of the sampling theorem in the framework of shift-invariant spaces have been studied in signal processing [1, 67, 85, 94] and approximation theory [22, 24, 46]. A shift-invariant space is spanned by lattice shifts of a single, often compactly supported, function that serves as a generator [1] or kernel [67] for the shift-invariant space. The common choice of the generator is a spline function [93]. An element in the space can be considered as a linear combination (with some coefficients) of Cartesian shifts of the generator, φ:

    s(x) = \sum_{k \in \mathbb{Z}^d} c_k \, \varphi(x - k)

The interpolation problem (i.e., s(x_n) = f_n) can be formulated as a problem to determine the coefficients c_k given the data f_X. This problem can be set up as a linear system of equations whose system matrix is sparse and has a small bandwidth if the generator is a compactly supported function. Hence, the interpolation problem can be solved in an efficient and stable manner. The sampling and reconstruction problems on the space of bandlimited and non-bandlimited functions in the context of shift-invariant spaces have been studied [1, 94].

On the other hand, when the sampling set is nonuniform (i.e., irregular), the reconstruction problem is considerably more difficult and sometimes ill-defined. Results from Beurling and Landau [1] provide a theoretical framework for the bandlimited reconstruction from such irregular samples. In this context, the data points f_s = {f_1, ..., f_N} are given by sampling f on a nonuniform set of points X := {x_1, ..., x_N}, i.e., f_n = f(x_n). The reconstruction process starts by imposing a (scaled and uniform) grid of unknown c_k coefficients to sinc functions as in Equation (1-1) and provides a recipe for finding c_k. The problem of reconstructing a signal from irregularly sampled data is often referred to as scattered data approximation [97].
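To make the banded-system idea concrete, here is a minimal 1-D Python sketch of my own (not the dissertation's implementation): scattered samples are fit in the shift-invariant space generated by a centered cubic B-spline, and the system matrix is sparse because the generator is compactly supported. The signal, sample locations, and grid size are all arbitrary illustrative choices.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def cubic_bspline(t):
    """Centered cubic B-spline, supported on (-2, 2)."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2/3 - t[m1]**2 + 0.5 * t[m1]**3
    out[m2] = (2 - t[m2])**3 / 6
    return out

rng = np.random.default_rng(0)
M = 40                                    # number of grid (lattice) coefficients
N = 120                                   # number of irregular samples
x = np.sort(rng.uniform(2, M - 3, N))     # scattered sample locations
f = np.sin(2 * np.pi * x / M) + 0.3 * np.cos(6 * np.pi * x / M)

# Sparse system matrix: Phi[n, m] = phi(x_n - m); only about 4 nonzeros per row.
Phi = lil_matrix((N, M))
for n, xn in enumerate(x):
    for m in range(max(0, int(xn) - 2), min(M, int(xn) + 3)):
        Phi[n, m] = cubic_bspline(xn - m)

c = lsqr(Phi.tocsr(), f)[0]               # least-squares coefficients

# Evaluate the reconstruction s(x) = sum_m c_m phi(x - m) at the sample points.
s = Phi.tocsr() @ c
print(f"RMS fit error: {np.sqrt(np.mean((s - f)**2)):.3e}")
```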

The literature is replete with various techniques for dealing with scattered data. The two main ideas are based on placing suitable basis functions on the irregular data sites [97] or transforming the irregularly-sampled data into a shift-invariant space [2, 69] as an intermediate representation. While the former approach is quite general and enjoys flexible constraints on the sampling set, the latter approach leads to sparse and well-conditioned interpolation matrices and enjoys computationally-efficient methods [49, 94] that can be exploited throughout the visualization pipeline [96]. Motivated by the shift-invariant framework for irregular sampling and reconstruction, in this dissertation we examine the shift-invariant spaces associated with the Body Centered Cubic lattice and the Face Centered Cubic lattice for the reconstruction of irregularly-sampled volumetric data.

Although the sampling theorem has been the hallmark of information theory and signal processing, the Nyquist sampling rate required for exact reconstruction usually yields massive amounts of data in many acquisition systems. Transform coding is usually involved after the acquisition, where sparsity (in the transform domain) is exploited for "feature"-based compression of the data. For example, wavelet-domain transform coding techniques, studied over the past two decades, have been successful in sparse feature-based representation of multidimensional (e.g., image, volumetric, time-varying) data. For images and higher dimensional data, there are several generalizations of wavelets that incorporate geometric structures for sparse feature representation, such as curvelets [10], shearlets [53] and surfacelets [57].

The research efforts in signal processing are transforming the sample-then-compress paradigm to compressively sample the data. The emerging field of compressed sensing integrates the sparse transform-coding techniques into the classical sampling theory [3, 16]. While there are significant efforts on 2-D imaging (e.g., the single-pixel camera project [28]), sparse representations for 3-D volumetric data have not been studied in the context of compressed sensing.

In this dissertation we also study the problem of sparse approximation in the context of volumetric data and propose using the analysis recovery model (Section 4.1.2) with non-separable surfacelets as a sparse reconstruction domain for compressed sensing in the 3-D setting. The key aspect of the proposed framework is the universality of the sensing process: the sensing is performed without a priori specification of the domain in which the data features are best described. The feature domain only enters the picture at the reconstruction stage. This implies that one can refine the notion of features (via integration of domain knowledge or learning mechanisms) and further improve the accuracy of reconstruction without having to repeat the acquisition process.

The prospects of studying sparse representations for volumetric data are quite attractive. For a dataset of size N with a relatively small number, k, of volumetric features (with these features being none other than space-domain features or non-zero wavelet coefficients in the simplest case), the necessary number of samples is O(k log N): linearly proportional to k and logarithmically proportional to the dataset size N. The remarkable logarithmic reduction from the "Nyquist-rate" sampling is the significant accomplishment of the compressed sensing paradigm.

The only caveat is that reconstruction algorithms, unlike in the classical Shannon sampling theorem, are no longer linear and involve convex optimization and other iterative methods. Given the availability of abundant computing (e.g., multi-core/GPU) resources, non-linear methods are no longer barriers to the reconstruction. Especially considering the simulation environments where the supercomputer's processing power keeps growing, the increase in dataset sizes is challenged by the I/O bottleneck. The proposed framework provides a smart in-situ data reduction mechanism in which the universality of the sensing process allows for refinements of the reconstruction by further improving the definition of features after the data acquisition stage.

This dissertation is organized as follows. In Chapter 2, we review common approaches for scattered data approximation and introduce the optimal sampling lattices as well as their associated kernels that are pertinent to our discussion in Chapter 3. In Chapter 3, we examine different sampling lattices as intermediate representations for the reconstruction of irregularly sampled volumetric data. We summarize the compressed sensing theory and the sparse approximation methods in Chapter 4. Our data reduction framework exploiting the compressed sensing theory and the sparse modeling is presented in Chapter 5. We conclude the dissertation in Chapter 6. A large portion of this dissertation has been published in the author's recent works [100, 102].

CHAPTER 2
NONUNIFORM RECONSTRUCTION AND SAMPLING LATTICES

2.1 Reconstruction from Irregular Samples

Many applications yield data that are irregularly sampled. Numerical solutions to various partial differential equations, fluid dynamics simulations (e.g., grid-free semi-Lagrangian advection), laser-range and terrain data, or astronomical measurements of star luminosity are examples of applications where the sampling positions are nonuniform (i.e., irregular) over the domain [1]. Nonuniform datasets also appear in the context of machine learning problems for kernel learning and parameter estimation [97].

In contrast, multivariate data coming from medical modalities such as magnetic resonance (MR) are often acquired on a regular lattice (e.g., the Cartesian lattice). However, the nonlinear behavior of the analog hardware causes nonuniform placement of the acquisition points in MR [90]. Moreover, the acquisition devices sometimes employ radial, spiral or other trajectories for signal sampling (e.g., q-space sampling [106]). The reconstruction algorithms provide an irregular-to-regular resampling operation before the data can be further processed [1]. Even when the samples are acquired on a regular lattice, the sampling locations are perturbed due to "Eddy currents" and gradient delays in MR. This uncertainty is often modeled with perturbation of lattice points [87]. Nonuniform sampling also appears in medical ultrasound as images are acquired along scan-lines whose distribution is nonuniform in space [69].

The problem of reconstructing a signal from irregularly sampled data is often referred to as scattered data interpolation [75], scattered data fitting or, more generally, scattered data approximation [97]. The two main ideas for dealing with scattered data are based on placing suitable basis/kernel functions on the irregular data sites [97] or transforming the irregularly-sampled data into a shift-invariant space [2, 69] as an intermediate representation.

Radial basis functions (RBF) are a common approach for scattered-data interpolation problems [75, 97]. The approximation in this case has the "scattered" shifts of a univariate function, ψ, that is applied to the 2-D or 3-D setting by a radial extension. The case of ψ(x) = ‖x‖ leads to membrane splines and ψ(x) = ‖x‖² log ‖x‖ leads to thin-plate splines. For the (exact) interpolation problem, a linear system of equations needs to be constructed that is neither sparse nor banded and is usually ill-conditioned, since ψ is not a compactly supported function. Solving for coefficients is numerically and computationally challenging; moreover, even after the initial inversion of the interpolation matrix, the computational cost of interpolation is prohibitive when the number of nonuniform samples is large since the basis functions are global. While RBFs provide a general framework for reconstruction, the computational challenges and delicate numerical issues are the price to pay for their generality [94].

Moving least squares (MLS) is a local interpolation method that, unlike RBF, solves the interpolation system matrix for each interpolation point in a local fashion. MLS has been introduced for volume ray casting [55] and serves as a generalization of Shepard's interpolation method. While the MLS framework serves as a general method for reconstruction of uniform and nonuniform sampling, the cost of reconstruction is sometimes significant since an interpolation matrix has to be inverted for every interpolation point independently [39, 55].

The idea of using shift-invariant spaces for an intermediate representation of irregularly-sampled data has been proposed for image reconstruction [2, 69], surface reconstruction [42, 56] and visualization [96]. This approach essentially "resamples" the scattered data onto a Cartesian lattice where tensor-product B-splines can be used for reconstruction (Figure 2-1). Compared to methods like RBF and MLS, the approach of using shift-invariant spaces leads to sparse, banded and well-conditioned interpolation matrices and enjoys computationally-efficient methods [49, 94] that can be exploited throughout the visualization pipeline [96]. A thorough error analysis using shift-invariant spaces as an intermediate representation of scattered data has been presented by Johnson et al. [49].

Figure 2-1. Scattered data approximation of a 1-D signal f from N irregular samples. A uniform grid (in blue) of M unknown coefficients is imposed in the reconstruction domain and the unknown coefficients c := {c_1, ..., c_M} are found from the irregular samples (the circles) located at X := {x_1, ..., x_N} with signal values f_s := {f_1, ..., f_N}. The underlying signal f can then be approximated from the coefficients c_k. In 2-D, the imposed uniform grid can be the Cartesian lattice or the hexagonal lattice, while in 3-D, the grid can be the Cartesian lattice, the BCC lattice, or the FCC lattice (Section 2.2).

Figure 2-1 illustrates the scattered data approximation of a 1-D signal using an intermediate representation. The data points f_s = {f_1, ..., f_N} are given by sampling f on a nonuniform set of points X := {x_1, ..., x_N}, i.e., f_n = f(x_n). The reconstruction process starts by imposing a uniform grid (e.g., the Cartesian lattice) of unknown c_k coefficients and provides a recipe for finding c_k. The obtained uniform data (i.e., the c_k) can later be used to approximate the underlying continuous function f.

2.2 Optimal Sampling Lattices

While the Cartesian Cubic (CC) lattice is commonly used in many sampling and reconstruction applications due to its simplicity, the seminal work of Miyakawa [68] and Petersen and Middleton [78] showed the advantages of using non-Cartesian lattices, such as sphere packing lattices, over the CC lattice. Particularly, the 2-D hexagonal lattice [95] and the 3-D Face Centered Cubic (FCC) and Body Centered Cubic (BCC) lattices (Figure 2-2) have been demonstrated to be superior to the CC lattice for the sampling and reconstruction of isotropically bandlimited signals [29, 33, 78].

More recently, it has been demonstrated that the sphere packing lattices (e.g., FCC) are ideal for sampling rough stochastic processes while their duals (e.g., BCC) are ideal for sampling smooth stochastic processes [52]. Both families of these non-Cartesian lattices outperform the efficiency of the CC lattice from the sampling-theoretic viewpoint: for general 3-D bandlimited signals to be sampled on a CC lattice, the BCC and FCC lattices allow the reduction of the sampling rate by 30% and 23%, respectively [78]. This notion of efficiency increases in higher dimensions, as the efficiency is 14% for the 2-D hexagonal lattice and 50% for the 4-D checkerboard lattice [19].

Figure 2-2. Sampling lattices. A) Cartesian lattice. B) BCC lattice. C) FCC lattice.

Formally, a generic lattice in any dimension is the integer span of a set of basis vectors that are often arranged in a matrix, L. For example, the 2-D CC lattice is represented by the 2 × 2 identity matrix and the hexagonal lattice can be represented with H = [u_1 u_2], where u_1 = [1/2, \sqrt{3}/2]^T and u_2 = [1/2, -\sqrt{3}/2]^T (Figure 2-3). In the 3-D setting, the CC lattice points are the span of the columns of C := I (i.e., the 3 × 3 identity matrix). The FCC and BCC lattices are spanned by the columns of F and B, respectively:

    F := \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}
    \quad\text{and}\quad
    B := \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix}
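As a concrete check of the generator-matrix description, the short Python sketch below (my own illustration, not from the dissertation) enumerates the lattice points L·k inside a box for the C, B and F matrices above (the sign pattern of B and F is as reconstructed here) and verifies that the point density approaches 1/|det L| per unit volume.

```python
import numpy as np
from itertools import product

C = np.eye(3)
F = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
B = np.array([[-1, 1, 1], [1, -1, 1], [1, 1, -1]], dtype=float)

def lattice_points(L, radius=8.0):
    """All points L @ k (k integer) that fall inside [0, radius)^3."""
    # Generous integer search range derived from the inverse generator.
    r = int(np.ceil(radius * np.linalg.norm(np.linalg.inv(L), ord=np.inf))) + 1
    pts = []
    for k in product(range(-r, r + 1), repeat=3):
        p = L @ np.array(k, dtype=float)
        if np.all(p >= 0) and np.all(p < radius):
            pts.append(p)
    return np.array(pts)

for name, L in [("CC", C), ("BCC", B), ("FCC", F)]:
    pts = lattice_points(L)
    print(f"{name}: |det L| = {abs(np.linalg.det(L)):.0f}, "
          f"points per unit volume ~ {len(pts) / 8.0**3:.3f} "
          f"(expected {1 / abs(np.linalg.det(L)):.3f})")
```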

Figure 2-3. A general 2-D lattice L = [u_1 u_2]. The Voronoi cell of the blue lattice point is enclosed in the shaded yellow area.

Figure 2-4. Voronoi cells of the CC, BCC, and FCC lattices. A) Cube. B) Truncated octahedron. C) Rhombic dodecahedron.

The Voronoi cell (aka unit or fundamental cell) of a lattice point is composed of all points in the ambient space that are closer to that lattice point than to any other lattice point. All points in a lattice have Voronoi cells that are congruent to each other, and hence one can refer to the Voronoi cell of a lattice unambiguously (Figure 2-3). In 3-D, the Voronoi cell of the CC lattice is a cube, of the BCC lattice is a truncated octahedron and of the FCC lattice is a rhombic dodecahedron (Figure 2-4). The density of a lattice L is the reciprocal of the volume of its Voronoi cell, which is simply 1/|det L|. The density of a lattice can be adjusted simply by scaling its generator matrix, αL, with any α > 0.

The expected distance of a randomly-chosen point in R³ to its closest point on the BCC lattice or the FCC lattice is smaller than its expected distance to the closest CC lattice point, when all three lattices have the same density.

This notion of proximity can be demonstrated using the concept of the Voronoi cell of the lattice. Since the closest lattice point to any random point in space encloses that point within its Voronoi cell (i.e., its neighborhood), the expected distance of a random point to the closest lattice point can be computed as the expected distance of a random point within the Voronoi cell of a lattice to its center. Compared to the cube (i.e., the Voronoi cell of the CC lattice), a random point inside the truncated octahedron or the rhombic dodecahedron (i.e., the Voronoi cell of the BCC lattice and the FCC lattice, respectively) has a smaller expected distance to the cell's centroid, assuming these polyhedra have the same volume.

The second moment of a polytope P is defined as U(P) = \int_P \|x\|^2 \, dx, where ‖x‖ denotes the ℓ₂-norm of the vector x, that is, the Euclidean distance between the point x and the centroid of the polytope. Provided P has unit volume, U(P) then indicates the expected squared distance of a random point inside the polytope to its centroid. The second moments (i.e., U(P)) of the unit-area square and hexagon are 0.166666 and 0.160375, respectively [19]. Moreover, the second moments of the cube, truncated octahedron and rhombic dodecahedron, of unit volume, are 0.249999, 0.2356299, and 0.2362353, respectively [19]. This shows that the expected distance of a random point to a CC lattice is larger than the distance to an FCC lattice, which is only slightly larger than the distance to a BCC lattice.

The caveat of using these non-Cartesian lattices in sampling and reconstruction applications is the design of the non-separable reconstruction kernels/filters. For sampling and reconstructing multivariate functions, the Cartesian lattice is the common choice due to its separable structure, where a tensor-product filtering operation can be easily applied. However, the non-separable nature of these non-Cartesian lattices requires the design of non-separable filters that are tuned to these lattices. In the following two sections, we present the construction of the non-separable multivariate box spline functions [36] and sinc functions [105] that have been demonstrated to be superior to their tensor-product counterparts as reconstruction filters.
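The second-moment comparison can also be checked with a small Monte Carlo experiment. The sketch below is my own illustration (not from the dissertation): each lattice is rescaled to unit density, random points are drawn, and the mean squared distance to the nearest lattice point is estimated; the values should come out close to the second moments quoted above for the unit-volume cube, truncated octahedron and rhombic dodecahedron.

```python
import numpy as np
from itertools import product
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def expected_sq_dist(L, n_queries=200_000, extent=6):
    """Mean squared distance from random points to the lattice {L k : k integer},
    with L rescaled so that the lattice has unit density (|det L| = 1)."""
    L = L / abs(np.linalg.det(L)) ** (1.0 / 3.0)
    ks = np.array(list(product(range(-extent, extent + 1), repeat=3)), dtype=float)
    tree = cKDTree(ks @ L.T)
    # Query points well inside the generated point cloud to avoid boundary effects.
    q = rng.uniform(-1.5, 1.5, size=(n_queries, 3))
    d, _ = tree.query(q)
    return np.mean(d ** 2)

C = np.eye(3)
F = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
B = np.array([[-1, 1, 1], [1, -1, 1], [1, 1, -1]], dtype=float)

for name, L in [("CC", C), ("BCC", B), ("FCC", F)]:
    print(f"{name}: E[dist^2] ~ {expected_sq_dist(L):.4f}")
```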

2.3 Box Splines

A box spline is a smooth piecewise polynomial, compactly-supported function (defined in R^d) that is associated with a set of vectors that are usually gathered in a matrix: Ξ = [ξ_1 ... ξ_N] [23]. When N = d, the simple box spline (left in Figure 2-5) is defined to be the (normalized) indicator function of the parallelepiped formed by the d vectors in R^d: B_Ξ(x) = 1/|det Ξ| when x is in the parallelepiped formed by Ξ, and B_Ξ(x) = 0 otherwise. When N > d, box splines are defined recursively:

    B_{\Xi \cup \xi}(x) = \int_0^1 B_\Xi(x - t\xi) \, \mathrm{d}t,

where ξ is an arbitrary vector in R^d.

From the signal processing viewpoint, box splines are constructed by repeated convolution of line-segment distributions along each vector in Ξ (Figure 2-5). Specifically, we have:

    B_\Xi(x) = (B_{\xi_1} \ast \cdots \ast B_{\xi_N})(x)    (2-1)

where the elementary box splines, B_{ξ_n}, are Dirac distributions supported over line segments x = t ξ_n with t ∈ [0, 1]. By a rotation/scaling, these elementary box splines can be described with the axis-aligned box spline B_{e_1}(x) = box(x_1) δ(x_2, ..., x_d), where δ(x_2, ..., x_d) is the (d − 1)-dimensional Dirac distribution and box(x) is the indicator function of the unit interval. Equation (2-1) shows that a box spline is a compactly-supported positive function whose support is a zonotope (e.g., the rhombic dodecahedron in Figure 2-6) that is the Minkowski sum of N line segments corresponding to the vectors in Ξ [23].

When the lower dimensional space is R (i.e., d = 1), box splines coincide with univariate B-splines. When the distinct column vectors of Ξ are orthogonal to each other, box splines amount to tensor-product B-splines.
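In the special case d = 1, where box splines reduce to univariate B-splines, the repeated-convolution construction is easy to demonstrate numerically: convolving the indicator of the unit interval with itself three times yields the cubic B-spline. The sketch below is a minimal illustration of my own (not from the dissertation), with the grid spacing h chosen arbitrarily; the deviation reported is the discretization error of the numerical convolution.

```python
import numpy as np

def centered_cubic_bspline(t):
    """Closed-form centered cubic B-spline (support (-2, 2))."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    out[t < 1] = 2/3 - t[t < 1]**2 + 0.5 * t[t < 1]**3
    m = (t >= 1) & (t < 2)
    out[m] = (2 - t[m])**3 / 6
    return out

h = 1e-3                                   # grid spacing for the numerical convolution
x = np.arange(0.0, 1.0, h)
box = np.ones_like(x)                      # indicator of the unit interval, sampled

# box * box * box * box: three discrete convolutions, each scaled by h.
spline = box.copy()
for _ in range(3):
    spline = np.convolve(spline, box) * h

# The result is supported on [0, 4]; compare against the centered form shifted by 2.
t = np.arange(len(spline)) * h
err = np.max(np.abs(spline - centered_cubic_bspline(t - 2)))
print(f"max deviation from the closed-form cubic B-spline: {err:.2e}")
```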

Figure 2-5. Construction of the piecewise constant, the linear element and the Zwart-Powell bivariate box splines via directional convolution.

Figure 2-6. The support of the four-direction box spline, B_{[1 1 1 1]}, is a rhombic dodecahedron, which provides a second-order basis function on the BCC lattice.

The continuity and approximation power of the shift-invariant spline space formed by the linear combination of shifts of a box spline are determined based on properties of the matrix Ξ. The approximation properties, along with the de Boor-Höllig recurrence relations for fast evaluation of box splines, are discussed in the work of de Boor et al. [23], which is the definitive reference on the subject.

In the 3-D setting, a voxel basis can be represented by a box spline whose direction set is formed by the canonical axis-aligned basis vectors: Ξ := [e_1 e_2 e_3]. These directions correspond to the principal directions in the CC lattice. The box spline B_Ξ generates a discontinuous (i.e., C^{-1}) and first-order shift-invariant spline space.

More generally, an R-th order tensor-product B-spline can be represented as a box spline whose direction matrix is composed of repeats of the directions in Ξ, each of which is repeated R times. When the direction set of the box spline is fixed, one can identify a box spline by indicating the repetitions of the directions. For instance, the tricubic B-spline can be specified as the box spline B_{[4 4 4]}, since its direction set is composed of the directions in Ξ, each of which is repeated 4 times. This basis function generates a C², fourth-order shift-invariant space for approximation.

The BCC lattice, on the other hand, has four principal directions that are given by:

    \Xi_B := \begin{bmatrix} -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 \end{bmatrix}

Constructing a box spline out of these four principal directions provides a C⁰, second-order box spline, B_{Ξ_B} [36]. The support of this box spline is illustrated in Figure 2-6. Repeating the directions leads to the so-called quintic box spline, denoted by B_{[2 2 2 2]} when the direction set is given by Ξ_B. This box spline generates a shift-invariant space which is C² and is capable of fourth-order approximation.

These approximation theoretic properties are matched between the quintic box spline on the BCC lattice and the tricubic B-spline on the CC lattice [36]. Remarkably, the quintic box spline provides a fourth-order solution while reducing the computational cost of reconstruction. The leverage in computational efficiency is due to the fact that the tricubic B-spline requires a 4 × 4 × 4 = 64 point neighborhood for a single point, while the non-separable quintic box spline requires a 4 × 8 = 32 point neighborhood for reconstruction [36].

Therefore, we consider the tricubic B-spline B_{[4 4 4]} (with directions Ξ) and the quintic box spline B_{[2 2 2 2]} (with directions Ξ_B) as basis functions to generate the CC and the BCC shift-invariant spline spaces. While the tricubic B-spline can be evaluated using a tensor-product of univariate cubic B-splines, the quintic box spline can be evaluated efficiently using the method proposed by Entezari et al. [36].

2.4 Multivariate sinc Functions

The sinc function for the 1-D lattice (Z) is the function whose Fourier transform is the indicator function of the unit cell of the dual lattice 2πZ: χ_{(-π,π)}(ω) = 1 when |ω| < π and χ_{(-π,π)}(ω) = 0 when |ω| > π. The notion of duality for higher dimensional lattices involves the generator matrix: the dual (aka reciprocal) of the lattice L is spanned by L° = 2π L^{-T}. The CC lattice C is self-dual; in other words, its dual lattice is spanned by the columns of 2πC. The FCC lattice F and the BCC lattice B are duals of each other up to scaling: the dual of F is the span of the columns of 2πF^{-T} = πB and the dual of B is the span of the columns of 2πB^{-T} = πF.

Sampling a multidimensional signal, f, on a lattice L leads to the periodic replication of its Fourier transform, \hat{f}, on the dual lattice, L°. Hence, the unit of replication in the frequency domain (i.e., the Voronoi cell of the dual lattice) is the multidimensional counterpart to the interval (-π, π) in the univariate sampling theorem, as in Equation (1-1). The Voronoi cell of the dual lattice is called the (first) Brillouin zone. In other words, the Brillouin zone is a single unit from the periodic replication on the dual lattice that indicates the maximum bandwidth (i.e., the Nyquist frequency) of a bandlimited signal. Note that the density of a lattice (i.e., the sampling density/rate) is equal to the volume of its Brillouin zone but reciprocal to the volume of its Voronoi cell. Similar to the 1-D setting, the sinc function on each lattice is a multivariate function whose Fourier transform is the indicator function of the Brillouin zone.

The Brillouin zone for the CC lattice is a cube, which makes the integration in the inverse Fourier transform separable and leads to the tensor-product representation of the sinc function:

    \mathrm{sinc}_C(\mathbf{x}) = \prod_{i=1}^{3} \mathrm{sinc}(\mathbf{e}_i^T \mathbf{x}) = \mathrm{sinc}(x)\,\mathrm{sinc}(y)\,\mathrm{sinc}(z)

where x = [x, y, z]^T and the e_i are the canonical axis-aligned vectors forming the CC lattice (i.e., the columns of I).

The inverse Fourier transforms of the Brillouin zones for the BCC lattice and the FCC lattice are no longer separable, which makes the formulas of the sinc functions more involved. However, using the geometric decomposition technique introduced in [105], we can derive the sinc functions for general multidimensional lattices, including the BCC and FCC lattices.

The Brillouin zone of the BCC lattice is a rhombic dodecahedron that can be decomposed into 4 parallelohedra. The inverse Fourier transform of the indicator function of each parallelohedron is a product of three sinc functions along the edges of that parallelohedron:

    \mathrm{sinc}_B(\mathbf{x}) = \frac{1}{4} \sum_{j=1}^{4} \Big[ \cos(\pi \zeta_j^T \mathbf{x}) \prod_{i \neq j} \mathrm{sinc}(\zeta_i^T \mathbf{x}) \Big]

where

    [\zeta_1 \ldots \zeta_4] = \frac{1}{4} \begin{bmatrix} 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ 1 & 1 & 1 & -1 \end{bmatrix}

are the principal directions (zones) of the BCC lattice (the edges of its Brillouin zone).

The Brillouin zone of the FCC lattice can be decomposed into 16 parallelohedra:

    \mathrm{sinc}_F(\mathbf{x}) = \frac{1}{16} \sum_{j=1}^{16} \Big[ \cos(2\pi \gamma_j^T \mathbf{x}) \prod_{i \in I_j} \mathrm{sinc}(\nu_i^T \mathbf{x}) \Big]

where

    [\nu_1 \ldots \nu_6] = \frac{1}{4} \begin{bmatrix} 1 & -1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{bmatrix}

are the principal directions (zones) of the FCC lattice (the edges of its Brillouin zone). The shift vectors γ_j are due to the fact that the parallelepipeds constituting the Brillouin zone of the FCC lattice are shifted from the origin [105].

The sixteen shift vectors, [γ_1 ... γ_16], are given (up to the scaling factor 1/4) in [105]. The index set I_j denotes the edges of the j-th parallelepiped in the truncated octahedron. For example, I_1 = {1, 2, 4} indicates that parallelepiped 1 is composed of the edges ν_1, ν_2 and ν_4. Similarly, we have I_2 = {1, 5, 6}, I_3 = {3, 4, 5}, I_4 = {2, 3, 4}, I_5 = {1, 2, 6}, I_6 = {3, 5, 6}, I_7 = {1, 3, 5}, I_8 = {1, 4, 6}, I_9 = {2, 4, 5}, I_10 = {1, 2, 3}, I_11 = {1, 2, 5}, I_12 = {4, 5, 6}, I_13 = {3, 4, 6}, I_14 = {2, 5, 6}, I_15 = {1, 3, 4}, and I_16 = {2, 3, 6}.

In the rest of this dissertation we refer to the sinc function for a lattice L as sinc_L, where L ∈ {C, B, F}.
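The duality relations stated in Section 2.4 can be verified directly. The short sketch below is my own check (not from the dissertation), and it assumes the dual-generator relation L° = 2π L^{-T} and the sign pattern of the B and F matrices as reconstructed here: the CC lattice is self-dual up to scaling, while the duals of F and B are scaled versions of B and F.

```python
import numpy as np

C = np.eye(3)
F = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
B = np.array([[-1, 1, 1], [1, -1, 1], [1, 1, -1]], dtype=float)

def dual(L):
    """Generator of the dual (reciprocal) lattice: 2*pi * L^{-T}."""
    return 2 * np.pi * np.linalg.inv(L).T

print(np.allclose(dual(C), 2 * np.pi * C))   # CC is self-dual up to scaling
print(np.allclose(dual(F), np.pi * B))       # dual of FCC is a scaled BCC generator
print(np.allclose(dual(B), np.pi * F))       # dual of BCC is a scaled FCC generator
```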

CHAPTER 3
VOLUMETRIC DATA RECONSTRUCTION FROM IRREGULAR SAMPLES

In this chapter, we examine different sampling lattices (i.e., the Cartesian Cubic (CC), the Body Centered Cubic (BCC), and the Face Centered Cubic (FCC) lattices) and their respective shift-invariant spaces for the reconstruction of irregularly sampled volumetric data.

3.1 Motivation

For a multidimensional signal, the shape of its spectrum determines the optimal sampling lattice for the signal. The optimal sampling lattice is obtained when its dual lattice can densely replicate the signal spectrum in an optimal way [29, 58]. With a known shape of the signal spectrum, one can compute the optimal sampling lattices for that specific signal [58]. However, in many practical applications the knowledge about the geometry of the spectra is unavailable. For a general multidimensional signal of unknown spectrum, the signal should be sampled as uniformly as possible in order to isotropically preserve high frequencies along all directions [52]. The isotropic treatment of directions suggests the use of a sphere as the shape of an unknown spectrum, which merits the use of sphere packing/covering lattices (e.g., the BCC and FCC lattices) for sampling general multidimensional signals even when the signals are not bandlimited.

While these non-Cartesian lattices have been investigated primarily in the context of regular sampling, in this work we examine them for the reconstruction of irregularly sampled data, where the underlying signal has an unknown spectrum. In this setting the sampling positions, X, are placed randomly in space or perturbed from a regular pattern by jitter noise. The theoretical motivation for our work is based on the fact that the expected distance of a random point in R³ to the closest lattice point on the BCC lattice or the FCC lattice is smaller than its expected distance to the CC lattice of the same density. This means that, given nonuniform samples of a function in 3-D, the average distances of a sample point x_n to the BCC/FCC lattice points are smaller than the average distances to the CC lattice points.

The smaller average distance should, in theory, translate to a more accurate representation of the nonuniform data by the BCC lattice or the FCC lattice. This is due to the efficient tiling of the space afforded by the Voronoi cells of the BCC lattice and the FCC lattice compared to the CC lattice's cube tiling [19].

3.2 Nonuniform Reconstruction Framework

In our study, the 1-D scattered data approximation using an intermediate representation, as illustrated in Figure 2-1, is extended to the 3-D setting. We consider a general lattice shift-invariant approximation space S^L_φ that is the linear combination of the lattice-shifts (i.e., semi-discrete convolution¹) of a suitable generator φ:

    S^L_\varphi = \{ s \mid s(x) = (c \ast' \varphi)(x) = \sum_{k \in L\mathbb{Z}^3} c_k \, \varphi(x - k) \}    (3-1)

where L is the CC, the BCC or the FCC lattice (L ∈ {C, B, F}). Note that when the generator is a sinc function, Equation (3-1) reduces to Equation (1-1). The coefficients c_k are often chosen to match the sample values, c_k = f_k [66, 85], or through a quasi-interpolation scheme [34, 85] to control the error of the reconstruction.

Considering that the sampled data f_X is finite, the reconstruction is performed over a bounded domain. Therefore there are a finite number of lattice points whose basis functions contribute to the reconstruction over the bounded domain, since the basis functions are compactly supported.

Let {k_1, k_2, ..., k_M} ⊂ LZ³ denote the lattice positions that overlap the bounding volume. Then, for notational convenience, we can rename the coefficients c_m := c_{k_m} and the shifted basis functions φ_m(x) := φ(x − k_m) using this linearized indexing m = 1 ... M:

    s(x) = \sum_{m=1}^{M} c_m \, \varphi_m(x) = [\varphi_1(x) \cdots \varphi_M(x)] \, \mathbf{c}    (3-2)

¹ As opposed to discrete or continuous convolutions, which are denoted by ∗, semi-discrete convolution is sometimes denoted by ∗'.

where the coefficient (column) vector c := [c_1 ··· c_M]^T, and the elements of the row vector [φ_m] are the shifts of the generator to the lattice points within the bounding volume.

We can set up the N × M system matrix Φ whose n-th row is the evaluation of the shifted basis functions at the sample point x_n: [Φ]_{n,m} = φ_m(x_n). Then, the interpolation problem, s(x_n) = f_n, for n = 1 ... N, can be solved by the solution to the (often under-determined) linear system of equations given by:

    \Phi \mathbf{c} = \mathbf{f}    (3-3)

where f := [f_1 ··· f_N]^T are the data values given at the (nonuniform) sample points. Note that only a small number of basis functions, when evaluated at a sample point x_n, have a non-zero contribution, since the basis functions are compactly supported. Therefore, each row of Φ has a small number of non-zero coefficients and Φ is a block-banded matrix. Once the coefficient vector c is solved from Equation (3-3), we can approximate the continuous signal by exploiting Equation (3-1).

In practical problems the size of the system matrix Φ prohibits one from using direct methods for inverting this matrix (i.e., the pseudo-inverse). A number of iterative methods (e.g., conjugate gradient) have been proposed that can be used to solve the linear system in Equation (3-3). Depending on the irregularity of the sampling set X, solving Equation (3-3) is often an under-determined or an over-determined problem. In these cases, iterative methods such as conjugate gradient usually return a least squares solution. In fact, the nonuniform sampling theory [1] requires a cap on the maximum distance between neighboring sample points to guarantee the convergence of the solution. In practical applications such a requirement may be violated, which, in turn, could lead to rank deficiency of the system in Equation (3-3). This rank deficiency can be resolved by regularizing the least squares solution. However, as we intend to compare the solutions built from various lattices, regularizing the discrete coefficients, c, in Equation (3-3) is biased to the choice of the lattice, as the discrete neighborhoods of lattice points are different from each other. As we will see below, our approach regularizes the solution in the continuous domain as opposed to regularizing the discrete coefficients in Equation (3-2).
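A minimal sketch of assembling the sparse system matrix Φ of Equation (3-3) is shown below. This is my own Python illustration, not the dissertation's implementation: it uses a CC reconstruction grid with a separable cubic B-spline as a stand-in generator; the BCC/FCC cases differ only in how lattice points are indexed and in the non-separable kernel. The grid size, samples and test function are arbitrary.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def kernel_1d(t):
    """Stand-in compactly supported generator: centered cubic B-spline."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    out[t < 1] = 2/3 - t[t < 1]**2 + 0.5 * t[t < 1]**3
    m = (t >= 1) & (t < 2)
    out[m] = (2 - t[m])**3 / 6
    return out

def build_system(samples, grid_shape):
    """Phi[n, m] = phi(x_n - k_m) for a CC grid with linearized index m."""
    rows, cols, vals = [], [], []
    gx, gy, gz = grid_shape
    for n, (x, y, z) in enumerate(samples):
        for i in range(max(0, int(x) - 1), min(gx, int(x) + 3)):
            for j in range(max(0, int(y) - 1), min(gy, int(y) + 3)):
                for k in range(max(0, int(z) - 1), min(gz, int(z) + 3)):
                    w = kernel_1d(x - i) * kernel_1d(y - j) * kernel_1d(z - k)
                    if w > 0:
                        rows.append(n)
                        cols.append((i * gy + j) * gz + k)   # linearized index m
                        vals.append(w)
    N, M = len(samples), gx * gy * gz
    return coo_matrix((vals, (rows, cols)), shape=(N, M)).tocsr()

rng = np.random.default_rng(2)
grid = (16, 16, 16)
f = lambda p: np.cos(0.4 * p[:, 0]) * np.sin(0.3 * p[:, 1]) + 0.1 * p[:, 2]
X = rng.uniform(2, 13, size=(5000, 3))          # irregular sample locations
Phi = build_system(X, grid)
c = lsqr(Phi, f(X), damp=1e-3)[0]               # (damped) least-squares coefficients
print(f"fit RMS error: {np.sqrt(np.mean((Phi @ c - f(X))**2)):.3e}")
```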

3.2.1 Regularized Least Squares

A classical approach to regularize the least-squares solution is to impose a penalty term on the least-squares fitting process:

    \min_{\mathbf{c} \in \mathbb{R}^M} \; \| \Phi \mathbf{c} - \mathbf{f} \|_2^2 + \lambda \, |s|_H^2    (3-4)

where the minimization is considered over all functions s(x) that live in a (sufficiently) smooth function space H (e.g., a Sobolev space). This extra term penalizes unwanted solutions c that would introduce high oscillations or large energy in the derivative(s) of the solution, s(x) (as in Equation (3-2)), in the continuous domain. The regularization term |s|²_H is a measure of smoothness and is sensitive to oscillations or large energy in the derivatives of s(x). Tikhonov regularization and total variation impose different types of penalty functions that are often discretized and imposed on the coefficients c. As illustrated below, one can also compute the penalty term over the continuous-domain s(x) (as opposed to finite differencing on c) and minimize this penalty term over the choices of c. The regularization parameter λ balances the fidelity and smoothness of the solution.

In the 1-D case, the smoothness of a function is often measured by the energy in the second derivative, |s|² = ∫ (s''(x))² dx, with which the regularized least-squares fitting has a solution that tends to be smooth but with the edge sharpness preserved [109]. In the multidimensional setting, this measure is generalized by measuring the energy of the Laplacian of s [2, 69, 96]:

    |s|_H^2 = \langle s, s \rangle_H := \langle s_{xx}, s_{xx} \rangle + \langle s_{yy}, s_{yy} \rangle + \langle s_{zz}, s_{zz} \rangle    (3-5)

Here ⟨s_1, s_2⟩ = ∫ s_1(x) s_2(x) dx denotes the inner product defined on two functions, and the subscript, as in s_xx, indicates the second partial derivative with respect to x.

A more general measurement of roughness using the so-called Beppo-Levi (or Duchon's semi-) norm allows us to include a larger class of functions (e.g., thin-plate splines) [97] in our search for a solution. This is particularly helpful when we want to reconstruct in a spline space. In the multivariate setting (R^d), the first-order directional derivative along the unit vector v is defined as D_v f(x) := v · ∇f(x), where ∇f is the gradient of the function f. The general order-n derivative in the canonical axis-aligned directions e_1, e_2, ..., e_d is defined using a multi-index, p = (p_1, ..., p_d), with |p| := p_1 + ··· + p_d = n:

    \mathrm{D}^p f = \mathrm{D}_{e_1}^{p_1} \cdots \mathrm{D}_{e_d}^{p_d} f = \partial^n f / (\partial x_1^{p_1} \cdots \partial x_d^{p_d})

Moreover, defining p! := p_1! ··· p_d!, the inner product of functions f and g for an order-n Beppo-Levi space is defined by:

    \langle f, g \rangle_{\mathrm{BL}_n} := \sum_{|p| = n} \frac{n!}{p!} \, \langle \mathrm{D}^p f, \mathrm{D}^p g \rangle    (3-6)

This inner product can be used to define the (semi) norm of functions in the Beppo-Levi space: |s|²_{BL_2} := ⟨s, s⟩_{BL_2}. What is pertinent to our application is that this norm |s|²_{BL_2} is a generalization of the 1-D second-order derivative to 3-D that is being used to measure the "roughness" of the solution s. Therefore we rewrite the regularized least-squares problem as:

    \min_{\mathbf{c} \in \mathbb{R}^M} \; \| \Phi \mathbf{c} - \mathbf{f} \|_2^2 + \lambda \, |s|_{\mathrm{BL}_2}^2    (3-7)

where we solve the minimization problem not in the entire Beppo-Levi space, but rather on shift-invariant subspaces S^L_φ of the Beppo-Levi space. Shift-invariant subspaces allow us to solve this optimization problem efficiently while maintaining a good approximation to the global optimum [2, 49]. In particular, while the work by Arigovindan et al. [2] considers the Cartesian shift-invariant (B-spline) subspaces, we consider the (more general) box spline lattice-shift-invariant spaces for this subspace approximation problem. Moreover, this approach can serve as a useful preconditioner for the multivariate smoothing splines.

3.2.2 Implementation and Optimization

Substituting Equation (3-2) into the Laplacian-based regularization cost function, Equation (3-5),² we have:

    |s|_H^2 = \langle s, s \rangle_H = \Big\langle \sum_{m=1}^{M} c_m \varphi_m, \; \sum_{m'=1}^{M} c_{m'} \varphi_{m'} \Big\rangle_H = \sum_{m=1}^{M} \sum_{m'=1}^{M} c_m \, \langle \varphi_m, \varphi_{m'} \rangle_H \, c_{m'} = \mathbf{c}^T G \mathbf{c}    (3-8)

where each element of the M × M regularization matrix, G, is the H inner-product between the Laplacians of two φ functions shifted to two lattice points:

    G := [g_{m,m'}] = \langle \varphi_m, \varphi_{m'} \rangle_H    (3-9)

Since φ (i.e., a windowed sinc or a box spline) is a compactly supported function, only the shifts to a few neighboring lattice points will lead to a non-zero inner product. Therefore, G is a (block) banded matrix. Moreover, for every lattice point the same set of shifts provides the non-zero inner products. Therefore Equation (3-9) can be interpreted as an FIR filter and the matrix G is a (block) Toeplitz matrix. One row of this matrix determines the weights for the neighbors of a single lattice point. These weights are therefore pre-computed and re-used for all other lattice points. Therefore, we can construct G efficiently using the pre-computed weights.

We can now solve Equation (3-4) by considering the fidelity term in terms of the system matrix:

    \| \Phi \mathbf{c} - \mathbf{f} \|_2^2 = \langle \Phi \mathbf{c} - \mathbf{f}, \Phi \mathbf{c} - \mathbf{f} \rangle = \mathbf{c}^T \Phi^T \Phi \mathbf{c} - 2 \mathbf{c}^T \Phi^T \mathbf{f} + \mathbf{f}^T \mathbf{f}

² For the case of regularization in the Beppo-Levi space, the setting is similar.

and the regularization term, Equation (3-8):

    \min_{\mathbf{c} \in \mathbb{R}^M} \; \mathbf{c}^T \big( \Phi^T \Phi + \lambda G \big) \mathbf{c} - 2 \mathbf{c}^T \Phi^T \mathbf{f}    (3-10)

We can use quadratic programming to solve this minimization process. However, the Hessian of Equation (3-10), Φ^TΦ + λG, is positive definite when the lattice-shifts of the basis functions (i.e., the columns of Φ) are linearly independent [49]. In fact, shifts of box spline functions and sinc functions onto their lattice points form an orthonormal set and hence are linearly independent. Therefore, the unique minimizer of Equation (3-10) can be obtained by differentiation with respect to c:

    \big( \Phi^T \Phi + \lambda G \big) \mathbf{c} = \Phi^T \mathbf{f}    (3-11)

Since Φ^TΦ + λG is positive definite and symmetric, we can solve Equation (3-11) efficiently by the conjugate gradient method. The solution to this linear system provides the coefficient vector c that is used for reconstructing the signal as in Equation (3-2).

3.2.3 Contributions

In this chapter we demonstrate that seemingly slight differences between the commonly-used CC and the BCC/FCC (aka optimal) lattices lead to significant improvement in the accuracy of the reconstruction from irregular samples. We show that for a given set of irregularly-sampled data points, reconstructions onto the BCC and FCC lattices provide a more accurate approximation than reconstructions onto the CC lattice. In addition, we introduce a novel regularization scheme on the BCC lattice based on the Beppo-Levi (Duchon's semi-) norm of box splines, which is computed in closed form. This regularization scheme can be implemented efficiently as a finite impulse response (FIR) filter.

As demonstrated by Conway and Sloane [19], with the increase in dimension, the difference between the normalized moments of the Voronoi cell of the CC lattice and that of the optimal lattices [52] increases significantly with the dimension. Hence the improvements in the accuracy of the reconstruction compared to the CC reconstruction are expected to generalize to higher dimensions.
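The normal equations (3-11) can be prototyped with a sparse conjugate gradient solver. The 1-D Python sketch below is my own illustration (not the dissertation's code): as a stand-in for the closed-form box spline inner products, one row of regularization weights g_r = ∫ φ''(x) φ''(x − r) dx is computed numerically for a cubic B-spline generator and reused for every lattice point, which gives G its Toeplitz/FIR structure; the sample set, noise level and λ are arbitrary.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import cg

# Cubic B-spline generator phi, supported on [0, 4], and its second derivative.
phi = BSpline.basis_element([0, 1, 2, 3, 4], extrapolate=False)
d2 = phi.derivative(2)

# Precompute one row of the regularization stencil: g_r = \int phi''(x) phi''(x - r) dx.
h = 1e-3
xs = np.arange(0.0, 4.0, h)
v = d2(xs)
g = [float(np.sum(v[: len(v) - r * 1000] * v[r * 1000:]) * h) for r in range(4)]

rng = np.random.default_rng(3)
M, N, lam = 300, 900, 1e-2
x = np.sort(rng.uniform(4, M - 1, N))                       # irregular 1-D samples
f = np.sin(2 * np.pi * x / 60) + 0.1 * rng.standard_normal(N)

# Sparse system matrix Phi[n, m] = phi(x_n - m): four nonzeros per row.
rows, cols, vals = [], [], []
for n, xn in enumerate(x):
    for m in range(int(xn) - 3, int(xn) + 1):
        rows.append(n); cols.append(m); vals.append(float(phi(xn - m)))
Phi = coo_matrix((vals, (rows, cols)), shape=(N, M)).tocsr()

# Toeplitz ("FIR") regularization matrix built from the precomputed row of weights.
G = diags([g[abs(o)] for o in range(-3, 4)], list(range(-3, 4)), shape=(M, M), format="csr")

# Normal equations (Phi^T Phi + lam G) c = Phi^T f, solved with conjugate gradient.
A = (Phi.T @ Phi + lam * G).tocsr()
c, info = cg(A, Phi.T @ f, maxiter=500)
print("CG converged" if info == 0 else f"CG info = {info}",
      f"| fit RMS: {np.sqrt(np.mean((Phi @ c - f)**2)):.3e}")
```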

While for illustration purposes we focus on 3-D imaging/volume applications, the framework directly generalizes to higher dimensions [105].

3.3 Experiments and Discussion

In what follows, we investigate the visual and numerical accuracy of reconstruction onto the 3-D sampling lattices along with non-separable basis functions. The experimental comparison between the CC lattice and the BCC lattice for nonuniform reconstruction using the box spline basis functions is presented in Section 3.3.2. The experimental comparison among the CC lattice, the BCC lattice, and the FCC lattice for nonuniform reconstruction using the sinc basis functions is shown in Section 3.3.3.

3.3.1 Experiment Setting

We experiment with both synthetic and real volumetric datasets. Our synthetic benchmark is a chirp-like frequency-modulation dataset proposed by Marschner and Lobb (ML) [66]. The ML image in Figure 3-1 is rendered by evaluating the proposed explicit function [66]. For visualization purposes, the iso-surface of 0.5 is rendered. With regular sampling, this function can be sampled at the "critical" resolution of M = 41 × 41 × 41 on the CC lattice and at an equivalent³ sampling on the BCC lattice of M = 32 × 32 × 32 × 2 and on the FCC lattice of M = 25 × 25 × 25 × 4.⁴ Since 98% of the energy of the signal spectrum is captured by this critical sampling frequency, it serves as a "practical" Nyquist frequency for this signal [66].

³ A finite volume cannot contain the exact same number of points for the CC, the BCC and the FCC lattices. Hence the resolutions are chosen conservatively in favor of the CC lattice.

⁴ The BCC lattice can be considered as two interleaving CC lattices, while the FCC lattice can be considered as four interleaving CC lattices.

Figure 3-1. The ML dataset.

Our experiments on real volumetric datasets involve biomedical datasets: the Carp Fish (CT) and the Aneurysm (MR).⁵ Note that the Carp Fish and Aneurysm datasets are somewhat noisy due to the acquisition process. The Carp Fish and Aneurysm have been sampled at a resolution of 256 × 256 × 256, which serves as the "ground truth". The irregularly-sampled datasets were generated by randomly throwing away points from the original datasets. We adopt an adaptive sampling strategy [1] such that a sample is picked with a probability proportional to the magnitude of the gradient at that point. Therefore the irregular samples often lie close to those regions whose gradients are high.

To ensure the CC, BCC and FCC lattices (superimposed on the irregular dataset) have the same lattice densities within the domain, we scale the lattices so that the volumes of their Voronoi cells (i.e., cube, truncated octahedron and rhombic dodecahedron, respectively) are equal. Note that over a finite domain we may not be able to contain the exact same number of lattice points for the three lattice patterns with the same lattice density. To be conservative in our experiments, we pick resolutions in favor of the CC lattice, as shown below. Therefore, inside the domain, the CC lattice could have slightly more lattice points than the BCC and FCC lattices.

⁵ Courtesy of http://www9.informatik.uni-erlangen.de/External/vollib/.
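The adaptive sampling strategy described above can be sketched in a few lines. The following Python snippet is my own illustration (not the dissertation's code): voxel locations of a regularly sampled volume are kept with probability proportional to the local gradient magnitude; the synthetic test volume and the small probability floor (used to avoid zero-probability voxels) are my own choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# A small synthetic volume standing in for a regularly sampled dataset.
g = np.mgrid[0:64, 0:64, 0:64] / 63.0
vol = np.exp(-40 * ((g[0] - 0.5)**2 + (g[1] - 0.5)**2 + (g[2] - 0.5)**2))

# Gradient magnitude at every voxel (central differences).
gx, gy, gz = np.gradient(vol)
grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

# Keep N voxels, picked with probability proportional to the gradient magnitude.
N = 50_000
p = grad_mag.ravel() + 1e-6 * grad_mag.max()
p /= p.sum()
idx = rng.choice(vol.size, size=N, replace=False, p=p)
coords = np.stack(np.unravel_index(idx, vol.shape), axis=1)
values = vol.ravel()[idx]
print(coords.shape, values.shape)   # (N, 3) sample locations and their values
```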

The reported advantages of the latter are despite the slight resolution advantage that we give to the CC lattice.

The main workload of our framework is to find the solution of the linear system in Equation (3-11), where the conjugate gradient method is used in our experiments. Iterative methods such as the conjugate gradient method are particularly efficient when the linear system has a sparse matrix (i.e., Φ^TΦ + λG in our case), which is the case in our framework. In addition, as mentioned in Section 3.2.2, the regularization matrix G can be pre-computed and re-used. Therefore, our reconstruction framework is efficient and practical. We implemented a solver for Equation (3-11) using MATLAB's built-in conjugate gradient method and the method converged within 200 iterations with the desired tolerance of 1 × 10⁻⁹. The reconstruction time can be significantly shortened with more powerful machines (e.g., multi-core) and parallelization techniques.

All images are rendered by a ray-caster from the reconstructed volumetric images. To avoid clutter in the images, single iso-surfaces are extracted from the volume, in the rendering stage, so that the images can be visually compared. The reconstruction accuracy is measured quantitatively by the SNR (signal-to-noise ratio). We note that the SNRs are always computed over the entire volumetric domain and are not limited to the rendered iso-surfaces.

For brevity of discussion, we have defined several notations for the rest of this chapter, as summarized in Table 3-1. For our experiments on the synthetic dataset, ML, we denote the resolutions of the (imposed) CC, BCC and FCC reconstruction lattices as P_c, P_b and P_f, respectively. This resolution is a "practical" Nyquist frequency for ML [66]. For the real datasets, the superimposed CC lattice has a fixed resolution of P̄_c while the BCC and FCC lattices have the resolutions P̄_b and P̄_f, respectively. In other words, we fix the number of imposed reconstruction lattice points (i.e., M), and examine the performance of the three lattices when the number of irregular samples (i.e., N), the regularization parameter (i.e., λ), or the jitter noise level is varied.
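For reference, a common way to compute an SNR over the full volumetric domain is sketched below. This is my own convention; the dissertation does not spell out its exact formula, so treat the definition (signal energy relative to the energy of the reconstruction error, in decibels) as an assumption.

```python
import numpy as np

def snr_db(reference, reconstruction):
    """SNR in dB: 10*log10(||reference||^2 / ||reference - reconstruction||^2),
    computed over the entire volume (not just rendered iso-surfaces)."""
    ref = np.asarray(reference, dtype=float)
    err = ref - np.asarray(reconstruction, dtype=float)
    return 10.0 * np.log10(np.sum(ref**2) / np.sum(err**2))

# Toy usage with a synthetic ground truth and a perturbed "reconstruction".
truth = np.random.default_rng(5).standard_normal((32, 32, 32))
approx = truth + 0.05 * np.random.default_rng(6).standard_normal((32, 32, 32))
print(f"SNR = {snr_db(truth, approx):.2f} dB")
```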

3.3.2 Nonuniform Reconstruction Using Box Splines

3.3.2.1 Regularization in Box Spline Spaces

As presented in Section 3.2.1, computing the BL₂ norm of the basis functions involves differentiation of the box splines, which is available in closed form. We first introduce the directional finite-difference operator along the vector ξ:

    ∇_ξ f(x) := f(x) − f(x − ξ).

This operator is a discrete counterpart of the directional derivative D_ξ along the vector ξ. The directional derivative of a box spline along a direction in its direction set is obtained exactly with a finite differencing on the box spline whose direction ξ is removed [23]:

    D_ξ B_Ξ(x) = ∇_ξ B_{Ξ\ξ}(x)   for ξ ∈ Ξ.

In the case of the CC lattice, the derivatives in Equation (3-6) for Equation (3-9) can be directly computed for B_Ξ = B_[4,4,4], since the box spline directions in the tricubic B-spline are axis-aligned (i.e., Ξ = [e₁ e₂ e₃]). For instance:

    D_{e₁} B_[4,4,4](x) = B_[3,4,4](x) − B_[3,4,4](x − e₁).

The box splines on the right-hand side are tensor products of cubic B-splines along the y and z directions and a quadratic B-spline along the x direction, since the removal of e₁ reduces its repetition from 4 to 3.

On the other hand, the four directions in Ξ_B =: [ξ₁ ξ₂ ξ₃ ξ₄] from Equation (2-3) are not axis-aligned, and the derivatives in Equation (3-6) cannot be directly computed for the quintic box spline. However, using e₁ = ½(ξ₂ + ξ₃), we have:

    D_{e₁} B_[2,2,2,2](x) = ½ ( D_{ξ₂} B_[2,2,2,2](x) + D_{ξ₃} B_[2,2,2,2](x) )
                          = ½ ( B_[2,1,2,2](x) − B_[2,1,2,2](x − ξ₂) + B_[2,2,1,2](x) − B_[2,2,1,2](x − ξ₃) ).

Therefore, all the derivatives in the Beppo-Levi norm computations can be analytically calculated in the box spline basis. While the CC case is evaluated using tensor products of univariate B-splines, the BCC case is evaluated with the piecewise polynomial representation of the quintic box spline [36]. Since we have the basis functions represented in their piecewise polynomial forms, the integration involved in Equation (3-6) can also be computed in closed form in a symbolic environment such as Maple. As we indicated in Section 3.2.2, these computations can be reused.

3.3.2.2 Synthetic Dataset

Figure 3-2 illustrates the reconstruction from the irregularly-sampled ML dataset, where the samples were randomly placed inside the bounding cube with a uniform distribution. The number of nonuniform samples is 1.2 times the number of lattice points, N = 1.2M = 1.2 P_c. The images on the second row show the effect of regularization, as the accuracy of reconstruction is improved in every case. The accuracy of the BCC framework is higher (around 3 dB) than that of the CC lattice. Visually, we can observe that the structure of the inner three rings has been reconstructed with high accuracy in the BCC regularized reconstruction, while on the CC side artifacts are visible on the second and third rings.

Figure 3-2. Nonuniform reconstruction of the ML dataset. A) Reconstruction with the CC lattice without regularization (λ = 0, SNR = 20.88 dB). B) Reconstruction with the BCC lattice without regularization (λ = 0, SNR = 24.64 dB). C) Reconstruction with the CC lattice with regularization (λ = 5 × 10⁻⁵, SNR = 23.20 dB). D) Reconstruction with the BCC lattice with regularization (λ = 5 × 10⁻⁵, SNR = 27.04 dB).

Figure 3-3 examines the BCC and CC reconstructions for a range of regularization levels. In this figure, the solid lines describe the SNR in the CC pipeline and the dashed lines the corresponding SNRs in the BCC case. The red plots are for the case where the number of lattice points M and the number of nonuniform sample points N match (i.e., N = M = P_c), while the blue plots are for N = 1.2M = 1.2 P_c. The regularization parameter starts from λ = 0 (i.e., no regularization) and is increased until the SNR drops as the reconstruction becomes too smooth. This figure clearly illustrates that the BCC pipeline is consistently more accurate than the CC pipeline.

3.3.2.3 Real Datasets

Figure 3-4 shows the reconstruction of the Carp Fish from an N = 800K irregularly sampled dataset. As is clearly seen in the left column, in the CC reconstruction the tail fins have merged, while the BCC (middle column) has preserved the fins.

Figure 3-3. The BCC reconstruction is consistently more accurate than the CC reconstruction for a range of regularization levels on the ML dataset. (SNR in dB versus regularization parameter λ, for N = M and N = 1.2M.)

Moreover, we can see in the second row that the BCC conserves the fine details at the edges of the ribs. In general, the BCC lattice produced a cleaner reconstruction than the CC lattice. Note that the SNR values underestimate the improvements of the BCC case compared to the CC case, since SNR is a global measure and very sensitive to the amount of noise already present in the data. The SNR in this case is calculated over the entire volume, which includes the noisy areas that are away from the irregularly placed samples.

3.3.2.4 Resilience to Jitter Noise

In statistical physics, the structural stability of the CC, BCC and FCC lattices under the impact of jitter noise has been studied [59]. In the following experiments (Section 3.3.2.4 and Section 3.3.3.4), we study the impact of jitter noise on the quality of sampling and reconstruction in our nonuniform reconstruction framework and examine the resilience of these lattices against jitter noise.

We introduce a jittered sampling experiment and a jittered data experiment. To generate irregular samples for jittered sampling experiments, we choose a sampling lattice L, perturb the location of each lattice point of L randomly, and then sample the ground-truth signal at the perturbed position. In contrast, for jittered data experiments,

the irregular samples come from sampling at the lattice points and perturbing the data locations after the sampling process. In both cases, the irregularity of the sample set is controlled by the maximum radius of the perturbation (e.g., 0.5T, where T is defined in Table 3-1). As described in Section 2.1, jittered sampling and jittered data can occur in the data acquisition process (e.g., medical modalities or LADAR data). When reconstructing from irregular samples, we choose the superimposed reconstruction lattice to be the same as L, that is, we have N = M.

Figure 3-4. The Carp Fish dataset reconstructed from N = 800K nonuniform samples. A) Ground truth. B) The CC reconstruction was performed on M = P_c points using the tricubic B-spline (SNR = 26.07 dB). C) The BCC reconstruction was performed on M = P_b points using the quintic box spline (SNR = 27.38 dB). The regularization parameter is λ = 1 × 10⁻³ in both cases. The advantage of the BCC reconstruction is clearly visible in the tail fin and rib areas. Note that the SNR underestimates the advantages of the BCC since the underlying dataset is noisy.

Figure 3-5 gives the performance of the BCC and CC lattices with varying radii of perturbation on the sampling lattices (i.e., jittered sampling). These experiments on the ML dataset demonstrate that the BCC framework consistently outperforms the CC framework across varying levels of irregularity in the sampling set. This gain in accuracy is maintained across various levels of regularization.
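The two perturbation protocols can be made concrete with the following sketch, where P is an M × 3 array of lattice points, f is a handle evaluating the ground-truth signal at an M × 3 array of positions, and r is the maximum perturbation; for simplicity the jitter here is drawn from a cube of half-width r rather than a ball of radius r, and all names are ours.

    % Jittered sampling vs. jittered data (a sketch of the two experiment types).
    jitter = @(P, r) P + r * (2*rand(size(P)) - 1);   % uniform jitter per coordinate

    % (a) Jittered sampling: perturb the lattice first, then sample the signal there.
    Pjs   = jitter(P, r);
    valJS = f(Pjs);               % nonuniform samples of the signal

    % (b) Jittered data: sample on the lattice, then perturb only the recorded location.
    valJD = f(P);                 % values taken at the true lattice points...
    Pjd   = jitter(P, r);         % ...but attributed to perturbed positions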

Figure 3-5. The accuracy of reconstruction of the ML dataset from the jittered CC (solid lines) and the jittered BCC (dashed lines) sampling lattices. The nonuniform points were sampled by introducing perturbations with varying radii to the lattice points (i.e., jittered sampling). (Accuracy in dB versus perturbation radius in units of T, for λ = 0, 10⁻³ and 10⁻².)

Figure 3-6 shows the jittered data experiment with the Aneurysm dataset. The CC case is shown in the first row and the BCC case in the second row. The prominent differences appear in the reconstruction of the thin veins (the dotted boxes). In the second column, the thin veins in the lower and right parts are recovered contiguously with the BCC framework, while the CC framework shows several discontinuities. As the regularization increases to λ = 10⁻² (the third column), some of the veins in the CC reconstruction completely disappear, while the BCC still recovers a strand, indicating a stronger signal-to-noise ratio in those regions. The Aneurysm dataset contains mostly zero values outside the veins, which occupy a small number of voxels throughout the entire volume. When comparing the SNR in both cases we obtain small values; this is a consequence of the amount of noise in the original dataset.

3.3.2.5 Computation Cost

Due to the small support of the (non-separable) quintic box spline kernel [36] on the BCC lattice, Equation (3-11) is solved more efficiently on the BCC lattice. Figure 3-7 documents the time for solving Equation (3-11) in MATLAB (single threaded).

The tests were carried out on a computer with an Intel Core i7-975 3.33 GHz CPU. As the resolution of the lattice (M) or the size of the sampling set (N) increases, the computational advantage of the BCC framework grows. We also examined a GPU-accelerated solver for Equation (3-11) that scaled down the times in Figure 3-7 by an order of magnitude.

Figure 3-6. Reconstruction of the Aneurysm dataset from jittered CC and BCC samples. A), B), C) Reconstruction using the CC lattice with varying regularization parameter λ (A: λ = 0, SNR = 7.47 dB; B: λ = 10⁻³, SNR = 7.72 dB; C: λ = 10⁻², SNR = 7.11 dB). D), E), F) Reconstruction using the BCC lattice with varying regularization parameter λ (D: λ = 0, SNR = 11.10 dB; E: λ = 10⁻³, SNR = 9.47 dB; F: λ = 10⁻², SNR = 8.48 dB). In these experiments we use P_c points for the CC and P_b points for the BCC. SNR values are relatively low as the dataset is noisy and the aneurysms are contained in a small region of the volume.

The gains in computational efficiency of the BCC pipeline benefit the rendering stage in addition to the reconstruction problem in Equation (3-11). Once the coefficients c on each lattice are obtained from Equation (3-11), they are used for constructing a spline s(x) as in Equation (3-1).

This spline is used to compute the volume rendering integral in rendering algorithms (e.g., ray-casting) that entail evaluation of s(x) at various points along rays in 3-D. The additional benefit of the BCC framework is that the cost of evaluating s(x) at arbitrary points (along rays) is half the cost of evaluation in the CC framework. Therefore, volume rendering algorithms for data (i.e., c) on the BCC lattice are twice as fast as their CC counterparts [36, 37].

Figure 3-7. Computation time for solving Equation (3-11) in both the BCC and CC reconstruction frameworks. The x-axis indicates the per-axis resolution of the CC framework (e.g., M = 101³); the corresponding BCC system matrix was set up to have the same dimensions. The number of nonuniform samples is N = 1.5M. (Time in seconds versus resolution, for the Cartesian and BCC lattices.)

3.3.3 Nonuniform Reconstruction Using Multivariate sinc Functions

In Section 3.3.2, we investigated spline fitting of irregularly sampled data in box spline spaces built on the CC and BCC lattices. As splines are compactly supported functions in the space domain, in this section we offer a dual framework in the bandlimited setting. The bandlimited setting allows us to perform a thorough and unbiased analysis of common lattices by including the FCC lattice, which is not possible in

the spline framework. Moreover, by utilizing sinc functions on these lattices, the frequency response of reconstruction is uniform across all lattices; in contrast, the box splines' frequency responses differ from the CC lattice to the BCC lattice [36]. Hence our experiments offer a fair and unbiased analysis of these reconstructions of irregularly sampled data. Our experiment setting for this section is the same as in the previous one (Section 3.3.1). Since we are no longer searching for a solution in the spline space, the Laplacian-based regularization (Equation (3-4)) is used in our bandlimited setting.

3.3.3.1 Lanczos Window

The support of the sinc function is unbounded, which makes its space-domain evaluation impractical. A practical compromise is to truncate the sinc function using a window function. In 1-D, there are many windowing techniques such as Hamming, Parzen and Blackman [76]. These window functions can be extended to the multivariate CC lattice through tensor products, but cannot be extended to non-tensor-product sinc functions (such as those for the BCC and FCC). The Lanczos window, however, is defined as the main lobe of the sinc function, a concept that extends to sinc functions defined for any lattice.

Let S{sinc_L} be the support of the main lobe of sinc_L; then the Lanczos-windowed sinc_L is given as:

    L(x) = sinc_L(x) · sinc_L^n(x/a)   if x/a ∈ S{sinc_L},   and   L(x) = 0 otherwise,    (3-12)

where a is the integer-valued scaling parameter (usually 2 or 3 in 1-D applications [7]) that determines the size of the window's support, and n is the smoothness parameter, which controls the degree of continuity of the truncated sinc and how fast the windowed function drops to zero. In our experiments, we adopted the choice of n = 2 and a = 3 as in the original paper [105]. We study the effects of these two parameters on the reconstruction in Section 3.3.3.5.
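In 1-D (the separable CC case, where sinc_L reduces to the ordinary sinc), Equation (3-12) can be sketched directly as below; for the BCC and FCC lattices, sinc_L and its main-lobe support are lattice-specific and are evaluated analogously. This snippet is our illustration only.

    % 1-D illustration of the Lanczos-windowed sinc of Equation (3-12), a = 3, n = 2.
    a = 3;  n = 2;
    x  = linspace(-4, 4, 1001);
    s  = sin(pi*x)   ./ (pi*x);    s(x == 0)  = 1;   % sinc(x)
    sa = sin(pi*x/a) ./ (pi*x/a);  sa(x == 0) = 1;   % sinc(x/a)
    L  = s .* sa.^n;
    L(abs(x) > a) = 0;             % keep only the main lobe of sinc(x/a)
    plot(x, L);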

3.3.3.2 Synthetic Dataset

Figure 3-8 illustrates the reconstruction from N = 1.5M irregular ML samples. Samples are placed inside the bounding domain randomly with a uniform distribution. Since we want to give the same data points to each of the CC, BCC and FCC reconstruction pipelines, we fixed the configuration of the irregular sampling points according to N = 1.5 P_c for the CC lattice and used the same sampled data for the BCC and FCC lattices (and picked their respective M to be M = P_b and M = P_f).

In each row (i.e., for a fixed λ) of Figure 3-8, we can clearly see that the BCC and FCC lattices outperform the CC lattice by 5 and 10 dB, respectively, and the FCC lattice wins between the latter two. Each column corresponds to reconstruction onto one lattice with varying values of the regularization parameter. We can see that regularization improves the reconstruction performance. As λ increases, the reconstruction accuracy increases to a certain level (i.e., λ = 1 × 10⁻³ in this case) and then drops. This reflects the compromise between the fitting accuracy and the smoothness of the solution. Although picking the best λ is not straightforward, the regularization parameter affects the three lattices uniformly. This is due to the fact that we are regularizing the continuous-domain solution, not the discrete lattice coefficients.

Obvious artifacts of the CC reconstruction can be seen on the inner three rings of the dataset, while these rings are reconstructed with high accuracy by the BCC and FCC reconstructions. Reconstruction of the high-frequency parts of the ML dataset (i.e., the outer rings) fails in the CC reconstruction, whereas the BCC and FCC reconstructions preserve those parts more accurately.

Figure 3-9 plots the SNR results of the reconstruction of the ML dataset from a smaller number of irregular samples (N = 0.75M and M). Regularization improves the reconstruction performance, especially when the system matrix (i.e., S in Equation (3-3)) is under-determined, that is, when we have fewer samples than reconstruction lattice points. Experimenting with different numbers of irregular samples (Figure 3-8 and Figure 3-9), we observe that the BCC and FCC lattices consistently outperform the CC lattice across various levels of regularization.

Figure 3-8. Reconstruction of the ML dataset from N = 1.5M random samples. Resolutions of the CC (A, D, G, J), BCC (B, E, H, K) and FCC (C, F, I, L) reconstruction lattices are M = P_c, P_b and P_f, respectively. Regularization parameter λ = 0 (A, B, C), λ = 1 × 10⁻⁴ (D, E, F), λ = 1 × 10⁻³ (G, H, I) and λ = 1 × 10⁻² (J, K, L). SNRs: A) CC 21.19 dB, B) BCC 26.54 dB, C) FCC 31.61 dB; D) CC 22.46 dB, E) BCC 27.38 dB, F) FCC 32.53 dB; G) CC 24.58 dB, H) BCC 28.99 dB, I) FCC 33.20 dB; J) CC 23.90 dB, K) BCC 27.05 dB, L) FCC 27.76 dB.

3-9 ),weobservethattheBCCandtheFCClatticesconsistentlyoutperformtheCC latticeacrossvariouslevelsofregularizations. 8 12 16 20 24 28 32 0 1 10 -5 1 10 -4 1 10 -3 1 10 -2 SNR (dB) Regularization ( ) CC, N = M CC, N = 0.75M BCC, N = M BCC, N = 0.75M FCC, N = M FCC, N = 0.75M Figure3-9. TheMLdatasetreconstructedfrom N =0 75 M (dashlines)and N = M (solidlines)irregularsamples.ResolutionoftheCC(redlines),BCC(blue lines)andFCC(greenlines)latticesare M = P c P b and P f ,respectively. 3.3.3.3RealDatasets Figure 3-10 plotsthereconstructionaccuracyfortheCarpFishfromanirregularly-sampled datasetwith N =800 000.Theirregularsamplesaregeneratedbyrandomlypicking pointsfromthegroundtruthdatasetwitharesolutionof256 % 256 % 256.Without regularization(i.e., + =0,notshownintheplot),thereconstructionfails(yieldingSNRof 0 1406dB,2 2083dBand2 2755dBfortheCC,theBCCandtheFCC,respectively)as thesystemmatrixisseverelyunder-determined. Figure 3-11 istherenderingresultofthecaseof + =1 % 10 3 inFigure 3-10 .The precisionofreconstructioncanbeeasilyobservedfromtailnsandribsofthesh(dotted boxes).TheCCapproximation(secondrow)mergesthetailns,whiletheBCCand FCC(thethirdandfourthrows)preservethem.TheBCCandtheFCCreconstructions keeptheindividualstructureofeachribclear,howeverthereconstructionontotheCC 50

lattice has additional artifacts that merge neighboring ribs. In general, the BCC and FCC lattices yield a cleaner reconstruction than the CC lattice does. The SNR values underestimate the performance of the BCC and FCC cases since the values are computed over the entire volume, including noisy areas that are away from the irregularly picked samples.

Figure 3-10. The Carp Fish dataset reconstructed from N = 800,000 irregular samples. Resolutions of the CC (red line), BCC (blue line) and FCC (green line) lattices are P_c, P_b and P_f, respectively. The SNR underestimates the advantages of the BCC/FCC (also Figure 3-11), as it is influenced by the existing noise in the underlying dataset. (SNR in dB versus regularization parameter λ.)

3.3.3.4 Resilience to Jitter Noise

As in Section 3.3.2.4, we now examine the resilience of these lattices against jitter noise in our bandlimited reconstruction framework. The experiment setting is the same as in Section 3.3.2.4.

Figure 3-12(A) plots the performance of the CC, BCC and FCC lattices with varying radii of perturbation on the sampling lattices (i.e., jittered sampling). These experiments demonstrate that the BCC and FCC lattices consistently outperform the CC lattice across varying irregularity of the sampling lattices.

Figure 3-11. Rendering of the Carp Fish dataset corresponding to the cases of λ = 1 × 10⁻³ in Figure 3-10. A) The ground truth. The SNR values of B) the CC, C) the BCC and D) the FCC reconstructions are 22.99 dB, 22.72 dB and 24.34 dB, respectively (the panel labels report SER values of 23.38, 22.78 and 24.39 for B, C and D). The tail fin and rib areas (dotted boxes) are reconstructed more accurately on the FCC and BCC compared to the CC. The SNR values underestimate the advantages of the BCC and FCC because of the existing noise in the background of the CT dataset.

Figure 3-12(B) plots the results of our jittered data experiments. We can see that within a certain range of perturbation of the sampled data (e.g., perturbations smaller than 0.4T), the BCC and FCC lattices give better reconstruction results than the CC lattice. However, as the perturbation grows, the performance of the three lattices drops to a similar level. With regularization, the CC lattice even yields slightly better results than the BCC and FCC lattices for perturbations larger than 0.6T.

Figure 3-12(B) shows quite a different pattern from Figure 3-12(A). In jittered sampling (Figure 3-12(A)), the perturbation (of the sampling lattice) is applied before sampling the signal, and thus provides nonuniform samples of the signal. In jittered data (Figure 3-12(B)), however, the already regularly-sampled data is perturbed to a nearby location, and the data value is treated as the signal value at that location. In the jittered data case, beyond a certain level of perturbation (e.g., 0.5T), the provided irregular samples become highly unreliable for reconstruction, eliminating the advantages of the BCC and FCC reconstructions. Both Figure 3-12(A) and Figure 3-12(B) show that regularization leads to a slower drop in performance when the perturbation is severe.

Figure 3-13 and Figure 3-14 illustrate the accuracy of reconstructions of the Aneurysm dataset from jittered data (i.e., perturbation of regularly-sampled data). From Figure 3-13, we see that the FCC and BCC reconstructions yield around 1 and 2 dB higher SNRs than the CC reconstruction, respectively. All SNR values are relatively low because the dataset is noisy and the structures are contained in a very small region of the volume. The reconstruction difference can also be seen in Figure 3-14, where the rendered datasets correspond to the cases of λ = 0.05 and λ = 0.2 in Figure 3-13. The dotted boxes highlight areas where thin veins are absent with the CC lattice while the BCC and FCC still preserve these veins, indicating a higher SNR in those regions.

Figure 3-12. Reconstruction of the ML dataset from N = M random samples using the CC (M = P_c, red lines), BCC (M = P_b, blue lines) and FCC (M = P_f, green lines) reconstruction lattices. A) Jittered sampling. B) Jittered data. (SNR in dB versus perturbation radius in units of T, for λ = 0 and λ = 1 × 10⁻³.)

Figure 3-13. Reconstruction of the Aneurysm dataset from N = M samples using the CC (M = P_c, red line), BCC (M = P_b, blue line) and FCC (M = P_f, green line) reconstruction lattices. The maximum radius of the perturbation is 0.25T. (SNR in dB versus regularization parameter λ.)

3.3.3.5 Lanczos Windowed sinc Parameters

As described in Section 3.3.3.1, the Lanczos windowed sinc_L involves two parameters, a and n, where a determines the size of the kernel support and n controls the degree of continuity of the windowed sinc. In our previous 3-D experiments, we used a = 3 and n = 2.

Figure 3-15 shows the SNR results of reconstructions from N = 1.2M random ML samples with different parameter settings of the windowed sinc. The solid lines in both figures are our comparison basis, with the same kernel parameter settings as the experiments shown in the previous sections. Dotted lines in Figure 3-15(A) show the results with a = 3 and n = 3, and dotted lines in Figure 3-15(B) are for the case of a = 2 and n = 2. We can see that increasing the kernel function continuity (Figure 3-15(A)) decreases the reconstruction accuracy because the kernel over-smooths the high-frequency parts of the signal. A smaller window size (Figure 3-15(B)) also decreases the performance since fewer samples are used for reconstruction. However, we can observe from these plots that the reconstructions from the BCC and FCC lattices still outperform those of the CC lattice across the various kernel settings.

Figure 3-14. Rendering of the Aneurysm dataset with the reconstruction setup corresponding to the cases of λ = 0.05 (A, B, C) and λ = 0.2 (D, E, F) in Figure 3-13. A) CC, SNR = 8.54 dB. B) BCC, SNR = 10.12 dB. C) FCC, SNR = 9.39 dB. D) CC, SNR = 7.62 dB. E) BCC, SNR = 8.85 dB. F) FCC, SNR = 8.72 dB.

3.3.3.6 Computational Cost

Our experiments presented in this section were carried out on a desktop with a single-core 2.20 GHz CPU. In our experiments, the MATLAB conjugate gradient solver returned within 30 minutes. The reconstruction time can be significantly shortened with more powerful machines (e.g., multi-core) and parallelization techniques.

We also observe that, with the same parameter settings, reconstruction onto the BCC and FCC lattices requires less time to solve the system in Equation (3-11) than reconstruction onto the CC lattice.

Figure 3-15. Reconstruction of the ML dataset from N = 1.2M irregular samples. Resolutions of the CC (red lines), BCC (blue lines) and FCC (green lines) lattices are M = P_c, P_b and P_f, respectively. A) Solid lines are results for the kernel with a = 3 and n = 2, while dotted lines are for a = 3 and n = 3. B) Solid lines are results for the kernel with a = 3 and n = 2, while dotted lines are for a = 2 and n = 2. (SNR in dB versus regularization parameter λ.)

This is because the Lanczos-windowed sinc_L functions corresponding to the BCC and FCC lattices have smaller support compared to the CC one. The system matrices S for the BCC and FCC cases are then sparser than for the CC case, which in turn reduces the computational cost of numerical methods that take advantage of sparsity, such as the conjugate gradient method.

CHAPTER 4
COMPRESSED SENSING AND SPARSE APPROXIMATION

In the past decade there has been tremendous progress in the field of sparse representation of signals. The fundamental idea is to transform a signal to another domain where the signal is sparsely represented. For example, wavelets provide a generic sparsifying transform for natural signals: transforming the signal to the wavelet domain returns a few non-zero wavelet coefficients that sparsely represent the signal. While wavelets and their modern generalizations have been successful in providing generic basis functions for the sparse representation of data, exploiting the geometry of the multivariate data can result in tailored basis functions that sparsely represent a class of datasets in an optimal way [53]. The sparse transform-domain representation of data has been used for numerous tasks such as estimation, compression and denoising [31].

Sparse modeling not only provides a flexible framework for many data processing applications, it is also the fundamental building block of compressed sensing theory [3, 16]. The sampling mechanism of compressed sensing allows sub-Nyquist-rate sampling of a signal, and accurate reconstruction is achieved by exploiting the sparse representation of the signal. Compressed sensing has emerged as a powerful paradigm for reducing sampling rates: modalities such as MR [60, 61] and CT [77] and scientific simulation experiments [4] are adopting concepts from compressed sensing to exploit sub-Nyquist rates for acquisition.

Recent advances in scientific simulation have resulted in large-scale volumetric data. For example, climate, combustion, or supernova simulations can generate volumes of hundreds of terabytes in size [63, 84]. On the other hand, the bandwidth of disk reading and writing (i.e., I/O) can hardly keep up with the growth of computational power. The compressed sensing framework facilitates data reduction and could be a promising approach to alleviate I/O problems. In this chapter, we review key concepts that are pertinent to our data reduction framework presented in the next chapter. For clarity of presentation, we summarize the common notations used in this chapter and the next in Table 4-1.

presentationwesummarizecommonnotationsusedincurrentchapterandnextchapterin Table 4-1 Table4-1. NotationSummary NotationDescription N sizeofthe(vectorized)signal x k numberoffeaturesornon-zerosofthesignal x n sizeofmeasurementvector y ;numberofsamples $ samplingrate, $ = n/N x vectorizedsignalvectorofsize N % 1 y measurementvectorofsize n % 1 A sensingmatrixofsize n % N coe"cientsofthesignal x inatransformdomain # transformationmatrixthatrelatesasignal x toitscoe" cients: x = # ( errorestimateinEquation( 47 ),stoppingcriterionofthesolverNESTA smoothnessparameterofthesolverNESTA 4.1CompressedSensingTheory Inthissection,webrieydiscussconceptsfromthecompressedsensing(CS)theory. TheworksbyDonoho[ 25 ]andCand`esetal.[ 12 15 ]canbeconsultedforamorein-depth presentation. 4.1.1RestrictedIsometryProperty Let x # R N beavectorizedvolumedatasetcomprising N voxels.Ifmostofthe voxelsarezero,thenthevolumerepresentedby x issparse.Thesparsityisdescribedby thesocalled & 0 -(pseudo)norm, $ x $ 0 ,thatsimplycountsthenumberofnon-zeroelements in x .Fora k -sparsesignalwehave $ x $ 0 k .Thesensingmechanismislinearasthe measurementvector(i.e.,thesampleset) y # R n islinearlyrelatedto x viaan n % N matrix A : y = Ax (41) with n<
PAGE 61

[ 14 ]:thereexistsarestrictedisometryconstant # (0 1)suchthat (1 ) $ x $ 2 2 ,$ Ax $ 2 2 (1+ ) $ x $ 2 2 (42) holdsforany k -sparsevector x ,where $ $ 2 denotesthe & 2 norm.Intuitively,therestricted isometryimpliesthatinavalidsensingmatrix A ,everypossiblesetof k columnsof thematrix A formsanapproximatelyorthogonalset.Matricesthathavebeenproven (probabilistically)tomeetRIPincludepartialFourierorcosinematrices(i.e.,with randomlyselectedrowsfromthefulldiscreteFourierorcosinetransformmatrix), GaussianandBernoullirandommatrices(withelementsi.i.d.drawnbynormaland Bernoullidistributions,respectively)[ 14 15 ]. Thereconstruction(recovery)mechanismthatisassociatedwiththissensing, isnon-linearandcanbeformulatedasanoptimizationproblem(alsocalledsparse approximationproblem[ 17 ]): min x $ x $ 0 subjectto Ax = y (43) When n 2 k thisoptimizationproblemexactlyrecoverstheoriginal x [ 12 25 ].This meansthat,intheory,wecansolvefortheunder-determinedsystemuniquelybysearching forthesparsest x thatsatisesthesensingconstraints. 4.1.2TransformDomainSparsity Manysignalsmaynotnecessarilybesparseinthecanonicalspacedomain.However, mostofthenaturalsignalsaresparseincertaintransformdomain.Forexample,wavelets andtheirgeneralizationsprovidegenericsparsifyingtransformsfornaturalsignals. Manytransform-codingcompressionsystemssuchasJPEG2000andMPEGactually utilizethisfactthatwavelettransformsofmanysignalsleadtosparseorcompressible representations,thatis,mostofthetransformcoe" cientsarezeroorveryclosetozero (i.e.,compressible).Let # denotethechangeofbasismatrixthatrelatesasignal x toits, say,waveletrepresentation: x = # .Here,thematrix # representsabackwarddiscrete 61

PAGE 62

wavelettransformand denotethewaveletcoe" cientsofthesignal x .Wecanintegrate thesparsetransform-domainrepresentationintotheCSframeworkEquation( 43 )and re-formulatetheoptimizationinthetransform-domain: min $ $ 0 subjectto A # = y (44) or min x $ #   x $ 0 subjectto Ax = y (45) where #   representstheforwardtransform. TheEquation( 44 ),commonlyreferredtoasthe synthesis sparseapproximation problem,closelyfollowsthatofEquation( 43 )withtheexceptionthatthelinear systemthatspeciestheconstraintsistransformed: A # .Ithasbeenshownthatifthe sparsifyingoperator # isatightframe[ 51 ], A # wouldsatisfyRIPwithcertainrestricted isometryconstant,provided A meetsRIP[ 11 ].Oncewehavethesparsesolution, ,in thetransform-domain,wecanreadilyreconstructtheoriginalsignal: x = # TheproblemEquation( 45 )isthe analysis sparseapproximationproblem,which isnotaswidelystudiedasthesynthesiscounterpart.Ingeneral,thesetwoproblemsare notequivalentunlessthematrix # isabasis[ 32 ].Recently,theanalysisformulationhas attractedattention[ 11 32 72 ]andmanyempiricalworkscomparingtheanalysismodel andthesynthesismodelhaverevealedthee! ectivenessoftheanalysis-basedapproach [ 79 83 ].However,itisdi"culttoconcludewhichoneisbetteringeneralandtheoretical analysesoftheanalysismodelanditsrelationtothesynthesismodelarestillnotfully understood[ 72 ]. Wenotethatthemeasurementvector y isobtainedvia y = Ax withoutknowledge ofthetransformdomain, # .Inotherwords,theacquisitionisuniversal:itisperformed withoutknowingwhich # sparsiesthesignal x .Thisimpliesthatwheneverwehave abetterdescriptionoffeaturesofinterest(anew # % throughlearning,futurex-letsor domainknowledgethatprovidesadictionaryoffeatures),wecanre-usethesamesample 62

PAGE 63

set y andreplace # intheoptimizationframeworkEquation( 44 )orEquation( 45 ). InChapter 5 wewilldemonstratethesuitabilityofthisfeatureoftheCSframeworkfor volumetricdata.Bychoosingsuitabletransformdomain,wecanincreasetheaccuracyof reconstruction,orreducethenumberofnecessarymeasurements. 4.2SparseApproximation ItisdemonstratedthatsolvingEquation( 43 )isanNP-hardproblem[ 13 ]fora non-trivial A .However,itturnsoutthatwithincreasingthenumberofsamplesto n = O ( k log N )(with k = $ x $ 0 ),wecanndpolynomial-timealternatives.Weexamine heretwodistinctapproachestotheoptimizationproblemEquation( 43 ),namelyconvex relaxationandgreedymethods.Thesetwoapproacheshaveprovablycorrectsolutions when A meetstheRIPconditions.Newermethodsexist(e.g.,homotopic & 0 minimization [ 91 ])thato!erattractiveresultsintermsofreconstructionaccuracyandspeedunder certainconditions.However,thetheoreticalguaranteesforreconstructiono!eredbysome convexoptimizationandgreedymethodsmakesthemattractiveforapplications. 4.2.1ConvexRelaxation Inthecontextofstatisticalregression,ithasbeenknown[ 86 ]thatsolvingalinear systemwith & 1 normconstraintonthesolutionprovidesasparsesolutionandthatthe & 1 norm"promotes"sparsity.Thisisduetothefactthatsmallfractions(coe" cientsof x ) getpenalizedmoreseverelyinthe & 1 norm. Ithasbeenproventhatoptimalornear-optimalsolutionstosparseapproximation problemscanbeachievedusingconvexrelaxationmethodsinavarietyofsettings[ 89 ]. Oncethemeasurementmatrix A satisesRIP,thehighlynon-convexoptimization problemEquation( 43 )canberelaxedtoitsconvexcounterpart[ 12 25 ]: min x $ x $ 1 suchthat Ax = y (46) ThisconvexoptimizationproblemisalsoknownastheBasisPursuitmethod[ 17 ]inthe literature. 63

PAGE 64

Inthecasethatthemeasureddataiscontaminatedbynoise z : y = Ax + z ,Equation ( 46 )isusuallyextendedtotheBasisPursuitDenoising[ 17 ]problem: min x $ x $ 1 suchthat $ Ax y $ 2 ( (47) where ( isanestimatedupperboundonthenoiselevel. ManyapproacheshavebeenproposedtosolveconvexproblemEquation( 46 )and Equation( 47 ):rst-ordermethod[ 5 ],interior-pointmethod[ 50 ],iterativeshrinkage method[ 98 ],alternatingdirectionalgorithms[ 9 104 ],splitBregmanmethod[ 40 ],to nameafew.Comparedtogreedytechniques(Section 4.2.2 ),convexrelaxationalgorithms usuallyhavebetterperformance(intermsofreconstructionaccuracyandrobustness) whenthesignalisnotverysparseandnoiseispresent[ 89 ]. 4.2.2GreedyMethods Theideabehindgreedymethodsisthattheyiterativelyrenethecurrentestimateof thesparsesignalbymodifyingoneorseveralcoe" cientssuchthatthemodicationyields abetterapproximationofthesignal.Theiterationcontinuesuntilallnon-zerocoe"cients arefoundorastoppingcriterionisreached. Forexample,orthogonalmatchingpursuit(OMP)[ 88 ],consideredasoneofthe simplestgreedyalgorithms,ndsnon-zeroentriesin x oneelementatatime.Ateach iteration,OMPpicksonecolumnof A thatismoststronglycorrelatedwith y andadds thatatomtoacolumnsetthatindicatethenon-zeroindicesof x .Thenitupdates y bysubtractingthecolumnset'scontributionfrom y anditerateontheresidual.After k iterations,thealgorithmmayidentifythecorrectsetofnon-zeroentries.Someofthe OMP-enhancedalgorithmsincludeStOMP[ 26 ],ROMP[ 74 ],CoSaMP[ 73 ]andOMPR [ 48 ]. Greedymethodsareviableforproblemswherethetargetsignalisextremelysparse. Thisisduetothefactthatgreedymethodsonlybringalimitednumberofnon-zero 64

PAGE 65

entriesof x atatime.Therefore,thenumberofiterationsincreaseslinearlyintermsof thesignaldensity. 4.3BoxSplineTightWaveletFrames WhilethewaveletsparsifyingbasishasbeenwidelyusedinmanyCSapplications,the theoreticalstudyofCSwithtightframeassparsifyingbasiswasalsowellestablished[ 11 ]. Similartowavelets,waveletframes(i.e.,framelets)separateasignal'shigh-passfrequency partsfromthelow-passfrequencypart.Unlikewaveletbasisthatconsistsoflinearly independentfunctions,waveletframesinvolvemanyredundantfunctionsthatcanprovide betterapproximateofsignalfeaturessuchasedgesandsurfaces[ 43 ].Tightwaveletframes aregeneralizedfromorthonormalwaveletsandmanyapproacheshavebeenproposedto generatetightwaveletframes[ 18 21 54 ]. InthissectionwereviewthemethodproposedbyLaiandStockler[ 54 ]that constructstightwaveletframesfromboxsplines.Sinceaboxsplinewithanyorder ofsmoothnesscanbeobtainedfromvariousdirectionsets(Section 2.3 ),theproposed constructionmethodismoreexibleandcanresultinsmallernumbersofmultivariate tightwaveletframesthanothermethods[ 54 71 ].InSection 5.3 ,weusethisconstruction schemetogenerateourown3-Dboxsplineframeletsfromthe7-direction,non-separable, trivariateboxspline[ 35 ].Inourfollowingpresentation,weadoptthenotationsusedinthe workbyLaiandStockler[ 54 ]. 4.3.1TightWaveletFrame Asetoffunctions { f i } i I iscalledaframeifthereexistpositiveconstants A B such that A $ f $ 2 i I | ( f,f i ) | 2 B $ f $ 2 f # L 2 ( R d ) where ( f,g ) = ( R d f ( x ) g ( x )d x denotestheinnerproductdenedontwofunctionsand $ f $ 2 := ( f,f ) isthe L 2 -normin L 2 ( R d ).Wecall { f i } i I atightframeif A = B .Itis 65

PAGE 66

known[ 20 ]thatatightframe { f i } i I canrepresentany f # L 2 ( R d )uptoanormalization: f = i I ( f,f i ) f i Atightwaveletframecanbeconstructedfromarenablefunctionsuchasabox spline.Arenablefunction # L 2 ( R d )isdenedbyarenableequation: ( ) )= P ( ) / 2) ( ) / 2) (48) where istheFouriertransformof and P ( ) )isatrigonometricpolynomial. P is oftencalledthemaskoftherenablefunction .Ifwecanndaseriesoftrigonometric polynomials Q i thatsatisestheUnitaryExtensionPrinciple(UEP)[ 81 ]: P ( ) ) P ( ) + & )+ i Q i ( ) ) Q i ( ) + & )= / 0 1 0 2 1 & =0 0 & # { 0 1 } 2 \{ 0 } (49) thenwecandenewaveletframegeneratorsorframelets / i intermsoftheFourier transformby / i ( ) )= Q i ( ) / 2) ( ) / 2) Theset$:= { / i } generatesatightframe[ 54 ].Bydening / j,k ( y )=2 jd/ 2 / (2 j y k )and Z isthesetofallintegers,then% ( $ ):= { / j,k ; / # $ ,j # Z ,k # Z d } formsatightwavelet frame.Thewavelettightframeseparateshigh-passfrequency(withhigh-passlters Q i ) fromthelow-passfrequency(withlow-passlter P ). 4.3.2Sub-QMFCondition Equation( 49 )inmatrixformissimply PP & + QQ & = I 2 d 2 d (410) where P =( P ( ) + & ); & # { 0 1 } d ) T 66

PAGE 67

isavectorofsize2 d % 1and Q =( Q i ( ) + & ); & # { 0 1 } d ,i =1 ,...,r ) isamatrixofsize2 d % r ,and Q & denotesthecomplexconjugatetransposeofthematrix Q .Theconstructionoftightwaveletframesinvolvesndingthe Q thatsatisesEquation ( 410 ).Ithasbeenshown[ 54 ]that Q canbeeasilyfoundif P satisestheQuadrature MirrorFilter(QMF)condition P & P =1,thatis $ { 0 1 } d | P ( ) + & ) | 2 =1 However,themask P ofarenablefunction usuallydoesnotsatisfytheQMFcondition butmaysatisfythesub-QMFcondition: $ { 0 1 } d | P ( ) + & ) | 2 1 (411) Forexample,ithasbeenproved[ 54 ]thatthemaskofanymultivariateboxspline with directionset& + Z d containingallofthestandardunitvectorssatisesthesub-QMF condition,andthemask P canbeinferredfromEquation( 48 ): P ( ) )= ! 1+ e i % 2 ,i = & 1 (412) Supposethatthemask P oftherenablefunction meetsthesub-QMFcondition,we presenttheconstructionoftightwaveletframefrom innextsection. 4.3.3ConstructionofTightWaveletFrames Anytrigonometricpolynomial P ( ) )canberewrittenasthesumofitspolyphase components.Let M =2 d/ 2 ( e im ( % + $ ) ) $ { 0 1 } d ,m { 0 1 } d ,i = & 1 ) # R (413) bethepolyphasematrix,where & denotestherowindexand m denotesthecolumn indexoftheunitarymatrix M .Uptoanormalization,thepolyphasecomponentsofthe 67

PAGE 68

trigonometricpolynomial P aredenedbythecolumnvector 3 P :=( 3 P m (2 ) ); m # { 0 1 } d ) T = M & P (414) whereeach 3 P m isatrigonometricpolynomial.Hence,weobtainthepolyphasedecomposition of P byinspectingtherstrowoftheidentity P = M 3 P ,whichgives P ( ) )=2 d/ 2 m { 0 1 } d e im % 3 P m (2 ) ) WiththesedenitionsrepresentedinEquation( 410 ),Equation( 413 )andEquation ( 414 ),thefollowingtheoremandtheassociatedproof(adoptedfrom[ 54 ])provideusthe constructiveschemetondtightwaveletframesbasedonmultivariateboxsplines. Theorem4.1 (Theorem3.4of[ 54 ]) Supposethat P satisesthesub-QMFcondition Equation( 411 ).Supposethatthereexisttrigonometricpolynomials 4 P 1 ,..., 4 P N suchthat m { 0 1 } d | 3 P m ( ) ) | 2 + N i =1 | 4 P i ( ) ) | 2 =1 (415) Thenthereexist 2 d + N compactlysupportedtightframegeneratorswithwaveletmasks Q m ,m =1 ,..., 2 d + N ,suchthat P Q m satisfyEquation( 49 ). Proof. Wedenethecombinedcolumnvector 4 P =( 3 P m (2 ) ); m # { 0 1 } d 4 P i (2 ) ); 1 i N ) T ofsize(2 d + N )andthematrix 4 Q := I (2 d + N ) (2 d + N ) 4 P 4 P & Equation( 415 )impliesthat 4 Q 4 Q & = 4 Q ,andthisgives 4 P 4 P & + 4 Q 4 Q & = I (2 d + N ) (2 d + N ) Restrictingtotherstprinciple2 d % 2 d blocksintheabovematrices,wehave 3 P 3 P & + 3 Q 3 Q & = I 2 d 2 d (416) 68

PAGE 69

where 3 P = M & P wasalreadydenedbeforeand 3 Q denotestherst2 d % (2 d + N )block matrixof 4 Q .Since P = M 3 P ,Equation( 416 )yields PP & + M 3 Q ( M 3 Q ) & = I 2 d 2 d whichisEquation( 410 ).Thuswelet Q = M 3 Q Thentherstrow[ Q 1 ,...,Q 2 d + N ]of Q givesthedesiredtrigonometricfunctionsfor compactlysupportedtightwaveletframegenerators. TheconstructiveschemepresentedinTheorem 415 allowustondtightwavelet framesbasedonmultivariateboxsplines.Similartotraditionalwaveletdecomposition,the tightwaveletframe(i.e.,framelet)decompositioncanbeachievedbasedontheobtained low-passlter P andhigh-passlters Q i ,wherethemultivariateimageisdecomposed(by convolutionintimespaceormultiplicationinfrequencyspace)intoalow-passsub-image andmanyhigh-passsub-images.Themulti-leveldecompositioncanbeachievedby recursivelydecomposingthelow-passsub-imagealongwithdownsampling[ 43 ].Wewill constructourown3-DboxsplineframeletdecompositionusingthisschemeinSection 5.3 4.4MultiscaleGeometricRepresentations Althoughtheyareacommonchoiceforsparserepresentations,waveletsareinfact onlyoptimalforapproximating(multivariate)datawithpointwisesingularities.Wavelets areunabletohandlesingularitiesalongcurvesorsurfacese" ciently[ 53 ].Thisisdue totheisotropiccharacteristicofwavelets,thatis,theyareobtainedbyisotropically dilatingasingleornitesetofgeneratingfunctions.Thereforewaveletslackdirectional sensitivity,andareunabletodetectthegeometry(i.e.,singularities)ofmultivariatedata well.Thedi"cultywithrepresentinggeometricsingularitiesispresentinHaaraswellas higher-order(e.g.,Daubechies)wavelets.Forvolumetricdata,thesingularitiesintroduced bysurfaceboundariesareusuallypresent.Whenusingwaveletstosparselyrepresent 69

PAGE 70

volumetricobjects,thesesingularitiesrequiremanycoe"cientsintherepresentationto accuratelycapturethem,leadingtosub-optimalsparserepresentations. Thereisavarietyofgeometricextensionsofwaveletssuchascurvelets[ 10 ],shearlets [ 53 ]andsurfacelets[ 57 ],exploitingthegeometryofthemultivariatedataforoptimally sparserepresentations.Unlikeconventionalwavelets,thesesocalledX-letsusuallyallow anisotropicscaling,shearing,androtationofbasiselementssuchthatthesebasescan captureobjectstructuresmoree"ciently.Forexample,in3-Dsettings,thecurvelet andsurfacelettransformsallowfordirectionaldecompositionof3-Dsignals,andcanbe usedtoe" cientlycaptureandrepresentsurface-likesingularitiesinvolumetricdata.A comprehensivelistofthesedirectionaldecompositionmethodsisavailableonline 1 Whenusingthesedirectionaltransforms,weneedtopayattentiontotheirredundancy factors(i.e.,thenumberoftransformedcoe"cientsoverthenumberofsignalelements). Ahighredundancyfactorcanmaketheproblemsizetoolargetohandle,whichbecomes especiallyproblematicwhenprocessingvolumetricdata.Forexample,the3-Dcurvelet transformhasaredundancyfactorofapproximately25whilethe3-Dsurfacelettransform hasaredundancyfactorofapproximately4.Toreducetheredundancyfactorofthe3-D curvelettransform,theoriginalpaper[ 10 ]suggeststouseamixrepresentationwhere wavelets(insteadofcurvelets)areusedatthenestscale.Thenewversionofthecurvelet transformhasaredundancyfactorofapproximately5butitsdirectionalselectivityofne detailsissignicantlyreduced.Therefore,thelowredundantcurvelettransformismore suitableforbandlimitedsignals[ 57 ]. InChapter 5 ,wewillexploitandexaminethesparserepresentationso!eredby wavelets,boxsplineframelets,curveletsandsurfaceletsinourproposeddatareduction andreconstructionframework. 1 Seehttp://www.laurent-duval.eu/siva-wits-where-is-the-starlet.html. 70

PAGE 71

CHAPTER5 VOLUMEREDUCTIONINACOMPRESSEDSENSINGFRAMEWORK 5.1In-situDataReduction Thegrowingpowerofhardwarehasadvancedscienticsimulationandacquisition intotheeraofextreme-scale.Traditionalwayofsavingasmuchrawdataasthestorage capacityallowshasturnedI/Ointoaperformancebottleneck.Therefore,thein-situ ("inposition")dataprocessinghasbecomewidelyusedtoreducethedataoutputfor large-scalesimulations[ 62 64 ].Insteadofsimplydumpingtherawdataintostorage forpost-processing,thedataispre-processedinthesimulationstagewhilethedatais stillinmemory.Comparedtothewholesimulationpipeline,the"in-situ"pre-processing takesonlyasmallportionofthetime.Ifdonecarefully,compactrepresentationsthat encodekeyinformationoftherawdatacanbeobtainedduringthepre-processing.Itmay besu"cienttosavethesalientinformationinsteadoftheoriginalrawdata,leadingto considerablesavingsondatatransfertimeanddatastoragespace. Thesimplestwaystoreducedatasizein-situaresubsamplingandquantization[ 64 ]. Large-scalesimulationsusuallyrunthousandsoftimestepinaneresolution.However, becauseofI/Obottleneckandstorageconstraints,researchersmayselectivelysave,say every100thtimestep,orlowresolution(intermsofcoarsedataresolutionanddata precision)versionsoftheresults[ 108 ].Oneexampleofquantizationisscalarquantization, whichmapstherangeoforiginaldatavaluestoamuchcoarseronethusachievingahigh compressionrate.Notethatthemissingtimestepsmakethepost-precessingtasks,such asvisualizationofpathlinesofvectorelds,problematic[ 64 ].Thereforethehigh-cost simulationhastoreruntoprovidethedesireddata.Bothsubsamplingandquantization techniquesresultinawasteofcomputingtime. Awidelyuseddatareductiontechniqueistransform-basedcompression[ 47 107 ].The discretecosinetransformandthewavelettransformarethemostpopulartransform-based encodingandcompressionapproaches(e.g.,JPEG,JPEG2000andMPEG).The 71

PAGE 72

dataistransformedfromthecanonicaldomaintoanotherdomain,suchthatthe energyconcentratesonasmallpercentageoflargecoe"cients.Therefore,thesignal iseithersparse(onlyafewofthecoe" cientsarenon-zero)orcompressible(canbe well-approximatedbyafewlargecoe"cients)inthetransformdomain,yieldingacompact storage. Featureextractionisalsocommonlyusedforin-situdatareduction[ 64 ].Inafeature extractionprocess,afewimportant"features",areextractedfromthelarge-scaledata andthenonlytheextractedfeaturesarekeptforlateranalysis.Forexample,inthe applicationofowvisualization,afeaturecanbeboundariesofobjectsthatcanbefound byimageprocessingtechniquessuchasedgedetection;orcriticalpointsofavectoreld thatcanbeclassiedbyeigenanalysisofthevelocitygradienttensor;orregionswith vorticesthatcanbeidentiedbytechniquescombiningphysicalcharacteristicsanalysis andtopologicalanalysis[ 80 ].Otherfeatureextractionmethodsexploitingmachine learning[ 92 ],informationtheory[ 99 ],andtopologysimplication[ 44 ]arealsoproposed forparticularneeds.Comparedtothecompletedatasets,thefeaturesareusuallymuch smallerinsize. Intheexistingmethodologies,certainassumptionisalwaysimposedtothedata duringthedatareductionorfeatureextractionstage,suchasthedataisbandlimitedor canbesparselyrepresentedbyadictionary.Oncethefeaturesorcoe"cientsareextracted, recoveringtheoriginaldatamaynotbepossiblesincetheassumptioncanbeinaccurate ortheprocessisirreversible.Besides,scientistsdonotalwaysknowexactlywhichfeature domaintsthedatabestbeforethedatareductionprocess[ 64 ].Thiscancauseproblems for exploration applicationswherethedomainexpertsneedtosearchandrevisethe featuresofinterest.Forsuchexplorationapplications,thelossofinformationduringthe in-situdatareductionstagemeansthatthecostlysimulationshavetoberepeatedinorder tore-createtheoriginaldata.Thisneedtorepeatingsimulationscanbeeliminatedinthe compressedsensingframeworkdiscussedinthischapter. 72

PAGE 73

5.2MotivationandContributions Unlikethetraditionaldatareductionandcompressionmethodologieswherethesparse modelingisconsideredasapriori,thecompressedsensing(CS)frameworkonlyinvolves thefeaturedomaininthedecodingstage.TheadvantageoftheCSframeworkisthat atthepre-processingstagenopriorinformationisrequiredaboutthefeaturespace justtheassumptionthatthedataissparseinsomefeaturespace.Thismeansthatonce thedataissensedusingtheCSframework,wecancontinuetorenethedenitionof featuresthroughdictionarylearningorothersourcesofdomain-knowledgeafterthedata reductionstage.Thisexibilitymakesourapproachsuitableforin-situprocessing.With arenednotionoffeatures,oneonlyneedstore-runthereconstructiononthe"old" CSdatawithouthavingtoaccesstheoriginaldatafromthehigh-resolutionsimulation results.Thisforward-compatibility'oftheCSdatareductionallowsthedomain-experts tofurtherstudythesimulationstobetterdenethefeaturesofinterestinthesimulation data.Figure 5-1 depictstheproposedframeworkforin-situdataprocessing. Contributions .Ourprimaryinterestistoexploittheuniversalityofcompressed sensingforthein-situdatareductionoflarge-scalevolumetricdatasets.Althoughthe compressedsensinghasbeenwidelyappliedonimageandvideoprocessing,ithasnot beenadoptedonvolumetricthereal3-Ddata.Inthissense,weextendtheapplication ofcompressedsensingtotheeldofvolumetricdataprocessing.Todemonstratethe universalityofcompressedsensing,weconstructatightwaveletframeusingaseven directionboxsplineandderiveourown3-Dframeletdecomposition,whichisnever donebefore.Inourexperiments,wehavealsoappliedboththecurvelettransformand thesurfacelettransformonvolumetricdata.Thetraditionalusageofthe3-Dcurvelet transformandthe3-Dsurfacelettransformismostlywithvideodenoising.Toour bestknowledge,wearethersttoadoptandcomparethesetwotransformsforsparse approximationofvolumetricdata.Wearethersttoproposeusingsurfaceletsasa sparsereconstructiondomainalongwithanalysisapproximationmodelforthecompressed 73

PAGE 74

Universal sensing: y = Ax Storage I/O Bottleneck Sparse approximation: min || x|| s.t. Ax = y Feature domain refined by domain experts: X = W 0 x Acquisition/ Simulation Data Reduction Stage Data Recovery Stage y x R N y R n n << N ! x Analysis/ Visualization Figure5-1. Thesensingandrecoverypipeline.TheCSframeworkallowsustosensethe datawithoutpriorinformationofspecicfeaturedomainsduringthedata reductionstage.Domainexpertscanfurtherrenethedenitionoffeatures withouthavingtorepeatthesimulationprocess.Inthedatarecoverystage, thecompressivelysenseddatacanbereusedtoreconstructtheoriginaldata withindi! erentfeaturedomains. 74

PAGE 75

sensingin3-D.Ourstudiesandexperimentresultsmotivatefutureresearchonthestudy ofcustom-designedsparserepresentationsforlarge-scalevolumetricdata. 5.33-DBoxSplineFramelets Inthissection,wederiveourconstructionofa3-Dboxsplinetightwaveletframe (i.e.,boxsplineframelets)basedontheprocedurediscussedinSection 4.3 .The3-Dbox spline(Section 2.3 )usedinourconstructionisasevendirectionboxspline B 7 := B representedbythematrix: := # # # # $ 1001 1 11 010 11 11 001 1 111 % & & & & Thissevendirectiontri-variateboxsplinewasproposedbyEntezariandMoller[ 35 ]asa reconstructionkernelforreconstructingvolumetricdatasampledontheCartesianlattice. ThesevendirectionboxsplineisathreedimensionalextensionoftheZwart-Powell elementin2-Dandyieldsa C 2 reconstruction[ 35 ].Wenotethattheuseoftheseven directionboxsplineisbynomeanstheoptimalchoicefortheconstruction.When constructingaboxsplinewaveletframe,ourchoiceoftheboxsplineisexible. Themask P 7 oftheboxsplinefunction B 7 canbefoundeasilyfromEquation( 412 ). TouseTheorem 4.1 ,weneedtondtrigonometricpolynomials 4 P thatsatisfyEquation ( 415 ).However,itischallengingtosolveEquation( 415 )sincethenumberofaswell asthedegreesofthesetrigonometricpolynomials 4 P areunknown.Wedonothavean systematicapproachtondthetrigonometricpolynomialscurrently[ 43 71 ].Inour study,wefoundthesetrigonometricpolynomialsbybruteforce.Westartedwithasmall numberoflowdegreepolynomialsandsolvedthenonlinearsystemofequationswiththe helpoftheMATLABoptimizationtoolbox.Aswegraduallyincreasedthenumberand thedegreesofthesearchingpolynomials,wecouldeventuallysolvethesystemwithina 75

PAGE 76

desiredtolerancesuchthat m { 0 1 } d | 3 P m ( ) ) | 2 + N i =1 | 4 P i ( ) ) | 2 =1+ 0 wheretheerror | 0 | < 1 % 10 10 ,andthefoundtrigonometricpolynomialsare( i = & 1): 4 P 1 ( x,y,z )= 0 091958523720075+0 029540950360654e i ( x + z ) 0 055393944197397e 2 ix +0 029540950360654e i ( x y ) 0 033057924273848e i ( y z ) +0 349375516899988e i ( x + y ) 0 204706925587171e i ( x z ) 0 033057924273848e i (2 x + y + z ) 0 024106717973875e ix +0 033824537649418e iy 4 P 2 ( x,y,z )=0 144114271924141 0 022800386056568e i ( y +2 z ) 0 240088353438797e i ( x + y ) 0 011713963408866e i (2 x + y + z ) 0 051870635996870e 2 iy +0 139528094359981e i ( y + z ) + 0 042830963369529e i ( x +2 y ) 4 P 3 ( x,y,z )= 0 2494136662+0 046319872045671e 2 ix + 0 313794185682974e i ( y z ) 0 121306450446461e i ( x + y ) 0 016592264521335e i ( x z ) +0 029859165099415e 2 iy 0 002660839600869e i (2 x + y ) 4 P 4 ( x,y,z )= 0 210955562252439+0 031180905619549e ix + 0 185051897228716e i (2 x + y ) 0 005277235816680e iz 4 P 5 ( x,y,z )= 0 052619569700884 0 018244960614608e 2 ix + 0 262792175311200e i ( x + y + z ) 0 016498683733553e i ( y z ) 0 248640198359287e i ( x + y ) 0 013820599024194e i ( x z ) 0 204115361426441e i (2 x + y + z ) +0 075805738143829e 2 iy + 0 053525053883541e i ( y + z ) +0 161816407830973e i ( x +2 y ) 4 P 6 ( x,y,z )= 0 020979953063565+0 032344356043292e i ( y +2 z ) 0 068642822413993e i (2 x + y + z ) +0 057278418452243e ix 76

PAGE 77

4 P 7 ( x,y,z )= 0 063744765278902 0 121580250824103e i ( x + y + z ) 0 019596932659393e i ( y z ) 0 162993714780325e i ( x + y ) 0 046415962717723e i ( x z ) +0 227610168803249e i (2 x + y + z ) 0 012205916739895e 2 iy +0 045015715073069e i ( y + z ) 0 006102958369948e i ( x +2 y ) +0 160014609440697e iz Withthesetrigonometricpolynomials,wefollowtheconstructionschemepresented inTheorem 4.1 toobtain15highpasslters Q ( $ ) ( x,y,z )= 5 m,n,k q ( $ ) m,n,k e imx e iny e ikz andonelowpasslter P 7 ( x,y,z )= 5 m,n,k p m,n,k e imx e iny e ikz thatcanbeusedinour 3-Dboxsplineframeletdecomposition.Similartothediscretewaveletdecomposition, the3-Dboxsplineframeletdecompositionisachievedbysuccessivehighpassandlowpass lteringofthe3-Dsignal.Thelteredsignalisthensubsampledbydiscardingeveryother sample.Theprocedure,whichiscalledthesubbandcoding,canberepeatedforfurther decompositiononthelowpasslteredsignalsuchthatapyramiddecompositionstructure isobtained.Our3-Dboxsplineframeletdecompositionhasalowredundancyfactor around3 1 .AswepointedoutinSection 4.4 ,alowredundancyfactortendstokeepthe problemsizeinamanageablescaleacrucialconsiderationforvolumetricdataprocessing applications. 1 Inourconstructionoftheboxsplineframelets,weobtained16(15highpass+1 lowpass)ltersofsize15 3 .Thelteringprocesswasimplementedbyconvolvingthe signalwiththelters,followedbydownsampling.Foradatasetofsize x 3 ,aone-level decompositionleadstoaredundancyfactorof 16( x +15 1 2 ) 3 x 3 = 2( x +14) 3 x 3 A( n +1)-leveldecompositionleadstoaredundancyfactorof 2 5 i =0 ...n ( x +14 2 i ) 3 x 3 < 16( x +14) 3 7 x 3 77

PAGE 78

5.4ExperimentsandDiscussion Inthissectionweexamineourframeworkofvolumetriccompressedsensingand discussitsexibilityandadvantagesfordatareduction.Wedemonstratethatwithasmall setofrandomdiscretecosinetransform(DCT)measurements,wecanrecoveravolumetric datasetaccuratelyinashorttime.Ourexperimentresultssuggestthatthecombination oftheanalysissparseapproximationmodelandthesurfacelettransformisparticularly e "cientforsparseapproximationofvolumetricdatasets. Wealsocompareourvolumetriccompressedsensingframeworkwithcommonly useddatareductiontechniquessuchasrun-lengthencodinganddownsampling.We demonstratethattheperformanceofdatareductionandreconstructionaccuracyof theproposedframeworkiscomparabletooroftenbetterthanthoseoftherun-length encodinganddownsamplingapproaches.Inaddition,wedemonstratetheadvantages oftheuniversalsensingmechanism,a! ordedbytheCStheory,ofourdata-reduction framework. Themainthemeweinvestigateinourexperimentsisthenotionofuniversalityof sensing,thatis,withasparserrepresentationofthedata x ,wecanre-usethesenseddata y ,tofurtherrenethereconstructionshenceimprovingtheaccuracyanddelityof visualization.Unlikeotherdatareductiontechniques(e.g.,downsampling),theproposed frameworkprovidesaexiblesolutiontodatareductionofgenerickindsofvolumetric datasets. 5.4.1ImplementationDetails Forlargeimageandvolumetricdata,thematrix A (or # )isprohibitivelylarge anditcannotbeloadedinmainmemory.Instead,thematrix A canberepresented bystoringtheindicestotherandomwavemodesselectedformeasurement(i.e.,' containstheindicesoftherandomDCTrows).Therefore,intheimplementationofsparse approximationmethodsdiscussedinSection 4.2 ,wecanuseaproceduralformofmatrix A wherewecompute A anditstranspose,onthey,formatrix-vectormultiplications. 78

PAGE 79

Forexample,inourimplementationofthevolumetriccompressedsensinginEquation ( 41 ),thematrix-vectormultiplication(i.e., Ax )canbee"cientlyimplementedby applyingthe(3-D)DCTon x ,retainingtheDCTcoe"cientscorrespondingtothedesired measurementindices'.Moreover,whenwechoose # tobethewavelet(orframelet, curvelet,surfacelet)transformmatrix,tocomputetheproductof A # withavector,we cansimplyapplythewavelettransformandthentheDCTontheinputvector,andpick resultingelementscorrespondingtodesiredpositions(givenby').Sinceweusuallyhave fastalgorithmstoperformthesetransformsandthealgorithmsmayevenbenetfrom parallelcomputing,matrix-vectormultiplicationinthiscasecanbecomputede" ciently. Theapproachhasaverysmallmemoryfootprint(sizeof' ). TheacquisitionandreconstructionframeworksdescribedinSection 4.1 andSection 4.2 areimplementedinC++andMATLABforourexperimentsonvolumetricdatasets. Inourexperiments,wehaveusedFFTWlibrary 2 toperformtheparallelDCTtransform onthevolumetricdata.Duringtheacquisitionstep,afterapplyingDCTontheground truthdataset,wekeepasmallnumberofbase(DC)frequencies(lessthan0 5%ofthe coe"cients)andrandomlypicktherestofmeasurements,thusobtainameasurement vector y oflength n .Wedenethesamplingrateas $ := n/N ,thenumberofkept samples/measurementsoverthetotalnumberofelements/voxelsofadataset. Weexaminedthereconstructionprocessfortwocases:a)thefeaturesareinthe canonicaldomain(i.e.,thedatasetissparse)wherethereconstructionfollowsEquation ( 43 );andb)thefeaturesareinatransformdomainwherethereconstructionfollows Equation( 44 )orEquation( 45 ).Experimentsinvolvingsparsityintransformdomains arecarriedoutintheHaarwaveletdomain,theboxsplineframeletdomain,the curveletdomainandthesurfaceletdomain.WehaveimplementedtheHaarwavelet 2 Available:http://www.!tw.org. 79

PAGE 80

transforminane" cientandparallelmanner(usingOpenMP 3 ).Theboxsplineframelet decomposition/transformpresentedinSection 5.3 havebeenimplementedinMATLAB. WeadoptedtheCurveLabtoolbox 4 andtheSurfacelettoolbox 5 toperformthecurvelet transformandthesurfacelettransform,respectively.Wenotethattheboxsplineframelet, curveletandsurfacelettransformsimplementedatthisstagedonotruninparallel. Therefore,performingtheboxsplineframelettransformtothecurvelettransformorthe surfacelettransformtakessignicantlylongertimethanperformingthewavelettransform (Section 5.4.3 ). AsmentionedinSection 4.4 ,thefullversionofthecurvelettransformhasahigh redundancyfactorthatprohibitsitspracticalusage.Instead,inourexperimentswehave usedthelowredundantversionofthecurvelettransformthatuseswaveletsatthenest levelsuchthatwecankeeptheproblemsizeinamanageablescale. ThebasispursuitsolverNESTA 6 ,whichsolvestheproblemoftheformEquation ( 47 ),ischosenforsparseapproximationinourexperiments.WechoosetouseNESTA becauseitsolvesboththesynthesisproblemandtheanalysisproblem,anditrelieson veryfewparameters.Itisarst-ordermethodandusesasmoothingtechniquethat replacesthenon-smooth & 1 normwithasmoothterm[ 5 ].Thesmoothedversionofthe & 1 norminvolvesasmoothnessparameter .Ingeneral, shouldbesmallforhighaccuracy orlargeforfasterperformance;when =0,nosmoothingisapplied.Asuitablevalue of balancesthetrade-o!betweenrecoveryaccuracyandtime,whichcanbechosen byperformingsometrialexperiments.Thestoppingcriterioniscontrolledbyasingle 3 Available:http://www.openmp.org. 4 Available:http://www.curvelet.org/software.html. 5 Available:http://www.mathworks.com/matlabcentral/leexchange/14485. 6 Available:http://www-stat.stanford.edu/candes/nesta/nesta.html. 80

The stopping criterion is controlled by a single parameter ε, the ℓ2 error bound as in Equation (4-7), which enforces the fidelity of the solution. The values of μ and ε are specified in the following sections.

We also compare the proposed compressed sensing (CS) framework with run-length encoding (RLE) and downsampling (DS), both of which are implemented in MATLAB. In the case of downsampling, we reduced the sampling rate by a factor of two in each dimension, resulting in a downsampling rate of ρ = 12.5%. We used linear as well as cubic splines as the filters for downsampling, denoted by "DS-linear" and "DS-cubic", respectively, in our experiments. The effective sampling rate of RLE depends on the dataset and is documented explicitly for each study.

Our experiments are carried out on a workstation with an AMD Opteron(TM) Processor 6274 (2.20 GHz) CPU and 16 GB of main memory. The maximum number of parallel CPU threads allowed in our experiments is 32. The accuracy of reconstruction is evaluated numerically in terms of the Signal to Noise Ratio (SNR), which is measured on a logarithmic scale (dB) over the entire volumetric dataset. The datasets examined in our experiments include the Fuel, the Aneurysm, the Hydrogen, the Supernova and the Head Aneurysm⁷. The Supernova datasets have been examined at multiple time steps. We also provide volume rendering images of both the ground truth and the recovered data for visual comparison.

⁷ The Fuel, Aneurysm and Hydrogen datasets are obtained from http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/datasets.html; the Supernova datasets are obtained from http://vis.cs.ucdavis.edu/VisFiles/index.php; the Head Aneurysm dataset is obtained from http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/new.html.
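For reference, the sketch below spells out one standard SNR definition in dB, the factor-of-two downsampling baselines (DS-linear / DS-cubic), and one simple way to account for the equivalent sampling rate of a run-length encoding. This is an illustrative Python fragment with our own helper names; the experiments themselves use MATLAB implementations, the dissertation's SNR normalization may differ, and the exact RLE bookkeeping used for the reported rates may differ from the (value, run length) accounting assumed here.

```python
import numpy as np
from scipy.ndimage import zoom

def snr_db(reference, approximation):
    """Signal-to-noise ratio in dB over the whole volume (one common definition)."""
    err = reference - approximation
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(err**2))

def downsample_baseline(volume, order):
    """DS baseline: keep every other voxel in each dimension (rho = 12.5%),
    then interpolate back; order=1 is 'DS-linear', order=3 is 'DS-cubic'."""
    low = volume[::2, ::2, ::2]
    up = zoom(low, 2, order=order)
    return up[:volume.shape[0], :volume.shape[1], :volume.shape[2]]

def rle_sampling_rate(volume):
    """Equivalent sampling rate of a simple run-length encoding that stores a
    (value, run length) pair per constant run of the flattened volume."""
    v = volume.ravel()
    runs = 1 + int(np.count_nonzero(np.diff(v)))
    return 2.0 * runs / v.size
```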

5.4.2 Sparse Datasets

5.4.2.1 Fuel

The Fuel dataset has a resolution of 64 × 64 × 64. It is a sparse dataset, as only 5% of its voxels are non-zero. RLE of the Fuel dataset yields a sampling rate of ρ = 10.24%. Since the locations of the non-zero voxels are unknown at the acquisition stage, we randomly sample the frequency space as in Equation (4-1) and experiment with different sampling rates. In the sparse approximation stage, we set the solver parameter ε = 1 × 10⁻⁸ (i.e., the error estimate) and experiment with μ = 1 × 10⁻², μ = 5 × 10⁻³, and μ = 1 × 10⁻³ to examine the effect of the smoothness parameter μ.

Figure 5-2 shows the accuracy of the sparse approximation as well as the time used for recovery of the Fuel dataset. We see improvements in the approximation as the sampling rate ρ increases. However, the improvement becomes less and less significant. This indicates that once sufficient measurements are provided, compressed sensing guarantees accurate reconstruction and requires no more measurements, which meets the theoretical expectation on the measurement bound. We can observe that the sparse approximation can provide highly accurate recovery with sampling rates less than ρ = 10%. The smoothness parameter of the solver also affects the reconstruction process. As we pointed out in the last section, smaller μ values tend to produce more accurate results while larger μ values tend to give faster performance. The recovery time decreases as the number of measurements increases because more measurements tend to yield faster convergence of the solver, which therefore requires fewer iterations to reach the stopping criterion. The sparse approximation of the Fuel dataset takes a few seconds in general.

Figure 5-3 shows the volume rendering images, as well as the SNRs, of the ground truth Fuel dataset and the data recovered from CS measurements and from the downsampled dataset. We see that with the same sampling rate as RLE, the sparse approximation (μ = 5 × 10⁻³) returns an almost exact reconstruction with an SNR of 61.35 dB. In contrast, downsampling does not work well here. The linear and cubic interpolations of the downsampled Fuel dataset yield SNRs of 17.09 dB and 19.13 dB, respectively, and the artifacts of interpolation are obviously seen in the inside structures of the reconstructed datasets. Interpolation also fails to recover some parts (e.g., the isolated green blobs on the left side) of the Fuel dataset.
Figure 5-2. Sparse approximation of the Fuel dataset. A) Reconstruction accuracy (SNR in dB versus sampling rate, for μ = 1 × 10⁻³, 5 × 10⁻³ and 1 × 10⁻²). B) Reconstruction time (seconds versus sampling rate, for the same μ values).
Figure 5-3. Reconstruction of the Fuel dataset. A) Ground truth; the RLE of the ground truth dataset yields a sampling rate of ρ = 10.24%. B) CS reconstruction (μ = 5 × 10⁻³) from the sampling rate of ρ = 10.24%, SNR = 61.35 dB. C) Linear interpolation of the downsampled dataset (ρ = 12.5%), SNR = 17.09 dB. D) Cubic interpolation of the downsampled dataset (ρ = 12.5%), SNR = 19.13 dB.

5.4.2.2 Aneurysm

Another sparse dataset we have experimented on is the Aneurysm dataset. The Aneurysm dataset has been sampled at a resolution of 256 × 256 × 256, of which only 1% of the data elements are non-zero. RLE of the Aneurysm dataset yields a sampling rate of ρ = 2.41%. Figure 5-4 shows the accuracy of the sparse approximation with different sampling rates, as well as the time used for recovery of the Aneurysm dataset, when ε = 1 × 10⁻⁸ and
μ = 5 × 10⁻³. We can see that increasing the sampling rate improves the reconstruction accuracy; however, the improvements become less significant at higher sampling rates. With a sampling rate as low as ρ = 2.5%, the sparse approximation achieves highly accurate reconstruction with an SNR of 42.27 dB. As the number of measurements increases, the reconstruction time decreases due to faster convergence of the solver. When the sampling rate increases to ρ = 12.5% (not shown in the plot), the sparse approximation can yield an SNR of 78.32 dB in less than 2 minutes.

Figure 5-4. Sparse approximation accuracy (red, SNR in dB) and timing (blue, seconds) of the Aneurysm dataset, plotted against the sampling rate.

Figure 5-5 shows the volume rendering images, as well as the SNRs, of the ground truth Aneurysm data and the data recovered from CS measurements and from the downsampled dataset of size 128 × 128 × 128. With the same sampling rate as RLE, the sparse approximation is accurate (SNR = 36.35 dB). In contrast, downsampling yields a much higher sampling rate, and interpolating the downsampled data leads to poor recovery results. For example, some vessels are missing (e.g., the dotted areas) in the images of the interpolated datasets.

From our experiments on the Fuel and Aneurysm datasets, we can observe that RLE is extremely efficient for data compression of truly sparse, noise-free datasets. However, RLE is only suitable for datasets with large homogeneous areas and fails to compress data
with complex structures and is susceptible to noise. Meanwhile, the proposed framework provides a universal sampling process for various kinds of datasets. With a small number of compressive measurements, we can accurately recover the dataset.

Figure 5-5. Reconstruction of the Aneurysm dataset. A) Ground truth; the RLE of the ground truth dataset yields a sampling rate of ρ = 2.41%. B) CS reconstruction from the sampling rate of ρ = 2.41%, SNR = 36.35 dB. Downsampling (ρ = 12.5%) with C) linear (SNR = 9.58 dB) and D) cubic (SNR = 9.93 dB) filters. The dotted boxes are used for highlighting purposes.

5.4.3 Datasets Sparse in Transform Domains

In this section, we experiment with datasets that are non-sparse in the canonical domain but sparse or compressible in a certain transform domain, using the approach discussed in Section 4.1.2. Since the voxel values of the dataset vary, RLE fails to
compress the dataset (ρ ≈ 200%), and it is not a suitable choice for data reduction here. We examine the wavelet domain as the common choice for sparse representation. Moreover, to demonstrate the universality of the sensing process, we examine the reconstruction in other transform domains and establish their utility for more accurate reconstruction from the same measurements that were provided for the wavelet framework. To that end, we examine the box spline framelet domain, the curvelet domain and the surfacelet domain for sparse approximation, and demonstrate that the flexibility offered by surfacelets allows for sparser representations of the data that lead to more accurate sparse approximation.

As we have discussed in Section 4.1.2, both the synthesis model, Equation (4-4), and the analysis model, Equation (4-5), can be used for sparse approximation when a transform domain is involved. The solver we used, NESTA, is able to solve both the synthesis problem and the analysis problem. Solving the analysis problem requires a smaller number of matrix-vector multiplications than solving the synthesis problem with the NESTA algorithm. We choose to solve the analysis problem when the wavelet transform domain is involved since, in this case, the synthesis model and the analysis model are equivalent while solving the analysis problem is faster.

5.4.3.1 Effectiveness of Sparse Representations

The effectiveness of the sparse representation offered by a certain sparsifying transform is usually measured by the rate of decay of the approximation error [27, 65]. Essentially, if a signal is approximated using the best (largest) M-term transform coefficients, the rate of decay of the approximation error measures how fast the approximation error (in an ℓ2-norm sense) decays as M increases. A rigorous analysis of the rate of decay of the approximation error is a challenging problem, and it is usually done by imposing certain assumptions on the underlying signal, such as the signal representing a C² continuous function [30].

The effectiveness of wavelets, box spline framelets, curvelets, and surfacelets for sparse modeling is compared and plotted in Figure 5-6, Figure 5-7 and Figure 5-8, where the transforms are applied to the Fuel dataset, the Hydrogen dataset and the Supernova dataset, respectively. To consistently use our metric (i.e., SNR) for reconstruction, we plot the approximation accuracy (SNR) against the percentage (η) of the best M-term transform coefficients used to approximate the signal, which essentially conveys the same idea as the rate of decay of the approximation error. Note that, to compensate for the redundancy of different transforms and perform a fair comparison, the approximation accuracy is computed against the percentage of the coefficients instead of the number of the coefficients [30]. Exact approximation (up to a numerical error) is achieved when η = 1.
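The best M-term accuracy plotted in Figures 5-6 through 5-8 can be computed with a short routine like the one below. It is an illustrative Python sketch (not the MATLAB code used to generate the figures): `forward` and `inverse` stand for any of the analysis/synthesis transform pairs (wavelet, box spline framelet, curvelet or surfacelet), and η is the kept fraction of the largest-magnitude coefficients, which compensates for the different redundancies of the transforms.

```python
import numpy as np

def best_m_term_snr(volume, forward, inverse, eta):
    """SNR (dB) of the approximation obtained by keeping only the fraction
    `eta` of the largest-magnitude transform coefficients; all other
    coefficients are set to zero before the inverse transform.
    Assumes `inverse` returns an array with the shape of `volume`."""
    coeffs = forward(volume).ravel()
    keep = max(1, int(eta * coeffs.size))
    threshold = np.sort(np.abs(coeffs))[-keep]          # keep-th largest magnitude
    truncated = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    approx = inverse(truncated)
    err = volume - approx
    return 10.0 * np.log10(np.sum(volume**2) / np.sum(err**2))
```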

Figure 5-6. Approximation accuracy of the Fuel dataset from the best M-term transform coefficients. The best M-term approximation accuracy offered by wavelets (red), box spline framelets (yellow), curvelets (blue), and surfacelets (green) increases as the percentage of the largest coefficients used to approximate the dataset increases.

From Figure 5-6, Figure 5-7 and Figure 5-8 we see that the approximation accuracy increases as the percentage of the largest coefficients used to approximate the dataset increases. From Figure 5-7 and Figure 5-8, we see that for an extremely low percentage (e.g., η < 5%), the coefficients from the surfacelet transform represent the underlying dataset best among these transforms.
Figure 5-7. Approximation accuracy of the Hydrogen dataset from the best M-term transform coefficients.

Figure 5-8. Approximation accuracy of the Supernova dataset (time step T1345) from the best M-term transform coefficients.
This gives us a hint that when the CS sampling rate ρ is small, reconstruction with surfacelets may be a better choice and may yield more accurate sparse approximation for the Hydrogen and Supernova datasets. As we bring in more coefficients (i.e., increasing η), the wavelet and framelet representations outperform the surfacelet and curvelet representations, which indicates that as the CS sampling rate ρ increases, wavelets and framelets may also provide accurate, or even better, sparse approximation than surfacelets and curvelets for the Hydrogen and Supernova datasets. In contrast, for the Fuel dataset (Figure 5-6), wavelets and framelets outperform surfacelets and curvelets in the best M-term approximation, which suggests that sparse approximation using wavelets/framelets may be more accurate than sparse approximation using surfacelets/curvelets. We also observe from these plots that the approximation accuracy of surfacelets is consistently better than that of curvelets. Therefore, we believe that using surfacelets for sparse approximation will consistently yield more accurate recovery than using curvelets.

5.4.3.2 Fuel: Revisited

Before we examine the non-sparse datasets (e.g., Hydrogen and Supernova), let us perform sparse approximation on the Fuel dataset again. Unlike what we have done in Section 5.4.2.1, where the sparse approximation experiments were carried out in the canonical domain, we now consider the sparse approximation in transform domains. Since we know that the Fuel dataset is a sparse dataset, in practice, simply performing sparse approximation in the canonical domain may be the best choice. The purpose of these experiments on the Fuel dataset is to verify the effectiveness of different sparse representations, as suggested in Figure 5-6.

Figure 5-9 compares the sparse approximation accuracy of the Fuel dataset in the wavelet domain, the box spline framelet domain, the curvelet domain and the surfacelet domain. For each domain (except the wavelet domain), we have experimented with both the synthesis model (fram-synthesis, curv-synthesis and surf-synthesis) and the analysis model (fram-analysis, curv-analysis and surf-analysis). We can see from Figure 5-9 that
the approximation accuracy improves as the sampling rate increases. Wavelets and framelets outperform curvelets and surfacelets when used in the sparse approximation of the Fuel dataset. When the sampling rate is less than 30%, box spline framelets yield the highest SNR among these four domains. Wavelets' performance is the best when ρ ≥ 30%. In all cases, curvelets generate the least competitive results. Based on the comparison between the analysis model (solid lines) and the synthesis model (dotted lines) shown in Figure 5-9, it is not easy to conclude which one performs better for the Fuel dataset. In general, the experiment results plotted in Figure 5-9 fit well with the implication of Figure 5-6.

Figure 5-9. Sparse approximation of the Fuel dataset in the wavelet domain (red), the box spline framelet domain (yellow), the curvelet domain (blue) and the surfacelet domain (green).

We note that the sparse approximation in the canonical domain (Section 5.4.2) can be considered as the sparse approximation using an identity transform. Comparing Figure 5-6 with Figure 5-2(A), we can see that the identity transform performs much better than the other sparsifying transforms, which is reasonable since the identity transform represents the Fuel dataset in the sparsest way among these transforms.
Our experiments on the Fuel dataset show that, given the same measurements, we can improve the sparse approximation by choosing a domain that can represent the signal in a sparser way. In the following sections, we examine the effectiveness of using different sparsifying transforms for the sparse approximation of non-sparse volumetric datasets in terms of recovery accuracy and time.

5.4.3.3 Hydrogen

The ground truth of the Hydrogen dataset has a resolution of 128 × 128 × 128 (Figure 5-10). Figure 5-11 plots the sparse approximation results of the Hydrogen dataset exploiting the wavelet domain, the box spline framelet domain, the curvelet domain and the surfacelet domain for different sampling rates. For the box spline framelet, the curvelet and the surfacelet domains, we have also experimented with both the synthesis model (fram-synthesis, curv-synthesis and surf-synthesis) and the analysis model (fram-analysis, curv-analysis and surf-analysis) and compared the results. The results shown in Figure 5-11 are obtained from NESTA with ε = 1 × 10⁻⁸ and μ = 1 × 10⁻². From the plot, we can see that as the sampling rate increases, the reconstruction accuracy improves. We observe that, for various ranges of sampling rates, changing the sparsifying domain can improve the sparse approximation performance. Surfacelets yield the most accurate reconstruction among the domains used. We also observe that the analysis model consistently outperforms the synthesis model across varying sampling rates for the sparse approximation using framelets and surfacelets. The performance difference between the analysis model and the synthesis model when using framelets is significant (as large as 8 dB). In contrast, for the sparse approximation using curvelets, the synthesis model yields higher SNRs than the analysis model does.

Figure 5-12 shows the volume rendering images of the sparse approximation datasets for the case of ρ = 12.5%, as well as the datasets reconstructed from interpolation of the downsampled dataset. We see that downsampling works well for the Hydrogen dataset. The sparse approximation using wavelets does not return an accurate reconstruction.
Using box spline framelets or curvelets, the accuracy of the sparse approximation improves. When choosing surfacelets for the sparse approximation, highly accurate recoveries are achieved. We also see that the sparse approximation tends to smooth the dataset, because the sparse approximation recovers the most significant (usually low frequency) coefficients while leaving other coefficients (usually high frequency) as zeros.

Figure 5-10. Rendering of the ground truth Hydrogen dataset and the ground truth Supernova dataset. A) Hydrogen. B) Supernova (time step T1345).

The average time used for one case of the sparse approximation shown in Figure 5-11 is 1.5 minutes for wavelet, 6.0 minutes for surf-analysis, 42.8 minutes for surf-synthesis, 22.8 minutes for curv-analysis, 5.23 hours for curv-synthesis, 41.0 minutes for fram-analysis and 4.47 hours for fram-synthesis. Reconstructions using box spline framelets, curvelets and surfacelets are slower than those using wavelets. This is mainly caused by the slowness of the non-parallel implementation. For example, performing a pair of forward and backward wavelet, surfacelet, curvelet and framelet transforms on the Hydrogen dataset requires about 0.06 seconds, 1.6 seconds, 10 seconds and 24 seconds, respectively. The increased problem size due to the redundant decomposition of the three transforms also slows down the reconstructions.

The synthesis problem requires much more time to solve than the analysis problem does. The reason is that solving the synthesis problem requires significantly more iterations
than solving the analysis problem (i.e., the synthesis model converges slowly). In addition, for the solver used, each iteration of solving the synthesis problem requires about twice the number of matrix-vector multiplications of each iteration of solving the analysis problem. Considering the computational efficiency of the analysis model, we employ the analysis approach instead of the synthesis counterpart for the rest of our experiments.

Figure 5-11. Sparse approximation of the Hydrogen dataset in the wavelet domain (red), the box spline framelet domain (yellow), the curvelet domain (blue) and the surfacelet domain (green).

5.4.3.4 Supernova

The Supernova datasets include 60 time steps (ranging from T1295 to T1354). Each time step of the ground truth data has a resolution of 216 × 216 × 216 (Figure 5-10). Table 5-1 presents the results (SNR and timing) of the sparse approximation (ε = 1 × 10⁻⁸ and μ = 5 × 10⁻⁴) of the Supernova datasets at time steps T1305, T1315, T1325, T1335 and T1345. Figure 5-13 shows volume rendering images of the sparse approximated datasets at time step T1345 when the sampling rate is ρ = 12.5%. The interpolation results from the downsampled dataset are also shown in Figure 5-13.
Figure 5-12. Sparse approximation and interpolation of the Hydrogen dataset with sampling rate ρ = 12.5%. A) Interpolation with linear filter (SNR = 29.78 dB). B) Interpolation with cubic filter (SNR = 30.43 dB). C) Sparse approximation using wavelets (SNR = 17.02 dB). Sparse approximation using box spline framelets with D) synthesis model (SNR = 22.47 dB) and E) analysis model (SNR = 30.42 dB). Sparse approximation using curvelets with F) synthesis model (SNR = 29.70 dB) and G) analysis model (SNR = 28.64 dB). Sparse approximation using surfacelets with H) synthesis model (SNR = 32.15 dB) and I) analysis model (SNR = 33.73 dB).
Figure 5-13. Sparse approximation and interpolation of the Supernova dataset (ρ = 12.5%). Sparse approximation using A) wavelets (SNR = 21.75 dB), B) box spline framelets (SNR = 23.00 dB), C) curvelets (SNR = 22.88 dB) and D) surfacelets (SNR = 26.58 dB). Interpolation using E) linear filter (SNR = 25.07 dB) and F) cubic filter (SNR = 25.64 dB).
Table 5-1. The accuracy (SNR) and average recovery time (minutes) of the sparse approximation (μ = 5 × 10⁻⁴) of the Supernova datasets (time steps 1305, 1315, 1325, 1335, 1345) using wavelets (W.), box spline framelets (F.), curvelets (C.) and surfacelets (S.).

ρ          5%     10%    12.5%  15%    20%    25%    30%    35%    40%    Time
1305  W.   18.16  21.27  22.61  24.05  26.67  29.20  31.51  33.59  35.48  ~3.1
      F.   19.77  21.69  22.61  23.44  24.95  26.30  27.56  28.74  29.90  54.5
      C.   19.63  21.09  21.79  22.42  23.61  24.69  25.75  26.77  27.75  58.5
      S.   22.76  25.11  26.08  26.98  28.69  30.30  31.84  33.32  34.73  23.5
1315  W.   18.22  21.21  22.45  23.80  26.40  28.71  30.95  33.02  34.93  ~3.1
      F.   20.07  21.93  22.79  23.58  25.03  26.33  27.57  28.73  29.86  54.0
      C.   19.97  21.45  22.12  22.74  23.87  24.91  25.91  26.89  27.84  58.6
      S.   22.97  25.14  26.01  26.85  28.47  29.99  31.47  32.90  34.28  23.5
1325  W.   18.70  21.40  22.56  23.85  26.26  28.53  30.67  32.71  34.59  ~3.0
      F.   20.80  22.39  23.19  23.97  25.38  26.63  27.80  28.92  30.02  54.1
      C.   20.81  22.19  22.85  23.46  24.61  25.67  26.66  27.63  28.58  57.9
      S.   23.75  26.10  27.05  27.92  29.52  30.99  32.40  33.76  35.06  23.4
1335  W.   18.05  20.86  22.01  23.28  25.78  28.08  30.22  32.26  34.20  ~3.1
      F.   20.26  21.93  22.75  23.51  24.95  26.27  27.48  28.63  29.74  53.7
      C.   20.31  21.75  22.41  23.03  24.18  25.25  26.28  27.26  28.24  58.5
      S.   23.35  25.60  26.53  27.37  28.95  30.42  31.83  33.17  34.48  22.7
1345  W.   18.27  20.69  21.75  22.94  25.14  27.23  29.22  31.10  32.97  ~3.1
      F.   20.68  22.26  23.00  23.71  25.07  26.32  27.51  28.65  29.72  54.3
      C.   20.81  22.23  22.88  23.47  24.56  25.55  26.53  27.47  28.38  58.6
      S.   23.62  25.74  26.58  27.35  28.79  30.16  31.52  32.84  34.14  23.9

We see from the table that in most cases the sparse approximation with surfacelets yields the highest SNR. Sparse approximation with box spline framelets usually has a slightly higher SNR than sparse approximation with curvelets. Unlike the case of the Hydrogen dataset, for the Supernova datasets the performance of framelets and curvelets for sparse approximation is not necessarily better than that of wavelets. We also see that the SNR difference between wavelets and surfacelets for sparse approximation drops as the number of measurements increases. For the Supernova datasets at T1305 and T1315, sparse approximation using wavelets even yields higher SNRs than sparse approximation using surfacelets when the sampling rates are above 35%. This indicates that as long as a sufficient number of measurements is provided, sparse approximation using wavelets can also return accurate reconstructions. The reconstruction accuracy of the
sparse approximation may be improved by setting smaller μ values, but with increasing reconstruction time. Sparse approximation using framelets, curvelets or surfacelets takes much more time than sparse approximation using wavelets, which is caused by the slowness of performing these three transforms.

From the images shown in Figure 5-13, we can observe an overall accuracy of the sparse approximation with these four transform domains. However, obvious artifacts exist in the dataset recovered with wavelets, and the inside red structures are missed in the datasets recovered with box spline framelets or curvelets, while the reconstruction using surfacelets generates a clearer and more accurate result. The sparse approximation using wavelets, framelets, or curvelets does not perform better than interpolation from the downsampled dataset. However, by exploiting another transform domain (e.g., the surfacelet transform domain), we can improve the reconstruction using the existing measurements and achieve higher recovery accuracy than the interpolation methods.

Our experiments on Hydrogen and Supernova show that it is possible to improve the sparse approximation by exploiting different sparsifying domains using the same measurements. Compared to RLE and downsampling, the compressed sensing framework provides us with a universal sampling mechanism. Our results suggest the use of the analysis sparse approximation model with surfacelets for 3-D compressed sensing.

The time needed for reconstruction with framelets, curvelets or surfacelets can be greatly reduced as long as efficient implementations of these transforms are available. These transforms are usually implemented through directional filter banks that involve performing the Fourier transform many times. The Fourier transform and the processing of individual subbands can run in parallel. Therefore these transforms, similar to wavelets, lend themselves to parallelization, a key feature for practical applications.

5.4.4 Noisy Measurements

The results shown in the previous two sections are for the case of noiseless measurements. We now study the case where the ground truth dataset is contaminated by white Gaussian
noise. Therefore, the CS framework has measurements

y = A(x + u) = Ax + Au = Ax + z,     (5-1)

where u is a zero-mean additive white Gaussian noise vector with covariance σ²I. We note that z in Equation (5-1) has the same distribution as u, since we assume white noise and AAᵀ = I.

We have experimented on the Hydrogen dataset with varying noise levels characterized by the standard deviation σ. The white noise is first added to the ground truth dataset, and then the compressed sensing or downsampling is performed on the contaminated dataset. The accuracy of reconstruction is measured by comparing the recovered/interpolated dataset to the ground truth (noiseless) dataset. For the sparse approximation solver NESTA, we set μ = 1 × 10⁻² and ε = σ√(n + 2√(2n)), which is a common heuristic suggested by the solver usage guide.
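A short sketch of this noisy acquisition step, under the assumptions above and reusing the illustrative operator A from the earlier sketch (so that AAᵀ = I): the noise is added to the volume before sensing, and the ℓ2 error bound ε for the solver is set by the heuristic quoted above. The helper names are ours.

```python
import numpy as np

def noisy_measurements(A, x, sigma, rng):
    """Equation (5-1): y = A(x + u) = Ax + z, with u zero-mean white Gaussian
    noise of standard deviation sigma (z is distributed like u when A A^T = I)."""
    u = sigma * rng.standard_normal(x.shape)
    return A(x + u)

def epsilon_heuristic(sigma, n):
    """l2 error bound for the solver: eps = sigma * sqrt(n + 2*sqrt(2*n))."""
    return sigma * np.sqrt(n + 2.0 * np.sqrt(2.0 * n))

rng = np.random.default_rng(1)
# With A and x from the earlier acquisition sketch:
#   y = noisy_measurements(A, x, sigma=0.5, rng=rng)
#   eps = epsilon_heuristic(0.5, y.size)
```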

Table 5-2 reports the accuracy (SNR) of the sparse approximation (analysis model) of the Hydrogen dataset for different sampling rates. We also study the effect of the additive noise on the recovery time. The average reconstruction time (in minutes) spent on the dataset at each additive noise level is shown in Table 5-2 as well.

Table 5-2. The sparse approximation of the Hydrogen dataset from noisy measurements using wavelets (W.), box spline framelets (F.), curvelets (C.) and surfacelets (S.). The average reconstruction time (in minutes) increases along with the noise level.

ρ            5%     10%    12.5%  15%    20%    25%    30%    35%    40%    Time
σ = 0.5  W.  14.24  15.96  16.28  17.34  18.81  19.90  20.69  21.83  22.81  ~1.5
         F.  25.15  26.62  27.04  27.44  27.94  28.31  28.52  28.66  28.79  51.0
         C.  25.73  25.93  26.02  26.12  26.30  26.50  26.64  26.79  26.90  25.4
         S.  28.91  30.45  30.86  31.11  31.47  31.75  31.93  32.08  32.21  10.9
σ = 1    W.  13.89  15.52  15.78  16.70  17.94  18.88  19.55  20.37  20.99  ~1.7
         F.  23.33  24.31  24.55  24.77  25.01  25.14  25.17  25.14  25.11  55.1
         C.  25.11  24.85  24.73  24.64  24.48  24.37  24.26  24.16  24.08  30.2
         S.  27.21  28.51  28.88  29.16  29.59  29.92  30.14  30.30  30.44  13.3
σ = 2    W.  13.35  14.76  14.95  15.65  16.65  17.43  17.91  18.55  18.90  ~2.1
         F.  21.02  21.53  21.57  21.60  21.55  21.43  21.28  21.10  20.93  63.2
         C.  23.83  23.14  22.89  22.70  22.37  22.12  21.90  21.71  21.54  38.3
         S.  26.05  26.68  26.86  27.00  27.30  27.56  27.75  27.84  27.94  17.1
σ = 3    W.  12.97  14.22  14.37  14.94  15.76  16.43  16.81  17.35  17.60  ~2.4
         F.  19.49  19.65  19.56  19.48  19.26  19.03  18.79  18.55  18.32  71.2
         C.  22.80  21.97  21.68  21.45  21.06  20.75  20.49  20.26  20.05  43.3
         S.  25.53  25.81  25.87  25.92  26.05  26.15  26.23  26.24  26.27  21.0
σ = 4    W.  12.71  13.83  13.94  14.44  15.12  15.67  16.01  16.49  16.67  ~2.6
         F.  18.29  18.18  18.01  17.87  17.55  17.24  16.96  16.68  16.42  78.7
         C.  22.01  21.09  20.77  20.51  20.07  19.71  19.40  19.13  18.89  48.4
         S.  25.12  25.25  25.23  25.22  25.20  25.19  25.18  25.09  25.05  23.2
σ = 5    W.  12.54  13.51  13.62  14.06  14.63  15.10  15.39  15.83  15.95  ~2.8
         F.  17.27  16.98  16.75  16.57  16.18  15.83  15.51  15.20  14.93  85.4
         C.  21.36  20.36  20.01  19.72  19.22  18.82  18.48  18.18  17.91  51.2
         S.  24.73  24.78  24.71  24.64  24.55  24.44  24.35  24.19  24.10  25.2

From Table 5-2 we see that as the noise level increases, the reconstruction accuracy drops and the reconstruction time increases. When the noise level increases, more measurements (i.e., increasing ρ) may not help to improve the sparse approximation, since the extra measurements do not provide reliable data information for the reconstruction. For all cases, the sparse approximation with surfacelets yields a much more accurate recovery than the sparse approximation with wavelets, framelets or curvelets.

Figure 5-14 shows the comparison between the sparse approximation (ρ = 12.5%) and the cubic interpolation of the downsampled dataset. The linear interpolation of the downsampled dataset, not shown in the plot, yields very similar SNRs to the cubic interpolation. The performance of both the sparse approximation and the interpolation drops as the standard deviation of the additive noise increases. However, compared to the interpolation, reconstruction from compressed sensing is much more resilient to noise, thanks to the denoising capability of the sparse approximation methods. We note that for interpolation methods to achieve a reliable reconstruction in a noisy setting, extra denoising steps are required.

Figure 5-14. Sparse approximation (ρ = 12.5%) vs. cubic interpolation of the Hydrogen dataset from noisy samples (SNR in dB against the noise standard deviation; curves for CS-wavelet, CS-framelet, CS-curvelet, CS-surfacelet and DS-cubic).

5.4.5 Large-scale Volumetric Datasets

As the size of datasets grows, applying compressed sensing and then sparse approximation on the whole dataset may not be a viable choice due to increasing reconstruction time. A tiling strategy, as adopted in the JPEG 2000 standard, may be used. For a large-scale volumetric dataset, we can partition the dataset into non-overlapping
blocks of the same size. For each block (i.e., a sub-volume), we can obtain the compressive measurements separately. At the reconstruction stage, the sub-volumes can be recovered in parallel, since the sparse approximation of each sub-volume is independent. In this section, we demonstrate that applying such a strategy to the proposed framework is feasible. More sophisticated partitioning strategies, such as one with overlapping areas, may be applied too, but we leave this as future work.

We consider the Head Aneurysm dataset of size 512 × 512 × 512. The volume is divided into eight sub-volumes, and each sub-volume has a resolution of 256 × 256 × 256. For each sub-volume, we maintain a CS sampling rate of ρ = 12.5%, such that we also have an overall sampling rate of ρ = 12.5% for the whole (combined) volume. The sparse approximation of each sub-volume is done in parallel. In our experiments, all (sub-)sparse approximation processes have the same parameter setting: ε = 1 × 10⁻⁸ and μ = 5 × 10⁻³. The eight (sub-)sparse approximation processes run simultaneously, and each process is allowed to fork at most eight parallel CPU threads. Note that since the dataset is a sparse dataset, no sparsifying transform is applied for the reconstruction.
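A minimal sketch of this non-overlapping tiling strategy (illustrative Python with our own helper names; the per-block sensing and recovery reuse the same kind of operators and solver as before): the volume is split into equal blocks, each block is sensed and recovered independently, and the recovered blocks are pasted back together, so the per-block jobs can run in parallel, for example with a process pool.

```python
import numpy as np
from itertools import product

def split_blocks(volume, block):
    """Partition a volume into non-overlapping blocks of shape `block`
    (assumes each dimension is divisible by the block size)."""
    steps = [range(0, s, b) for s, b in zip(volume.shape, block)]
    return {(i, j, k): volume[i:i + block[0], j:j + block[1], k:k + block[2]]
            for i, j, k in product(*steps)}

def assemble_blocks(blocks, shape, block):
    """Paste independently recovered blocks back into a full volume."""
    out = np.empty(shape)
    for (i, j, k), sub in blocks.items():
        out[i:i + block[0], j:j + block[1], k:k + block[2]] = sub
    return out

# E.g. a 512^3 dataset split into eight 256^3 sub-volumes, each sensed at
# rho = 12.5% and recovered independently (the eight recoveries can run in
# parallel, e.g. with multiprocessing.Pool or separate processes).
```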

Figure 5-15. Sparse approximation vs. linear interpolation of the Head Aneurysm dataset. A) Ground truth. B) Sparse approximation (ρ = 12.5%, SNR = 76.56 dB). Interpolation with C) linear filter (ρ = 12.5%, SNR = 12.34 dB) and D) cubic filter (ρ = 12.5%, SNR = 12.20 dB). Close-up images are shown on the right.
The sparse approximation of the Head Aneurysm dataset finishes in 5.5 minutes and yields an SNR of 76.56 dB. Figure 5-15 compares the volume rendering images, as well as the SNRs, of the datasets recovered from the sparse approximation and from the linear and cubic interpolation of the downsampled dataset. We see that, visually and numerically, the sparse approximation outperforms the interpolation methods. The interpolation results in inaccurate images, while the sparse approximation leads to an image almost identical to the ground truth image.

The proposed framework at its current stage suffers from long recovery times. This is due to the high complexity of the non-linear sparse approximation algorithms as well as the inefficient implementation of the sparsifying transforms. However, we believe such constraints can be largely relaxed by the availability of parallel computing resources such as GPUs. By profiling our CPU implementation, we noticed that in general more than 90% of the time was simply spent on performing transformations (i.e., FFT, X-let transforms). Efficient implementations of these transforms taking advantage of GPU computing have been proposed in recent years [38, 41, 70], which reduce the time by orders of magnitude. Moreover, transformation algorithms utilizing the sparse properties of the signal (e.g., sFFT [45]) have also been studied and implemented to further reduce the complexity of the existing transformation algorithms.

We would like to mention here that in our early investigation, we also tried other sparse approximation methods such as the greedy methods OMP [88] and ROMP [74] and the interior-point method l1-ls [50]. We note that the reconstruction methods are also an important factor for accurate recovery and timing considerations. From our early investigation, we have observed that greedy methods are more likely to succeed if the signal is extremely sparse (e.g., the Aneurysm dataset). However, as the sparsity of the signal decreases, greedy methods require long recovery times. This is because greedy methods recover a limited number of non-zero elements in each iteration; as the number of non-zero elements increases, more iterations are required to accurately recover the
signal. Meanwhile, the interior-point method l1-ls is applicable to various kinds of datasets and yields accurate reconstructions. However, the internal Newton step and line search step of l1-ls make it a slow solver. We believe that the application of our proposed framework to volumetric data reduction is a motivation to study sparse approximation for extremely large-scale problems.
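To make the iteration-count argument concrete, here is a textbook orthogonal matching pursuit sketch in the spirit of OMP [88] (a dense-matrix illustration with our own function name, not the implementation we experimented with): each iteration adds at most one atom to the support and re-solves a least-squares problem, so roughly k iterations, each with its own matrix products, are needed to recover a k-sparse signal. This dependence on k is what makes greedy methods attractive for the extremely sparse Aneurysm dataset but slow once the signal has many non-zeros.

```python
import numpy as np

def omp(A, y, k, tol=1e-9):
    """Greedy recovery of a k-sparse x from y = A x (textbook OMP).
    The loop runs about k times: one new atom per iteration, followed by a
    least-squares fit on the current support."""
    n = A.shape[1]
    support = []
    residual = y.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(n)
    x[support] = coef
    return x
```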

CHAPTER 6
CONCLUSION

In this dissertation, we present our frameworks for the reconstruction of volumetric data from irregularly sampled data as well as from compressively sensed data.

Our nonuniform reconstruction framework presented in Chapter 3 exploits optimal sampling lattices (i.e., the BCC lattice and the FCC lattice) as intermediate representations for the reconstructions in a shift-invariant space. The box spline functions and the sinc functions defined for these optimal sampling lattices are used as reconstruction kernels. The performance of these optimal sampling lattices for reconstruction is compared to the performance of the Cartesian lattice. Through numerical and visual comparisons we establish that, for a given set of sampled data, the optimal sampling lattices along with non-separable basis functions provide a more accurate reconstruction than the Cartesian lattice with the tensor-product basis function. Using optimal sampling lattices also decreases the computational cost in comparison to the Cartesian lattice. Furthermore, the optimal lattices proved to be more robust against noise in the sample set compared with the Cartesian lattice.

In our study of the compressed sensing framework presented in Chapter 4 and Chapter 5, we build a case for and demonstrate the significance of sparsity in volumetric data processing. To facilitate the study, we derive a novel 3-D framelet decomposition from a seven direction box spline. Our experiments, on both sparse signals and signals that are sparse in a transform domain, provide compelling evidence for using compressed sensing as a smart data-reduction strategy for volumetric datasets. Acquiring only a few random measurements of the signal as a part of in-situ processing leads to a significant decrease in the data storage and I/O requirements. The acquisition is universal; that is, the sensing process requires no prior knowledge of features in the signal. Using the compressed measurements we can recover the original data with high accuracy. The amount of savings depends on (1) finding a good transform domain in which the signal is
sparse and (2) algorithms that can efficiently recover the sparse representation from the measurements. Our results motivate future research on the study of custom-designed sparse representations and sparse approximation algorithms for large-scale volumetric data.

There are many opportunities for further enhancements of our frameworks. Recent advances in multi-core and parallel computation techniques, for example general-purpose computing on graphics processing units (GPGPU), could drastically reduce the reconstruction time of our frameworks. In fact, implementations of several conjugate gradient and compressed sensing algorithms on GPGPU platforms have recently appeared and been used efficiently [6, 8]. The multivariate decomposition can also benefit from a parallel computing platform. The sparse representation modeling for volumetric data can be further explored. More 3-D box spline functions can be exploited in our construction of tight wavelet frames, as we have done in Chapter 5 for one box spline example. Moreover, we can refine the sparse representation modeling through dictionary learning [82, 103] or other sources of domain knowledge for a particular kind of data. The sparse modeling may be applied in our nonuniform reconstruction framework to achieve a reconstruction in a sparsified domain as well. The scalability of our frameworks when dealing with large-scale volumetric data is also an important direction for future research.
REFERENCES

[1] A. Aldroubi and K. Gröchenig. Nonuniform sampling and reconstruction in shift-invariant spaces. SIAM Rev., 43(4):585-620, 2001.
[2] M. Arigovindan, M. Suhling, P. Hunziker, and M. Unser. Variational image reconstruction from arbitrarily spaced samples: A fast multiresolution spline solution. IEEE Trans. on Image Processing, 14(4):450-460, 2005.
[3] R. Baraniuk. Compressive sensing [lecture notes]. Signal Processing Magazine, IEEE, 24(4):118-121, July 2007.
[4] R. Baraniuk. More is less: Signal processing and the data deluge. Science, 331(6018):717-719, 2011.
[5] S. Becker, J. Bobin, and E. Candès. NESTA: A fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sciences, 4(1):1-39, 2011.
[6] J. Blanchard and J. Tanner. GPU accelerated greedy algorithms for compressed sensing. Mathematical Programming Computation, 5(3):267-304, 2013.
[7] J. Blinn. Jim Blinn's corner: dirty pixels. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.
[8] J. Bolz, I. Farmer, E. Grinspun, and P. Schröder. Sparse matrix solvers on the GPU: Conjugate gradients and multigrid. ACM Trans. Graph., 22(3):917-924, July 2003.
[9] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[10] E. Candès, L. Demanet, D. Donoho, and L. Ying. Fast discrete curvelet transforms. Multiscale Modeling and Simulation, 5(3):861-899, January 2006.
[11] E. Candès, Y. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59-73, 2010.
[12] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, 2006.
[13] E. Candès, M. Rudelson, T. Tao, and R. Vershynin. Error correction via linear programming. In FOCS: IEEE Symposium on Foundations of Computer Science (FOCS), 2005.
[14] E. Candès and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005.
[15] E. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.
[16] E. Candès and M. Wakin. An introduction to compressive sensing. IEEE Signal Processing Magazine, 25(2):21-30, 2008.
[17] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[18] C. Chui and W. He. Construction of multivariate tight frames via Kronecker products. Applied and Computational Harmonic Analysis, 11(1):305-312, 2001.
[19] J. Conway and N. Sloane. Sphere Packings, Lattices and Groups. Springer, 3rd edition, 1999.
[20] I. Daubechies. Ten Lectures on Wavelets. SIAM Publication, 1992.
[21] I. Daubechies, B. Han, A. Ron, and Z. Shen. Framelets: MRA-based constructions of wavelet frames. Applied and Computational Harmonic Analysis, 14(1):1-46, 2003.
[22] C. de Boor, R. A. DeVore, and A. Ron. Approximation from shift-invariant subspaces of L2(R^d). Trans. Amer. Math. Soc., 341(2):787-806, 1994.
[23] C. de Boor, K. Höllig, and S. Riemenschneider. Box Splines, volume 98 of Applied Mathematical Sciences. Springer-Verlag, New York, 1993.
[24] C. de Boor and A. Ron. Fourier analysis of the approximation power of principal shift-invariant spaces. Constr. Approx., 8(4):427-462, 1992.
[25] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
[26] D. Donoho, Y. Tsaig, I. Drori, and J. Starck. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory, 58(2):1094-1121, 2012.
[27] D. Donoho, M. Vetterli, R. DeVore, and I. Daubechies. Data compression and harmonic analysis. IEEE Trans. Inform. Theory, 44:2435-2476, 1998.
[28] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk. Single-pixel imaging via compressive sampling. Signal Processing Magazine, IEEE, 25(2):83-91, 2008.
[29] D. Dudgeon and R. Mersereau. Multidimensional Digital Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1st edition, 1984.
[30] G. Easley, D. Labate, and W. Lim. Sparse directional image representations using the discrete shearlet transform. Applied and Computational Harmonic Analysis, 25(1):25-46, 2008.
[31] M. Elad. Sparse and Redundant Representations - From Theory to Applications in Signal and Image Processing. Springer, 2010.
[32] M. Elad, P. Milanfar, and R. Rubinstein. Analysis versus synthesis in signal priors. Inverse Problems, 23(3):947-968, 2007.
[33] A. Entezari. Optimal Sampling Lattices and Trivariate Box Splines. PhD thesis, Simon Fraser University, Vancouver, Canada, July 2007.
[34] A. Entezari, M. Mirzargar, and L. Kalantari. Quasi-interpolation on the Body Centered Cubic lattice. Computer Graphics Forum, 28(3):1015-1022, 2009.
[35] A. Entezari and T. Möller. Extensions of the Zwart-Powell box spline for volumetric data reconstruction on the Cartesian lattice. IEEE Transactions on Visualization and Computer Graphics, 12(5):1337-1344, September 2006.
[36] A. Entezari, D. Van De Ville, and T. Möller. Practical box splines for reconstruction on the body centered cubic lattice. IEEE Trans. on Vis. Comput. Graph., 14(2):313-328, March-April 2008.
[37] B. Finkbeiner, A. Entezari, D. Van De Ville, and T. Möller. Efficient volume rendering on the body centered cubic lattice using box splines. Computers and Graphics, 34(4):409-423, 2010.
[38] J. Franco, G. Bernabé, J. Fernández, and M. Ujaldón. Parallel 3D fast wavelet transform on manycore GPUs and multicore CPUs. Procedia Computer Science, 1(1):1101-1110, 2010.
[39] R. Franke, H. Hagen, and G. Nielson. Least squares surface approximation to scattered data using multiquadratic functions. Advances in Computational Mathematics, 2(1):81-99, 1994.
[40] T. Goldstein and S. Osher. The split Bregman method for L1-regularized problems. Journal on Imaging Sciences, 2(2):323-343, 2009.
[41] N. Govindaraju, B. Lloyd, Y. Dotsenko, B. Smith, and J. Manferdelli. High performance discrete Fourier transforms on graphics processors. In High Performance Computing, Networking, Storage and Analysis, 2008. SC 2008. International Conference for, pages 1-12, November 2008.
[42] B. Gregorski, B. Hamann, and K. Joy. Reconstruction of B-spline surfaces from scattered data points. In Comp. Graphics, pages 163-170, 2000.
[43] W. Guo and M. Lai. Box spline wavelet frames for image edge analysis. SIAM J. Imaging Sciences, 6(3):1553-1578, 2013.
[44] A. Gyulassy, V. Natarajan, V. Pascucci, P. Bremer, and B. Hamann. Topology-based simplification for feature extraction from 3D scalar fields. In IEEE Visualization, page 68. IEEE Computer Society, 2005.
[45] H. Hassanieh, P. Indyk, D. Katabi, and E. Price. Simple and practical algorithm for sparse Fourier transform. In Proceedings of the Twenty-third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pages 1183-1194. SIAM, 2012.
[46] O. Holtz and A. Ron. Approximation orders of shift-invariant subspaces of W^s_2(R^d). J. Approx. Theory, 132(1):97-148, 2005.
[47] I. Ihm and S. Park. Wavelet-based 3D compression scheme for interactive visualization of very large volume data. Computer Graphics Forum, 18(1):3-15, March 1999.
[48] P. Jain, A. Tewari, and I. Dhillon. Orthogonal matching pursuit with replacement. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems, pages 1215-1223, 2011.
[49] M. Johnson, Z. Shen, and Y. Xu. Scattered data reconstruction by regularization in B-spline and associated wavelet spaces. Journal of Approximation Theory, 159(2):197-223, 2009.
[50] S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. A method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1:606-617, 2007.
[51] J. Kovacevic and A. Chebira. An introduction to frames. Foundations and Trends in Signal Processing, 2(1):1-94, 2008.
[52] H. Künsch, E. Agrell, and F. Hamprecht. Optimal lattices for sampling. IEEE Trans. on Info. Theory, 51(2):634-647, February 2005.
[53] G. Kutyniok and D. Labate. Shearlets: Multiscale Analysis for Multivariate Data. Birkhäuser Publication, first edition, 2012.
[54] M. Lai and J. Stöckler. Construction of multivariate compactly supported tight wavelet frames. Applied and Computational Harmonic Analysis, 21(3):324-348, 2006.
[55] C. Ledergerber, G. Guennebaud, M. Meyer, M. Bächer, and H. Pfister. Volume MLS ray casting. Visualization and Computer Graphics, IEEE Transactions on, 14(6):1372-1379, 2008.
[56] S. Lee, G. Wolberg, and S. Shin. Scattered data interpolation with multilevel B-splines. Visualization and Computer Graphics, IEEE Transactions on, 3(3):228-244, July-September 1997.
[57] Y. Lu and M. Do. Multidimensional directional filter banks and surfacelets. Image Processing, IEEE Transactions on, 16(4):918-931, 2007.
[58] Y. Lu, M. Do, and R. Laugesen. A computable Fourier condition generating alias-free sampling lattices. IEEE Trans. on Signal Processing, 57:1768-1782, 2009.
[59] V. Lucarini. Three-dimensional random Voronoi tessellations: From cubic crystal lattices to Poisson point processes. Journal of Statistical Physics, 134(1):185-206, 2009.
[60] M. Lustig, D. Donoho, and J. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182-1195, 2007.
[61] M. Lustig, D. Donoho, J. Santos, and J. Pauly. Compressed sensing MRI. Signal Processing Magazine, IEEE, 25(2):72-82, March 2008.
[62] K. Ma. In situ visualization at extreme scale: Challenges and opportunities. Computer Graphics and Applications, IEEE, 29(6):14-19, 2009.
[63] K. Ma, E. Lum, H. Yu, H. Akiba, M. Huang, Y. Wang, and G. Schussman. Scientific discovery through advanced visualization. Journal of Physics: Conference Series, 16(1):491, 2005.
[64] K. Ma, C. Wang, H. Yu, and A. Tikhonova. In-situ processing and visualization for ultrascale simulations. In Journal of Physics: Conference Series, volume 78, page 012043. IOP Publishing, 2007.
[65] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, San Diego, CA, 1998.
[66] S. Marschner and R. Lobb. An evaluation of reconstruction filters for volume rendering. In Proc. of the IEEE Conf. on Visualization 1994, pages 100-107. IEEE Computer Society Press, October 1994.
[67] E. Meijering. A chronology of interpolation: From ancient astronomy to modern signal and image processing. Proceedings of the IEEE, 90(3):319-342, March 2002.
[68] H. Miyakawa. Sampling theorem of stationary stochastic variables in multidimensional space. Journal of the Institute of Electronic and Communication Engineers of Japan, 42:421-427, 1959.
[69] O. Morozov, M. Unser, and P. Hunziker. Reconstruction of large, irregularly sampled multidimensional images. A tensor-based approach. IEEE Trans. on Medical Imaging, 30(2):366-374, February 2011.
[70] M. Motamedi, S. Sobhieh, S. A. Motamedi, and A. H. Rezaie. An ultra-fast, optimized and massively-parallelized curvelet transform algorithm on GP-GPUs. In Electrical Engineering (ICEE), 2013 21st Iranian Conference on, pages 1-6, May 2013.
[71] K. Nam. Tight wavelet frame construction and its application. PhD thesis, University of Georgia, Athens, Georgia, August 2005.
[72] S. Nam, M. Davies, M. Elad, and R. Gribonval. The cosparse analysis model and algorithms. Applied and Computational Harmonic Analysis, 34(1):30-56, 2013.
[73] D. Needell and J. Tropp. CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM, 53(12):93-100, 2010.
[74] D. Needell and R. Vershynin. Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Foundations of Computational Mathematics, 9(3):317-334, 2009.
[75] G. M. Nielson. Scattered data modeling. IEEE Comput. Graph. Appl., 13(1):60-70, 1993.
[76] A. Oppenheim and R. Schafer. Discrete-Time Signal Processing. Prentice Hall Inc., Englewood Cliffs, NJ, 1989.
[77] X. Pan, E. Sidky, and M. Vannier. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Problems, 25(12):123009, 2009.
[78] D. Petersen and D. Middleton. Sampling and reconstruction of wave-number-limited functions in N-dimensional Euclidean spaces. Information and Control, 5(4):279-323, December 1962.
[79] J. Portilla. Image restoration through l0 analysis-based sparse optimization in tight frames. In ICIP, pages 3909-3912. IEEE, 2009.
[80] F. H. Post, B. Vrolijk, H. Hauser, R. S. Laramee, and H. Doleisch. The state of the art in flow visualisation: Feature extraction and tracking. Computer Graphics Forum, 22(4), 2003.
[81] A. Ron and Z. Shen. Affine systems in L2(R^d): the analysis of the analysis operator. J. Functional Anal. Appl., 148:408-447, 1997.
[82] R. Rubinstein, A. Bruckstein, and M. Elad. Dictionaries for sparse representation modeling. Proc. IEEE, 98(6):1045-1057, June 2010.
[83] I. W. Selesnick and M. A. T. Figueiredo. Signal restoration with overcomplete wavelet transforms: Comparison of analysis and synthesis priors. In SPIE, page 7446, August 2009.
[84] A. Shoshani and D. Rotem. Scientific Data Management: Challenges, Technology, and Deployment. Chapman & Hall/CRC Press, 2010.
[85] P. Thévenaz, T. Blu, and M. Unser. Interpolation revisited. IEEE Transactions on Medical Imaging, 19(7):739-758, July 2000.
[86] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[87] M. Tisdall. Development and validation of algorithms for MRI signal component estimation. PhD thesis, Simon Fraser University, Vancouver, Canada, December 2007.
[88] J. Tropp and A. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655-4666, 2007.
[89] J. Tropp and S. Wright. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98(6):948-958, 2010.
[90] H. Trussell, L. Arnder, P. Moran, and R. Williams. Corrections for nonuniform sampling distortions in magnetic resonance imagery. Medical Imaging, IEEE Transactions on, 7(1):32-44, March 1988.
[91] J. Trzasko and A. Manduca. Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization. IEEE Trans. Med. Imaging, 28(1):106-121, 2009.
[92] F. Tzeng and K. Ma. Intelligent feature extraction and tracking for visualizing large-scale 4D flow simulations. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, page 66. ACM Press and IEEE Computer Society Press, 2005.
[93] M. Unser. Splines: a perfect fit for signal and image processing. Signal Processing Magazine, IEEE, 16(6):22-38, 1999.
[94] M. Unser. Sampling - 50 years after Shannon. Proceedings of the IEEE, 88(4):569-587, April 2000.
[95] D. Van De Ville, T. Blu, M. Unser, W. Philips, I. Lemahieu, and R. Van de Walle. Hex-splines: A novel spline family for hexagonal lattices. IEEE Trans. on Image Processing, 13(6):758-772, June 2004.
[96] E. Vucini, T. Möller, and M. E. Gröller. Efficient reconstruction from non-uniform point sets. The Visual Computer, 24(7):555-563, 2008.
[97] H. Wendland. Scattered Data Approximation, volume 17 of Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2005.
[98] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479-2493, 2009.
[99] L. Xu, T. Lee, and H. Shen. An information-theoretic framework for flow visualization. Visualization and Computer Graphics, IEEE Transactions on, 16(6):1216-1224, 2010.
[100] X. Xu, A. Alvarado, and A. Entezari. Reconstruction of irregularly-sampled volumetric data in efficient box spline spaces. Medical Imaging, IEEE Transactions on, 31(7):1472-1480, 2012.
[101] X. Xu, E. Sakhaee, and A. Entezari. Volumetric data reduction in a compressed sensing framework. Computer Graphics Forum, Special Issue on EuroVis, 33(4), 2014.
[102] X. Xu, W. Ye, and A. Entezari. Bandlimited reconstruction of multidimensional images from irregular samples. Image Processing, IEEE Transactions on, 22(10):3950-3960, 2013.
[103] M. Yaghoobi, T. Blumensath, and M. Davies. Dictionary learning for sparse approximations with the majorization method. IEEE Transactions on Signal Processing, 57(6):2178-2191, 2009.
[104] J. Yang and Y. Zhang. Alternating direction algorithms for l1-problems in compressive sensing. SIAM Journal on Scientific Computing, 33(1):250-278, 2011.
[105] W. Ye and A. Entezari. A geometric construction of multivariate sinc functions. IEEE Trans. on Image Processing, 21(6):2969-2979, June 2012.
[106] W. Ye, A. Entezari, and B. Vemuri. Tomographic reconstruction of diffusion propagators from DW-MRI using optimal sampling lattices. In Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on, pages 788-791, April 2010.
[107] B. Yeo and B. Liu. Volume rendering of DCT-based compressed 3D scalar data. IEEE Transactions on Visualization and Computer Graphics, 1(1):29-43, March 1995.
[108] H. Yu, C. Wang, R. Grout, J. Chen, and K. Ma. In situ visualization for large-scale combustion simulations. IEEE Computer Graphics and Applications, 30(3):45-57, 2010.
[109] W. Zhu, Y. Wang, and Q. Zhu. Second-order derivative-based smoothness measure for error concealment in DCT-based codecs. IEEE Trans. Circuits and Systems for Video Technology, 8(6):713, October 1998.
BIOGRAPHICAL SKETCH

Xie Xu was born in Shouning, Fujian, China. He graduated from Tianjin University with a Bachelor of Engineering degree in software engineering in 2009. He joined the Department of Computer and Information Science and Engineering at the University of Florida in 2009 and graduated with a Doctor of Philosophy degree in computer engineering in 2014.