MULTIRESOLUTION IMAGE FUSION USING MULTISCALE ESTIMATION FRAMEWORK

By
HOJIN JHEE

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
2010

© 2010 Hojin Jhee

To my parents and friends

ACKNOWLEDGMENTS

I wish to sincerely thank my academic advisor, Dr. Fred Taylor, for giving me the chance to continue my graduate studies at the University of Florida. I deeply appreciate his notable support, friendly advice, and consideration in guiding me through the completion of this dissertation. I am grateful to all my supervisory committee members, Dr. Herman Lam, Dr. Janise McNair, and Dr. Douglas Cenzer, for sharing their valuable time and offering thoughtful comments throughout my Ph.D. studies. I also owe deep thanks to Dr. Clint Slatton, who served as my former academic advisor and greatly inspired my Ph.D. research. He was a respected researcher, an instructor, and at times a good friend during the years I spent under his guidance. I will always keep him and his family in my thoughts and pray for his peaceful rest.

Special thanks go to all my colleagues for their help, encouragement, and friendship during my joyful years at the University of Florida. Lastly, I would like to express my utmost respect and appreciation to my parents, who have patiently and endlessly offered the support, inspiration, and motivation that made my academic dream possible. I owe them all so much.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT

1  INTRODUCTION
   1.1 Background
   1.2 Multi-Scale Image Fusion
   1.3 Problem Statement and Organization

2  OVERVIEW OF MULTISCALE ESTIMATION FRAMEWORK
   2.1 Introduction
   2.2 Multi-Scale Data Representation
       2.2.1 State-Space Models on qth-Order Trees
       2.2.2 Markov Property of Multi-Scale Processes
   2.3 Conclusions

3  DATA FUSION BY MULTISCALE KALMAN SMOOTHING ALGORITHM
   3.1 Introduction
   3.2 Preliminaries
   3.3 The Kalman Filter
       3.3.1 Linear State Model
       3.3.2 Kalman Filtering Algorithm
       3.3.3 Kalman Smoothing Algorithm
   3.4 Multi-Scale Kalman Smoothing (MKS) Algorithm
   3.5 Conclusions

4  APPLICATIONS
   4.1 Introduction
   4.2 Reduced-Complexity MKS (RC-MKS)
       4.2.1 Motivations
       4.2.2 Sparse MKS Implementation
       4.2.3 Image Fusion Results Using RC-MKS
       4.2.4 Conclusions
   4.3 Vector-Valued MKS
       4.3.1 Motivations
       4.3.2 Standard MKS Using Vector-Valued Measurements
       4.3.3 The Information Form of the Kalman Filter
       4.3.4 Efficient Measurement Update Methods
             4.3.4.1 Serial measurement update method
             4.3.4.2 Selective measurement update method
       4.3.5 Image Fusion Results Using Vector-Valued MKS
       4.3.6 Conclusions

5  QUADTREE IMAGE MODEL AND BLOCKY ARTIFACT
   5.1 Introduction
   5.2 Example: Comparisons of Covariance Structures among Three Different Graphical Models
       5.2.1 Introduction
       5.2.2 Estimation of a Gaussian Process
       5.2.3 Multi-Scale Modeling Using a Pyramidal Tree
             5.2.3.1 Prior model of the pyramidal tree
             5.2.3.2 Intra-scale structure
             5.2.3.3 Inter-scale structure
       5.2.4 Comparisons of Correlation Decays (Mono-Scale, Dyadic Tree, and Pyramidal Tree)
   5.3 Conclusions

6  IMAGE FUSION USING SINGLE-FRAME SUPER RESOLUTION
   6.1 Introduction
   6.2 Super Resolution Method
   6.3 Super Resolution Multi-Scale Kalman Smoothing Algorithm (SR-MKS)
       6.3.1 Summary of the Super Resolution Algorithm
       6.3.2 Feature Representation
       6.3.3 Super Resolution Multi-Scale Kalman Smoothing (SR-MKS)
   6.4 Simulations
   6.5 Conclusions

7  CONCLUSIONS AND FUTURE WORK
   7.1 Conclusions
   7.2 Future Work

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

4-1  Comparison of memory storage using RC-MKS and standard MKS (Mb: megabytes, Gb: gigabytes).
4-2  ERS and TOPSAR system parameters and RMS error variance.
4-3  Comparison of error performance using single-TOPSAR MKS and the proposed method (all elevation units are in meters).
6-1  Mean square errors of the image fusion results in Figure 6-6 (in meters).

LIST OF FIGURES

1-1  An example of image fusion using the multiscale estimation framework. DEMs from ERS-1 (20 m spacing at the 9th scale) and TOPSAR (5 m spacing at the 11th scale) are optimally fused by the Multiscale Kalman Smoothing (MKS) algorithm. Note that the TOPSAR data at the 11th scale suffer severe dropouts (shown in white). The final fused estimate at the 11th scale is also shown. All elevation units are in meters.
2-1  Multiscale tree structures. A) 1-D dyadic structure, and B) 2-D quadtree structure. The root node corresponds to the coarsest scale, while the leaf nodes comprise the finest scale.
2-2  The first four levels of a 2nd-order tree structure (dyadic tree). The parent of node s is denoted Bs, and its two offspring are denoted sα1 and sα2.
2-3  Markov property of the dyadic tree structure. Conditioned on node s, the nodes in the corresponding three subtrees extending away from s are uncorrelated. (The subsets of nodes are shown in yellow for s1, green for s2, and grey for s3.)
3-1  Recursion diagram of a Kalman filter.
3-2  The two sweeps of the time-series Rauch-Tung-Striebel algorithm. 1) Forward sweep (Kalman filtering): optimal inference on the hidden variable x(t) given the past and present measurements y0, y1, ..., yt; 2) backward sweep (smoothing): recursive computation of the estimates given all measurements y0, y1, ..., yT.
3-3  The MKS algorithm. A) The upward sweep (Kalman filtering) computes (3.9)-(3.12) from fine to coarse scale, and B) the downward sweep (Kalman smoothing) computes (3.13) from coarse to fine scale.
3-4  An example of image fusion using the MKS algorithm. A) ERS-1 elevation data at 20 m spacing, B) TOPSAR elevation data at 5 m spacing, C) fused estimate at the finest (11th) scale, and D) RMS error of the final estimate. All elevation units are in meters.
4-1  The finest-resolution measurement is sparse relative to the coarse-scale measurement. The highly sparse fine-scale image is at the 13th scale (5 m spacing), and the dense coarse-scale image is at the 9th scale (80 m spacing) of the quadtree. Data voids are shown in grey. All elevation units are in meters.
4-2  Set of subtrees for which finest-scale data are available (each valid subtree is shown in yellow).
4-3  Perspective views of topographic and bathymetric elevations fused using the RC-MKS method. The area covered is 40 km x 40 km. A) Fused elevation estimates at the 13th scale, and B) their RMS error. All elevation units are in meters.
4-4  Image data sets to be fused. A) ERS-1 elevation data at 20 m spacing, B) 1st TOPSAR elevation data at 5 m spacing, C) 2nd TOPSAR elevation data at 5 m spacing, and D) 3rd TOPSAR elevation data at 5 m spacing. Dark blue areas in the TOPSAR images represent data voids. All elevation units are in meters.
4-5  Fusion results using vector-valued MKS. A) Fused estimate, and B) its height uncertainty using the serial measurement update method. All elevation units are in meters.
5-1  An example of blocky artifact. Two images are fused by the MKS algorithm of Chapter 3. Blocky regions appear in the final estimate at the pixel locations where fine-scale pixel values are unavailable (missing data shown in white in the fine-scale image). All elevation units are in meters.
5-2  The three process models used in this example. A) Mono-scale Markov chain, B) dyadic tree structure, and C) pyramidal tree structure. Edges between nodes represent statistical dependency between a node and its neighbors.
5-3  Inter-scale and intra-scale prior models in the pyramidal tree structure.
5-4  Pyramidal tree structure (1-D case), with the penalizing parameters of each scale shown on the tree.
5-5  A) 1-D mono-scale Markov chain structure (64 variables), B) dyadic tree structure (64 variables at the finest scale), and C) pyramidal tree structure (64 variables at the finest scale).
5-6  Covariance matrices obtained from (5.6) using A) the mono-scale Markov chain, B) the dyadic tree, and C) the pyramidal tree.
5-7  Correlation decay curves (all values in log scale) of the three structures at the finest scale (1-D case).
6-1  An example of the nearest-neighbor search algorithm applied to a 5x5 low-resolution patch for 4x magnification. A) Input low-resolution patch, B) the 4 nearest-neighbor patches extracted from the training images, C) input patch reconstructed by a weighted combination of the patches in (B). (Weighting vector used: {0.4787, 0.3110, 0.4518, 0.2415}.) Darker red represents higher elevation.
6-2  A) High-resolution patch corresponding to Figure 6-1(A), B) the 4 high-resolution patches corresponding to the low-resolution patches in (A), C) estimated high-resolution patch constructed from (B). (Weighting vector used: {0.4787, 0.3110, 0.4518, 0.2415}.) Darker red represents higher elevation.
6-3  The 8 sets of training images used for the results in Figures 6-1 and 6-2. Only the high-resolution training images are shown; each corresponding low-resolution training image is obtained by downsampling by a factor of 4.
6-4  A 5x5 local neighborhood in the low-resolution image, used to compute the first-order and second-order gradients of the center pixel with elevation value x33.
6-5  The input image sets to the fusion algorithm. A) Ground truth, B) low-resolution image (256x256), C) sparse high-resolution image (1024x1024; data-void regions shown in white), D) super-resolved image of the smoothed estimate at the coarse (9th) scale. All elevation units are in meters.
6-6  Comparisons of fused estimates at the finest (11th) scale. A) Using the standard MKS introduced in Chapter 3, B) using the proposed SR-MKS of Chapter 6, C) zoomed fusion results of the data-void areas circled in red in (A). Rows 1-3 show zoomed areas 1-3; in each row, (left) fine-scale ground truth, (center) fused by the MKS of Chapter 3, and (right) fused by the proposed method (SR-MKS). All elevation units are in meters.

LIST OF ABBREVIATIONS

DEM      Digital Elevation Model
ERS-1    European Remote Sensing satellite-1
fBm      fractional Brownian motion
Gb       gigabytes
IHRC     International Hurricane Research Center
InSAR    Interferometric Synthetic Aperture Radar
JPL      Jet Propulsion Laboratory
KNN      K-Nearest Neighbors
LiDAR    Light Detection and Ranging
LMMSE    Linear Minimum Mean Square Error
MAP      Maximum A Posteriori
Mb       megabytes
MKS      Multiscale Kalman Smoothing
ML       Maximum Likelihood
MMSE     Minimum Mean Square Error
MRFs     Markov Random Fields
NASA     National Aeronautics and Space Administration
NGDC     National Geophysical Data Center
NOAA     National Oceanic and Atmospheric Administration
PacRim   Pacific Rim
PSF      Point Spread Function
RC-MKS   Reduced-Complexity Multi-Scale Kalman Smoothing
RMS      Root Mean Square
RTS      Rauch-Tung-Striebel
SR       Super Resolution
SR-MKS   Super Resolution Multi-Scale Kalman Smoothing
TOPSAR   Topographic Synthetic Aperture Radar
1-D      One-Dimensional
2-D      Two-Dimensional

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

MULTIRESOLUTION IMAGE FUSION USING MULTISCALE ESTIMATION FRAMEWORK

By Hojin Jhee
May 2010
Chair: Fred J. Taylor
Major: Electrical and Computer Engineering

Recent advances in remote sensing technology allow large classes of natural processes and phenomena to be captured at different resolutions with confident levels of quality. For example, data acquisition over a specific topographic environment using a high-spatial-resolution sensor is extremely useful for monitoring detailed physical and biological processes on the Earth's surface. As acquisition techniques grow more sophisticated, employing ever more accurate sensing devices, the demands placed on processing the resulting data sets become more diverse and complex.

In this dissertation, we develop data fusion methods for image sets obtained by heterogeneous sources at different resolutions. The fusion scenario combines image sets of differing resolutions within a robust and efficient signal processing scheme, the multiscale estimation framework. Because the data are collected by different methods and for different purposes, merging them is not a trivial task. Despite this technical difficulty, real-world remote sensing applications require information that a single sensor measurement cannot provide. Each sensor operates under particular acquisition geometries (e.g., altitude, distance, or viewing angle), which leads to disparate coverage and accuracy characteristics in the acquired data. A number of attempts have been made to combine data from different sensors, but existing methods are often empirical [Sorenson, 1970]. Successful data fusion becomes especially difficult when the sensors involved have significantly different acquisition methods, wavelengths, and resolutions. To overcome this difficulty, this study builds a robust image processing framework for combining (fusing) complementary multiresolution image data sets.
The contributions of this work are summarized as follows: (1) constructing a statistically optimal fusion method based on the multiscale estimation framework, so that a fused image estimate and its confidence measure (uncertainty) can be obtained efficiently at any desired resolution; (2) developing new multiscale fusion techniques for computationally challenging fusion scenarios; and (3) extending the multiscale estimation techniques to generate visually and analytically improved fused images at the finest resolution by mitigating the blocky pixel artifacts that commonly arise when the resolutions of the images being fused differ by large orders of magnitude and the fine-scale image suffers severe pixel voids.

CHAPTER 1
INTRODUCTION

1.1 Background

Merging, or fusing, image measurements captured by disparate sensing techniques has long been a challenging image processing problem. Because of the geometric constraints on each sensor platform, such as altitude, distance, or viewing angle, the image acquired by each sensor exhibits different coverage and accuracy characteristics [Hall, 1997]. Despite these differences, real-world remote sensing applications require information that is difficult to obtain from a single sensor measurement [Desmet, 1996]. For instance, monitoring natural phenomena along a coastline, for prediction of inland flood surge or of sediment transport in nearshore areas, typically requires a gridded Digital Elevation Model (DEM) that covers large areas with high resolution and small elevation errors [NOAA; Zhang, 2003]. Since the terrain data in a DEM provide the basic information for understanding and predicting natural systems along the coastline, generation of an accurate elevation model is fundamental [Zebker, 1994].
DEM data acquired from spaceborne sensors generally provide good measures of topographic shape over large areas, but locally the spatial resolution is insufficient to delineate coastline details [Komar, 1998]. A finer map could be produced by simply resampling (interpolating) the DEM, but increasing the cell density does not actually enhance the spatial resolution of the original data, since no additional information is introduced [Gesch, 2001]. On the other hand, a DEM created from an airborne platform carrying a laser altimeter, e.g., LiDAR (Light Detection and Ranging) [IHRC, 2004], contains detailed information about the shoreline shape with sub-meter height accuracy, because such a sensor scans laser pulses through a small angle, with small footprints, from a much lower altitude than a spaceborne platform. These scanning geometries allow very high-resolution terrain data to be acquired [Carter, 2001]. Yet, since the imaging swath is normally smaller than that of a spaceborne sensor, the DEM acquired from this type of platform commonly suffers missing elevation values, or data voids.

Thus, the motivation for multiresolution image fusion arises when a seamless integration of image data across resolutions is required, such that the fused result has both extensive coverage and locally high-resolution detail. Taking advantage of the availability of various data sets, a method must be devised to synthesize the fusion process. In this work, the problem of combining sparse and dissimilar data types at multiple resolutions is solved by a theoretically rigorous method that produces globally optimal estimates at the desired resolutions. In many applications, the fused estimate at the highest achievable resolution is the most desirable. This image fusion goal is achieved by employing the multiscale estimation framework, and its results remain nearly optimal in the mean-squared-error sense.
1.2 Multi-Scale Image Fusion

Hierarchical signal modeling has been an attractive approach in signal and image processing for decades [Willsky, 2002; Gonzalez, 2002]. It has been motivated by the need to (1) develop stochastic models that can capture the multiscale character of natural processes, and (2) process multiple measurements having different resolutions [Gallant, 2006]. Multiscale data modeling is based on the idea that one can transform spatial data models into fine-to-coarse (or coarse-to-fine) representations that directly model the processes of interest on multiscale data structures such as dyadic trees or quadtrees [Daniel, 1997]. The additional nodes in the structure correspond to coarser representations of the original nodes at the finer scale. The work proposed here uses a quadtree structure to combine multiple image data sets differing in resolution.

A multiscale estimation framework is widely applicable in remote sensing because multiple data sets of different resolutions are often needed to characterize the processes under study [Choi, 2007]. Chou et al. introduced a multiscale linear estimation method based on the Kalman filter [Chou & Nikoukhah, 1994]. Kumar [Kumar, 1999] applied it to the problem of combining soil moisture data at different resolutions, while other investigators used the method for data interpolation and fusion [Fieguth & Karl, 1995; Daniel, 1997]. Chou et al. and Fieguth et al. derived a recursive estimator, a Multiscale Kalman Smoother (MKS) constructed on a Markov tree data structure, that accommodates multisensor measurements of differing resolutions [Chou & Benveniste, 1994; Fieguth & Irving, 1995]. The MKS also allows the fusion of state variables that are not directly observed.
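The core operation underlying these Kalman-filter-based fusion schemes can be illustrated in the scalar case: two noisy measurements of the same elevation are blended with weights inversely proportional to their error variances, which is the one-node special case of the Kalman-gain update the MKS applies throughout the tree. The sketch below is our own illustration; the function name and the numeric values are hypothetical and not taken from the ERS-1/TOPSAR data.

```python
def fuse_scalar(z1, var1, z2, var2):
    """Minimum-variance blend of two independent measurements of the
    same quantity (inverse-variance weighting)."""
    gain = var1 / (var1 + var2)              # Kalman gain toward measurement 2
    z_fused = z1 + gain * (z2 - z1)          # blended estimate
    var_fused = var1 * var2 / (var1 + var2)  # fused variance, <= min(var1, var2)
    return z_fused, var_fused

# Illustrative elevations (meters): a coarse sensor with 4 m^2 error
# variance and a fine sensor with 1 m^2 error variance.
z, v = fuse_scalar(102.0, 4.0, 100.0, 1.0)
print(z, v)   # 100.4 0.8
```

Note that the fused variance is always smaller than either input variance, which is why adding even a coarse, noisy measurement never degrades the estimate.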
Figure 1-1 illustrates an example of (pixel-level) image fusion using the MKS algorithm on a pair of multiresolution image sets of the Finke River in central Australia [Slatton, 2000]. The dense low-resolution image at the 9th level (scale) of the image pyramid was acquired by the spaceborne ERS-1 (European Remote Sensing satellite) and processed to a spatial resolution of 20 meters; the high-resolution image at the 11th level was obtained by the airborne TOPSAR (Topographic Synthetic Aperture Radar) and gridded to a resolution of 5 meters [Slatton, 2000]. In this example, the sensing range of TOPSAR is much more limited than the coverage of ERS-1, so the TOPSAR image exhibits data dropouts, or data voids. The MKS data fusion algorithm fills these dropouts through a two-way estimation scheme on the quadtree structure. More details on the multiscale model and the fine-to-coarse fusion algorithm used in this example are covered in Chapters 2 and 3.

1.3 Problem Statement and Organization

The major objective of this dissertation is to solve multiresolution image fusion problems commonly encountered in remote sensing applications. To this end, we employ multiscale estimation approaches to construct the standard MKS algorithm (Chapter 3) and computationally favorable variants (Chapter 4) that produce statistically optimal image fusion results. The main focus, in particular, is completing a sparsely populated fine-scale image by incorporating a densely spaced coarse-scale image on a hierarchical data structure. While the proposed methods generally provide outstanding accuracy and efficiency, such tree-based methods are limited in certain applications, for example when smoothness of the estimate is essential for meaningful computation of gradients and curvatures [Fieguth & Irving, 1995].
The limitation is that the MKS algorithm usually exhibits blocky artifacts in regions of the fused image where only coarse data are available. More precisely, because of the blocky covariance structure introduced by the quadtree, the computed estimate suffers distracting pixel blockiness at locations where fine-scale pixels are void. Although blockiness in the fused estimate can be mitigated by simple post-processing (e.g., applying a lowpass filter or averaging multiple estimates), this often degrades the resolution of fine-scale details, because the smoothness is achieved by spatial blurring. Recently, more flexible tree structures, such as dynamic or loopy trees, have been proposed to handle pixel blockiness, but their computational costs remain too expensive for large-scale image problems [Murphy, 1999; Murphy, 2002; Adams, 2001; Todorovic, 2007]. Since our goal has been to develop computationally efficient algorithms that are optimal, the choice of the quadtree is still reasonable.

The following statements summarize the purposes of this dissertation. First, an effective multiresolution image fusion framework is proposed to merge multiple image sets having different resolutions. Second, a learning-based image resolution enhancement approach is introduced and applied within the proposed fusion methods to reduce the blocky artifacts exhibited by the quadtree image model. This new fusion scheme successfully infers missing detail information in the data-void regions of the fine-scale image, so that blockiness in the fused estimate is suppressed.

The organization of the dissertation is as follows. Chapter 2 summarizes the multiscale data representation that plays a central role in our fusion study. Chapter 3 introduces a multiscale estimation approach based on the Kalman filter.
The Multiscale Kalman Smoothing (MKS) algorithm is a generalization of the Rauch-Tung-Striebel (RTS) estimator [Brown, 1997] implemented on a quadtree. At each node in the tree, the MKS optimally (in the least-squared-error sense) blends a stochastic multiscale model with the available measurements according to a Kalman gain. In Chapter 4, we investigate computationally more challenging fusion problems and introduce new fusion algorithms that preserve both efficiency and estimation accuracy. Chapter 5 discusses the blocky artifact exhibited by the MKS algorithm. To overcome the blockiness, a new class of fusion scheme is derived in Chapter 6 by employing a learning-based image super resolution technique. Finally, Chapter 7 concludes the dissertation, together with suggestions for future work.

Figure 1-1. An example of image fusion using the multiscale estimation framework. DEMs from ERS-1 (20 m spacing at the 9th scale) and TOPSAR (5 m spacing at the 11th scale) are optimally fused by the Multiscale Kalman Smoothing (MKS) algorithm. Note that the TOPSAR data at the 11th scale suffer severe dropouts (shown in white). The final fused estimate at the 11th scale is also shown. All elevation units are in meters.

CHAPTER 2
OVERVIEW OF MULTISCALE ESTIMATION FRAMEWORK

2.1 Introduction

A model of Gaussian processes indexed by the nodes of a tree structure was introduced in [Chou & Nikoukhah, 1994]. Over the last decades, Markov random fields (MRFs) have been widely used for image analysis [Laferte, 2000]. While MRFs provide a rich structure for multidimensional modeling, they generally do not admit computationally efficient algorithms even for simple analyses, which leads to computationally intensive procedures for image estimation and classification. In addition, parameter identification is nontrivial for MRFs because of the computational complexity of the partition function [Besag, 1974; Potamianos, 1991].
Unlike conventional mono-scale MRFs, whose per-pixel computational load typically grows with image size, multiscale image analysis achieves optimal estimation with a per-pixel complexity independent of image size. Significant computational savings have thus been obtained in computing optical flow [Chou, 1991] and in interpolating sea-level variations in the North Pacific Ocean from satellite measurements [Fieguth & Karl, 1995].

In Chapter 2, we explore a general approach for building multiscale models on tree structures. Figure 2-1 shows multiscale tree structures for the 1-D (dyadic tree) and 2-D (quadtree) cases. For simplicity, the descriptions in Chapter 2 are carried out in the context of the dyadic tree, which corresponds to the representation of 1-D signals. The extension to higher dimensions (e.g., 2-D) requires only trivial notational changes and no additional analytical complexity. In two dimensions, for instance, the dyadic tree is replaced by a quadtree in which each node has four descendants instead of two, giving the same order of complexity per data point as in 1-D.

2.2 Multi-Scale Data Representation

2.2.1 State-Space Models on qth-Order Trees

The models in [Chou, 1991; Luettgen, 1994] describe multiscale stochastic processes indexed by the nodes of a tree (Figure 2-1). The key to our description is that multiscale representations, whether of one-dimensional or higher-dimensional signals, have a time-like variable, namely scale. All methods for representing and processing signals at multiple scales involve pyramidal data structures, where each level of the pyramid corresponds to a particular scale, and each node at a given scale is connected both to a parent node at the next coarser scale and to several descendant nodes at the next finer scale. A qth-order tree refers to a tree structure in which each node has q children (or offspring) nodes.
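As a concrete illustration of this pyramidal indexing, a complete qth-order tree can be stored level by level in a flat array, with the parent and offspring of a node recovered by simple index arithmetic. This sketch is our own illustration (not code from the dissertation); the function names and the (level, index) addressing convention are assumptions made for clarity.

```python
def level_start(q, m):
    """Index of the first node at level m when a complete qth-order tree
    is stored level by level: levels have sizes 1, q, q^2, ..."""
    return (q**m - 1) // (q - 1)

def parent(q, m, i):
    """Parent of the i-th node at level m (m >= 1), as (level, index)."""
    return m - 1, i // q

def children(q, m, i):
    """The q offspring of the i-th node at level m, as (level, index) pairs."""
    return [(m + 1, q * i + j) for j in range(q)]

# Dyadic tree (q = 2): node 1 at level 2 descends from node 0 at level 1.
print(parent(2, 2, 1))    # (1, 0)
print(children(2, 1, 1))  # [(2, 2), (2, 3)]
# Quadtree (q = 4): each node has four offspring.
print(children(4, 1, 0))  # [(2, 0), (2, 1), (2, 2), (2, 3)]
print(level_start(4, 2))  # 5  (1 root + 4 nodes at level 1)
```

The same arithmetic underlies the per-node upward and downward sweeps discussed in Chapter 3, since each node's parent and offspring can be located in constant time.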
Each node $s$ carries a scale index $m(s)$ of the state $x(s)$; in general, the state vectors at the $m$th level of the tree (for $m = 0, 1, 2, \ldots, M$) can be interpreted as a representation of the process at scale $m$. As described in [Chou, 1991], we define an upward (fine-to-coarse) shift operator $B$, where $Bs$ is the parent of node $s$, and a set of downward (coarse-to-fine) shift operators $\alpha_i$ such that the $q$ offspring of node $s$ are $s\alpha_1, s\alpha_2, \ldots, s\alpha_q$. Figure 2-2 illustrates the relations of $x(s)$, $x(Bs)$, $x(s\alpha_1)$, and $x(s\alpha_2)$ in a second-order tree. The dynamics of the state $x(s)$ are then modeled by a Gaussian autoregression in scale:

$x(s) = A(s)x(Bs) + B(s)w(s), \qquad w(s) \sim N(0, Q(s))$   (2.1)

This regression is initiated at the root node $s = 0$ with a state variable $x(0)$ having zero mean and covariance $P(0)$. The detail noise $w(s)$ is white, uncorrelated across scale and space, and also uncorrelated with the initial condition $x(0)$; it is assumed to be zero mean with covariance $Q(s)$. Since $x(0)$ and $w(s)$ are zero mean, $x(s)$ is a zero-mean random process. Furthermore, since the detail noise $w(s)$ is white, the process $x(s)$ is characterized completely by $P(0)$, $Q(s)$, and the autoregression parameters $A(s)$ and $B(s)$ for all nodes $s$. The stochastic structure of multiscale processes admits an extremely efficient algorithm for estimating $x(s)$ from noisy measurements $y(s)$. The corresponding measurement model is given by

$y(s) = H(s)x(s) + v(s), \qquad v(s) \sim N(0, R(s))$   (2.2)

where $H(s)$ is the measurement mapping matrix and the noise $v(s)$ is white with covariance $R(s)$ and uncorrelated with the state $x(s)$ at all nodes of the tree. Here we note that the fine-to-coarse recursion corresponds to the multiresolution analysis of signals, whereas the coarse-to-fine recursion corresponds to the multiresolution synthesis of signals, because higher-resolution detail is added at each scale.
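To make the model concrete, the coarse-to-fine recursion (2.1) and measurement model (2.2) can be simulated on a small dyadic tree. The sketch below uses scalar states; the parameter names and values (`a`, `b`, `p0`, `r`) are hypothetical:

```python
import numpy as np

# Illustrative simulation of the coarse-to-fine model (2.1)-(2.2) on a small
# dyadic tree with scalar states; all parameter values are hypothetical.
rng = np.random.default_rng(0)

def simulate_dyadic_tree(levels, a=1.0, b=1.0, p0=1.0, r=0.1):
    """Draw x(s) = a*x(Bs) + b*w(s), then y(s) = x(s) + v(s), level by level.

    tree[m] holds the 2**m scalar states at scale m; tree[0][0] is the root.
    """
    x0 = rng.normal(0.0, np.sqrt(p0))           # root state x(0) ~ N(0, P(0))
    tree = [np.array([x0])]
    for m in range(1, levels):
        parents = np.repeat(tree[-1], 2)        # each node has two offspring
        w = rng.normal(0.0, 1.0, parents.size)  # white detail noise, Q(s) = 1
        tree.append(a * parents + b * w)
    # Measurements y(s) = H x(s) + v(s) with H = 1 at every node.
    meas = [xm + rng.normal(0.0, np.sqrt(r), xm.size) for xm in tree]
    return tree, meas

tree, meas = simulate_dyadic_tree(levels=4)
print([len(x) for x in tree])
```

Setting `b=0` removes the stochastic detail, so every finer scale simply replicates the root value, which is a quick check of the recursion.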
Put simply, (2.1) and (2.2) appropriately model the latter (synthesis) case for multiresolution signal processing.

2.2.2 Markov Property of the Multi-Scale Process

It is well established that the multiscale stochastic model is analogous to a traditional Markov time-series realization [Willsky, 2002]. The relation between the two representations becomes clear once the Markov property of multiscale processes is assumed [Bouman, 1994]. To describe this property, first note that in a $q$th-order tree each node has a single parent node and $q$ children. This configuration separates the remaining nodes into $q + 1$ subtrees (i.e., 3 subtrees for the dyadic tree): one through the parent node and one through each child. The Markov property states that if $x(s)$ is the value of the state at node $s$, then conditioned on $x(s)$, the states in the corresponding subtrees extending away from $s$ are uncorrelated. For clarity, let $s_1$, $s_2$, $s_3$ be the three subsets of states separated by a common node $s$ of a dyadic tree; the Markov property then states that

$p(x_{s_1}, x_{s_2}, x_{s_3} \mid x(s)) = p(x_{s_1} \mid x(s))\, p(x_{s_2} \mid x(s))\, p(x_{s_3} \mid x(s))$   (2.3)

Property (2.3) implies that tree processes are Markov in scale from coarse to fine. Also, the conditional pdf of the state at node $s$ given the states at all previous scales depends only on the state at the parent node $Bs$. This is an intuitive but important property in multiscale signal processing, because it allows efficient inference (estimation) algorithms to be implemented on the tree structure. Specifically, it states that the sets of offspring nodes (e.g., the two sets of nodes shown in yellow and green in Figure 2-3) extending from their common parent node $s$ are decoupled by the Markov property, so the values of each set of nodes can be estimated independently. This leads to parallelized inference algorithms on the tree (e.g., the multiscale Kalman smoothing algorithm of Chapter 3).
Figure 2-3 illustrates the Markov property on a dyadic tree. The interpretation of the multiscale representation relative to a time-series realization is that the role of the state is to provide an information interface among subsets of the process [Willsky, 2002]. This interface must store just enough process information to make the corresponding process subsets conditionally uncorrelated. In the time-series case, the interface is between two subsets of the process (e.g., the past and the future), while in the multiscale case it is among multiple (e.g., $q + 1$) subsets of the process [Irving, 1995].

2.3 Conclusions

In Chapter 2, we briefly reviewed a multiscale signal model that leads to extremely efficient estimation algorithms. Because of the Markov property (2.3) of the tree structure, parallelized inference is achievable. The main importance of this framework is that it provides effective methods for fusing disparately sensed stochastic processes observed at different resolutions. The signal processing framework summarized here models such multiresolution data appropriately, resulting in data fusion algorithms that are no more complex than algorithms for filtering single-resolution data. As discussed, the same cannot be achieved with classical statistical signal models such as MRFs [Bouman, 1994]. The simple quadtree model has accordingly been widely exploited in image classification and reconstruction problems.

Figure 2-1. Multiscale tree structures. A) 1-D dyadic structure, and B) 2-D quadtree structure. The root node corresponds to the coarsest scale, while the leaf nodes comprise the finest scale.

Figure 2-2. The first four levels of a 2nd-order tree structure (dyadic tree). The parent node of $s$ is denoted $Bs$, and the two offspring are denoted $s\alpha_1$ and $s\alpha_2$.

Figure 2-3. Markov property of the dyadic tree structure.
Conditioned on node $s$, the nodes in the corresponding three subtrees extending away from $s$ are uncorrelated. (The subsets of nodes are shown in yellow for $s_1$, green for $s_2$, and grey for $s_3$, respectively.)

CHAPTER 3
DATA FUSION BY MULTISCALE KALMAN SMOOTHING ALGORITHM

3.1 Introduction

In Chapter 2, we discussed the multiscale linear state model and performed some elementary statistical analyses on the tree structure. The multiscale model is driven by white noise, is scale recursive, and is analogous to a time-domain Gaussian Markov chain [Willsky, 2002]. In Chapter 3, we investigate the problem of optimal estimation with this model and develop a generalization of the Rauch-Tung-Striebel (RTS) algorithm [Brown, 1997]. Basically, RTS consists of two estimation steps: a fine-to-coarse filtering sweep followed by a coarse-to-fine smoothing sweep. The filtering step, corresponding to a generalization of the Kalman filter to the multiscale tree model, consists of a three-step recursion: (1) measurement update, (2) fine-to-coarse prediction, and (3) fusion of pixel information from fine to coarse scale. Step (3) has no counterpart in standard time-series Kalman filtering and leads to a new scale-recursive Riccati equation [Chou & Nikoukhah, 1994]. This two-sweep estimation scheme optimally combines a stochastic multiscale model with the measurements available at different resolutions according to a Kalman gain. Theoretical studies have shown that recursion-based estimation methods efficiently support large classes of statistical processes [Brown, 1997]. As in the time-series Kalman filter, the prior information of the state-space model must be specified before the multiscale estimation algorithm can be implemented. Section 3.2 presents preliminaries for the development of the MKS algorithm, and Section 3.3 discusses fundamental aspects of Kalman filtering and smoothing.
Section 3.4 describes the MKS algorithm in detail, and Section 3.5 gives conclusions.

3.2 Preliminaries

The Kalman filter is the best linear estimator when the states or variables to be estimated are Gaussian. Several estimation methods exist, such as maximum likelihood (ML), maximum a posteriori (MAP), and minimum mean squared error (MMSE) [Kamen, 1999]. ML and MAP estimators both work with conditional probability density functions (pdfs) and seek the most likely value of the random process given the measurements. The MMSE estimator produces a globally optimal estimate by minimizing the variance over the ensemble of measurements [Kay, 1993]. The linear MMSE (LMMSE) estimator trades off accuracy (global optimality) against computational complexity. Unlike the other methods enumerated here, LMMSE does not require computation of conditional densities; instead, it depends only on the second-order statistics of the process and measurements. The LMMSE estimate can therefore be computed as a set of linear equations solved in non-iterative fashion [Kay, 1993]. If the process or the measurements are not normally distributed, i.e., not completely described by their first two moments (mean and variance), the LMMSE estimator is suboptimal; even in this case, however, its performance usually remains close to optimal [Haykin, 2002]. In our fusion studies, normality (Gaussianity) of the states and measurements is assumed throughout. For example, we assume that the topography observed by a laser sensor (e.g., LiDAR) is Gaussian; hence the elevation estimates obtained from these measurements are approximately Gaussian [Carter, 2001]. In [Johnson, 1998], a method is introduced to transform non-Gaussian-distributed data to nearly Gaussian.
Generally, the choice of this transformation depends on the data type and may require additional processing steps.

In Sections 3.3-3.4, we address the more general problem of combining sparse and disparate data types at multiple scales. To achieve this goal, a Gaussian state-space approach is chosen, leading to a Kalman filter formulation [Grewal, 1993]. As background for the multiscale Kalman filter, Section 3.3 reviews Kalman filter theory.

3.3 The Kalman Filter

The Kalman filter is a recursive stochastic estimator that minimizes the mean squared error (MSE) between the estimates and the random variables being estimated [Kay, 1993]. Figure 3-1 depicts the recursive operations of the Kalman filter [Brown, 1997]. Since the solution is computed recursively using only the estimate from the previous step and the present measurement, the standard Kalman filter has relatively low memory requirements. If the model input parameters are specified correctly, the Kalman filter is the optimal linear minimum mean-square error (LMMSE) estimator when the signal and noise distributions are Gaussian [Kay, 1993]. Conceptually, the Kalman filter balances the uncertainty of the process against the uncertainty of the measurements in a linear combination to arrive at an estimate. As a result, it is able to handle sparse or missing data in an optimal manner [Brown, 1997].

3.3.1 Linear State Model

The linear equations that relate the state and the measurement in the Kalman filter are

$x_{k+1} = \Phi_k x_k + w_k$
$y_k = H_k x_k + v_k$   (3.1)

where $x_k$ and $y_k$ represent the state and measurement vectors at step $k$, respectively; $\Phi_k$ is the state transition matrix; $w_k$ is the process noise; $H_k$ is the state-measurement mapping; and $v_k$ is the measurement noise.

3.3.2 Kalman Filtering Algorithm

The recursion steps of the Kalman filter are shown in Figure 3-1.
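These recursion steps might be sketched for a scalar model as follows; the example values (`phi`, `q`, `h`, `r`) are hypothetical:

```python
# Minimal sketch of the recursion of Figure 3-1 for the linear model (3.1):
# predict with the state model, then update with the measurement.  All
# parameter values here are hypothetical.
def kalman_step(x_prev, p_prev, y, phi, q, h, r):
    # Predict: propagate the previous estimate and its error variance.
    x_pred = phi * x_prev
    p_pred = phi * p_prev * phi + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred * h / (h * p_pred * h + r)
    x_post = x_pred + k * (y - h * x_pred)
    p_post = (1.0 - k * h) * p_pred
    return x_post, p_post

x, p = 0.0, 1.0                            # initial prior
for y in [1.1, 0.9, 1.05]:                 # noisy observations of a constant
    x, p = kalman_step(x, p, y, phi=1.0, q=0.0, h=1.0, r=0.1)
print(x, p)
```

For this constant-state example the posterior variance shrinks with each measurement, illustrating how the filter accumulates information.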
In Figure 3-1, the process noise $w$ is assumed to be white with covariance $Q$, and the measurement noise $v$ is also white with covariance $R$. The process and measurement noises are not cross-correlated:

$E[w_k w_i^T] = Q_k \delta_{ik}, \qquad E[v_k v_i^T] = R_k \delta_{ik}, \qquad E[w_k v_i^T] = 0 \text{ for all } i, k$   (3.2)

where $\delta_{ik} = 1$ for $i = k$ and 0 otherwise. The conditions on $w$ and $v$ given by (3.2) must hold for the Kalman filter to operate as an optimal LMMSE estimator. It is also necessary that $\Phi$, $Q$, $H$, and $R$ in (3.1) and (3.2) be specified. In practice, however, these parameters are not known exactly, though they are often known approximately. The performance of the filter depends on the quality of the parameter specification; in some cases, however, the Kalman filter works well even when the parameters are not perfectly known or the noise terms are not exactly white [Brown, 1997].

3.3.3 Kalman Smoothing Algorithm

The Kalman filter has been widely used to estimate the values of state variables given a model and their relationship to the measurements. It minimizes the trace of the error covariance matrix $P$ in (3.3), which is equivalent to minimizing the mean squared error between the state $x$ and the estimate $\hat{x}$ for each sample of the independent variable [Brown, 1997]. In most signal processing applications, the independent variable is indexed temporally or coordinated spatially. For the $k$th sample, the error covariance matrix is

$P(k|k) = E[e_k e_k^T] = E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T]$   (3.3)

where $E[\cdot]$ is the expectation operator. The estimate $\hat{x}_k$ is conditioned on the measurement at sample $k$ (if present) and the previous estimate of the state, $\hat{x}_{k-1}$. The Kalman equations naturally infer missing data because previously estimated state values are combined with the measurements to determine the current state; the measurements and the state model are weighted according to their variances. Kalman smoothing uses measurements subsequent to the current sample to improve the estimated state, in the sense of reduced variance [Brown, 1997].
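This two-pass idea, a forward Kalman filtering sweep followed by a backward smoothing sweep, might be sketched for a scalar model as follows; all parameter values are hypothetical:

```python
# Sketch of fixed-interval smoothing for a scalar model: the forward pass
# stores priors and posteriors, then a backward sweep applies the smoother
# gain.  All parameter values here are hypothetical.
def rts_smooth(ys, phi, q, h, r, x0, p0):
    x, p = x0, p0
    x_pri, p_pri, x_post, p_post = [], [], [], []
    for y in ys:                               # forward (filtering) sweep
        xp, pp = phi * x, phi * p * phi + q
        g = pp * h / (h * pp * h + r)          # Kalman gain
        x, p = xp + g * (y - h * xp), (1.0 - g * h) * pp
        x_pri.append(xp); p_pri.append(pp)
        x_post.append(x); p_post.append(p)
    xs, ps = x_post[:], p_post[:]
    for k in range(len(ys) - 2, -1, -1):       # backward (smoothing) sweep
        j = p_post[k] * phi / p_pri[k + 1]     # smoother gain J(k)
        xs[k] = x_post[k] + j * (xs[k + 1] - x_pri[k + 1])
        ps[k] = p_post[k] + j * (ps[k + 1] - p_pri[k + 1]) * j
    return xs, ps

xs, ps = rts_smooth([1.1, 0.9, 1.05], phi=1.0, q=0.0, h=1.0, r=0.1,
                    x0=0.0, p0=1.0)
print(xs, ps)
```

With `q = 0` (a constant state), every smoothed estimate collapses to the final filtered estimate, as expected, since later measurements carry information about all earlier samples.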
The error covariance of the smoothed estimates is

$P^s(k|N) = P(k|k) + J(k)[P^s(k+1|N) - P(k+1|k)]J^T(k)$
$J(k) = P(k|k)\Phi^T(k+1, k)P^{-1}(k+1|k)$   (3.4)

where $N$ is the total number of samples to be estimated, $k = N-1, N-2, \ldots, 0$, and $\Phi$ is the state transition matrix. The formulation in (3.4) is sometimes referred to as Rauch-Tung-Striebel (RTS) smoothing or fixed-interval smoothing [Grewal, 1993]. The basic recursion steps of the RTS algorithm for a time-series process are shown in Figure 3-2. Multiscale Kalman smoothing (MKS) is derived as a generalization of the classical RTS algorithm to the tree structure introduced in Chapter 2, with the addition of a merge step to support the data at each scale. In addition to the estimates, it provides the corresponding uncertainty (error variance) of every estimate, which is useful for quantitative analysis.

3.4 Multi-Scale Kalman Smoothing (MKS) Algorithm

To derive the multiscale Kalman filter, it is convenient to alter the notation slightly. In a 1-D process, $k$ is used to denote the recursion index of the filter. Since a multiscale process is modeled on a tree data structure, the node $s$ will replace $k$ as the recursion index. For a 2-D process, multiscale Kalman smoothing begins with a fine-to-coarse sweep up the quadtree, which is analogous to time-series Kalman filtering; the only difference is the merging step, which propagates a posteriori information from finer to coarser scales. The filtering step is followed by a coarse-to-fine sweep down the quadtree, which corresponds to Kalman smoothing. Using the scalar form, the coarse-to-fine linear dynamic model is given by

$x(s) = A(s)x(Bs) + B(s)w(s), \qquad s \in S,\ s \neq 0$
$y(s) = H(s)x(s) + v(s), \qquad s \in T \subseteq S$   (3.5)

where $x$ is the state variable and $y$ represents the measurements. The stochastic forcing function $w$ is a Gaussian white noise process with unit variance, and the measurement error $v$ is a Gaussian white noise process with scale-dependent covariance $R(s)$.
$S$ represents the set of all nodes on the quadtree, and $T$ denotes the nodes at which measurements are available. $s$ is the node index on the tree, and $s = 0$ denotes the root node. $B$ is a backshift operator in scale, such that $Bs$ is one scale coarser than $s$. $A(s)$ is the coarse-to-fine state transition operator, $B(s)$ is the coarse-to-fine stochastic detail scaling function, $H(s)$ is the measurement-state mapping, and $R(s)$ represents the variance of the measurement at node $s$. Since a wide range of natural stochastic processes, such as topography, exhibit power-law behavior in their power spectra, they can be effectively modeled as fractional Brownian motion (fBm) processes [Fieguth & Karl, 1995; Turcotte, 1997]. We therefore assume that our state process (e.g., surface elevation) follows a $1/f$ model in scale [Slatton, 2001]. Using this model, the power spectrum of the state variable $x(s)$ is represented by the multiscale model in (3.5) with the coarse-to-fine state transition operator and process noise standard deviation specified as

$A(s) = 1, \qquad B(s) = B_0\, 2^{-m(s)\mu/2}$   (3.6)

The values of $\mu$ and $B_0$ are determined by a first-order regression that matches the power spectrum of the measurements to a realization of the fBm model in log-log space. Since power spectra of discrete image data can only represent signal energy over a finite range of spatial frequencies, a 2-D Hamming window is applied to the data prior to computing the power spectra to reduce aliasing. Let $E[w(s)w^T(t)] = I\delta_{st}$, i.e., $w(s) \sim N(0, 1)$, where $\delta_{st} = 1$ for $s = t$ and 0 otherwise. Then $B^2(s)$ represents the variance of the stochastic detail that is incorporated as the resolution increases. The initial priors are specified for the process model that evolves downward.
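The scale-dependent detail standard deviation of (3.6) can be sketched as follows; the values of `B0` and `mu` are hypothetical, standing in for the log-log regression fit described above:

```python
# Sketch of the fBm-style parameterization in (3.6): A(s) = 1 and the detail
# standard deviation B(s) decays geometrically with scale m(s).  B0 and mu
# are hypothetical here; in practice they come from a log-log regression fit
# to the measurement power spectrum.
B0, mu = 2.0, 1.8

def detail_std(m):
    """Detail noise standard deviation B(s) at scale m, per (3.6)."""
    return B0 * 2.0 ** (-m * mu / 2.0)

stds = [detail_std(m) for m in range(4)]
print(stds)
```

The geometric decay means each finer scale contributes progressively less variance, which is the discrete analogue of a $1/f$ power-law spectrum.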
Assuming a zero-mean process [Fieguth & Karl, 1995], the priors for the state mean and covariance are

$\bar{x}(0) = E[x(0)] = 0, \qquad P(0) = E[x(0)x^T(0)] = \sigma_0^2$   (3.7)

where $0$ denotes the root node and $P(0)$ denotes the covariance of the state at that node. These priors are used with the recursion in (3.5) to generate a realization of the multiscale stochastic process. The resulting prior estimates of the state and state covariance at the leaf nodes are then used as the initial priors of the upward Kalman filter. In general, little is known about the process to be estimated, so $P(0)$ is chosen to be some arbitrarily large number [Gan, 2001]. The a priori process model in (3.5) is now completely specified. The corresponding upward model, which the Kalman filter will track, is

$x(Bs) = F(s)x(s) + \bar{w}(s)$
$y(s) = H(s)x(s) + v(s)$
$F(s) = A(s)P(Bs)P^{-1}(s)$
$\bar{Q}(s) = E[\bar{w}(s)\bar{w}^T(s)] = P(Bs)[I - A^T(s)P^{-1}(s)A(s)P(Bs)]$   (3.8)

Here, $\bar{Q}$ is the process noise covariance of the upward model; it is the multiscale analogue of $Q$ in the time-series Kalman filter of Section 3.3. $F(s)$ is the fine-to-coarse state transition operator. The MKS algorithm proceeds with initialization, the upward sweep, and finally the downward sweep.

Initialization: At the leaf nodes, enter the prior values

$\hat{x}(s|s+) = 0, \qquad P(s|s+) = P_s$   (3.9)

Upward sweep: The upward sweep is equivalent to a Kalman filter operating in scale, with an additional merge step. Using the initial priors defined at the leaf nodes, the algorithm proceeds from the bottom of the quadtree up to the root node:

$K(s) = P(s|s+)H^T(s)[H(s)P(s|s+)H^T(s) + R(s)]^{-1}$
$\hat{x}(s|s) = \hat{x}(s|s+) + K(s)[y(s) - H(s)\hat{x}(s|s+)]$
$P(s|s) = [I - K(s)H(s)]P(s|s+)$   (3.10)

The projection step is applied at all scales from the bottom.
For the quadtree, with $i = 1, 2, 3, 4$,

$\hat{x}(s|s\alpha_i) = F(s\alpha_i)\hat{x}(s\alpha_i|s\alpha_i)$
$P(s|s\alpha_i) = F(s\alpha_i)P(s\alpha_i|s\alpha_i)F^T(s\alpha_i) + \bar{Q}(s\alpha_i)$   (3.11)

Each group of nodes at the previous scale is merged into a single value at the current scale:

$\hat{x}(s|s+) = P(s|s+)\sum_{i=1}^{q} P^{-1}(s|s\alpha_i)\hat{x}(s|s\alpha_i)$
$P(s|s+) = \left[(1-q)P^{-1}(s) + \sum_{i=1}^{q} P^{-1}(s|s\alpha_i)\right]^{-1}$   (3.12)

where $q = 4$ for the quadtree.

Downward sweep: The upward estimate at the root node is used as the initial condition to start the downward sweep, $\hat{x}^s(0) = \hat{x}(0|0)$, where the superscript $s$ refers to a smoothed quantity. The downward sweep then proceeds down the tree:

$\hat{x}^s(s) = \hat{x}(s|s) + J(s)[\hat{x}^s(Bs) - \hat{x}(Bs|s)]$
$P^s(s) = P(s|s) + J(s)[P^s(Bs) - P(Bs|s)]J^T(s)$
$J(s) = P(s|s)F^T(s)P^{-1}(Bs|s)$   (3.13)

More detailed descriptions of the MKS algorithm can be found in [Chou & Nikoukhah, 1994; Fieguth & Irving, 1995]. The algorithm is non-iterative and has constant computational complexity per pixel, $O(M_s)$, where $M_s$ is the number of nodes at the finest scale $m = M$. Figure 3-3 illustrates the recursion steps of the MKS algorithm on a single quadtree. In MKS, the filtering operation must associate all data at the different scales and return to the finest scale to obtain the final result; thus Kalman filtering is initiated at the finest scale, and the procedure finishes with Kalman smoothing at the finest scale. Figure 3-4 shows an example of image fusion for a pair of multiresolution image sets, a sparse TOPSAR (Topographic Synthetic Aperture Radar) image at the 11th scale and a dense ERS-1 (European Remote Sensing satellite) image at the 9th scale, using the MKS algorithm. The ERS-1 data set serves as the primary measurement because it covers a wide area without data voids. At the finest scale, $y$ in (3.5) and (3.10) represents the TOPSAR data, while at the coarser scale it represents the ERS-1 data.

3.5 Conclusions

We have demonstrated the MKS algorithm using the dynamic model in (3.5). This framework leads to an efficient and highly parallelizable scale-recursive optimal estimation algorithm, obtained by generalizing the Rauch-Tung-Striebel smoothing algorithm to a hierarchical structure.
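The recursions (3.9)-(3.13) can be exercised on a toy two-level quadtree (one root, four leaves, scalar states, measurements at the leaves only). All parameter values below are hypothetical, chosen only to make the arithmetic traceable:

```python
import numpy as np

# Toy two-level quadtree: one root with q = 4 leaf children, scalar states,
# measurements at the leaves only.  All parameter values are hypothetical.
A, Qd = 1.0, 0.25                  # coarse-to-fine transition, detail variance
P0 = 1.0                           # prior state variance at the root
H, R = 1.0, 0.1                    # leaf measurement model
y = np.array([1.2, 0.8, 1.1, 0.9])

P_leaf = A * P0 * A + Qd           # prior variance at the leaves, from (3.5)

# Upward sweep: measurement update (3.10) at each leaf, priors from (3.9).
K = P_leaf * H / (H * P_leaf * H + R)
x_up = K * y                       # prior mean is zero
P_up = (1 - K * H) * P_leaf

# Fine-to-coarse prediction (3.11), with F and Qbar as in (3.8).
F = A * P0 / P_leaf
Qbar = P0 - F * F * P_leaf
x_pred = F * x_up
P_pred = F * F * P_up + Qbar       # identical for every leaf here

# Merge the q = 4 predictions at the root (3.12).
P_root = 1.0 / ((1 - 4) / P0 + 4.0 / P_pred)
x_root = P_root * np.sum(x_pred) / P_pred

# Downward sweep (3.13): smooth the leaves given the root estimate.
J = P_up * F / P_pred
x_s = x_up + J * (x_root - x_pred)
P_s = P_up + J * J * (P_root - P_pred)
print(x_root, x_s, P_s)
```

Note that the smoothed leaf variance comes out below the filtered leaf variance, illustrating the variance reduction the downward sweep provides.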
It involves a Kalman filter for the measurement update and fine-to-coarse prediction at each scale, followed by a smoothing step for the data-filling process. An example was given to illustrate the potential of the MKS algorithm for multiresolution data fusion and statistical regularization problems.

Figure 3-1. Recursion diagram of a Kalman filter.

Figure 3-2. Two sweep steps of the time-series Rauch-Tung-Striebel algorithm. 1) Forward sweep (Kalman filtering): optimal inference on the hidden variables $x(t)$ given a collection of past and present measurements $y_0, y_1, \ldots, y_t$; and 2) Backward sweep (smoothing): recursively computes the estimates given all measurements $y_0, y_1, \ldots, y_T$.

Figure 3-3. The MKS algorithm. A) The upward sweep (Kalman filtering) computes (3.9)-(3.12) from fine to coarse scale, and B) the downward sweep (Kalman smoothing) computes (3.13) from coarse to fine scale.

Figure 3-4. An example of image fusion employing the MKS algorithm. A) ERS-1 elevation data spaced at 20 m, B) TOPSAR elevation data spaced at 5 m, C) fused estimate at the finest scale (the 11th scale), and D) RMS error of the final estimate. All elevation units are meters.

CHAPTER 4
APPLICATIONS

4.1 Introduction

We have seen that the MKS algorithm is a globally optimal estimator, in the mean-square-error sense, for fusing remotely sensed multiresolution data, and that it can be readily parallelized because of its Markov property. In some applications, however, we encounter situations in which the standard MKS algorithm of Chapter 3 is not applicable to the image fusion problem at hand. In such cases, we consider new multiresolution image fusion methods that exploit properties of the multiscale framework. Thus, in Chapter 4, we investigate more challenging image fusion cases by introducing new fusion algorithms that still yield efficient and accurate estimation results.
In the first application, we are asked to fuse a large, highly sparse fine-scale image with a dense coarse-scale image. In this case, a computationally moderate fusion scheme can be obtained by exploiting the Kalman and Markov properties of the quadtree structure. This new implementation is referred to as reduced-complexity MKS (RC-MKS) [Slatton, 2005]. The other fusion scenario arises when multiple measurements are available at the same resolution (or scale). The natural approach for this situation is to build a vector-valued MKS, but doing so increases the dimension of the measurement model in (3.5), which causes accuracy problems because a matrix of prohibitively high dimension must be inverted. We therefore mitigate this computational difficulty by replacing the covariance form of the Kalman recursion with the information form of the MKS algorithm, which leads to an iterative serial measurement update [Jhee, 2005]. The image fusion studies explored in Chapter 4 are based on the work in [Slatton, 2005] and [Jhee, 2005], respectively.

4.2 Reduced-Complexity MKS (RC-MKS)

4.2.1 Motivations

Slatton et al. [Slatton, 2000] successfully used MKS to fuse topographic elevations derived from high-resolution interferometric radar and airborne LiDAR data. In many remote sensing applications, however, it is desirable to capture both regional and local structure. In such cases, the resolutions of the component data sets may differ by an order of magnitude or more, and the highest-resolution measurements may be sparse relative to the coarse-scale measurements. Such is the case in coastal-zone surficial mapping: for example, national data sets generally cover entire coastlines, while small state-funded acquisitions might cover only a few tens of square kilometers. In a standard implementation of MKS in which two-dimensional surface elevation data are fused, the full set of recursive operations is performed at every node in the quadtree.
This approach is highly inefficient if the finest scale (the set of leaf nodes in the quadtree) is sparsely populated with measurements. In a Kalman filter, the presence of a measurement (more precisely, the measurement residual, known as the innovation) at a particular recursion index represents the injection of new information; if no measurement is present, the posterior estimate is simply the prior (predicted) estimate. In this work, multiscale topographic and bathymetric data acquired over the Florida coast are fused together. We develop a reduced-complexity version of MKS to exploit this commonly encountered sparse data configuration: an efficient implementation of the MKS algorithm that indexes the leaf nodes so that only fine-scale nodes containing data are considered in the full recursion.

4.2.2 Sparse MKS Implementation

In data fusion applications where the measurement scales differ by an order of magnitude or more, it is quite common that the finest-scale data are available over only a small subset of the quadtree leaf nodes. This case is shown in Figure 4-1. New information is presented to a Kalman estimator through the measurements, and in the absence of measurements the prior estimates are simply propagated. Thus, we need only consider those subtrees containing at least one leaf node at which a measurement is available (see Figure 4-2). Let $m = N$ be the level where the dense coarse-scale measurements are available, and $m = M$ be the level where the finest-scale data are available. We define an indicator (flag) matrix to store the locations of populated subtrees. The flag matrix is $2^N \times 2^N$, i.e., the size of the coarse-scale data matrix. To find valid subtrees, we tile the leaf nodes with a $2^{M-N} \times 2^{M-N}$ window and set the corresponding entry of the flag matrix to 0 if no node in the corresponding tile contains a measurement; otherwise, the entry is set to 1.
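The tiling just described might be sketched with array reshaping as follows; the grid sizes and the sparsity pattern are hypothetical:

```python
import numpy as np

# Sketch of the subtree flagging of Section 4.2.2: tile the fine grid with
# 2**(M-N) x 2**(M-N) windows and flag each tile containing at least one
# measurement.  Grid sizes and the sparsity pattern are hypothetical.
M, N = 5, 3                                  # fine and coarse scale levels
tile = 2 ** (M - N)                          # 4 x 4 leaf nodes per tile
fine = np.full((2 ** M, 2 ** M), np.nan)     # mostly empty fine-scale data
fine[3, 5] = 1.7                             # two sparse measurements
fine[20, 30] = 2.2

# The flag matrix is 2**N x 2**N: an entry is True iff its tile holds data.
has_data = ~np.isnan(fine)
flags = has_data.reshape(2 ** N, tile, 2 ** N, tile).any(axis=(1, 3))

rows, cols = np.nonzero(flags)               # populated subtrees (R_N, C_N)
frac_pruned = 1.0 - flags.mean()             # fraction of subtrees skipped
print(list(zip(rows.tolist(), cols.tolist())), frac_pruned)
```

The `reshape`/`any` reduction touches each leaf node once, which is consistent with the claim that flagging adds no meaningful cost relative to specifying the measurement mapping.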
This can be accomplished at no extra computational cost during the specification of $H$. Using the flag matrix, we can obtain ranges of row and column indices ($R_M$ and $C_M$) at the finest scale for each valid subtree as

$R_M = 2^{M-N}(R_N - 1) + 1 : 2^{M-N}R_N$
$C_M = 2^{M-N}(C_N - 1) + 1 : 2^{M-N}C_N$   (4.1)

where $R_N$ and $C_N$ are the row and column indices of the flag-matrix entries that are set to 1. Due to the Markov property of MKS, each subtree can be processed independently. In practice, however, it is convenient to concatenate the subtrees so that their bases form a single rectangular matrix. That matrix supports $2^{M-N} \times 2^{M-N}N_{st}$ leaf nodes, where $N_{st}$ is the number of subtrees. Other MKS parameters, such as $H$, are similarly permuted to correspond to the concatenated subtree structure. Subtrees connecting the leaf nodes to scale $N$ that do not contain measurements are not explicitly processed, since their priors are propagated directly to scale $N$. The reduction in computational complexity afforded by this implicit pruning of the quadtree depends on (1) the size of the subtree blocks, which is determined by the difference between scales $M$ and $N$, and (2) the aggregation of the fine-scale data, which determines the number of subtrees $N_{st}$. The reduction in floating point operations is determined by the reduction in the number of leaf nodes that must be processed in the recursion, which is given by

$E\% = \dfrac{2^{2M} - 2^{2(M-N)}N_{st}}{2^{2M}} \times 100$   (4.2)

On the downward sweep, the full Kalman smoothing recursion need only proceed down the populated subtrees to the leaf nodes. The remaining coarse-scale estimates can be propagated to scale $M$ via quadtree (nearest-neighbor) or linear interpolation.

4.2.3 Image Fusion Results Using RC-MKS

This section presents the application of the proposed method to fusing data sets over the Florida coast.
The topographic LiDAR measurements [IHRC, 2004] were available at a 5 m grid spacing ($M = 13$), and the NGDC measurements [NGDC, 2005] were resampled from their original 90 m spacing to 80 m ($N = 9$) so that the data sets would differ in resolution by an integer power of 4. Given this image pair, $M - N = 4$ and $N_{st} = 48{,}501$, so the support of the concatenated subtrees at the finest scale consists of a $776{,}016 \times 16$ array. From (4.2), we can calculate the expected computational savings. In the standard MKS implementation, the number of leaf nodes to be processed is $2^M \times 2^M$, i.e., $8192 \times 8192$ for $M = 13$. For the data sets in this work, the reduced leaf node set is $2^{M-N} \times 2^{M-N} \times N_{st}$, i.e., $16 \times 16 \times 48{,}501$, which is a reduction of 81.5% in the number of nodes that must be processed at each scale below $m = N$. It is also possible to measure the impact of pruning the quadtree in terms of the memory required for a non-parallelized implementation of MKS. By reducing the number of subtrees between scales $M$ and $N$ that must be processed, we also significantly reduce the amount of memory required to store the values of filter parameters and estimates at each level of the quadtree. Without loss of generality, we let a $1 \times 1$ matrix represent a single byte and compare how much memory is used by the MKS algorithm below scale $N$. The columns of Table 4-1 list the required memory for the up and down sweeps of the standard MKS and of the proposed method. The realized memory savings between the standard MKS that uses the full quadtree and the reduced-complexity MKS (RC-MKS) total 81.4%, which corresponds well to the 81.5% savings predicted by (4.2). Using the proposed method, we obtained a total reduction in floating point operations of 82.59% in the up sweep and 72.87% in the down sweep. The resulting fused surface elevations and their uncertainty (square-root values) are shown in Figure 4-3.

4.2.4 Conclusions

In this application, we investigated a large-scale image fusion problem whose computational load cannot be afforded by standard MKS.
When the data sets to be fused differ in resolution by an order of magnitude or more and the fine-scale measurements are heavily sparse, the standard MKS algorithm is inefficient. We reduce the number of floating point operations per node and also the number of nodes in the quadtree that are processed, and we have verified that this new implementation leads to a dramatic reduction in computational complexity.

4.3 Vector-Valued MKS

4.3.1 Motivations

When multiple measurements are available at the same scale, the natural approach is to implement a vector-valued MKS. The major difficulty when fusing multiple data sets at a particular scale is that the computational complexity grows as $O(m^3)$, where $m$ is the number of measurement sets. We ameliorate this problem by employing a serial measurement update in an iterative way. We also perform the full MKS recursion only on nodes where measurements are available, thus reducing the number of a posteriori computation steps.

4.3.2 Standard MKS Using Vector-Valued Measurements

If we try to combine $m$ measurements in a single MKS iteration using an $m \times 1$ vector, the speed and accuracy of the MKS algorithm suffer, because large matrices must be processed, which increases the computational load and round-off error. For a scalar state variable $x(s)$, let $Y(s)$ be an $m \times 1$ measurement vector and $V(s)$ its corresponding noise vector, whose elements are independent.
The measurement model in (3.5) then becomes

$Y(s) = H(s)x(s) + V(s)$   (4.3)

where $H(s) = [1\ 1\ \cdots\ 1]^T$ with dimension $m \times 1$. If we directly apply vector-valued measurements to the Kalman filter recursions in [Chou & Benveniste, 1994], the Kalman gain expression is

$K(s) = P(s|s+)H^T(s)[H(s)P(s|s+)H^T(s) + R(s)]^{-1}$   (4.4)

The expression in (4.4) requires an $m \times m$ matrix inversion, which becomes problematic for large $m$.

4.3.3 The Information Form of the Kalman Filter

Using the matrix inversion lemma [Strang, 1989], the standard (covariance-form) Kalman filter equations can be rewritten in an information form [Brown, 1997]. For an $m \times 1$ measurement vector, the error variance and Kalman gain can be written as

$P^{-1}(s) = P^{-1}(s|s+) + \sum_{i=1}^{m} R_i^{-1}(s)$   (4.5)

$K(s) = P(s)\,[R_1^{-1}(s)\ R_2^{-1}(s)\ \cdots\ R_m^{-1}(s)]$   (4.6)

where $R_i(s)$ is the $i$th diagonal element of the measurement noise covariance matrix. In (4.5) and (4.6), $H(s) = 1$ for $s \in T$ and is zero otherwise. The estimate is then

$\hat{x}(s) = \hat{x}(s|s+) + K(s)(Y(s) - H(s)\hat{x}(s|s+))$   (4.7)

We note that two matrix inversions are required to obtain the inverse of the a priori error variance at each recursion step. The merging and projection steps remain the same as in the standard form of MKS.

4.3.4 Efficient Measurement Update Methods

4.3.4.1 Serial measurement updates method

In the case where the covariance matrix $R(s)$ of $V(s)$ is diagonal, it is possible to implement MKS using a serial measurement update method by treating each component of $Y(s)$ as a single independent measurement [Grewal, 1993]. By replacing
The filter implementation with this method requires $m$ iterations of the measurement update, using each row of $H(s)$ as a measurement mapping vector and the diagonal elements of $R(s)$ as the corresponding (scalar) measurement noise variances. The updating can be implemented iteratively in the following manner. For $i = 1, 2, \ldots, m$, define

$$P_{[i]}^{-1}(s) = P_{[i-1]}^{-1}(s) + H_{[i]}^T(s)\,R_{[i]}^{-1}(s)\,H_{[i]}(s) \qquad (4.8)$$

where $H_{[i]}(s)$ is the $i$th row of $H(s)$ and $R_{[i]}(s)$ is the $i$th diagonal element of $R(s)$. Let $P_{[i]}(s)$ be the error variance associated with $H_{[i]}(s)$ and $R_{[i]}(s)$. Using (4.8), define the Kalman gain at iteration $i$ as

$$K_{[i]}(s) = P_{[i]}(s)\,H_{[i]}^T(s)\,R_{[i]}^{-1}(s) \qquad (4.9)$$

Using the expressions in (4.8) and (4.9), the sequentially generated Kalman gain after $m$ iterations is

$$K(s) = \big[K_{[1]}(s)\;\;K_{[2]}(s)\;\;\cdots\;\;K_{[m]}(s)\big] = \big[C_{[1]}R_1^{-1}\;\;C_{[2]}R_2^{-1}\;\;\cdots\;\;C_{[m]}R_m^{-1}\big] \qquad (4.10)$$

where

$$C_{[i]} = \big(P_{[i-1]}^{-1}(s) + R_i^{-1}(s)\big)^{-1} \quad \text{for } i = 1, 2, \ldots, m \qquad (4.11)$$

From (4.6) and (4.10), we note that the difference between the two expressions is the set of multiplication factors $C_{[1]}, C_{[2]}, \ldots, C_{[m]}$ applied to the elements of $[R_1^{-1}(s)\;R_2^{-1}(s)\;\cdots\;R_m^{-1}(s)]$. If we set every $C_{[i]}$ in (4.10) to

$$C_{[i]} = \Big(P^{-1}(s\,|\,\bar{s}) + \sum_{i=1}^{m} R_i^{-1}(s)\Big)^{-1} = P(s) \qquad (4.12)$$

then the Kalman expressions in (4.6) and (4.10) become identical. So, at a scale where multiple measurements are available, (4.12) is computed once and substituted for $P_{[i]}(s)$ for all $i = 1, 2, \ldots, m$ in (4.9). The iteration steps in (4.9) are then repeated to obtain each entry of the Kalman gain in (4.10) sequentially.

This approach is particularly useful when the number of recursions (nodes) is large, as it is for image fusion. For example, if $k$ measurements are available at the $j$th scale of the quadtree, then the total number of measurement values at that scale becomes $2^j \times 2^j \times k$. As $k$ and $j$ increase, handling the measurement vector and the other parameters of the MKS algorithm becomes impractical by direct application of vector-valued MKS.
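The equivalence of the serial information-form update and the batch vector-valued gain is easy to check numerically for a scalar state. The sketch below (with hypothetical values, not the dissertation's data) performs the batch update with an explicit $m \times m$ inverse, and the serial update as $m$ scalar steps in the spirit of (4.8)–(4.9), assuming independent measurement noises:

```python
import numpy as np

def batch_update(x_prior, P_prior, y, r):
    """Batch vector-valued update: inverts the m x m innovation covariance, cf. (4.4)."""
    m = len(y)
    H = np.ones((m, 1))                       # every measurement observes the scalar state
    S = P_prior * (H @ H.T) + np.diag(r)      # innovation covariance, m x m
    K = P_prior * H.T @ np.linalg.inv(S)      # 1 x m Kalman gain
    x = x_prior + (K @ (y - x_prior * np.ones(m))).item()
    P = P_prior - (K @ H).item() * P_prior
    return x, P

def serial_update(x_prior, P_prior, y, r):
    """Serial information-form update: m scalar steps, no matrix inversion."""
    x, P = x_prior, P_prior
    for yi, ri in zip(y, r):
        P = 1.0 / (1.0 / P + 1.0 / ri)        # information add, cf. (4.8)
        x = x + (P / ri) * (yi - x)           # scalar gain K = P / r_i, cf. (4.9)
    return x, P

y = np.array([3.1, 2.8, 3.4])                 # three co-located measurements (hypothetical)
r = np.array([0.5, 0.8, 0.4])                 # their independent noise variances
xb, Pb = batch_update(2.0, 1.0, y, r)
xs, Ps = serial_update(2.0, 1.0, y, r)
```

Both routines return the same posterior, but the serial version never forms or inverts a matrix, which is the source of the complexity and round-off savings claimed above.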
Given the prior estimate $\hat{x}(s\,|\,\bar{s})$, the innovation term in (4.7) is linearly combined with the Kalman gain in (4.10), so the estimate update equation can be rewritten as

$$\hat{x}(s) = \hat{x}(s\,|\,\bar{s}) + \sum_{i=1}^{m} K_i(s)\big(Y_i(s) - \hat{x}(s\,|\,\bar{s})\big) \qquad (4.13)$$

The summation term in (4.13) can now also be replaced with $m$ iterations of update steps: at the $i$th iteration, the innovation associated with the $i$th element of the measurement vector is multiplied by the gain $K_i(s)$ and stored for the next iteration update.

4.3.4.2 Selective measurement update method

Typically, not every scale on the quadtree is populated with measurements. The error variance $P(s)$, Kalman gain $K(s)$, and updated estimate $\hat{x}(s)$ at those scales do not require the full MKS recursion. In such a case, since $H(s) = 0$, we have

$$P^{-1}(s) = P^{-1}(s\,|\,\bar{s}), \qquad K(s) = 0, \qquad \hat{x}(s) = \hat{x}(s\,|\,\bar{s}) \qquad (4.14)$$

We can simply merge and project the information in (4.14) at the current scale upward so that it is used as prior information for the next coarser scale. This can significantly reduce the computational load when just a few scales are occupied with measurements. When the upward sweep reaches a scale where measurements are present, the normal recursion steps are resumed. All merged and updated estimates and error variances at each scale must be stored, because these values are required for the downward smoothing process.

4.3.5 Image Fusion Results Using Vector-Valued MKS

Results are presented for fused data sets over the Finke River in Australia. The Finke River is located in Central Australia, where it serves as a long-term study site for fluvial geomorphology and paleohydrology. The river flows in a southerly direction between two mountain ranges and across lowland plains [Slatton, 2000]. Space-based InSAR data were acquired over the Finke River in 1996 from an ERS-1/2 tandem acquisition. The resulting DEM covers approximately 2,500 square kilometers and was processed to a 20 m x 20 m spatial posting.
Multiple airborne InSAR images were later acquired in 2000 by the NASA/JPL TOPSAR sensor as part of the PacRim 2000 TOPSAR deployment. These flightlines are approximately 10 km x 50 km with spatial resolutions of 5 m. The TOPSAR lines were flown with different headings, yielding diversity of viewing angle in regions of overlap [Slatton, 2001]. Relative to the ERS data, the TOPSAR data have higher spatial resolution and smaller height uncertainty; however, the large incidence angles, which are common in airborne SAR data, lead to numerous data dropouts due to radar shadowing, layover, and low-backscatter areas.

We examine fusing ERS and TOPSAR DEMs to investigate the impact of resolution and multiple acquisitions on DEM quality in the context of different morphologic features such as elevation, gradient, and curvature. The data sets consist of a single 512 x 512 ERS DEM and three 2048 x 2048 TOPSAR DEMs (see Figure 4-4). Coherence information was available so that measurement height uncertainty could be computed. Zebker et al. [Zebker, 1992] previously derived the expression for the standard deviation of height uncertainty ($\sigma_h$) from side-looking geometric parameters and coherence information. Table 4-2 summarizes the system parameters and average standard deviations of error for each data set at the finest scale. Each 2048 x 2048 TOPSAR image is treated as a single independent measurement, and the proposed MKS algorithm begins with the following error variance equation at the finest scale:

$$P^{-1}(s) = P^{-1}(s\,|\,\bar{s}) + \sum_{i=1}^{3} R_{\mathrm{Top}_i}^{-1}(s) \qquad (4.15)$$

where $R_{\mathrm{Top}_i}(s)$ is the measurement noise variance of the $i$th TOPSAR DEM. Also, once the updated estimate is obtained at the finest scale and projected to a coarser scale, the normal Kalman recursion steps in (4.5)-(4.7) are used in the remainder of the upward sweep. Standard deviations of the estimation error for MKS implemented with single TOPSAR images and with the vector-valued TOPSAR measurement vector are shown in Table 4-3.
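The selective shortcut in (4.14), combined with the per-scale fusion pattern of (4.15), can be sketched as a schematic 1-D upward sweep. This is an illustration only: sibling merging is omitted, the scale-to-scale projection is reduced to adding a process-noise term q, and all numbers are hypothetical rather than the Finke River values:

```python
def selective_sweep(scales, x0, P0, q=0.2):
    """Fine-to-coarse sweep over a scalar state. `scales` is a list of
    measurement lists (finest first); an empty list triggers the (4.14)
    shortcut: K = 0, so estimate and variance simply carry over."""
    x, P = x0, P0
    trace = []
    for meas in scales:
        for y, r in meas:                 # serial scalar updates where data exist
            P = 1.0 / (1.0 / P + 1.0 / r)
            x = x + (P / r) * (y - x)
        trace.append((x, P))              # stored for the later downward smoothing
        P = P + q                         # project to the next coarser scale
    return trace

# Three measurements at the finest scale, two measurement-free scales,
# then one coarse measurement (hypothetical variances).
scales = [[(3.1, 9.0), (2.9, 8.4), (3.3, 5.5)], [], [], [(2.5, 300.0)]]
trace = selective_sweep(scales, x0=0.0, P0=100.0)
```

At the empty scales the loop body does no arithmetic beyond the projection, which is where the computational saving of the selective method comes from; the fused variance at the finest scale is smaller than that of any single measurement, mirroring the Table 4-3 trend.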
By employing the serial and selective updating of sections 4.3.4.1 and 4.3.4.2, we obtain a significant reduction in computational complexity. For quadtrees that contain data at just two scales, this approach can lead to dramatic computational savings of 42% or more. The fused results at the finest scale (elevations) are shown in Figure 4-5.

4.3.6 Conclusions

We have examined a common data fusion problem in which multiple sets of measurements are available at the same scale. We presented a method to reduce the operational complexity of MKS by introducing sequential and selective measurement updating. We show that DEMs resulting from the fusion of multiple data sets offer higher spatial resolution and lower height uncertainty. This shows the potential to significantly improve the estimation of surface morphologic parameters such as elevation, gradient, and curvature.

Figure 4-1. The finest-resolution measurement is sparse relative to the coarse-scale measurement. The highly sparse fine-scale image is at the 13th scale (5 m spacing), and the dense coarse-scale image is at the 9th scale (80 m spacing) on the quadtree. The data voids are represented by grey. All elevation units are in meters.

Figure 4-2. Set of subtrees for which finest-scale data are available. (Each set of valid subtrees is represented by yellow.)

Figure 4-3. Perspective views of topographic and bathymetric elevations fused using the RCMKS method. The coverage area is 40 km x 40 km. A) Fused estimates (elevations) at the 13th scale, and B) its RMS error. All elevation units are in meters.

Figure 4-4. Image data sets to be fused. A) ERS-1 elevation data spaced by 20 m, B) 1st TOPSAR elevation data spaced by 5 m, C) 2nd TOPSAR elevation data spaced by 5 m, and D) 3rd TOPSAR elevation data spaced by 5 m. The dark blue area in each TOPSAR image represents data voids. All elevation units are in meters.

Figure 4-5. Fusion results using vector-valued MKS.
A) Fused estimate, and B) its height uncertainty using the serial measurement update method. All elevation units are in meters.

Table 4-1. Comparison of memory storage using RCMKS and standard MKS. (Mb: megabytes, Gb: gigabytes.)

m      | Memory saving method (Up / Down) | Standard method (Up / Down)
10     | 25.22 Mb / 13.19 Mb              | 136.31 Mb / 71.30 Mb
11     | 100.88 Mb / 52.77 Mb             | 545.26 Mb / 285.21 Mb
12     | 403.53 Mb / 211.08 Mb            | 2.181 Gb / 1.141 Gb
13     | 1.614 Gb / 844.31 Mb             | 8.724 Gb / 4.563 Gb
Total  | 3.27 Gb                          | 17.58 Gb

Table 4-2. ERS and TOPSAR system parameters and $\sigma_h$ (RMS error variance).

Parameters                   | ERS-1/2 (spaceborne) | TOPSAR (airborne)
Baseline (m)                 | 136                  | 5
Number of multilooks         | 5                    | 9
Operating wavelength (m)     | 0.0567               | 0.0567
Baseline angle (degrees)     | 65.08                | 6.7
Platform altitude (km)       | 793.6                | 7.73
Slant range (near/far) (km)  | 851.2 / 861.4        | 13.61 / 23.84
Incidence angle (degrees)    | 18.37                | 25.36
Average $\sigma_h$ (m)       | 17.38                | #1: 2.98, #2: 2.89, #3: 2.35

Table 4-3. Comparison of error performance using single-TOPSAR MKS and the proposed method. (All elevation units are in meters.)

Data format used (Figure 4-4)              | #1   | #2   | #3   | Vector-valued TOPSAR
Average standard deviation of error (m)    | 1.79 | 2.01 | 2.12 | 1.22

CHAPTER 6 is preceded by the following chapter.

CHAPTER 5
QUADTREE IMAGE MODEL AND BLOCKY ARTIFACT

5.1 Introduction

The multiscale autoregressive model described in Chapter 2 provides an attractive alternative to traditional monoscale Markov models. In such a model, additional coarse-scale nodes, which may or may not be directly linked to any measurement, are introduced above the fine scale. These nodes are hidden (or auxiliary) variables connected to describe the fine-scale stochastic process that is our primary interest. If properly designed, the resulting tree structure can accurately model a wide range of stochastic processes. Despite the benefits enumerated in this dissertation, the most significant drawback of the tree structure is the presence of blocky artifacts in the computed estimates (see Figure 5-1).
The problem is caused by the fact that spatially adjacent, and hence supposedly highly correlated, fine-scale nodes may be widely separated in the quadtree structure [Fieguth & Irving, 1995]. As a result, dependencies (correlations) between these nodes may be inadequately captured, causing blocky pixel artifacts. One potential solution to the blocky artifact problem is to build edges among pairs of nodes at every scale. Such in-scale interactions should be able to account for short-range dependencies neglected by the standard quadtree model.

5.2 Example: Comparisons of Covariance Structures among Three Different Graphical Models

5.2.1 Introduction

This example is presented to examine the source of blockiness in our previous fusion studies (Figure 5-1). We compare the covariance structures of three different models: (1) a monoscale Markov chain, (2) the dyadic multiscale tree of Chapter 2, and (3) a spatially interacting tree, namely the pyramidal tree. These three structures (graphical models) are depicted in Figure 5-2. For a Gaussian random vector $x = [x_1, x_2, \ldots, x_n]^T$, the joint pdf is parameterized by its mean vector ($m_x$) and covariance matrix ($P$). The joint pdf of the Gaussian process $x$ is then

$$p(x) \propto \exp\!\Big(-\tfrac{1}{2}\,x^T \Lambda\, x + z^T x\Big) \sim N\big(\Lambda^{-1}z,\; \Lambda^{-1}\big) \qquad (5.1)$$

$$\Lambda = P^{-1} = \big(E[(x - m_x)(x - m_x)^T]\big)^{-1}, \qquad z = \Lambda\, m_x \qquad (5.2)$$

where the joint pdf is represented in information form and $\Lambda$ is the information matrix, which is the inverse covariance matrix of the random vector $x$ (i.e., $\Lambda = P^{-1}$) [Kay, 1993]. If $x$ is Markov with respect to a graph, then the information matrix is sparse with the structure of the specific graphical model: in $\Lambda$, the edge links between neighboring nodes are represented by nonzero off-diagonal entries. For a smooth image field, we employ the prior model (5.3), which controls the gradient quantities by penalizing the differences between neighboring nodes [Willsky, 2002].
If we denote the neighboring nodes of $x_i$ by $N(i)$, then $p(x)$ in (5.1) becomes

$$p_{\mathrm{prior}}(x) \propto \exp\!\Big(-\alpha \sum_{i \in V}\sum_{j \in N(i)} (x_i - x_j)^2\Big) = \exp\big(-x^T \Lambda_{\mathrm{prior}}\, x\big) \qquad (5.3)$$

where $V$ is the set of all nodes in the graph structure. The parameter $\alpha$ controls how strongly we penalize the differences between a node and its neighboring nodes. By (5.1) and (5.2), $z_{\mathrm{prior}} = 0$. Suppose we have a noisy measurement $y = Cx + v$ with $v \sim N(0, R)$. Then the conditional distribution of $x$ given $y$, denoted $p(x\,|\,y)$, is

$$p(x\,|\,y) \propto \exp\!\Big(-\tfrac{1}{2}\,x^T\big(\Lambda_{\mathrm{prior}} + C^T R^{-1} C\big)x + \big(z + C^T R^{-1} y\big)^T x\Big) \qquad (5.4)$$

5.2.2 Estimation of the Gaussian Process

The conditional distribution in (5.4) is also Gaussian, so that $p(x\,|\,y) \sim N(\hat{x}, \hat{P})$. For Gaussian cases, given the noisy measurement $y = Cx + v$, the conditional mean is the best estimate of the unknown vector $x$ under the MMSE criterion. Specifically, it is both the Bayes least-squares estimate minimizing the mean squared error $E\{\|x - \hat{x}\|^2 \mid y\}$ and the maximum a posteriori (MAP) estimate maximizing the conditional pdf in (5.4) [Kay, 1993]. The MAP estimate and its error covariance are

$$\hat{x} = \arg\max_x\, p(x\,|\,y) = \hat{\Lambda}^{-1}\hat{z} \qquad (5.5)$$

$$\hat{P} = E\{(x - \hat{x})(x - \hat{x})^T \mid y\} = \hat{\Lambda}^{-1} \qquad (5.6)$$

where $\hat{\Lambda} = \Lambda_{\mathrm{prior}} + C^T R^{-1} C$ and $\hat{z} = z + C^T R^{-1} y$. We can see from (5.5) and (5.6) that if the number of variables (nodes on the graph) becomes large, direct algebraic computation of the optimal estimate and its error covariance is not tractable, since we must invert a large matrix ($\hat{\Lambda}$). For graph models without cycles (loops), as shown in Figure 5-2 (B), however, there are inference methods that compute the optimal estimate and its error variance very efficiently (e.g., the MKS of Chapter 3).

5.2.3 Multi-Scale Modeling Using a Pyramidal Tree

While the tree structure introduced in Chapter 2 is more advantageous in capturing correlation than a monoscale structure (a monoscale MRF), its ability to capture long-range correlation is still limited by the lack of connections between adjacent nodes within each scale. To overcome this limitation, the pyramidal tree structure in Figure 5-2 (C) can be exploited.
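As a concrete sketch of (5.3)–(5.6), the fragment below builds the smoothness prior for a small 1-D chain (so $\Lambda_{\mathrm{prior}}$ is $\alpha$ times the graph Laplacian), attaches noisy measurements at the two end nodes only, and solves the information-form system for the MAP estimate and its error covariance. All sizes and values are illustrative, not from the dissertation's experiments:

```python
import numpy as np

n, alpha = 8, 1.0                        # small 1-D chain; smoothness weight of (5.3)

# Graph Laplacian of the chain: x^T L x = sum over edges of (x_i - x_j)^2
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
Lam_prior = alpha * L                    # sparse prior information matrix

# Noisy measurements of the two end nodes only: y = C x + v, v ~ N(0, R)
C = np.zeros((2, n)); C[0, 0] = 1.0; C[1, n - 1] = 1.0
Rinv = np.diag([10.0, 10.0])             # R^-1, i.e. measurement variances of 0.1
y = np.array([0.0, 7.0])

Lam_post = Lam_prior + C.T @ Rinv @ C    # posterior information matrix, cf. (5.4)
x_map = np.linalg.solve(Lam_post, C.T @ Rinv @ y)   # MAP estimate, cf. (5.5)
P_post = np.linalg.inv(Lam_post)         # posterior error covariance, cf. (5.6)
```

The estimate interpolates smoothly between the two measured endpoints, and the posterior variance is largest at the interior nodes farthest from any measurement; the cost of the dense solve is what makes the tree-structured alternatives of this chapter attractive.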
By allowing statistical links (statistical dependencies) between pairs of neighboring nodes within each scale of the quadtree structure, the interactions among all nodes in the graph can be captured well.

5.2.3.1 Prior model of the pyramidal tree

In this section, we describe the basic covariance structure of the Gaussian pyramidal tree (Figure 5-2 (C)) using the prior model introduced in (5.3). Using the obtained covariance structure, we compare correlation decay with the 1-D monoscale model (Figure 5-2 (A)) and the dyadic tree (Figure 5-2 (B)). The results will show that the pyramidal tree captures longer correlation at the finest scale than both the monoscale model and the dyadic tree, and does not produce the blockiness experienced with the dyadic tree model.

The prior model of a Gaussian graphical model can be represented by the corresponding information matrix $\Lambda = P^{-1}$. The matrix for the prior that we use in the pyramidal tree structure consists of two components:

$$\Lambda_{\mathrm{prior}} = \Lambda_t + \Lambda_s \qquad (5.7)$$

where $\Lambda_t$ accounts for the statistical relations between different scales and $\Lambda_s$ represents edges within each scale. For the pyramidal tree, $\Lambda_t$ corresponds to a dyadic tree whose parent-child pairs are connected by edges, and $\Lambda_s$ corresponds to nearest-neighbor grid models within each scale (see Figure 5-3). We describe the structures of $\Lambda_t$ and $\Lambda_s$ separately.

5.2.3.2 Inter-scale structure (for $\Lambda_t$)

In the dyadic tree, a parent node and its two children nodes are connected by a statistical dependency. Let $V_m$ be the set of all nodes at scale $m$, where $m = 1, 2, \ldots, M$, and let $V_{m+1}$ be the set of nodes one scale finer than $m$. Let $C(i)$ be the
Using prior model in (5.3), is defined as follows: m(,) )(Cixt2 (1,) 1(exp()exp( ))mM T tmm i m miVjxx x j (5.8) where the parameter m (Figure 54) determines how we control the difference between the value at a node at scale mand the values at each of its children node at scale 1 m Then is decomposed by scale as follows: t11 2 21 2 23 ,111 1121() 0 0 00MM MNT TN t MT NqI cI 1100T MIm, and ( where 5.9) is the number of nodes at scale is the mN mNImN mNidentity matrix The constant q is the order tree structure. ,1mmT is a sparse 1 m mN N matrix in which each entry corresponding to a parentchild pair. We denote the set of parameter m s a],,[ s 1 21 M ) 5.2.3.3 Interscal e structure (For s s de The nearestneighbor grid model is related with smoothness of image fields within scale. Since the statistical depennci es between different scales are modeled byt, every element of s that corresponds to scale by scale connection is zero. Inter scale prior model s m at scale m is 59 PAGE 60 2 (,)(,) ()exp()exp( ())mT msmm m mimj iVjNixx xx (5.10) where is a set of nodes in scale m and is a set of neighborhood of node i in scale Then mVm)( iN s is represented as 11 22000 00 0 000s s s MsM 0 (5.11) where dimension of each s m for Mm,2,1 is1122 m m. The diagonal elements of s m are just equal to the number of neighbor nodes in scale m. The parameter m (Figure 54) determines the degree of smoothne ss of the field at scale If we want a smoother field, we can increase the value ofmm We denote the collection ofm s as ],,[21 M 5.2.4 Comparisons of Correlation Deca ys (MonoScale, Dyadic Tree and Pyramidal Tree) To compare the correlation characterist ics of different graphical models, onedimensional (1D) process of length 64 is considered. The monoscale graph is just 1st order Markovchain model shown in Figure 55 (A), and we construct seven scales for both the dyadic (B) and pyramidal (C) trees. 
For the dyadic tree model, we used the same $\alpha_m$ ($m = 1, 2, \ldots, 7$) parameters as in the pyramidal tree but removed all edges within each scale (i.e., set $\beta_m = 0$ for $m = 1, 2, \ldots, 7$). For the monoscale Markov chain, we used the $\beta$ parameter of the pyramidal tree at the finest scale ($M = 7$). Figure 5-6 shows the covariance matrices for each structure obtained via (5.6). Figure 5-7 shows the correlation decay as a function of node distance between the 8th node ($i = 8$) and the $i$th node, where $i$ runs from 8 through 37, for the monoscale Markov, dyadic, and pyramidal tree structures. Node distance is defined as the physical distance between the 8th and $i$th nodes; for example, the 8th and 11th nodes have distance 3. Although the dyadic structure (blue curve) captures longer correlation than the monoscale model (black curve), it shows a stair-like shape. It follows that the computed estimate using (5.5) on the dyadic tree structure will exhibit blocky artifacts where fine-scale data are not available. In this example we have performed a simple 1-D correlation analysis, but its extension to 2-D (i.e., the quadtree) is straightforward. In addition, we note that these simulation results hold equally for the MKS algorithm, since the MAP estimator in (5.5)-(5.6) and the Kalman filter/smoother in (3.9)-(3.12) produce equivalent estimation results.

5.3 Conclusions

We have explored the structural characteristics of three different types of graphical models, namely the monoscale chain, the quadtree, and the pyramidal tree. The illustrative example showed that the quadtree structure captures longer-range correlation among the nodes than the monoscale case, but its covariance structure exhibits a blocky shape. Hence, the MAP estimate on the quadtree structure via (5.5) and (5.6) suffers from blocky artifacts. We also observed that an efficient estimator can be implemented on the quadtree structure at attractive computational cost (as shown in Chapters 3 and 4), but the blocky artifact in the fine-scale estimate remains problematic.
We can simply regard this result as a tradeoff between algorithmic efficiency and accuracy, but in certain applications the demand for results that are both visually and analytically improved takes precedence. In Chapter 6 we address this image blocky artifact problem in the MKS algorithm using a new fusion scheme.

Figure 5-1. An example of blocky artifact. In this example, two images are fused by the MKS algorithm of Chapter 3. In the final estimate, blocky regions appear at the pixel locations where fine-scale image pixel values are not available. (Missing data are represented by white in the fine-scale image.) All elevation units are in meters.

Figure 5-2. The three process models used in this example. A) monoscale Markov chain, B) dyadic tree structure, and C) pyramidal tree structure. The edges (connections) between nodes represent the statistical dependency between a node and its neighbors.

Figure 5-3. Inter-scale ($\Lambda_t$) and intra-scale ($\Lambda_s$) prior models in the pyramidal tree structure.

Figure 5-4. Pyramidal tree structure (1-D case). The penalizing parameters $\alpha_m$ and $\beta_m$ of each scale are shown on the pyramidal tree structure.

Figure 5-5. A) 1-D monoscale Markov chain structure (64 variables), B) dyadic tree structure (64 variables at the finest scale), and C) pyramidal tree structure (64 variables at the finest scale).

Figure 5-6. Obtained covariance matrices ($\hat{P} = \hat{\Lambda}^{-1}$) in (5.6) using A) the monoscale Markov chain, B) the dyadic tree, and C) the pyramidal tree.

Figure 5-7. Correlation decay curves (all values are in log scale) of the three different structures at the finest scale (1-D case).
Parameters used: (1) monoscale: $M = 1$, $\alpha_m$: N/A, $\beta = [1]$; (2) dyadic tree: $\alpha_m = [1\;1\;1\;1\;1\;1\;1]$, $\beta_m$: N/A; and (3) pyramidal tree: $\alpha_m = [1\;1\;1\;1\;1\;1\;1]$ and $\beta_m = [1\;1\;1\;1\;1\;1\;1]$.

CHAPTER 6
IMAGE FUSION USING SINGLE-FRAME SUPER RESOLUTION

6.1 Introduction

Usually, the MKS algorithm estimates pixel values well when the fine-scale image pixels are available or the missing area is small. However, if the missing region of the fine-scale image becomes large, the estimates at the finer scale are simply driven by the estimate values at the coarser scale, where the dense low-resolution image is populated (e.g., weighted by $J(s)$ in (3.13)). Since the four children nodes at the finer scale are guided by the corresponding parent node, the estimates at these offspring nodes become identical. As the smoothing algorithm in (3.13) continues toward the finest scale, the (e.g., nearest-neighbor) interpolation processes are repeated, and these result in blocky shapes in the final estimation image (at the finest scale). To describe this process, we revisit the smoothing steps of the MKS algorithm. In (3.13), the scale-recursive smoothed estimate and its error variance are computed using

$$\hat{x}^s(s) = \hat{x}(s\,|\,s) + J(s)\big[\hat{x}^s(Bs) - \hat{x}(Bs\,|\,s)\big] \qquad (6.1)$$

Equation (6.1) computes the smoothed estimate at scale $s$. In regions where no data points are available, the estimates are not supported by measurement updates during the upward Kalman filtering in (3.10)-(3.11), so the correction term in (6.1) carries no fine-scale information. That is, without a measurement update, $\hat{x}^s(s)$ in that region is simply filled with the interpolated estimate values ($\hat{x}^s(Bs)$) from the coarser scale. As shown in Chapter 5, the blockiness is caused by structural issues in the quadtree. From the viewpoint of estimation algorithms, the MAP estimates using (5.5) and (5.6) are dominantly determined by the model structure; hence the blocky covariance structure of the quadtree gives rise to the pixel blockiness.
Since the MAP estimator produces results equivalent to the Kalman filtering and smoothing pair, the same must hold for the MKS algorithm. Therefore, in Chapter 6 we address the blockiness issue by introducing a more sophisticated fusion algorithm. This work stems from the observation that if we can provide an additional fine-scale image covering the data-void regions, we can apply this newly generated image as an auxiliary measurement in the fusion algorithm, which is solvable by the vector-valued MKS introduced in Chapter 4. Once this is properly performed, the correction term in (6.1) no longer vanishes in the formerly data-missing areas of the finer scale, and detail information is injected into the fused estimates. This fusion scheme is theoretically sound because we employ an appropriate resolution enhancement technique that constructs the most probable high-resolution image from the available training image sets, and the resulting image is applied to the vector-valued MKS to provide additional high-frequency information in the fused result. Intuitively, if the quality of the upscaled (resolution-enhanced) image is acceptable, it will support the measurement update in (3.10) at estimates where pixel values are not measured or are degraded by noise or artifacts.

To generate the dense auxiliary high-resolution image, an image super-resolution (SR) technique is considered. SR methods are extensively used to reconstruct detail information lost during high-to-low resolution transformations. For example, they are useful in data decompression applications, since normal signal compression includes downscaling or resampling (downsampling) of the original signal; when decompression is performed, the lost high-frequency components can be inferred by an SR method.

In our fusion case, since we have only a limited number of images, i.e., a pair of dense low- and sparse high-resolution images, a learning-based SR technique is considered.
The learning process plays an important role in this implementation, because a small local patch that has no high-resolution counterpart is learned from multiple sets of training image pairs (low/high-resolution training image sets). The upscaled patch is then obtained as a linear combination of the high-resolution patches whose low-resolution counterparts are selected as being similar to the input patch. To find the set of similar patches, each input patch is represented in a feature space and the K-Nearest Neighbor (KNN) search algorithm [Duda, 2001] is exploited. It searches for the K most similar patches in the training image sets by computing Euclidean distances between the input feature vector and the training feature vectors. Upon computing the distances between the input and all training feature vectors, only the K nearest patches are retained, and their corresponding high-resolution patches are chosen to reconstruct the high-resolution counterpart of the input low-resolution patch. To provide optimality in the reconstructed high-resolution patch, a weighting vector is computed to score the similarities within the K nearest patches. The computed weight vector is then linearly combined with the high-resolution patches found by the KNN method. These search and reconstruction steps are repeated over the whole input image field, and appropriate embedding constraints are applied to satisfy compatibility and smoothness between neighboring patches in the high-resolution domain. Finally, the generated super-resolved image is fed to the fusion algorithm of Chapter 4 to support the data-void regions at the fine scale. We refer to this proposed fusion algorithm as the Super-Resolution Multiscale Kalman Smoothing algorithm (SRMKS).

6.2 Super Resolution Method

Resolution enhancement methods based on simple smoothing and interpolation have been broadly used in image processing because of their computational convenience.
Smoothing is usually performed with various spatial kernel filters such as low-pass, Gaussian, Wiener, and median filters. Naive interpolation methods, such as bicubic and spline interpolation, approximate an unknown point spread function (PSF) over a set of local patches to achieve higher resolution. Even though both approaches are usually fast and easy to implement in practical applications, their basic operation blends high-frequency information in a non-adaptive way, resulting in a blurring effect in the output images. To overcome this limitation, edge sharpening techniques have been proposed to improve the results of interpolation methods [Greenspan, 2000; Morse, 2001]. These methods produce more realistic images by preserving the edges of objects in the image field, but may cause distracting highlighting artifacts. An alternate solution involving deconvolution with a deblurring filter [Mário, 2003; Shubin Zhao, 2003] provides quite good results, but it only enhances features already present in the low-resolution image.

Recently, several image super-resolution (SR) methods have been proposed in the literature to provide realistic upscaled versions of images [Chang, 2004; Freeman, 2002; Freeman, 2000; Baker, 2000]. The basic idea of super resolution is to recover a high-resolution image from single or multiple low-resolution inputs. Broadly, it can be divided into two main categories:

Multiple-image super resolution: each low-resolution image introduces a set of linear constraints on the unknown high-resolution counterpart.

Example-based super resolution: correspondences between input low-resolution patches and their unknown high-resolution counterparts are inferred from sets of low/high-resolution training pairs, which are then used to recover the most likely high-resolution version of the input image.

The concept of super resolution was first introduced in [Tsai, 1984], using a frequency-domain approach.
A robust super-resolution method in [Kim et al., 1990] demonstrated the restoration of super-resolution images from noisy and blurred conditions. A more comprehensive approach to the super-resolution restoration problem was suggested by [Irani et al., 1991] and [Peleg, 1993], based on the iterative back-projection method. [Joshi and Chaudhuri, 2003] proposed a learning-based method for image super resolution from zoomed measurements: they define the high-resolution image as a Markov random field (MRF), and the model parameters are trained from the most enlarged measurement. [Bishop et al., 2003] proposed a more sophisticated learning-based super-resolution enhancement for video clip processing.

6.3 Super-Resolution Multi-Scale Kalman Smoothing Algorithm (SRMKS)

The resolution enhancement method proposed in this section is classified as learning-based super resolution. Let a low-resolution image be the input to the image super-resolution system. Its high-resolution pair is then reconstructed by employing multiple low/high-resolution training image pairs. Specifically, if we can find a set of low-resolution patches in the training sets whose local geometries are similar to an input patch, their corresponding high-resolution patches will also have local geometries similar to the upscaled version of the input low-resolution patch. To satisfy this requirement, we define local geometric features and search for the most similar patches within the training sets.

6.3.1 Summary of the Super Resolution Algorithm

The learning-based image SR algorithm is summarized as follows. Given an input low-resolution image $L_s$, we estimate its counterpart high-resolution image $H_s$ using multiple training low/high-resolution image pairs $L_t$ and $H_t$. We assume that each low/high-resolution image pair consists of a set of small overlapping image patches and that the number of patches making up the $L_s$ and $H_s$ pair is the same. This holds for each $L_t$ and $H_t$ pair as well.
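The patch-based search-and-reconstruction scheme just outlined — KNN search over training patches, sum-to-one reconstruction weights, and a linear combination of the matched high-resolution patches — can be sketched as follows. Random vectors stand in for real patch features, the sizes are toy values, and a small ridge term (a common stabilizing trick, not from the dissertation) keeps the local Gram matrix invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
n_train, lr_dim, hr_dim = 50, 25, 100     # 5x5 LR patches, 10x10 HR patches (toy sizes)

L_t = rng.normal(size=(n_train, lr_dim))  # training low-resolution patch vectors
H_t = rng.normal(size=(n_train, hr_dim))  # corresponding high-resolution patch vectors

def reconstruct(l_s):
    """KNN search, sum-to-one weights, and HR synthesis for one input LR patch."""
    d = np.linalg.norm(L_t - l_s, axis=1)
    nn = np.argsort(d)[:K]                # indices of the K nearest LR patches
    D = L_t[nn] - l_s                     # K x lr_dim difference matrix
    G = D @ D.T + 1e-8 * np.eye(K)        # local Gram matrix, ridge for stability
    w = np.linalg.solve(G, np.ones(K))
    w /= w.sum()                          # enforce the constraint sum(w) = 1
    return w @ H_t[nn], nn, w             # weighted sum of matched HR patches

h_hat, nn, w = reconstruct(L_t[7])        # query with a patch from the training set
```

Querying with a patch that is itself in the training set is a useful sanity check: the exact duplicate becomes the nearest neighbour, the weight vector concentrates on it, and the synthesized high-resolution patch essentially reproduces its stored counterpart.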
For notational convenience, we define the sets of small image patches as

$$L_s = \{l_s^1, \ldots, l_s^{N_s}\}, \quad H_s = \{h_s^1, \ldots, h_s^{N_s}\}, \quad L_t = \{l_t^1, \ldots, l_t^{N_t}\}, \quad H_t = \{h_t^1, \ldots, h_t^{N_t}\} \qquad (6.2)$$

where $l_s^n$, $h_s^n$, $l_t^m$, and $h_t^m$ are small patches embedded in $L_s$, $H_s$, $L_t$, and $H_t$; $N_s$ is the number of patches defined in the input low/high-resolution images, and $N_t$ is the total number of patches in the training image pairs. $N_s$, $N_t$, and the degree of patch overlap depend on the implementation. We assume that each estimated patch $h_s^n$ must not only be related to the corresponding patch $l_s^n$, but must also appropriately preserve inter-patch relationships with its neighboring patches $h_s^{n-1}$ and $h_s^{n+1}$. The former ensures the accuracy of patch similarity matching, and the latter defines the local smoothness constraints of the estimated high-resolution image. By satisfying these requirements as closely as possible, we can claim that any patch $h_s^n$ reconstructed from patches in $H_t$ holds similarity matching with multiple patches learned from the training set, that local relationships between patches in $L_s$ are preserved in $H_s$, and that neighboring patches in $H_s$ satisfy the smoothness constraint through overlap compatibility.

Let $F_{Ls}$, $F_{Hs}$, $F_{Lt}$, and $F_{Ht}$ be the sets of feature vectors defined for the patches in $L_s$, $H_s$, $L_t$, and $H_t$, respectively:

$$F_{Ls} = \{f_{Ls}^1, \ldots, f_{Ls}^{N_s}\}, \quad F_{Hs} = \{f_{Hs}^1, \ldots, f_{Hs}^{N_s}\}, \quad F_{Lt} = \{f_{Lt}^1, \ldots, f_{Lt}^{N_t}\}, \quad F_{Ht} = \{f_{Ht}^1, \ldots, f_{Ht}^{N_t}\} \qquad (6.3)$$

If $l_s^n$ and $h_s^n$ have similar local geometries, their corresponding feature vectors $f_{Ls}^n$ and $f_{Hs}^n$ are closely located in feature space. More specifically, if we can extract an appropriate $l_t^m$ from $L_t$ whose feature vector $f_{Lt}^m$ is close enough to $f_{Ls}^n$, then the distance between $f_{Ht}^m$ and $f_{Hs}^n$ is also small in feature space.
For each feature vector f_Ls^n of the input patch l_s^n, the algorithm then reduces to finding the closest training feature vectors f_Lt^m, and from them the high-resolution feature pairs f_Ht^m and f_Hs^n whose corresponding patches lie in t_H and s_H, respectively. To search for candidate patches similar to the input patch l_s^n, we employ the K-Nearest Neighbor (KNN) algorithm, which finds the K closest feature vectors by minimizing the l2 norm between f_Ls^n and all available feature vectors in F_Lt. Once the l2 norms have been computed in feature space, the algorithm selects the K corresponding patches l_t^1, l_t^2, ..., l_t^K from the training image sets whose l2 norms are the K smallest. The KNN algorithm is illustrated in Figure 6-1 and Figure 6-2. In this example, the size of the input patch is 5x5. Figure 6-1 illustrates the K (K=4) most similar patches found within 8 training low-high image pairs (Figure 6-3). We consider a 4-times magnification of the original low-resolution patch, which is equivalent to obtaining the patch upscaled by 2 scales in the quadtree. Figure 6-2 shows the corresponding estimated high-resolution counterpart of the input patch in Figure 6-1. As expected, the search algorithm successfully finds the upscaled version of the original input patch.

Based on the patches found by the KNN algorithm, we compute the best reconstruction weight vector w_n = {w_n^1, w_n^2, ..., w_n^K}^T for constructing the super-resolved patch. For optimality, the weight vector is obtained by minimizing the local reconstruction error e_n for each patch l_s^n:

$$
\min_{w_n} \epsilon_n
= \min_{w_n} \left\| l_s^n - \sum_{q=1}^{K} w_n^q\, l_t^q \right\|^2
\quad \text{s.t.} \quad \sum_{q=1}^{K} w_n^q = 1
$$   (6.4)

Solving this constrained minimization problem yields

$$
w_n = \frac{G_n^{-1}\mathbf{1}}{\mathbf{1}^T G_n^{-1}\mathbf{1}},
\qquad
G_n = \left(l_s^n \mathbf{1}^T - L_t\right)^T \left(l_s^n \mathbf{1}^T - L_t\right)
$$   (6.5)

where l_s^n and the l_t^q have dimension q^2 x 1 (q is the size of the patch) and are now the vectorized versions of the patches l_s^n and l_t^q, L_t = [l_t^1, ..., l_t^K] collects the K neighbor patches as columns, G_n is the K x K local Gram matrix, and 1 is a unit column vector.
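The KNN search and the constrained least-squares weights of Eqs. (6.4)-(6.5) can be sketched as below. This is an illustrative implementation with our own names, not the dissertation's code; the small ridge term added to the Gram matrix is our assumption, guarding against singularity when the neighbors are linearly dependent.

```python
import numpy as np

def knn_search(f_in, train_feats, K=4):
    """Indices of the K training feature vectors closest to f_in (l2 norm)."""
    d = np.linalg.norm(train_feats - f_in[None, :], axis=1)
    return np.argsort(d)[:K]

def reconstruction_weights(ls, neighbors):
    """Sum-to-one least-squares weights reconstructing the vectorized patch
    `ls` from its K neighbors (rows of `neighbors`), per Eqs. (6.4)-(6.5)."""
    K = neighbors.shape[0]
    D = ls[None, :] - neighbors            # K difference vectors l_s - l_t^q
    G = D @ D.T                            # K x K local Gram matrix
    G += 1e-8 * np.trace(G) * np.eye(K)    # ridge regularization (assumption)
    w = np.linalg.solve(G, np.ones(K))     # w proportional to G^{-1} 1
    return w / w.sum()                     # enforce the sum-to-one constraint
```

Because the weights solve a constrained least-squares problem, they may be negative; only their sum is pinned to one (as the example weight vector in Figure 6-1 shows).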
The corresponding upscaled patch estimate is then obtained by

$$
\hat{h}_s^n = \sum_{q=1}^{K} w_n^q\, h_t^q
$$   (6.6)

where each h_t^q in {h_t^1, ..., h_t^K} is the high-resolution patch corresponding to l_t^q in {l_t^1, ..., l_t^K}. Finally, we enforce local compatibility and smoothness constraints between adjacent patches h_s^{n-1} and h_s^{n+1} in the estimated high-resolution image through overlapping.

6.3.2 Feature Representation

As discussed above, each image patch is represented by a feature vector. In this section, we consider the feature representations for both the low- and high-resolution images. For the low-resolution images, we use the relative intensity changes of the pixels in a patch, because they capture the shape configuration within the patch in more detail than raw intensity (or elevation) values do. We choose the first-order and second-order gradients of each pixel in the patch as the representation. For example, in Figure 6-4, the first- and second-order gradient vectors at x_33 are derived as follows:

1st gradient at x_33:
$$
g_1 = \begin{bmatrix} x_{34} - x_{32} \\ x_{43} - x_{23} \end{bmatrix}
$$   (6.7)

2nd gradient at x_33:
$$
g_2 = \begin{bmatrix} x_{35} - 2x_{33} + x_{31} \\ x_{53} - 2x_{33} + x_{13} \end{bmatrix}
$$   (6.8)

By concatenating the two computed gradient vectors, we obtain four feature values for each pixel. For instance, if a p x p patch is used, the dimension of the corresponding feature vector becomes 4p^2. For the high-resolution patches, we define the features for each patch based only on the intensity (elevation) values of the pixels. Since the features used for the low-resolution patches do not reflect the absolute elevation values, we subtract the mean value of each low-resolution training patch l_t^m from the corresponding high-resolution patch. When the estimate is formed, the mean elevation value of the input low-resolution patch l_s^n is added back.
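The gradient features of Eqs. (6.7)-(6.8) can be sketched for a whole patch as below. This is an illustrative sketch: the edge-replication padding used at patch borders is our assumption (the text only specifies the gradients for interior pixels such as x_33).

```python
import numpy as np

def lowres_features(patch):
    """First- and second-order gradient features for every pixel of a patch,
    following Eqs. (6.7)-(6.8); feature length is 4*p^2 for a p x p patch.
    Border pixels use edge-replicated neighbors (our assumption)."""
    p = np.pad(patch, 2, mode="edge").astype(float)
    n = patch.shape[0]
    feats = []
    for i in range(2, 2 + n):
        for j in range(2, 2 + n):
            feats.extend([
                p[i, j + 1] - p[i, j - 1],                 # horizontal 1st gradient
                p[i + 1, j] - p[i - 1, j],                 # vertical 1st gradient
                p[i, j + 2] - 2 * p[i, j] + p[i, j - 2],   # horizontal 2nd gradient
                p[i + 2, j] - 2 * p[i, j] + p[i - 2, j],   # vertical 2nd gradient
            ])
    return np.array(feats)
```

On a 5x5 patch this produces the 4 * 25 = 100-dimensional feature vector described in the text.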
scale where the dense low-resolution image is populated, as the input image for the super-resolution process. Thus, we run the standard MKS algorithm to obtain the smoothed estimate at the coarse scale prior to applying the super-resolution algorithm. This is a reasonable approach because the smoothed estimate at the coarse scale already contains the blended information of the available images within the tree structure. Alternatively, we could consider upscaling the unfused low-resolution image (noisy, with no fine-scale information fused in), but a poor sensor often provides incorrect information about the target of interest. In that case, even if we successfully enhance the resolution of this image, the result may be a completely erroneous version of the high-resolution image. Furthermore, the fused estimate in data-void areas is derived directly from the estimated values at the coarse scale. Thus, if a successful resolution enhancement is made on the smoothed estimate at the coarse scale, x^s(s_coarse), the resulting image retains the optimality carried by the MKS fusion algorithm.

6.4 Simulations

To verify the performance of the proposed method, real terrain elevation data sets are used. In this simulation, we use a 5x5 local patch with an overlap of four pixels between adjacent patches; hence the number of overlapped pixels among the high-resolution patches is 16. We also designed the KNN with K=4. The low-high image pair used in this section is cropped from a different site than the image sets used in Section 4.2. Figure 6-5 shows the input image data sets for the SRMKS algorithm. The low-resolution image at the 9th scale (256x256, 20 m spacing) and the incomplete high-resolution image at the 11th scale (1024x1024, 5 m spacing) are fused by the standard (as in Chapter 3) and proposed methods. To construct the vector-valued MKS, the smoothed estimate at the 9th scale is computed using standard MKS and super-resolved by the algorithm in Section 6.3.2.
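The SRMKS pipeline of Section 6.3.3 can be sketched at a high level as follows. This is a control-flow sketch only: `mks` and `super_resolve` stand in for the multiscale Kalman smoother and the learning-based SR step, and all names (including the toy stand-ins used to exercise it) are illustrative, not the dissertation's implementation.

```python
import numpy as np

def srmks(coarse_img, fine_img, train_pairs, mks, super_resolve):
    """High-level SRMKS pipeline sketch (Sec. 6.3.3)."""
    # 1) standard MKS smoothing to obtain the coarse-scale estimate
    #    (in the full algorithm this estimate already blends all scales;
    #    a single-image stand-in keeps the sketch short)
    x_coarse = mks([coarse_img], scale="coarse")
    # 2) learning-based super-resolution of the smoothed coarse estimate
    x_sr = super_resolve(x_coarse, train_pairs)
    # 3) vector-valued MKS with the super-resolved image as an auxiliary
    #    fine-scale measurement covering the data-void regions
    return mks([fine_img, x_sr], scale="fine")

# toy stand-ins that only exercise the control flow
toy_mks = lambda meas, scale: np.mean([np.asarray(m, float) for m in meas], axis=0)
toy_sr = lambda x, pairs: np.kron(x, np.ones((4, 4)))   # 4x magnification
```

The point of the structure is that the SR image enters the smoother as an ordinary measurement, so the optimality machinery of the vector-valued MKS is unchanged.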
For each high-resolution training image in Figure 6-3, we obtain the corresponding low-resolution training image by downsampling by a factor of 4. Figure 6-6 compares the fusion results using the standard MKS algorithm and SRMKS. To illustrate the details of the fused results, we show estimates in several zoomed areas where the fine-scale image is void. Since the difference between the coarse and fine scales is 2 scales (a factor of 4 in resolution), a 4-times pixel magnification is performed based on the algorithms in Sections 6.3.2 and 6.3.3. Table 6-1 summarizes the MSE performance of the fusion results. The simulation results show that SRMKS successfully suppresses the blocky artifact, and the MSE performance obtained with the new method is better than that of the standard MKS. By exploiting an image super-resolution technique, this new fusion method produces very pleasing empirical results in terms of both analytical and visual improvement.

6.5 Conclusions

We have proposed a new multiresolution fusion scheme employing the concept of single-image super-resolution. The resolution-enhanced image is supplied to the fusion algorithm as a new source of measurement to mitigate the blockiness. More specifically, the gain of resolution in the data-missing areas of the fused image no longer depends only on a single pair of input low-high resolution images. Instead, it is brought in simultaneously by multiple high-resolution inputs, i.e., the original incomplete and the generated (super-resolved) high-resolution image pair. We verified the performance of the proposed scheme through simulation with real data: the proposed SRMKS provides better estimation results, in the senses of both MSE performance and visual quality, than the standard MKS algorithm.

Figure 6-1.
An example of the nearest-neighbor search algorithm applied to a 5x5 low-resolution patch for 4-times magnification: A) input low-resolution patch, B) 4 nearest-neighbor patches extracted from the training images, C) input patch reconstructed by a weighted combination of the image patches in (B). (Weight vector used: w_n = {0.4787, 0.3110, 0.4518, -0.2415}.) Darker red represents higher elevation.

Figure 6-2. A) Corresponding high-resolution patch of Figure 6-1 (A), B) 4 high-resolution patches corresponding to the low-resolution patches in (A), C) estimated high-resolution patch constructed from (B). (Weight vector used: w_n = {0.4787, 0.3110, 0.4518, -0.2415}.) Darker red represents higher elevation.

Figure 6-3. The 8 sets of training images used for the results in Figure 6-1 and Figure 6-2. Only the high-resolution training images are shown. Each corresponding low-resolution training image is obtained by downsampling by a factor of 4.

Figure 6-4. A 5x5 local neighborhood in the low-resolution image for computing the first-order and second-order gradients of the pixel at the center, with elevation value x_33.

Figure 6-5. The input image sets to the fusion algorithm. A) Ground truth, B) low-resolution image (256x256), C) sparse high-resolution image (1024x1024; data-void regions are shown in white), D) super-resolved image of the smoothed estimate at the coarse (9th) scale. All elevation units are in meters.

Figure 6-6. Comparison of fused estimates at the finest (11th) scale. A) Using the standard MKS introduced in Chapter 3, B) using the proposed SRMKS of Chapter 6, C) zoomed fusion results of the data-void areas (circled in red in (A)). 1st row: zoomed area 1; 2nd row: zoomed area 2; 3rd row: zoomed area 3. For each row: (left) fine-scale ground truth, (center) fused by MKS of Chapter 3, (right) fused by the proposed method (SRMKS). All elevation units are in meters.
Table 6-1. Mean square errors of the image fusion results in Figure 6-6 (in meters)

  Region                         Using MKS (Chapter 3)   Using SRMKS (Chapter 6)
  Figure 6-6 (A) and (B)         1.6903                  1.0396
  Area #1 in Figure 6-6 (A)      4.3326                  2.8011
  Area #2 in Figure 6-6 (A)      7.2523                  4.1066
  Area #3 in Figure 6-6 (A)      9.6478                  4.4058

CHAPTER 7
CONCLUSIONS AND FUTURE WORK

7.1 Conclusions

This dissertation develops and investigates solutions to various multiresolution image fusion problems. A robust image processing framework is suggested and employed to merge complementary multiresolution images at desired resolutions. Combining information acquired from different sources plays an important role in many research areas, such as Earth science, for modeling dynamic environmental processes or important phenomena that affect a specific topographic zone over a large range of spatial and temporal scales. Through comprehensive theoretical review and practical simulation, we have shown that a Kalman-filter-based multiresolution image fusion scheme successfully merges image sets from different sources. For these processes, we employed a statistically optimal fusion method utilizing the multiscale estimation framework (Chapters 2 and 3), which provides both extensive coverage and locally high-resolution detail in the fused estimates. Using a hierarchical image model on a quadtree, we showed that a highly parallelizable, and thus efficient, inference algorithm can be implemented for the fusion process. Specifically, modeling a natural process such as topography or bathymetry is well suited to a multiscale tree structure because the tree effectively captures the multiscale character of natural processes and signals. By transforming spatial data models into this hierarchical form, the Multiscale Kalman Smoothing (MKS) algorithm is implemented to blend the information contained in images at different resolutions.
This multiscale approach is theoretically sound and has shown good performance for the estimation of image data in terms of computational complexity and optimality in the mean square error sense. Unlike classical image inference methods such as those based on traditional Markov Random Fields (MRFs), it is non-iterative and computes not only optimally fused estimates but also error measures at different spatial scales, which are useful for evaluating the accuracy and confidence of the algorithm under study. To show the potential of the proposed fusion scheme, we redesigned the algorithm to provide solutions for cases in which the computational load is too intense for the conventional MKS algorithm (Chapter 4). The first case arises when the image data sets being fused differ in resolution by a large number of scales and the available fine-scale image is sparse (Section 4.1). In this case, the original form of MKS in Chapter 3 is inefficient. We mitigated this problem by deriving a new algorithm that reduces the floating-point operations per node. It also reduces the number of nodes that must be processed, depending on the degree of sparseness in the finest-scale data, which leads to a favorable reduction in the floating-point operations required to estimate a fused DEM from the component data (more than 80% for the data used in the simulation). We also proposed a new method to address the fusion of vector-formatted measurements (Section 4.2). In this work, we introduced an improved algorithm to fuse spaceborne data from the ERS-1/2 platforms with multiple sets of airborne data acquired by the NASA/JPL TOPSAR platform to obtain statistically optimal high-resolution estimates of topography.
Because multiple measurements are available at the same scale from the same time period, simply iterating a scalar-valued MKS algorithm leads to the degenerate case of estimating an unknown constant, and the smoothed covariance is therefore driven asymptotically toward zero as a function of iteration. A vector-valued implementation of the MKS algorithm can in principle solve this problem, but the dimension of the measurement being processed grows overwhelmingly. We overcame this by employing a serial measurement update, processing the measurements sequentially such that the Kalman gain, the estimate covariance, and the a posteriori estimate are calculated serially using the information form of the MKS algorithm. Despite the attractive features of multiscale-based image processing, its primary disadvantage is the presence of pixel blockiness in the final results. This is a tradeoff between simplicity and accuracy, but it often makes one hesitant to apply the method in certain applications. As shown in Chapters 5 and 6, the blocky artifact is a natural consequence of the non-loopy quadtree model and the algorithm implemented on that structure. The quadtree-based model may not adequately capture the correlation between spatially adjacent pixels, which leads the quadtree structure to produce a distracting blocky artifact (Chapter 5). Therefore, in Chapter 6, the Super-Resolution Multiscale Kalman Smoothing (SRMKS) algorithm was suggested to address this problem. This novel method exploits the concepts of image super-resolution. Based on the work in Chapters 3 and 4, an algorithm was developed to mitigate the blocky artifact exhibited in data-void regions at the finest scale of the MKS algorithm. A single-frame, learning-based image super-resolution technique is employed in the multiresolution fusion scenario.
The super-resolved image serves as an auxiliary input to the vector-valued MKS, supporting the regions where measurements are not available. This effectively suppresses the blockiness in the estimates and yields enhanced results that are both visually and analytically acceptable. Throughout this image fusion research, we have shown that the multiscale method offers lower computational complexity, greater accuracy, and more flexibility, making it conveniently applicable to a variety of real-world remote sensing applications. The main concern of this dissertation has been the fusion of multiresolution images, but the approach is broadly extensible to other image processing applications such as image compression, pattern classification and recognition, and object detection.

7.2 Future Work

Although the image data sets considered in this dissertation have the same modality (e.g., elevation values or intensities), the fusion of different types of measurements can also be considered, for example, the multiscale fusion of a degraded or damaged color image with a high-resolution intensity image, or the fusion of a low-luminance image with a high-resolution image that contains only curvature or edge information of interest. Such work will require more sophisticated data models because the characteristics of the multiresolution data sets being fused are completely heterogeneous. Unlike the fusion of multiresolution images of similar modality, more challenging steps must first be taken to understand the relations and compatibilities between the image data sets. Once appropriate models for the multiple images are established, the fusion algorithms suggested in this dissertation can be redesigned or replaced to suit the newly introduced models.
These applications would be very useful in real industrial settings, such as low-complexity mobile phone cameras that support high definition, vehicle navigation systems driven by various sensors, and military or meteorological purposes. The impact of efficient and reliable information fusion will be appreciated across many fields of remote sensing research, because costly data measuring procedures can be replaced by effective and convenient signal processing techniques.

LIST OF REFERENCES

N.J. Adams. Dynamic Trees: A Hierarchical Probabilistic Approach to Image Modeling. PhD dissertation, Division of Informatics, Univ. of Edinburgh, Edinburgh, U.K., 2001.

S. Baker and T. Kanade. Hallucinating faces. Fourth International Conference on Automatic Face and Gesture Recognition, 2000.

J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Royal Statistical Society B, vol. 36, pp. 192-225, 1974.

C.M. Bishop, A. Blake, and B. Marthi. Super-resolution enhancement of video. Int. Conf. on Artificial Intelligence and Statistics, Key West, FL, 2003.

C.A. Bouman and M. Shapiro. A multiscale random field model for Bayesian image segmentation. IEEE Transactions on Image Processing, 3, 162, 1994.

R. Brown and P. Hwang. Introduction to Random Signals and Applied Kalman Filtering. Wiley, New York, NY, 3rd ed., 1997.

W.E. Carter, R.L. Shrestha, G. Tuell, D. Bloomquist, and M. Sartori. Airborne laser swath mapping shines new light on Earth's topography. EOS, Transactions American Geophysical Union, 82(46), 549-555, 2001.

H. Chang, D.Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. Proc. IEEE Conf. CVPR, 2004.

M.J. Choi and A.S. Willsky. Multiscale Gaussian graphical models and algorithms for large-scale inference. IEEE 2007 Statistical Signal Processing Workshop, Madison, Wisconsin, Aug. 2007.

K.C. Chou. A stochastic modeling approach to multiscale signal processing. PhD dissertation, Dept.
Elec. Eng. and Comp. Sci., Massachusetts Institute of Technology, Cambridge, MA, May 1991.

K. Chou, A. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati equations. IEEE Transactions on Automatic Control, 39(3), pp. 479, 1994.

K. Chou, A.S. Willsky, and A. Benveniste. Multiscale recursive estimation, data fusion and regularization. IEEE Trans. Automat. Contr., vol. 39, pp. 464-478, 1994.

M. Daniel and A. Willsky. A multiresolution methodology for signal-level fusion and data assimilation with applications to remote sensing. Proc. IEEE, vol. 85, pp. 164, Jan. 1997.

P.J.J. Desmet and G. Govers. Comparison of routing algorithms for digital elevation models and their implications for predicting ephemeral gullies. International Geographical Information Systems, 10, pp. 311-331, 1996.

R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. Wiley-Interscience, New York, NY, 2001.

P.W. Fieguth, W.C. Karl, A.S. Willsky, and C. Wunsch. Multiresolution optimal interpolation and statistical analysis of TOPEX/POSEIDON satellite altimetry. IEEE Trans. Geoscience and Remote Sensing, vol. 33, pp. 280, Mar. 1995.

P.W. Fieguth, W. Irving, and A. Willsky. Multiresolution model development for overlapping trees via canonical correlation analysis. Proc. IEEE International Conference on Image Processing (I), pp. 45-48, 1995.

W.T. Freeman, T.R. Jones, and E.C. Pasztor. Example-based super-resolution. IEEE Computer Graphics and Applications, 2002.

W.T. Freeman, E.C. Pasztor, and O.T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 2000.

J.C. Gallant and M.F. Hutchinson. Producing digital elevation models with uncertainty estimates using a multiscale Kalman filter. 7th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, pp. 150-159, 2006.

Q. Gan and C.J. Harris.
Comparison of two measurement fusion methods for Kalman-filter-based multisensor data fusion. IEEE Transactions on Aerospace and Electronic Systems, 37(1), pp. 273-279, 2001.

D. Gesch, J. William, and W. Miller. A comparison of U.S. Geological Survey seamless elevation models with Shuttle Radar Topography Mission data. Geoscience and Remote Sensing Symposium, IEEE 2001 International, 2, pp. 754-756, 9-13 July 2001.

R. Gonzalez and R. Wood. Digital Image Processing. Prentice Hall, 2nd ed., Upper Saddle River, NJ, 2002.

H. Greenspan, C. Anderson, and S. Akber. Image enhancement by nonlinear extrapolation in frequency space. IEEE Transactions on Image Processing, 9(6), 2000.

M. Grewal and A. Andrews. Kalman Filtering: Theory and Practice. Englewood Cliffs, New Jersey, 1993.

D. Hall and J. Llinas. An introduction to multisensor data fusion. Proceedings of the IEEE, 85, pp. 6-23, Jan. 1997.

S. Haykin. Adaptive Filter Theory. Prentice Hall, 4th ed., Upper Saddle River, NJ, 2002.

IHRC. WSMP (Windstorm Simulation & Modeling Project): Airborne LiDAR data and digital elevation models in Miami-Dade, Florida, final report. International Hurricane Research Center (IHRC), 2004.

M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing, 53, 231, 1991.

W. Irving, W. Karl, and A. Willsky. A theory for multiscale stochastic realization. 33rd Conference on Decision and Control, Orlando, FL, pp. 655-662, 1994.

W. Irving. Multiresolution Stochastic Realization and Model Identification with Applications to Large-Scale Estimation Problems. PhD thesis, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Sept. 1995.

H. Jhee, S. Cheung, and K.C. Slatton. Efficient observational updating for fusion of incomplete image data. Proc. IEEE 2005 International Geoscience and Remote Sensing Symposium (IGARSS), vol. 4, pp. 2807-2810, Jul. 2005.

R.A. Johnson and D.W. Wichern.
Applied Multivariate Statistical Analysis. Prentice Hall, Upper Saddle River, NJ, 4th ed., 1998.

M.V. Joshi and S. Chaudhuri. A learning-based method for image super-resolution from zoomed observations. Proc. Fifth Int. Conf. Adv. Pattern Recogn., Indian Statistical Institute, Kolkata, India, pp. 179, 2003.

E.W. Kamen and J.K. Su. Introduction to Optimal Estimation. Springer-Verlag, London, U.K., 1999.

S.M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, Upper Saddle River, NJ, 1993.

S.P. Kim, N.K. Bose, and H.M. Valenzuela. Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoustics, Speech, Signal Process., 38, 1013, 1990.

P.D. Komar. Beach Processes and Sedimentation. Prentice Hall, Upper Saddle River, NJ, 2nd ed., 1998.

P. Kumar. A multiple scale state-space model for characterizing subgrid scale variability of near-surface soil moisture. IEEE Trans. Geosci. Remote Sensing, vol. 37, pp. 182-197, Jan. 1999.

J. Laferte, P. Perez, and F. Heitz. Discrete Markov image modeling and inference on the quadtree. IEEE Trans. Image Proc., vol. 9, no. 3, pp. 390, 2000.

M.R. Luettgen, W.C. Karl, and A. Willsky. Multiscale representations of Markov random fields. IEEE Trans. Signal Processing, vol. 41, pp. 3377-3395, 1993.

M. Luettgen, W. Karl, and A. Willsky. Efficient multiscale regularization with applications to the computation of optical flow. IEEE Trans. on Image Processing, 3(1), pp. 41-64, 1994.

M.A.T. Figueiredo and R.D. Nowak. An EM algorithm for wavelet-based image restoration. IEEE Transactions on Image Processing, 2003.

B. Morse and D. Schwartzwald. Image magnification using level set reconstruction. Proc. of Intl. Conf. on Computer Vision, pp. 333-341, 2001.

K.P. Murphy, Y. Weiss, and M.I. Jordan. Loopy belief propagation for approximate inference: an empirical study. Proceedings of Uncertainty in AI, 1999.

K. Murphy.
Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, Univ. of California, Berkeley, 2002.

NGDC. National Geophysical Data Center (NGDC). Available online at: www.ngdc.noaa.gov. National Oceanic & Atmospheric Administration, Washington, DC, 2005.

NOAA. National Geophysical Data Center (NGDC), www.ngdc.noaa.gov.

B. Potamianos and J. Goutsias. Stochastic simulation techniques for partition function approximation of Gibbs random field images. Technical report, Dept. of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 1 Jan 1990 to 30 Jun 1991.

M.K. Schneider, P.W. Fieguth, W.C. Karl, and A.S. Willsky. Multiscale methods for the segmentation and reconstruction of signals and images. IEEE Trans. Image Proc., vol. 9, pp. 456, Mar. 2000.

K.C. Slatton, S. Cheung, and H. Jhee. Reduced-complexity fusion of multiscale topography and bathymetry data over the Florida coast. IEEE Geoscience and Remote Sensing Letters, 2, pp. 389-393, 2005.

K.C. Slatton, M.M. Crawford, and B.L. Evans. Combining interferometric radar and laser altimeter data to improve estimates of topography. Proc. IEEE Int. Geosci. Remote Sensing Symp., vol. 3, Honolulu, HI, pp. 960-962, Jul. 2000.

K.C. Slatton, M.M. Crawford, and B.L. Evans. Fusing interferometric radar and laser altimeter data to estimate surface topography and vegetation heights. IEEE Transactions on Geoscience and Remote Sensing, 39, pp. 2470-2482, 2001.

H.W. Sorenson. Least-squares estimation: From Gauss to Kalman. IEEE Spectrum, pp. 63-68, 1970.

J.L. Starck, F. Murtagh, and A. Bijaoui. Image Processing and Data Analysis: The Multiscale Approach. Cambridge University Press, Cambridge, UK, 1998.

G. Strang. Wavelets and dilation equations: A brief introduction. SIAM Rev., vol. 31, issue 4, pp. 614-627, Dec. 1989.

R.Y. Tsai and T.S. Huang. Multiframe image restoration and registration. In Advances in Computer Vision and Image Processing, JAI Press, pp. 317, 1984.

R.
Tenney and A. Willsky. Multiresolution estimation for image processing and fusion. Proc. AFIT Workshop on Wavelets and Signal Processing, 1992.

S. Todorovic and M.C. Nechyba. Interpretation of complex scenes using dynamic tree-structure Bayesian networks. Computer Vision and Image Understanding, vol. 106, no. 1, pp. 71-84, Apr. 2007.

D.L. Turcotte. Fractals and Chaos in Geology and Geophysics. Cambridge University Press, 2nd ed., 1997.

A.S. Willsky. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE, 90(8), 1396, 2002.

K. Zhang, S.C. Chen, D. Whitman, M. Shyu, J. Yan, and C. Zhang. A progressive morphological filter for removing nonground measurements from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing, 41, pp. 872-882, 2003.

S. Zhao, H. Han, and S. Peng. Wavelet-domain HMT-based image super-resolution. Proc. of IEEE Intl. Conf. on Image Processing, 2003.

H. Zebker, S. Madsen, and J. Martin. The TOPSAR interferometric radar topographic mapping instrument. IEEE Trans. Geosci. Remote Sensing, vol. 30, no. 5, pp. 933, Sep. 1992.

H.A. Zebker, T.G. Farr, R.P. Salazar, and T.H. Dixon. Mapping the world's topography using radar interferometry: The TOPSAR mission. Proc. IEEE, vol. 82, pp. 1774, Dec. 1994.

BIOGRAPHICAL SKETCH

Hojin Jhee was born in Seoul, South Korea. He received his Bachelor of Science degree in electronic engineering at Dongguk University, Seoul, South Korea, in 1997, and his Master of Science in electrical and computer engineering at the University of Florida, Gainesville, Florida, U.S.A., in 2001. At the University of Florida, he was a member of the Adaptive Signal Processing Laboratory (ASPL) under Dr. Clint Slatton until August 2008, and served as a research assistant at the High Speed Digital Architecture Laboratory (HSDAL) from August 2008 to May 2010. He received his Ph.D. degree in electrical and computer engineering at the University of Florida in May 2010.
His research interests include statistical signal processing, data fusion, digital image processing, machine learning, and remote sensing.