
Hyperspectral Endmember Detection Using Morphological Autoassociative Memories



HYPERSPECTRAL ENDMEMBER DETECTION USING MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES

By

DANIEL S. MYERS

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2005


Copyright 2005 by Daniel S. Myers


ACKNOWLEDGMENTS

I thank my advisor, Dr. Gerhard Ritter, for the benefits of his guidance and experience, and Dr. Mark Schmalz for his assistance with proofreading this thesis and his continued support during my undergraduate and graduate studies.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF FIGURES
ABSTRACT

CHAPTER
1 INTRODUCTION
2 MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES
   2.1 Mathematical Basis for Morphological Memories
   2.2 Lattice-Based Matrix Operations
   2.3 Lattice-Based Associative Memories
      2.3.1 Early Associative Memories
      2.3.2 Constructing Lattice-Based Memories
   2.4 Properties of Lattice-Based Memories
      2.4.1 Conditions for Perfect Recall
      2.4.2 Fixed Point Sets and Lattice Independence
      2.4.3 Strong Lattice Independence and Affine Independence
3 HYPERSPECTRAL IMAGES AND THE LINEAR MIXING MODEL
   3.1 Hyperspectral Imaging Devices
      3.1.1 Image Cubes
      3.1.2 Spectroscopy
      3.1.3 Endmembers
   3.2 The Linear Mixing Model
   3.3 Computing Fractional Abundances
4 HYPERSPECTRAL ENDMEMBER DETECTION WITH MORPHOLOGICAL MEMORIES
   4.1 Motivation for the Algorithm
   4.2 Scaling and Positioning Endmembers
   4.3 The Algorithm
   4.4 Advantages of the Algorithm
5 EXPERIMENTAL RESULTS
   5.1 Data Characteristics
   5.2 Endmember Determination
   5.3 Material Abundance Maps
6 CONCLUSIONS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF FIGURES

3.1: Reference spectra for Juniper bush
3.2: Reference spectra for the mineral Montmorillonite
5-1: Plot of detected endmember spectra and reference spectra for the mineral Kaolinite
5-2: Plot of detected endmember spectra and reference spectra for the mineral Alunite
5-3: Plot of detected endmember spectra and reference spectra for the mineral Calcite
5-4: USGS reference map for the Cuprite, Nevada, site
5-5: Abundance map for the first variety of the mineral Kaolinite
5-6: Abundance map for a second variety of the mineral Kaolinite
5-7: Abundance map for the mineral Alunite
5-8: Abundance map for the mineral Calcite
5-9: Abundance map for the mineral Muscovite
5-10: Abundance map for the mineral Buddingtonite


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

HYPERSPECTRAL ENDMEMBER DETECTION USING MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES

By

Daniel S. Myers

December 2005

Chair: Gerhard X. Ritter
Major Department: Computer and Information Science and Engineering

Hyperspectral (HS) imaging devices are a special class of remote sensor capable of simultaneously measuring and recording light at hundreds of different wavelengths. Since hyperspectral devices are capable of measuring energy at wavelengths outside the range of the human eye, HS images can reveal information that would be undetectable by monochromatic imaging systems. In particular, different physical materials such as vegetation, soils, and minerals possess unique hyperspectral signatures, making material discrimination and identification possible from HS imagery.

In an aircraft-mounted hyperspectral imager, the pixel resolution referred to the target is on the order of tens of meters, and each hyperspectral image pixel is modeled as a linear combination of the spectra of known materials that make up the scene. These fundamental material spectra are called endmembers. Given a set of endmembers, each image pixel may be unmixed to determine the percentage of each endmember spectrum that is present in the area covered by a given pixel.


This thesis presents a new method for automatically detecting the endmembers in a hyperspectral image by using morphological autoassociative memories. An associative memory creates a pairing between two patterns, x and y, such that the memory will recall y when presented with x. Autoassociative memories are a special class of these memories where the input and desired output are the same pattern. Morphological memories are based on lattice algebra, which alters conventional vector algebra by replacing the multiplication operator with a maximum or minimum operator. This mathematical basis gives morphological memories several unique properties, including theoretical maximum information storage capacity, convergence in a single training epoch, and efficient hardware implementations due to the lack of multiplication operations.

The approach detailed herein uses a morphological memory to determine extreme points of the set of hyperspectral image pixels. These points correspond to pixels of greater purity, and are more likely to be endmembers representing the fundamental materials in the image scene. Experimental results on hyperspectral images from Cuprite, Nevada, reveal that endmember detection with morphological memories is fast and produces results competitive with other autonomous endmember detection methods.


CHAPTER 1
INTRODUCTION

Hyperspectral (HS) imaging devices are a class of sensors capable of simultaneously measuring and recording the intensities of many different wavelengths of light. Because some electromagnetic wavelengths lie outside the range visible to the human eye, images produced by hyperspectral devices can reveal information about a scene that is undetectable in conventional imaging devices, which exploit only the visible spectrum. Hyperspectral devices were first developed in the 1980s [1], and have since become an important remote sensing tool, with applications in the geosciences, environmental monitoring, agriculture, and national security [2-5].

The collection of measured intensities associated with a single pixel in a hyperspectral image is called the spectrum of the pixel. The science of spectroscopy is concerned with characterizing and identifying various real-world materials such as organic compounds, inorganic chemicals, and minerals by analyzing their spectra. The high spectral resolution produced by a hyperspectral device facilitates identification of the dominant materials that make up a remotely sensed scene and thus supports discrimination between them [2].

Spectra that represent the fundamental materials in a scene are known as endmembers. For many applications, endmembers are determined a priori using expert knowledge of the application domain. In this case, hyperspectral image processing can be expressed as a pattern recognition problem, matching spectra in the HS image to predetermined endmember spectra stored in a library. In general, however, the
endmembers cannot be determined in advance, and must be selected from the image itself by identifying the pixel spectra that are most likely to represent fundamental materials. This comprises the problem of automated endmember detection.

Unfortunately, the nature of hyperspectral devices complicates the process of endmember determination. Most modern hyperspectral imaging devices are mounted on aircraft or satellite systems [2, 6]. In these systems, the spatial resolution of each pixel is on the order of tens of meters, and an area of such size is unlikely to be composed of a single endmember material. Thus, the search for endmembers becomes the search for the image pixels that are pure, that is, having spectra composed of one principal endmember, with as little contamination from other endmembers as possible. It has been shown that these pure pixels can be represented as vertices of a high-dimensional convex simplex that encloses the pixel spectra [7, 8]. Thus, most endmember determination algorithms find the set of pixels in the image that form the largest possible simplex enclosing the data, then treat the spectra of these pixels as the fundamental endmember spectra [9, 10].

This thesis presents a new method for endmember determination using morphological autoassociative memories. Given a pair of patterns $(x, y)$, an associative memory $M$ is a mapping that recalls the pattern $y$ when presented with the pattern $x$. The Hopfield net is arguably the best-known example of an associative memory [11]. An autoassociative memory is a special case of the general associative memory where the input and output patterns are identical; that is, the memory $M$ should recall $x$ when presented with the input $x$. In order to be useful, the autoassociative memory must also have the ability to recall $x$ when presented with a corrupted version of the input, $\tilde{x}$.


Morphological memories are one member of a family of neural network models based on lattice algebra [12-14]. The conventional vector algebra is a ring over the real numbers with the operations of multiplication and addition, denoted by $(\mathbb{R}, \times, +)$. Lattice algebra replaces the operation of multiplication with the discrete maximum and minimum operators to produce the new semi-ring $(\mathbb{R}, \vee, \wedge, +)$. Because the maximum and minimum operators are inherently nonlinear, neural network models based on lattice algebra exhibit behaviors different from their vector algebra counterparts. The networks are called morphological because of similarities to the operations of erosion and dilation contained in the theory of mathematical morphology developed for image processing [15]. Neural network models originally developed from image algebra incorporated erosion and dilation operators [16]. These models were the forerunners of current morphological neural network paradigms [17].

Autoassociative memories based on lattice algebra have several desirable qualities, including theoretical maximum information storage capacity, convergence in one training epoch, and robust performance in the presence of certain types of noise. For the purpose of endmember detection, the memory is used to determine a set of extreme points from the patterns it learns. These extreme points form a high-dimensional simplex, and are therefore endmembers of the hyperspectral image pixels learned by the memory. Experimental results reveal that using morphological autoassociative memories for endmember determination is fast, easy to implement, and produces results competitive with other endmember determination techniques. M. Graña proposed the first methods for endmember detection using lattice-based memories, first with Gallego [18] and then with Sussner and Ritter [19, 20].


The remainder of this thesis is organized as follows: Chapter 2 introduces morphological autoassociative memories, their construction, and relevant theoretical properties. Chapter 3 discusses hyperspectral images, the linear mixing model for hyperspectral pixels, and methods for spectral unmixing. Chapter 4 introduces an algorithm for endmember determination using morphological memories, and Chapter 5 applies the new method to hyperspectral data and provides experimental results. Chapter 6 summarizes the topics of the thesis and presents conclusions.


CHAPTER 2
MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES

This chapter discusses the model of a morphological autoassociative memory in terms of its mathematical basis, construction, and relevant properties. Such memories belong to a class of artificial neural network models that mimic the human mind's ability to store and recall information on the basis of associated cues. For example, the name of a friend recalls that person's face. Similarly, a picture of a person helps one recall his name [14]. Morphological memories differ from other artificial memories because they are based on lattice algebra, giving them unique and useful properties.

2.1 Mathematical Basis for Morphological Memories

Most existing neural network models are based on a mathematical structure that features operations of addition and multiplication performed over the real numbers. Mathematically, this structure is known as a ring and is denoted by $(\mathbb{R}, \times, +)$. In this model, the neural computation, denoted by $\tau$, is expressed as the weighted sum of its inputs:

$$\tau = \sum_{i=1}^{N} x_i w_i, \qquad (1)$$

where $x_i$ denotes the $i$th input and $w_i$ denotes its corresponding weight.

The computations for morphological neural networks are carried out using lattice algebra, which replaces the operation of multiplication with the maximum or minimum operator, and is represented by $(\mathbb{R}, \vee, \wedge, +)$, where the symbols $\vee$ and $\wedge$ represent the
discrete maximum and minimum operators. Using this mathematical foundation, the morphological analogues of equation (1) are given by

$$\tau = \bigwedge_{i=1}^{N} (x_i + w_i) \qquad (2)$$

and

$$\tau = \bigvee_{i=1}^{N} (x_i + w_i). \qquad (3)$$

Equations (2) and (3) respectively correspond to the operations of erosion and dilation in the theory of image morphology [15]. Hence, they are known as morphological neural networks. Numerous models for morphological neural networks have been proposed. The most prominent are the associative memories [14, 21, 22, 23] and morphological perceptrons with dendritic structures [12, 13, 24, 25, 26, 27, 28]. The latter have the ability to learn any compact set in $\mathbb{R}^N$ while requiring only one epoch of training for convergence and yielding perfect classification of the data set used for training.

2.2 Lattice-Based Matrix Operations

Before continuing the discussion of morphological memories, it is necessary to define a group of matrix operations based on lattice algebra. For the most part, lattice-based matrix operations are clearly related to their counterparts over $(\mathbb{R}, \times, +)$. First, the usual operation of matrix addition is replaced by the pairwise maximum or minimum of two matrices. Suppose $A$ and $B$ are $m \times n$ real-valued matrices. The matrix maximum $C = A \vee B$ is defined by $c_{ij} = a_{ij} \vee b_{ij}$. The definition is similar for the minimum operator.


There is also a lattice-based operation that is analogous to matrix multiplication. As with the previous definition, there are two types of lattice matrix multiplication, one based on the maximum operation, denoted by $\boxed{\vee}$, and one based on the minimum operation, denoted by $\boxed{\wedge}$. Given an $m \times p$ matrix $A$ and a $p \times n$ matrix $B$, the $m \times n$ matrix $C = A \,\boxed{\vee}\, B$ is defined as

$$c_{ij} = \bigvee_{k=1}^{p} (a_{ik} + b_{kj}) = (a_{i1} + b_{1j}) \vee (a_{i2} + b_{2j}) \vee \cdots \vee (a_{ip} + b_{pj}). \qquad (4)$$

This operation is known as the max product. Similarly, the min product $C = A \,\boxed{\wedge}\, B$ is defined by

$$c_{ij} = \bigwedge_{k=1}^{p} (a_{ik} + b_{kj}) = (a_{i1} + b_{1j}) \wedge (a_{i2} + b_{2j}) \wedge \cdots \wedge (a_{ip} + b_{pj}). \qquad (5)$$

Given a real-valued vector $x$, its conjugate transpose $x^*$ is defined by $x^* = (-x)'$, where $x'$ denotes the transpose of $x$. Further, given two real-valued vectors $x, y \in \mathbb{R}^n$, their minimax outer product is the matrix computed as

$$y \,\boxed{\wedge}\, x^* = \begin{bmatrix} y_1 - x_1 & \cdots & y_1 - x_n \\ \vdots & \ddots & \vdots \\ y_n - x_1 & \cdots & y_n - x_n \end{bmatrix}. \qquad (6)$$

Together, the conjugate transpose and outer product operations provide the mathematical tools required for constructing morphological associative memories.
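To make these operations concrete, the short NumPy sketch below (illustrative code, not part of the original thesis) implements the max and min products of Eqs. (4) and (5), the conjugate transpose, and the minimax outer product of Eq. (6); the function names are my own.

```python
# Illustrative NumPy sketch of the lattice matrix operations of Section 2.2.
import numpy as np

def max_product(A, B):
    """Max product (Eq. 4): c_ij = max_k (a_ik + b_kj), for A (m x p) and B (p x n)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def min_product(A, B):
    """Min product (Eq. 5): c_ij = min_k (a_ik + b_kj)."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def conjugate(x):
    """Conjugate transpose: the negated transpose of a vector or matrix."""
    return -np.transpose(x)

def minimax_outer(y, x):
    """Minimax outer product (Eq. 6): the matrix with entries y_i - x_j."""
    return y[:, None] - x[None, :]

# The pointwise matrix maximum and minimum are simply np.maximum(A, B) and np.minimum(A, B).
```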


2.3 Lattice-Based Associative Memories

The goal of associative memory theory is to construct a mathematical representation of the relationships between patterns, such that a memory can recall the pattern $y \in \mathbb{R}^m$ when presented with the pattern $x \in \mathbb{R}^n$, where the pairing $(x, y)$ expresses some desirable pattern correlation [14]. More formally, let an associative memory be denoted by $M$, and let the associated sets of patterns be denoted by $X = \{x^1, \ldots, x^k\}$ and $Y = \{y^1, \ldots, y^k\}$. $M$ is an associative memory if it recalls the pattern $y^\xi$ when presented with the pattern $x^\xi$ for all $\xi = 1, \ldots, k$. In order to be practically useful, $M$ should also recall $y^\xi$ when presented with a corrupted or noisy version of $x^\xi$, denoted as $\tilde{x}^\xi$.

2.3.1 Early Associative Memories

Hopfield and Kohonen produced some of the first associative memories based on linear neural network models [11, 29]. In these approaches, a memory $M$ is constructed as the sum of outer products of the paired $x$ and $y$ patterns:

$$M = \sum_{\xi=1}^{k} y^\xi (x^\xi)', \qquad (7)$$

where $x'$ denotes the transpose of $x$. This produces perfect recall of each output pattern $y^\xi$ when the input patterns are orthonormal, that is, when

$$(x^i)' x^j = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}. \qquad (8)$$

In this case, the reconstruction of pattern $y^\xi$ is computed as

$$M x^\xi = \left( \sum_{\gamma=1}^{k} y^\gamma (x^\gamma)' \right) x^\xi = y^\xi \left( (x^\xi)' x^\xi \right) + \sum_{\gamma \neq \xi} y^\gamma \left( (x^\gamma)' x^\xi \right) = y^\xi. \qquad (9)$$

Unfortunately, the input patterns will not be orthonormal in most practical cases. Filtering using activation functions must be performed to extract the desired output pattern [11].
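The behavior of this classical construction is easy to reproduce numerically. The sketch below (an illustration, not code from the thesis; all data are invented) builds the memory of Eq. (7) and shows that recall is exact for orthonormal inputs but degrades otherwise.

```python
# Illustrative check of the classical outer-product associative memory (Eqs. 7-9).
import numpy as np

rng = np.random.default_rng(0)

X = np.eye(4)[:, :3]                  # three orthonormal 4-dimensional input patterns (columns)
Y = rng.normal(size=(5, 3))           # arbitrary 5-dimensional output patterns (columns)

M = Y @ X.T                           # Eq. (7): sum of outer products y^xi (x^xi)'
print(np.allclose(M @ X, Y))          # True: perfect recall for orthonormal inputs (Eq. 9)

X_bad = rng.normal(size=(4, 3))       # non-orthonormal inputs
M_bad = Y @ X_bad.T
print(np.allclose(M_bad @ X_bad, Y))  # almost surely False: recall is corrupted
```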


2.3.2 Constructing Lattice-Based Memories

Memories based on lattice algebra are similar to the aforementioned associative memories. For a set of pattern associations $(X, Y)$, define the two lattice-based (or morphological) memories $W_{XY}$ and $M_{XY}$ as

$$W_{XY} = \bigwedge_{\xi=1}^{k} \left[ y^\xi \,\boxed{\wedge}\, (x^\xi)^* \right] \qquad (10)$$

and

$$M_{XY} = \bigvee_{\xi=1}^{k} \left[ y^\xi \,\boxed{\vee}\, (x^\xi)^* \right]. \qquad (11)$$

Individual elements of the min memory $W_{XY}$ can be calculated using the formula

$$w_{ij} = \bigwedge_{\xi=1}^{k} \left( y_i^\xi - x_j^\xi \right). \qquad (12)$$

The formula for individual elements of the max memory $M_{XY}$ is similar. Recall of stored patterns is accomplished using the max and min matrix products defined in Section 2.2. If the memories are capable of recalling the stored output pattern $y$ when presented with the input pattern $x$, then the following relationships are true:

$$W_{XY} \,\boxed{\vee}\, x = y, \qquad (13a)$$

$$M_{XY} \,\boxed{\wedge}\, x = y. \qquad (13b)$$

Thus, patterns are recalled from the min memory, $W$, using the max product, and from the max memory, $M$, using the min product.
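As an illustration (assumed code, not the author's), the functions below build $W_{XY}$ and $M_{XY}$ from Eqs. (10)-(12), with patterns stored as the columns of arrays X and Y, and perform recall with the max and min products of Eqs. (13a) and (13b).

```python
# Illustrative construction of the lattice-based memories and their recall operations.
import numpy as np

def W_memory(X, Y):
    """Min memory W_XY (Eq. 12): w_ij = min over patterns of (y_i - x_j)."""
    return (Y[:, None, :] - X[None, :, :]).min(axis=2)

def M_memory(X, Y):
    """Max memory M_XY: m_ij = max over patterns of (y_i - x_j)."""
    return (Y[:, None, :] - X[None, :, :]).max(axis=2)

def recall_from_W(W, x):
    """Eq. (13a): recall with the max product, y_i = max_j (w_ij + x_j)."""
    return (W + x[None, :]).max(axis=1)

def recall_from_M(M, x):
    """Eq. (13b): recall with the min product, y_i = min_j (m_ij + x_j)."""
    return (M + x[None, :]).min(axis=1)
```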


If the set of pattern associations is given by $(X, X)$, then the memories $W_{XX}$ and $M_{XX}$ are called morphological autoassociative memories [14, 21, 22]. Notice that the diagonal of an autoassociative memory matrix will be composed entirely of zeros, since

$$w_{ii} = \bigwedge_{\xi=1}^{k} \left( x_i^\xi - x_i^\xi \right) = 0 = \bigvee_{\xi=1}^{k} \left( x_i^\xi - x_i^\xi \right) = m_{ii}. \qquad (14)$$

Further, the autoassociative memories $W_{XX}$ and $M_{XX}$ are related by the conjugate transpose operator defined in Section 2.2, such that $M_{XX} = W_{XX}^*$ and $W_{XX} = M_{XX}^*$, since $\bigwedge_{\xi=1}^{k} (x_i^\xi - x_j^\xi) = -\bigvee_{\xi=1}^{k} (x_j^\xi - x_i^\xi)$. Thus, it is possible to derive both autoassociative memories with only one pass through the set $X$.

2.4 Properties of Lattice-Based Memories

This section discusses salient properties of associative memories that are based on lattice algebra. Particular attention is given to the conditions required for perfect recall of stored patterns and the geometric interpretation of autoassociative memories, since these properties are most important for designing and understanding the endmember detection algorithm discussed in Chapter 4.

2.4.1 Conditions for Perfect Recall

It is reasonable to investigate the conditions necessary for perfect recall from a morphological memory. The following theorem, proven by Ritter and Sussner, relates perfect recall of stored patterns to the structure of the memory matrix [14].

Theorem 2.1. $W_{XY}$ is a perfect recall memory for the pattern association $(x^\gamma, y^\gamma)$ if and only if each row of the matrix $y^\gamma \,\boxed{\wedge}\, (x^\gamma)^* - W_{XY}$ contains a zero entry. Similarly, $M_{XY}$ is a perfect recall memory for the pattern association $(x^\gamma, y^\gamma)$ if and only if each row of the matrix $M_{XY} - y^\gamma \,\boxed{\vee}\, (x^\gamma)^*$ contains a zero entry.


Recall that the autoassociative memories $W_{XX}$ and $M_{XX}$ have their major diagonals composed entirely of zeros as a consequence of their definition. Thus, the conditions of Theorem 2.1 are automatically satisfied for autoassociative memories, and the following relationship is true:

$$W_{XX} \,\boxed{\vee}\, x^\gamma = x^\gamma = M_{XX} \,\boxed{\wedge}\, x^\gamma, \qquad (15)$$

for all $x^\gamma \in X$ [8]. Notice that this formulation does not place any constraint on the orthogonality of the patterns in $X$, or on the maximum number of patterns stored in the memory. In fact, a morphological autoassociative memory can store the theoretical maximum number of patterns while still giving perfect recall [14, 22]. Further, a morphological memory is trained with only one pass through the stored data. This is in significant contrast to other neural network models used for associative memories, such as the Hopfield net, which utilizes a recurrent neural network model [11].
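A quick numerical check of these statements (a sketch under the notation above, not thesis code) verifies the zero diagonals of Eq. (14), the conjugate-transpose relation between the two memories, and the perfect recall property of Eq. (15).

```python
# Illustrative verification of Eqs. (14)-(15) for randomly generated patterns.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 5))                       # five 8-dimensional patterns as columns

W = (X[:, None, :] - X[None, :, :]).min(axis=2)   # W_XX
M = (X[:, None, :] - X[None, :, :]).max(axis=2)   # M_XX

print(np.allclose(np.diag(W), 0), np.allclose(np.diag(M), 0))   # zero diagonals (Eq. 14)
print(np.allclose(M, -W.T))                                      # M_XX is the conjugate transpose of W_XX

recall = lambda W, x: (W + x[None, :]).max(axis=1)               # max product recall
print(all(np.allclose(recall(W, X[:, j]), X[:, j]) for j in range(5)))  # perfect recall (Eq. 15)
```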


2.4.2 Fixed Point Sets and Lattice Independence

Given an autoassociative morphological memory $W_{XX}$, the set of fixed points is the set of all possible patterns $x$ such that $W_{XX} \,\boxed{\vee}\, x = x$. As previously established, this is guaranteed to be true when $x \in X$, but may also be true for infinitely many other points that are not part of the set $X$ used to construct the memory. Ritter and Gader proved that the two autoassociative memories formed from $X$, $W_{XX}$ and $M_{XX}$, share the same fixed point set, denoted $F(X)$ [22]. Since the application domain for these methods is traditional pattern recognition, the remainder of this section will assume that all patterns are represented as vectors in $\mathbb{R}^n$, unless otherwise noted.

Similar to the notion of linear dependence in traditional vector-space algebra, lattice algebra contains the notion of lattice dependence. Consider the set $X = \{x^1, \ldots, x^k\}$. A linear minimax combination of vectors from $X$ is any pattern $x$ that can be formed by the expression

$$x = \mathcal{S}(x^1, \ldots, x^k) = \bigvee_{j \in J} \bigwedge_{\xi=1}^{k} \left( a_{\xi j} + x^\xi \right), \qquad (16)$$

where $J$ is a finite set of indices and $a_{\xi j} \in \mathbb{R}$ for all $j \in J$ and $\xi = 1, \ldots, k$ [22]. This expression is known as a linear minimax sum. As an alternate definition, any finite combination involving the maximum and minimum operators and vectors of the form $a + x^\gamma$, for $a \in \mathbb{R}$ and $x^\gamma \in X$, is a linear minimax sum. The set of vectors that can be formed by a linear minimax sum of the vectors in $X$ is called the linear minimax span of $X$.

We are now able to define the notions of lattice dependence and independence. A pattern $y$ is said to be lattice dependent on $X = \{x^1, \ldots, x^k\}$ if and only if $y = \mathcal{S}(x^1, \ldots, x^k)$ for some linear minimax sum of the patterns in $X$. A pattern is lattice independent if and only if it is not lattice dependent. The following theorem connects the definition of the fixed point set $F(X)$ with the definition of lattice dependence [22].

Theorem 2.2. If $y \in \mathbb{R}^n$, then $y$ is a fixed point of $W_{XX}$ if and only if $y$ is lattice dependent on $X$.

Proof. Assume $y = (y_1, \ldots, y_n)'$ is a fixed point of $W_{XX}$. For each $j = 1, \ldots, n$ and each $\xi = 1, \ldots, k$, set $a_{\xi j} = y_j - x_j^\xi$. A linear minimax sum formed from the patterns in $X$ is given by
$$\mathcal{S}(x^1, \ldots, x^k) = \bigvee_{j \in J} \bigwedge_{\xi=1}^{k} \left( a_{\xi j} + x^\xi \right) = \bigvee_{j \in J} \bigwedge_{\xi=1}^{k} \left( y_j - x_j^\xi + x^\xi \right). \qquad (17)$$

Letting $J = \{1, \ldots, n\}$ and manipulating the placement of the maximum and minimum operators, one obtains

$$\bigvee_{j=1}^{n} \left[ \bigwedge_{\xi=1}^{k} \left( y_j - x_j^\xi + x^\xi \right) \right]. \qquad (18)$$

Making the maximum operator in Eqn. (18) explicit gives

$$\left[ \bigwedge_{\xi=1}^{k} \left( y_1 - x_1^\xi + x^\xi \right) \right] \vee \cdots \vee \left[ \bigwedge_{\xi=1}^{k} \left( y_n - x_n^\xi + x^\xi \right) \right]. \qquad (19)$$

We can also make the components of each term $y_j - x_j^\xi + x^\xi$ explicit, and obtain

$$\left[ \bigwedge_{\xi=1}^{k} \begin{pmatrix} y_1 - x_1^\xi + x_1^\xi \\ \vdots \\ y_1 - x_1^\xi + x_n^\xi \end{pmatrix} \right] \vee \cdots \vee \left[ \bigwedge_{\xi=1}^{k} \begin{pmatrix} y_n - x_n^\xi + x_1^\xi \\ \vdots \\ y_n - x_n^\xi + x_n^\xi \end{pmatrix} \right]. \qquad (20)$$

This expression is simplified by applying the definition of $W_{XX}$, as follows:

$$\begin{pmatrix} (w_{11} + y_1) \vee (w_{12} + y_2) \vee \cdots \vee (w_{1n} + y_n) \\ (w_{21} + y_1) \vee (w_{22} + y_2) \vee \cdots \vee (w_{2n} + y_n) \\ \vdots \\ (w_{n1} + y_1) \vee (w_{n2} + y_2) \vee \cdots \vee (w_{nn} + y_n) \end{pmatrix}, \qquad (21)$$

which is simply the definition of the max product $W_{XX} \,\boxed{\vee}\, y$. Since we assume that $y$ is a fixed point, we have $W_{XX} \,\boxed{\vee}\, y = y$. Thus, $y$ is a linear minimax sum of the patterns in $X$ and is therefore lattice dependent on $X$. This proves the theorem.

The application of Theorem 2.2 gives a convenient way to check whether a pattern $y$ is lattice independent of a set of patterns $X$: one simply forms the autoassociative memory $W_{XX}$ and attempts to reconstruct $y$ using the max product. If $W_{XX} \,\boxed{\vee}\, y \neq y$, then $y$ is not in the fixed point set of $X$, and is therefore lattice independent of the patterns in $X$.
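The following small example (illustrative, not from the thesis; the test vectors are invented) implements this test.

```python
# Lattice-dependence test suggested by Theorem 2.2: rebuild y through W_XX with the
# max product; if the reconstruction differs from y, then y is lattice independent of X.
import numpy as np

def w_xx(X):
    """Min autoassociative memory W_XX of the columns of X."""
    return (X[:, None, :] - X[None, :, :]).min(axis=2)

def is_lattice_dependent(y, X, tol=1e-9):
    """True if y is a fixed point of W_XX, i.e. lattice dependent on the columns of X."""
    W = w_xx(X)
    y_hat = (W + y[None, :]).max(axis=1)          # W_XX max-product y
    return float(np.max(np.abs(y_hat - y))) <= tol

X = np.array([[0., 1.],
              [1., 0.],
              [2., 0.]])                          # two patterns in R^3, stored as columns
print(is_lattice_dependent(X[:, 0], X))           # True: stored patterns are fixed points
print(is_lattice_dependent(X[:, 0] + 2.0, X))     # True: adding a constant keeps lattice dependence
print(is_lattice_dependent(np.array([0., 0., 5.]), X))  # False: this vector is lattice independent
```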


2.4.3 Strong Lattice Independence and Affine Independence

Traditional vector algebra includes the notion of a base, which is the smallest set of vectors that span a given space. The idea of a base is also applicable to linear minimax sums and autoassociative memories. Specifically, given a set of vectors $X \subset \mathbb{R}^n$, does there exist a smaller set $B \subset \mathbb{R}^n$ such that the vectors in $B$ generate the same linear minimax span as the vectors in $X$ and $B$ is minimal in some sense? Note that if the two sets $B$ and $X$ have the same linear minimax span, then the two memories $W_{XX}$ and $W_{BB}$ must be equal as a consequence of Theorem 2.2. This section expands on this question by examining a more rigorous kind of lattice independence relationship called strong lattice independence and providing a method for computing a strongly lattice independent base of the set $X \subset \mathbb{R}^n$ that contains only $n$ or fewer vectors. The theory of strong lattice independence is essential to the development of the endmember detection algorithm discussed in Chapter 4.

Definition 2.1. A set of vectors $X = \{x^1, \ldots, x^k\} \subset \mathbb{R}^n$ is said to be max dominant if and only if for every $\lambda \in \{1, \ldots, k\}$ there exists an index $j_\lambda \in \{1, \ldots, n\}$ such that

$$x_{j_\lambda}^\lambda - x_i^\lambda = \bigvee_{\xi=1}^{k} \left( x_{j_\lambda}^\xi - x_i^\xi \right) \quad \text{for all } i \in \{1, \ldots, n\}. \qquad (22)$$


Definition 2.2. A set of vectors $X = \{x^1, \ldots, x^k\} \subset \mathbb{R}^n$ is said to be min dominant if and only if for every $\lambda \in \{1, \ldots, k\}$ there exists an index $j_\lambda \in \{1, \ldots, n\}$ such that

$$x_{j_\lambda}^\lambda - x_i^\lambda = \bigwedge_{\xi=1}^{k} \left( x_{j_\lambda}^\xi - x_i^\xi \right) \quad \text{for all } i \in \{1, \ldots, n\}. \qquad (23)$$

Definition 2.3. A set of lattice independent vectors $X = \{x^1, \ldots, x^k\} \subset \mathbb{R}^n$ is said to be strongly lattice independent if and only if $X$ is max dominant, or min dominant, or both.

Interestingly, the set of vectors $W$ formed from the columns of the autoassociative memory $W_{XX}$ is always max dominant. The following theorem verifies that a strongly lattice independent set can always be constructed from the set $W$.

Theorem 2.3. Let $X = \{x^1, \ldots, x^k\} \subset \mathbb{R}^n$ and let $W \subset \mathbb{R}^n$ be the set of vectors consisting of the columns of the matrix $W_{XX}$. Then there exists $V \subseteq W$ such that $V$ is strongly lattice independent and $W_{VV} = W_{XX}$.

The proof of the theorem requires the following lemma.

Lemma. If $V \subseteq W$ is any non-empty subset of $W$, then $V$ is max dominant.

Proof of the Lemma. If $\mathrm{card}(V) = 1$, then $V$ satisfies the max dominance condition vacuously. If $\mathrm{card}(V) \geq 2$, let $u, v \in V \subseteq W$. Thus $u = w^j$ and $v = w^l$ for some $j, l \in \{1, \ldots, n\}$ with $j \neq l$. Since $u = w^j$, we have $u_i = w_{ij}$ and $u_j = w_{jj} = 0$; hence

$$u_j - u_i = w_{jj} - w_{ij} = -w_{ij}. \qquad (24)$$

It was established in [22] that

$$w_{ij} + w_{jl} \leq w_{il}, \quad \text{so that} \quad v_j - v_i = w_{jl} - w_{il} \leq -w_{ij} = u_j - u_i. \qquad (25)$$


This relationship is true for $i = 1, \ldots, n$, so $u$ and $v$ satisfy the conditions for max dominance. This proves the lemma.

Proof of Theorem 2.3. It is always possible to construct a lattice independent set from $W$ by eliminating lattice dependent patterns. Let $W_1 = W \setminus \{w^1\}$. If the memory $W_{W_1 W_1} = W_{XX}$, set $V_1 = W_1$; otherwise, set $V_1 = W$. That is, $w^1 \in V_1$ if $w^1$ is lattice independent and $w^1 \notin V_1$ if $w^1$ is lattice dependent. In either case, $W_{V_1 V_1} = W_{XX}$. Now, set $V_2 = V_1 \setminus \{w^2\}$ if $W_{V_2 V_2} = W_{XX}$; otherwise set $V_2 = V_1$. Again, in either case $W_{V_2 V_2} = W_{XX}$. Continue in this manner, setting $V_n = V_{n-1} \setminus \{w^n\}$ if $W_{V_n V_n} = W_{XX}$, or $V_n = V_{n-1}$ if the memories are unequal. Now let $V = V_n$. Note that $V \subset \mathbb{R}^n$ and, by construction,

$$W \supseteq V_1 \supseteq V_2 \supseteq \cdots \supseteq V_n = V, \quad \text{with } V \text{ lattice independent}. \qquad (26)$$

As previously shown, $V$ satisfies the condition for max dominance by the lemma, and it is lattice independent by construction. Therefore $V$ is strongly lattice independent. This proves the theorem.

Theorem 2.3 provides a straightforward method for deriving a strongly lattice independent base for any set of patterns $X \subset \mathbb{R}^n$: form the memory $W_{XX}$ and let the set $W$ consist of the columns of the memory, then remove any lattice dependent patterns from $W$ using the method described above to yield the strongly lattice independent set $V$.
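The elimination procedure in the proof translates directly into code. The sketch below (assumed, not the author's implementation) removes, one column at a time, any column of $W_{XX}$ whose deletion leaves the memory unchanged; the surviving columns form a strongly lattice independent set $V$ with $W_{VV} = W_{XX}$.

```python
# Illustrative reduction of the columns of W_XX to a strongly lattice independent set.
import numpy as np

def w_memory(cols):
    """Min autoassociative memory of a collection of column vectors."""
    C = np.stack(cols, axis=1)
    return (C[:, None, :] - C[None, :, :]).min(axis=2)

def sli_column_indices(W_XX, tol=1e-9):
    """Indices of a subset V of the columns of W_XX such that W_VV = W_XX."""
    n = W_XX.shape[1]
    keep = list(range(n))
    for j in range(n):
        trial = [i for i in keep if i != j]
        if trial and np.allclose(w_memory([W_XX[:, i] for i in trial]), W_XX, atol=tol):
            keep = trial          # column j is lattice dependent on the others; drop it
    return keep
```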


Strongly lattice independent sets are essential to endmember detection because of a connection between strong lattice independence and affine independence. An affine combination of a set of vectors $X = \{x^1, \ldots, x^k\}$ is a linear combination in which all of the combination coefficients sum to one:

$$\sum_{i=1}^{k} a_i x^i, \qquad (27)$$

with $a_i \in \mathbb{R}$ for $1 \leq i \leq k$ and $\sum_{i=1}^{k} a_i = 1$. The set $X$ is said to be affinely independent if no vector $x^\gamma$ can be written as an affine combination of the remaining vectors, $X \setminus \{x^\gamma\}$. The following theorem, again proven by Ritter and Gader, connects strong lattice independence to affine independence [22].

Theorem 2.4. If $X = \{x^1, \ldots, x^k\} \subset \mathbb{R}^n$ is strongly lattice independent, then $X$ is affinely independent.

The proof utilizes a geometric description of the boundaries of the fixed point set $F(X)$, a subject discussed in [22] that is beyond the scope of this thesis. The convex hull of a set of $n+1$ affinely independent points yields an $n$-dimensional simplex, the simplest geometrical structure capable of enclosing $n$-dimensional space. Chapters 3 and 4 expand on the importance of simplexes in representing endmembers in hyperspectral image processing.


CHAPTER 3
HYPERSPECTRAL IMAGES AND THE LINEAR MIXING MODEL

Hyperspectral (HS) imaging devices are a special class of remote sensor capable of simultaneously measuring and recording light at hundreds of different wavelengths. Because hyperspectral imaging devices capture light energy outside the range visible to the human eye, HS images can reveal information that is undetectable in conventional monochromatic imaging systems.

3.1 Hyperspectral Imaging Devices

Hyperspectral imaging devices belong to a class of remote sensors called imaging spectrometers. An imaging spectrometer is a sensor that measures light at multiple wavelengths simultaneously, by collecting incoming light, then dividing it into many adjacent frequency bands using a separating element such as a prism. Other elements in the sensor measure the energy in each band [1]. Early imaging spectrometers had a spectral resolution on the order of tens of bands. Today, these sensors are called multispectral, to distinguish them from hyperspectral devices, which have spectral resolutions on the order of hundreds of bands. Since first appearing in the 1980s, hyperspectral imaging spectrometers have grown in precision and sophistication and are now a mainstream technology within the remote sensing community [1].

3.1.1 Image Cubes

An individual HS image may be represented as a collection of monochromatic images. In this framework, each pixel of an $m \times n$ pixel monochromatic image records
the light energy from a single spectral band. Therefore, if a hyperspectral sensor measures $k$ spectral bands, then the complete hyperspectral image is a collection of $k$ monochromatic images of $m \times n$ pixels, called an image cube of size $m \times n \times k$.

Image cubes produced by modern hyperspectral sensors require a significant amount of memory. For example, consider the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) operated by NASA's Jet Propulsion Laboratory. The AVIRIS image cube has size 614 × 512 × 224 pixels [30]. Each entry in the image cube is stored as a two-byte unsigned integer, so the total storage space required for a single AVIRIS image is computed as 2 × 614 × 512 × 224 = 140,836,864 bytes. Considering that an AVIRIS experiment may require several images collected over the region of interest, it is crucial that algorithms for processing HS images consider memory requirements and image size.

In practice, the size of an image cube can often be significantly reduced by applying techniques such as Principal Component Analysis or the Minimum Noise Fraction Transform [31, 32]. Because adjacent spectral bands are likely to be highly correlated, these techniques can reduce the size of an image cube from hundreds of dimensions to a relatively small number of important components. There is, however, a disadvantage to such dimensionality reduction techniques. Transforming the image spectra can destroy their physical meaning, since the transformed spectra no longer correspond to real-world physical materials, making expert analysis of transformed HS images more difficult.
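For reference, the storage figure quoted above is easy to verify; the two-line check below is a trivial illustration, not code from the thesis.

```python
# Storage required for one 614 x 512 x 224 AVIRIS cube at two bytes per sample.
rows, cols, bands, bytes_per_sample = 614, 512, 224, 2
print(rows * cols * bands * bytes_per_sample)   # 140836864 bytes, roughly 134 MB
```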


3.1.2 Spectroscopy

Spectroscopy is the study of light emitted by or reflected from different materials and the variation in this light energy with respect to wavelength [33, 34]. Real-world materials such as organic compounds, minerals, and inorganic chemicals reflect light in different ways. Thus, the data collected by imaging spectrometers makes material differentiation and identification possible. Though traditional spectrometers are lab-based or hand-held, modern hyperspectral imaging devices are mounted on aircraft or satellite platforms [2, 30] and can collect spectral data over a wide geographical area in a relatively short amount of time.

As discussed in Section 3.1.1, an $m \times n \times k$ HS image cube can be considered as a collection of $k$ monochromatic images of $m \times n$ pixels, where each image records the measured intensities in a different spectral band. Corresponding pixels in multiple monochromatic images represent different spectral measurements collected at the same geographical location. For a given geographical location, the vector formed from corresponding pixels in all $k$ images of the HS cube is the spectrum associated with that location. Thus, an $m \times n \times k$ image cube contains $m \times n$ spectra, each represented as a $k$-element vector. Spectra can be visualized as continuous plots of wavelength vs. reflected intensity. Figs. 3.1 and 3.2 show spectra for the Juniper bush and the mineral Montmorillonite.

3.1.3 Endmembers

The fundamental materials that make up a scene are known as endmembers. In many HS imaging applications, the endmembers are determined a priori using domain-specific knowledge. In this case, material identification is accomplished by matching pixel spectra in the hyperspectral image to representative endmember spectra stored in a library.


Figure 3.1: Reference spectra for Juniper bush. The large peak in the near infrared range is characteristic of living vegetation.

Figure 3.2: Reference spectra for the mineral Montmorillonite. The dips located at approximately 1.4, 1.9, and 2.25 µm are known as absorption bands. Characteristic absorption bands are one of the features used to match recorded spectra to physical materials.


In general, however, the endmember spectra are not known in advance, and matching pixel spectra to a library of reference spectra is not feasible, since any library suitable for general use must contain a large enough number of spectra to be applicable to any application domain. Most of these reference spectra will not match any pixels from the image, leading to a great deal of wasted computation, or will produce partial matches, increasing the difficulty of determining the true endmembers in the scene. Therefore, it is desirable to determine endmembers from the image itself by identifying the pixel spectra that represent fundamental materials [31, 32]. This comprises the problem of automated endmember detection.

3.2 The Linear Mixing Model

The practical limitations of hyperspectral devices increase the challenge of automated endmember detection. Many modern HS imaging systems are mounted on airborne platforms [2, 30]. In these systems, the spatial resolution of each pixel is on the order of tens of meters, and an area of such size is unlikely to be composed of only one material. However, if the scene is dominated by a relatively small number of endmember materials, it is reasonable to assume that mixtures of endmembers account for the spectra observed in each pixel.

The dominant mixing model for HS images represents each pixel as a linear combination of fundamental endmember materials [32]. Let $S$ be the set of $M$ endmembers and $x$ an observed pixel spectrum. In the linear mixing model,

$$x = \sum_{i=1}^{M} a_i s_i + w, \qquad (28)$$

where $s_i \in S$ is an endmember, the scalar $a_i$ is the fractional abundance associated with $s_i$, and $w$ is the additive observation noise vector [32]. In order to be physically
meaningful, the fractional abundance coefficients should satisfy the following constraints:

$$a_i \in [0, 1], \qquad \sum_{i=1}^{M} a_i = 1. \qquad (29)$$

That is, the abundances must have values between zero and one, and all fractional abundances for a given pixel sum to unity. If these constraints are satisfied, then the fractional abundance $a_i$ represents the percentage of endmember $s_i$ present in pixel $x$.

In the linear mixing model, the automated endmember detection process becomes the search for pure pixels: those spectra composed of one principal material with as little contamination from other materials as possible [8, 31, 32]. Craig showed that there is a connection between these pure pixels and the theory of convex sets [7, 8]. In this formulation, the endmember pixels lie at the exterior of a high-dimensional volume that encloses all the pixel spectra. The mixed pixels occupy the hull's interior, and can be represented as linear combinations of the extremal pixels. The simplest model for this $k$-dimensional hull is a simplex, the convex hull of $k+1$ affinely independent points. The simplex is the simplest polyhedron that can enclose $k$-dimensional space. For example, a triangle and a tetrahedron are simplexes in two and three dimensions, respectively. Further, if a mixed pixel $x$ is interior to the simplex, its fractional abundance coefficients will automatically satisfy the required physical constraints.

3.3 Computing Fractional Abundances

After endmember detection is complete, the newly discovered fundamental materials are used to compute fractional abundances for each pixel. This step is known as inversion or unmixing.


The simplest technique approaches inversion as an unconstrained optimization problem. For example, let $x$ be the HS pixel to unmix, and $S$ an $L \times M$ matrix with the $L$-dimensional endmembers arranged as its columns. The goal of the optimization is to find a vector of fractional abundances $a$ such that the squared error $\| x - S a \|^2$ is minimized [32]. The closed-form solution to this problem is given by

$$a = (S' S)^{-1} S' x. \qquad (30)$$

Note that this solution is unconstrained and is not guaranteed to satisfy the full-additivity and non-negativity constraints discussed in Section 3.2.

It is possible to derive a closed-form solution for a set of coefficients that satisfies the full-additivity constraint. This approach uses the method of Lagrange multipliers to constrain the vector $a$ to lie on the hyperplane where $\sum_{i=1}^{M} a_i = 1$ [32]. The corresponding closed-form solution is given by

$$a = a_U - (S' S)^{-1} Z \left[ Z' (S' S)^{-1} Z \right]^{-1} \left( Z' a_U - 1 \right), \qquad (31)$$

where $Z$ is an $M \times 1$ vector having ones as its entries and $a_U$ is the unconstrained solution of Eqn. (30).

There is no known closed-form solution that produces abundances satisfying the non-negativity constraint. The Non-Negative Least Squares (NNLS) algorithm is an iterative approximation that has been employed in practice [35]. This method iteratively estimates the abundances $a$ by finding least squares solutions at each step for the coefficients of $a$ that are negative. Unfortunately, coefficients produced by this technique rarely satisfy the full-additivity constraint.
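The two closed-form solutions are straightforward to implement. The sketch below (assumed code, not from the thesis) applies Eq. (30) and Eq. (31) to a synthetic pixel mixed according to Eq. (28); the endmember matrix and the true abundances are invented for the example.

```python
# Illustrative least-squares inversion: unconstrained (Eq. 30) and sum-to-one (Eq. 31).
import numpy as np

def unmix_unconstrained(S, x):
    """Eq. (30): a = (S'S)^{-1} S' x."""
    return np.linalg.solve(S.T @ S, S.T @ x)

def unmix_sum_to_one(S, x):
    """Eq. (31): correct the unconstrained solution so that its entries sum to one."""
    a_u = unmix_unconstrained(S, x)
    g = np.linalg.solve(S.T @ S, np.ones(S.shape[1]))   # (S'S)^{-1} Z, with Z a vector of ones
    return a_u - g * (a_u.sum() - 1.0) / g.sum()

# Synthetic check: mix three 51-band endmembers with known abundances, then invert.
rng = np.random.default_rng(0)
S = rng.uniform(0.0, 1.0, size=(51, 3))
a_true = np.array([0.6, 0.3, 0.1])
x = S @ a_true + rng.normal(scale=1e-3, size=51)        # Eq. (28) with small noise

a_hat = unmix_sum_to_one(S, x)
print(np.round(a_hat, 3), a_hat.sum())                  # close to a_true; entries sum to one
```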


Because both constraints are difficult to satisfy in practice, there are a number of hybrid methods that combine multiple approaches to find abundances. One such method, proposed by Ramsey and Christenson, relies on the reasonable assumption that individual pixels may be successfully unmixed using only a subset of the endmembers [36]. This technique proceeds iteratively, computing the unconstrained solution for $a$ at each iteration, then culling any endmembers that produce negative abundances. After all negative coefficients have been eliminated, the resulting vector is scaled so that its elements sum to unity.
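A sketch of this hybrid procedure is given below; it is an interpretation of the description above, not Ramsey and Christenson's code. Culled endmembers are reported with zero abundance, and the surviving coefficients are rescaled to sum to one.

```python
# Illustrative hybrid inversion: unconstrained solve, cull negatives, rescale to unit sum.
import numpy as np

def unmix_iterative_culling(S, x):
    """Return a full-length abundance vector; culled endmembers get abundance zero."""
    active = list(range(S.shape[1]))
    while True:
        a = np.linalg.solve(S[:, active].T @ S[:, active], S[:, active].T @ x)
        negative = [active[i] for i in range(len(active)) if a[i] < 0]
        if not negative:
            break
        active = [j for j in active if j not in negative]
        if not active:                     # degenerate case: every endmember was culled
            return np.zeros(S.shape[1])
    out = np.zeros(S.shape[1])
    out[active] = a / a.sum()              # rescale the surviving abundances to sum to one
    return out
```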


CHAPTER 4
HYPERSPECTRAL ENDMEMBER DETECTION WITH MORPHOLOGICAL MEMORIES

This chapter describes an algorithm for detecting endmembers in hyperspectral images using morphological autoassociative memories. The proposed technique relies on the properties of strongly lattice independent sets and the geometrical description of a set of endmembers.

4.1 Motivation for the Algorithm

As established in Section 3.2, the set of endmembers can be interpreted as the extreme points of a high-dimensional volume that encloses the hyperspectral image pixels. The simplest model for this volume is an $n$-dimensional simplex, formed from the convex hull of $n+1$ affinely independent points. As established in Section 2.4.3, the columns of the memory $W_{XX}$ can be reduced to a strongly lattice independent set that is affinely independent. After scaling and translating, this strongly lattice independent set yields a set of endmembers for the hyperspectral pixels in $X$. Thus, the columns of the $W_{XX}$ memory provide up to $n$ affinely independent vectors that serve as extreme points of the simplex. The $(n+1)$st point of the simplex is the shade or dark point formed from the minimum of all vectors in $X$ [32]. The dark point serves as the apex of the simplex, and represents the minimum possible energy present in a pixel in the image.

The columns of $W_{XX}$ may yield a strongly lattice independent set with fewer than $n$ vectors. This is an acceptable result, and occurs if the hyperspectral pixels lie in a
simplex with dimensionality less than $n$. In practice, however, the size of hyperspectral data sets implies that the $W_{XX}$ matrix will likely have $n$ lattice independent columns.

As established in Section 2.3.2, the max memory $M_{XX}$ is equal to the conjugate transpose of $W_{XX}$. Because $M_{XX}$ also satisfies the theory relating to strong lattice independence, this provides a convenient way to derive a second set of endmembers that may yield information unavailable from the $W_{XX}$ memory alone.

4.2 Scaling and Positioning Endmembers

The strongly lattice independent set $V$ determined from the columns of $W_{XX}$ yields a set of candidate endmembers for the pixels in $X$. Before the algorithm completes, these endmembers must be translated into a set more suitable for the linear unmixing process described in Section 3.3. In particular, the $W_{XX}$ memory may contain several negative terms. For each $v \in V$, set

$$v \leftarrow v - \bigwedge_{i=1}^{n} v_i, \qquad (32)$$

that is, subtract from each $v$ its minimum element. Note that if $v$ has no negative elements, then

$$\bigwedge_{i=1}^{n} v_i = 0, \qquad (33)$$

since the diagonal of $W_{XX}$ consists entirely of zeros, and $v$ remains unchanged. Further, the vectors produced by this transform will still exhibit strong lattice independence. Let $X$ be a strongly lattice independent set and $x \in X$. It was shown in [22] that the set $\left( X \setminus \{x\} \right) \cup \{a + x\}$, where $a \in \mathbb{R}$, will still be strongly lattice independent.


The presence of zeros on the diagonal requires one additional manipulation to the vectors of $V$. Let $v^i$ be a vector in $V$. By the definition of $W_{XX}$, $v_i^i = 0$. If $v^i$ is scaled to eliminate negative entries, $v_i^i$ will also be scaled upwards, and may appear as a spike in the final endmember. This artifact is removed by setting

$$v_i^i = \begin{cases} v_{i+1}^i, & i = 1 \\ v_{i-1}^i, & i = k \\ \left( v_{i-1}^i + v_{i+1}^i \right) / 2, & 1 < i < k \end{cases} \qquad (34)$$

where $k$ is the dimensionality of $v^i$. Because adjacent spectral bands tend to be highly correlated, this smoothing improves the quality of the resulting endmembers by eliminating discontinuities that might appear along the diagonal. Endmembers produced from the max memory $M_{XX}$ cannot contain negative entries, but the zeros along the diagonal may still manifest themselves as downward spikes or dips in the spectral plot. These are removed by a smoothing transform analogous to the one used for $W_{XX}$.


4.3 The Algorithm

Now that the motivation for the method has been established, the algorithm for deriving endmembers using morphological memories is presented in greater detail as follows:

1. Input $X$, the set of hyperspectral pixels.
2. Form the memory $W_{XX}$ by scanning the set $X$.
3. Set $S_W$ equal to the strongly lattice independent set formed from the columns of $W_{XX}$.
4. Scale and position the endmembers of the set $S_W$.
5. Set $M_{XX} = W_{XX}^*$.
6. Compute $S_M$, the strongly lattice independent set formed from the columns of $M_{XX}$.
7. Scale and position the endmembers of the set $S_M$.
8. Return $S_W$ and $S_M$.

The majority of execution time is spent on Step (2), scanning the hyperspectral image to produce the morphological memory. The remaining steps take relatively little time in comparison. A condensed sketch of this procedure in code is given at the end of this chapter.

4.4 Advantages of the Algorithm

The algorithm has several properties that make it a desirable method for endmember detection. First, because of their size, HS images are stored on disk or dedicated tape devices with relatively slow access speeds; the algorithm requires only one pass through the image, reducing the time spent on memory access operations. Further, the memory can be constructed incrementally, eliminating the need to dedicate a large amount of main memory to storing and manipulating image pixels. Third, the algorithm is based on lattice algebra, which does not include multiplication operations. Because hardware implementation of multiplication operations requires a relatively large number of clock cycles, lattice algebra based methods may outperform other algorithms that have the same computational complexity. Finally, the addition and comparison operations required to execute lattice algebra algorithms are well suited for implementation in programmable logic devices or application-specific integrated circuits.

Chapter 5 presents the results of applying the algorithm to experimental hyperspectral data collected over Cuprite, Nevada, an established test site for HS imaging.
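The sketch below (illustrative Python, not the author's MATLAB implementation) ties the steps of Section 4.3 together for the $W_{XX}$ memory: the memory is accumulated in a single pass over the pixels, its columns are reduced to a strongly lattice independent set, and each surviving column is shifted (Eq. 32) and smoothed along the diagonal (Eq. 34). The $M_{XX}$ endmembers can be obtained the same way from $M_{XX} = W_{XX}^*$.

```python
# Illustrative end-to-end sketch of endmember detection with the W_XX memory.
import numpy as np

def detect_endmembers_W(pixels):
    """pixels: iterable of 1-D spectra of length n. Returns candidate endmembers as columns."""
    W = None
    for x in pixels:                                   # single pass; pixels may stream from disk
        x = np.asarray(x, dtype=float)
        D = x[:, None] - x[None, :]                    # outer differences x_i - x_j
        W = D if W is None else np.minimum(W, D)       # running entrywise minimum gives W_XX
    n = W.shape[0]

    # Step 3: keep only columns needed to regenerate W_XX (strongly lattice independent set).
    def memory(cols):
        C = W[:, cols]
        return (C[:, None, :] - C[None, :, :]).min(axis=2)
    keep = list(range(n))
    for j in range(n):
        trial = [i for i in keep if i != j]
        if trial and np.allclose(memory(trial), W):
            keep = trial

    # Step 4: shift each column so its minimum is zero (Eq. 32) and smooth the diagonal (Eq. 34).
    endmembers = []
    for j in keep:
        v = W[:, j] - W[:, j].min()
        v[j] = v[j + 1] if j == 0 else v[j - 1] if j == n - 1 else 0.5 * (v[j - 1] + v[j + 1])
        endmembers.append(v)
    return np.stack(endmembers, axis=1)

# Example on synthetic data (invented spectra, purely for illustration).
rng = np.random.default_rng(0)
pixels = rng.uniform(size=(1000, 51))                  # 1000 synthetic 51-band spectra
E = detect_endmembers_W(pixels)
print(E.shape)                                         # (51, number of detected endmembers)
```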


CHAPTER 5
EXPERIMENTAL RESULTS

This chapter details the results of applying the proposed method for endmember detection to a real hyperspectral image collected over the area of Cuprite, Nevada. A mining area located approximately 200 km northwest of Las Vegas, Cuprite has been extensively used for remote sensing tests since the 1980s and has been thoroughly mapped [2, 9, 37, 38]. The area chiefly consists of volcanic rocks modified by hydrothermal processes. Principal minerals of interest at the Cuprite site include Alunite, Kaolinite, Calcite in the form of limestone, and Silica [38].

5.1 Data Characteristics

The data used for these experiments come from images taken by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1997. The AVIRIS device is an aircraft-mounted 224-band spectrometer, measuring wavelengths in the range of 400 to 2500 nm with an approximate spectral resolution of 10 nm. The spatial resolution of AVIRIS images is approximately 20 square meters per pixel [30].

The uppermost 51 bands of the AVIRIS data were selected for the endmember detection and unmixing experiments, corresponding to infrared wavelengths from 2000 to 2500 nm. Other studies of the Cuprite area have identified this range of the spectrum as most important for distinguishing between the area's characteristic minerals [9, 38]. The pixel spectra were not subjected to dimensionality reduction using Principal Components Analysis (PCA) or the Minimum Noise Fraction Transform (MNF). This approach contrasts with other methods [9, 39] that use dimensionality reduction to make
endmember detection more computationally efficient. Though these techniques decrease computational effort, they hinder spectral identification and mineral mapping, since the detected endmembers, having been transformed by PCA or MNF, will bear no resemblance to the real, physical material spectra they represent, as described in Section 3.1.1.

5.2 Endmember Determination

A section of the AVIRIS image cube of Cuprite covering 614 × 1182 pixels with 51 spectral bands per pixel was selected for the experiment [30]. A MATLAB 7.0.4 implementation of the algorithm, running on a Dell PC with the Windows 2000 operating system, a 3.06 GHz Pentium 4 processor, and 1 GB of RAM, required 135 seconds to compute a morphological memory containing every pixel in the image and to scale the columns of the memory to produce the final endmembers. Figures 5-1, 5-2, and 5-3 show three endmembers produced by the technique compared to the U.S. Geological Survey reference spectra for the minerals Kaolinite, Alunite, and Calcite, respectively [40]. Two sets of endmembers were produced, one each from the $W_{XX}$ and $M_{XX}$ memories. The two sets of endmembers each yield different minerals of interest in the Cuprite scene.

5.3 Material Abundance Maps

Following endmember determination, a geologically interesting subset of the Cuprite image covering 614 × 670 pixels was unmixed using the hybrid unmixing algorithm proposed by Ramsey and discussed in Section 3.3. This yields an abundance map for each endmember, where greater pixel intensity corresponds to a higher concentration of the endmember material in that pixel. For each map, any pixel
containing more than 50% of the given endmember was brightened to more clearly show regions of interest. Figure 5-4 is a multicolored reference map produced by the USGS covering the region of interest [40]. The reference map is shifted slightly to the east relative to the experimental data set, but it is easy to see the similarities between the USGS map and the abundance maps produced by unmixing.

The first set of material abundance maps was produced from endmembers derived from the min memory $W_{XX}$. Figs. 5-5 and 5-6 show mineral abundance maps for two different varieties of Kaolinite, corresponding favorably to the distributions of the two varieties of that mineral shown in the USGS map. Fig. 5-7 shows the abundance map for the Alunite endmember previously shown in Fig. 5-2. The horseshoe-shaped formation visible in orange on the USGS map is clearly visible, as is the ring to the east. Fig. 5-8 shows an abundance map for the Calcite endmember. Again, this map matches the distribution shown in mauve on the USGS map. All of these maps correspond favorably to abundance maps produced by the NFINDR algorithm of Winter [9, 10], another geometrically based method for automated endmember detection. Fig. 5-9 shows the distribution of the mineral Muscovite. Though less visually striking than the previous examples, the key shape in the center of the map clearly matches the distribution shown in blue on the reference map.

The USGS reference map of Fig. 5-4 contains multiple varieties of Kaolinite and Alunite, as well as Dickite, a mineral with a spectrum similar to Kaolinite. In the 51-dimensional pixel space, these similar materials correspond to extreme points of the high-dimensional enclosing volume that are clustered together. When using a simplex as the
model for the volume, this cluster may be represented by only a single detected endmember. For this reason, the method has difficulty distinguishing between materials with very similar spectra. Thus, the abundance maps shown in Figs. 5-5 and 5-6 each capture multiple varieties of Kaolinite shown in the reference map of Fig. 5-4.

For the most part, the endmembers derived from the max memory $M_{XX}$ correspond to materials already detected in the $W_{XX}$ endmembers. There is, however, one material of interest that appears in the $M_{XX}$ endmembers but is undetectable in the first set. The abundance map shown in Fig. 5-10 corresponds to the mineral Buddingtonite, which is visible in only a few pink pixels in the USGS reference map. The small brightly lit region of the abundance map corresponds to the distribution of this mineral. The detection of a material present in such small quantities is a practical demonstration of the usefulness of endmember detection with morphological memories.

PAGE 42

Figure 5-1: Plot of detected endmember spectra and reference spectra for the mineral Kaolinite.


Figure 5-2: Plot of detected endmember spectra and reference spectra for the mineral Alunite.


Figure 5-3: Plot of detected endmember spectra and reference spectra for the mineral Calcite.


Figure 5-4: USGS reference map for the Cuprite, Nevada, site. The map is shifted slightly to the east and south relative to the material abundance maps.


Figure 5-5: Abundance map for the first variety of the mineral Kaolinite.


Figure 5-6: Abundance map for a second variety of the mineral Kaolinite.


Figure 5-7: Abundance map for the mineral Alunite.


Figure 5-8: Abundance map for the mineral Calcite.


Figure 5-9: Abundance map for the mineral Muscovite.


Figure 5-10: Abundance map for the mineral Buddingtonite.


CHAPTER 6
CONCLUSIONS

The problem of automatically determining the fundamental material spectra present in a hyperspectral image is of practical interest to researchers in diverse fields such as geology, agriculture, computer science, and national defense. A new method for detecting endmember materials using morphological autoassociative memories has been presented. The morphological memory is used to determine a set of spectra that lie at the exterior of a high-dimensional volume enclosing the image pixels. These extreme spectra correspond to pure spectra that are composed principally of one endmember material with as little contamination from others as possible.

Experimental results have shown that the proposed technique produces results competitive with other automated endmember determination techniques. Hyperspectral images obtained from the NASA AVIRIS sensor over Cuprite, Nevada, were processed to obtain endmembers and produce material abundance maps. Minerals detected in the scene include Kaolinite, Alunite, Calcite, Muscovite, and Buddingtonite. The distributions of these minerals in the experimental material abundance maps correspond favorably to their locations in reference maps produced by the U.S. Geological Survey.

The proposed detection method has several desirable properties. First, it is fast, requiring only one pass through the image pixels, and does not require the use of dimensionality reduction techniques to be computationally feasible. Second, the memory can be built incrementally by scanning the image. This eliminates the need to allocate a large amount of memory to store and manipulate the image pixels. Third, the
implementations of algorithms based on lattice algebra do not use multiplication operations, making them well suited to dedicated hardware implementation in programmable logic devices or application-specific integrated circuits.

Future work will focus on expanding and extending the method to apply to a wider array of hyperspectral processing problems. There are numerous real-world problems where the technique is readily applicable, such as landmine detection, target recognition, and agricultural monitoring. Future applications of morphological memories and the theory of lattice independence include real-time target tracking, clustering, and computational geometry. In general, neural computation based on lattice algebra is a new and growing discipline of pattern recognition with many open problems and interesting research opportunities.

PAGE 54

46 LIST OF REFERENCES

1. R.B. Smith, Introduction to hyperspectral images, tutorial by Microimages, Inc., Lincoln, NE, 2001.
2. R.G. Resmini, M.E. Karpus, W.S. Aldrich, J.C. Harsanyi, and M. Anderson, Mineral mapping with Hyperspectral Digital Imagery Collection Experiment (HYDICE) at Cuprite, Nevada, USA, Int. J. of Remote Sensing, vol. 18, 1997.
3. D. Manolakis, D. Marden, and G.A. Shaw, Hyperspectral image processing for automatic target detection, Lincoln Laboratory Journal, vol. 14, no. 1, 2003, pp. 79-116.
4. A. Hirano, M. Madden, and R. Welch, Hyperspectral image data for mapping wetland vegetation, Wetlands, vol. 23, 2003, pp. 436-48.
5. M. Lewis, V. Jooste, and A. De Gasparis, Hyperspectral discrimination of arid vegetation, in Proc. 28th Int. Symposium on Remote Sensing of Environment, Cape Town, South Africa, 2000, pp. 148-51.
6. G. Vane, R.O. Green, T.G. Chrien, H.T. Enmark, E.G. Hansen, and W.M. Porter, The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Remote Sensing of the Environment, vol. 44, 1993, pp. 127-43.
7. M. Craig, Unsupervised unmixing of remotely sensed images, in Proc. of 5th Australasian Remote Sensing Conference, Perth, Australia, 1990, pp. 324-30.
8. M. Craig, Minimum volume transforms for remotely sensed data, IEEE Transactions on Geoscience and Remote Sensing, vol. 32, 1994, pp. 542-52.
9. M.E. Winter, Fast autonomous spectral end-member determination in hyperspectral data, in Proc. of 13th Int. Conf. on Applied Geologic Remote Sensing, Vancouver, B.C., Canada, 1999, pp. 337-44.
10. M.E. Winter, Autonomous hyperspectral end-member determination methods, Proc. SPIE, vol. 3870, pp. 150-58.
11. J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. of the National Academy of Sciences, 1982, pp. 2554-2558.

PAGE 55

47 12. G.X. Ritter and G. Urcid, Lattice algebra approach to single neuron computation, IEEE Trans. on Neural Networks, vol. 14, pp. 282-95, 2003.
13. G.X. Ritter, L. Iancu, and G. Urcid, Morphological perceptrons with dendritic structure, in Proc. FUZZ-IEEE, St. Louis, Missouri, USA, 2003, pp. 1296-1301.
14. G.X. Ritter, P. Sussner, and J.L. Diaz de Leon, Morphological associative memories, IEEE Trans. on Neural Networks, vol. 9, no. 2, pp. 281-93.
15. J. Serra, Image Analysis and Mathematical Morphology, London: Academic Press, 1982.
16. G.X. Ritter and J.N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed., CRC Press, Boca Raton, FL, 2001.
17. J.L. Davidson and G.X. Ritter, A theory of morphological neural networks, in Proc. SPIE, vol. 1769, 1992, pp. 378-88.
18. M. Graña and J. Gallego, Associative morphological memories for endmember induction, in Proc. IGARSS, Toulouse, France, 2003, pp. 3757-3759.
19. M. Graña, P. Sussner, and G.X. Ritter, Associative morphological memories for endmember determinations in spectral unmixing, in Proc. FUZZ-IEEE, 2003, pp. 1285-1290.
20. M. Graña, P. Sussner, and G.X. Ritter, Innovative applications of associative morphological memories for image processing and pattern recognition, Mathware and Soft Computing, vol. 7, 2003, pp. 155-168.
21. G.X. Ritter, G. Urcid, and L. Iancu, Reconstruction of noisy patterns using morphological associative memories, J. of Mathematical Imaging and Vision, vol. 19, no. 2, pp. 95-111.
22. G.X. Ritter and P. Gader, Fixed points of lattice transforms and lattice associative memories, Advances in Imaging and Electron Physics, Academic Press, in press.
23. G.X. Ritter, L. Iancu, and M.S. Schmalz, A new auto-associative memory based on lattice algebra, in Proc. of 9th Iberoamerican Congress on Pattern Recognition, Puebla, Mexico, 2004, pp. 148-55.
24. G. Urcid, G.X. Ritter, and L. Iancu, Single layer morphological perceptron solution to the N-bit parity problem, in Proc. 9th Iberoamerican Congress on Pattern Recognition, Puebla, Mexico, 2004, pp. 171-78.
25. G.X. Ritter, G. Urcid, and R. Selfridge, Minimax dendrite computation, in ASME Proc. ANNIE, St. Louis, Missouri, USA, 2002, pp. 75-80.

PAGE 56

48 26. G.X. Ritter and L. Iancu, Lattice algebra approach to neural networks and pattern classification, in Proc. 6th Open German-Russian Workshop on Pattern Recognition and Image Understanding, Katun Village, Altai Region, Russian Federation, 2003, pp. 18-21.
27. G.X. Ritter, L. Iancu, and G. Urcid, Neurons, dendrites, and pattern recognition, in Proc. 8th Iberoamerican Congress on Pattern Recognition, Havana, Cuba, 2003, pp. 1296-1301.
28. G.X. Ritter and L. Iancu, Lattice algebra, dendritic computing, and pattern recognition, in Proc. 8th Iberoamerican Congress on Pattern Recognition, Havana, Cuba, 2003, pp. 16-24.
29. T. Kohonen, Correlation matrix memory, IEEE Trans. on Computers, vol. C-21, pp. 353-59.
30. G. Vane, R.O. Green, T.G. Chrien, H.T. Enmark, E.G. Hansen, and W.M. Porter, The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Remote Sensing of the Environment, vol. 44, 1993, pp. 127-43.
31. N. Keshava, A survey of spectral unmixing algorithms, Lincoln Laboratory Journal, vol. 14, no. 1, 2003, pp. 55-78.
32. N. Keshava and J.F. Mustard, Spectral unmixing, IEEE Signal Processing Magazine, vol. 19, 2003, pp. 44-57.
33. G. Vane and A.F.H. Goetz, Terrestrial imaging spectroscopy, Remote Sensing of Environment, vol. 24, pp. 1-29.
34. R.N. Clark, Spectroscopy of rocks and minerals, and Principles of spectroscopy, in A.N. Renz (ed.), Remote Sensing for the Earth Sciences: Manual of Remote Sensing, 3rd ed., vol. 3, John Wiley & Sons, New York, pp. 3-58.
35. C.L. Lawson and R.J. Hanson, Solving Least Squares Problems, Englewood Cliffs, NJ: Prentice Hall, 1974.
36. M.S. Ramsey and P.R. Christenson, Mineral abundance determination: Quantitative deconvolution of thermal emission spectra, J. Geophys. Res., vol. 103, no. B1, 1998, pp. 577-96.
37. A.F.H. Goetz and V. Srivastava, Mineralogical mapping in the Cuprite mining district, in Proc. of the Airborne Imaging Spectrometer (AIS) Data Analysis Workshop, JPL Publication 85-41, pp. 22-29.
38. F.A. Kruse, Comparison of AVIRIS and Hyperion for hyperspectral mineral mapping, in Proc. of 11th JPL Airborne Geoscience Workshop, Pasadena, CA, 2002, JPL Publication 03-4 (CD-ROM).

PAGE 57

49 39. A. Ifarraguerri and C.-I Chang, Multispectral and hyperspectral image analysis with convex cones, IEEE Transactions on Geoscience and Remote Sensing, vol. 37, 1999, pp. 756-70.
40. R.N. Clark and G.A. Swayze, Evolution in imaging spectroscopy and sensor signal-to-noise: An examination of how far we have come, in Summaries of the 6th Annual JPL Airborne Earth Science Workshop, Palo Alto, CA, 1996, pp. 49-53.

PAGE 58

50 BIOGRAPHICAL SKETCH

I was born in the city of Kingsport, Tennessee, in 1982. In 1998, my family relocated to Miami, Florida, where I graduated from high school. I remained in the state to attend the University of Florida, majoring in computer engineering. During my time as an undergraduate I participated in the University Scholars program under the supervision of Dr. Mark Schmalz, researching acoustic modeling of auditoriums and concert halls. Following the completion of that project, I continued work as Dr. Schmalz's research assistant, principally focused on projects in automated target detection, remote sensing, and pattern recognition. I completed my bachelor's degree in 2004, graduating with highest honors. I remained at the university to complete a master's degree, also in computer engineering. As a graduate student, my principal interests are intelligent systems, pattern recognition, programming languages, and interfaces between digital systems and music.

During my summers, I worked as an intern at two national laboratories: first at NASA Langley Research Center, programming digital signal processors for aircraft noise control, and second at Sandia National Laboratories, conducting research on applications of pattern recognition methods to network intrusion detection. My work at Sandia led to a technical advance paper for a swarm-based data simplification algorithm I developed. I completed my third internship, also at Sandia, in the summer of 2005, working on ground-based nuclear explosion monitoring. I will return to Sandia Labs as a permanent employee in 2006.





This item has the following downloads:


Full Text












HYPERSPECTRAL ENDMEMBER DETECTION USING MORPHOLOGICAL
AUTOASSOCIATIVE MEMORIES















By

DANIEL S. MYERS


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2005





























Copyright 2005

by

Daniel S. Myers















ACKNOWLEDGMENTS

I thank my advisor, Dr. Gerhard Ritter, for the benefits of his guidance and

experience, and Dr. Mark Schmalz for his assistance with proofreading this thesis, and

his continued support during my undergraduate and graduate studies.















TABLE OF CONTENTS

page

A C K N O W L E D G M E N T S ......... ............... ............................................................... iii

LIST OF FIGURES ............. .................................... ..............vi

A B ST R A C T ......... .... ......... ........................................................... ......vii

CHAPTER

1 IN TR O D U C T IO N ...................................... ............................. .............. 1

2 MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES.........................................5

2.1 M athematical Basis for M orphological M memories .............................................5
2.2 Lattice-Based M atrix Operations.................................... .......... .............. 6
2.3 Lattice-Based Associative M memories ....................................... .............. 7
2.3.1 Early A ssociative M em ories................................ ........................ 8
2.3.2 Constructing Lattice-Based M em ories................................ ................... 9
2.4 Properties of Lattice-B ased M em ories........................................ .................... 10
2.4.1 C conditions for Perfect R ecall............................................ .................... 10
2.4.2 Fixed Point Sets and Lattice Independence............................................ 11
2.4.3 Strong Lattice Independence and Affine Independence ....................... 14

3 HYPERSPECTRAL IMAGES AND THE LINEAR MIXING MODEL .................... 18

3.1 H yperspectral Im aging D evices........................................................................ 18
3.1.1 Im age C ubes .............................................. ................................. 18
3.1.2 Spectroscopy.................................... ......................... ... ..... 20
3.1.3 E ndm em bers ............................................ .................... 20
3.2 The Linear M ixing M odel ....................................................... .............. 22
3.3 Computing Fractional Abundances ........... .............................................23

4 HYPERSPECTRAL ENDMEMBER DETECTION WITH MORPHOLOGICAL
M EM O RIE S .......................................................................................... 26

4.1 Motivation for the Algorithm ................................ ..............26
4.2 Scaling and Positioning Endmembers ..... ...................................27
4.3 T he A lgorithm ................................................ .................. .............. 28
4.4 Advantages of the Algorithm .................. ........... .......... 29










5 E X PER IM EN TA L R E SU L T S ...................................................................................30

5.1 D ata C characteristics .................................................. .............................. 30
5.2 Endm em ber D eterm nation ........................................................................... 31
5.3 M material A bundance M aps ................................................................... ........ 31

6 C O N C L U S IO N S .................................................................................................. 4 4

L IST O F R E FE R E N C E S .....................................................................................46

BIO GRAPH ICAL SKETCH .................................................. ............................. 50













































v
















LIST OF FIGURES


Figure e

3.1: R reference spectra for Juniper bush........................................................... ......... 21

3.2: Reference spectra for the mineral Montmorillionite ..........................................21

5-1: Plot of detected endmember spectra and reference spectra for the mineral
K ao lin ite. ........................................................ ............... 34

5-2: Plot of detected endmember spectra and reference spectra for the mineral Alunite. .35

5-3: Plot of detected endmember spectra and reference spectra for the mineral Calcite. .36

5-4: USGS reference map for the Cuprite, Nevada, site........................................... 37

5-5: Abundance map for the first variety of the mineral Kaolinite.................................38

5-6: Abundance map for a second variety of the mineral Kaolinite..............................39

5-7: Abundance map for the mineral Alunite. ............................. ..........40

5-8: Abundance map for the mineral Calcite. ........................................ .............. 41

5-9: Abundance map for the mineral Muscovite.............. .................. ................42

5-10: Abundance map for the mineral Buddingtonite................................. .............43















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

HYPERSPECTRAL ENDMEMBER DETECTION USING MORPHOLOGICAL
AUTOASSOCIATIVE MEMORIES

By

Daniel S. Myers

December 2005

Chair: Gerhard X. Ritter
Major Department: Computer and Information Science and Engineering

Hyperspectral (HS) imaging devices are a special class of remote sensor capable of

simultaneously measuring and recording light at hundreds of different wavelengths.

Since hyperspectral devices are capable of measuring energy at wavelengths outside the

range of the human eye, HS images can reveal information that would be undetectable by

monochromatic imaging systems. In particular, different physical materials such as

vegetation, soils, and minerals possess unique hyperspectral signatures, making material

discrimination and identification possible from HS imagery.

In an aircraft-mounted hyperspectral imager, pixel resolution referred to the target

is on the order of tens of meters, and each hyperspectral image pixel is modeled as a

linear combination of the spectra of known materials that make up the scene. These

fundamental material spectra are called endmembers. Given a set of endmembers, each

image pixel may be unmixed to determine the percentage of each endmember spectrum

that is present in the area covered by a given pixel.










This thesis presents a new method for automatically detecting the endmembers in a

hyperspectral image by using morphological autoassociative memories. An associative

memory creates a pairing between two patterns, x and y, such that the memory will recall

y when presented with x. Autoassociative memories are a special class of these

memories where the input and desired output are the same pattern. Morphological

memories are based on lattice algebra, which alters conventional vector algebra by

replacing the multiplication operator with a maximum or minimum operator. This

mathematical basis gives morphological memories several unique properties, including

theoretical maximum information storage capacity, convergence in a single training

epoch, and efficient hardware implementations due to the lack of multiplication

operations.

The approach detailed herein uses a morphological memory to determine extreme

points of the set of hyperspectral image pixels. These points correspond to pixels of

greater purity, and are more likely to be endmembers representing the fundamental

materials in the image scene. Experimental results on hyperspectral images from Cuprite,

Nevada, reveal that endmember detection with morphological memories is fast, and

produces results competitive with other autonomous endmember detection methods.














CHAPTER 1
INTRODUCTION

Hyperspectral (HS) imaging devices are a class of sensors capable of

simultaneously measuring and recording the intensities of many different wavelengths of

light. Because some electromagnetic wavelengths lie outside the range visible to the

human eye, images produced by hyperspectral devices can reveal information about a

scene that is undetectable in conventional imaging devices, which exploit only the visible

spectrum. Hyperspectral devices were first developed in the 1980s [1], and have since

become an important remote sensing tool, with applications in the geosciences,

environmental monitoring, agriculture, and national security [2-5].

The collection of measured intensities associated with a single pixel in a

hyperspectral image is called the spectrum of the pixel. The science of spectroscopy is

concerned with characterizing and identifying various real-world materials such as

organic compounds, inorganic chemicals, and minerals by analyzing their spectra. The

high spectral resolution produced by a hyperspectral device facilitates identification of

dominant materials that make up a remotely sensed scene and thus supports

discrimination between them [2].

Spectra that represent the fundamental materials in a scene are known as

endmembers. For many applications, endmembers are determined apriori using expert

knowledge of the application domain. In this case, hyperspectral image processing can

be expressed as a pattern recognition problem, matching spectra in the HS image to

predetermined endmember spectra stored in a library. In general, however, the









endmembers cannot be determined in advance, and must be selected from the image itself

by identifying the pixel spectra that are most likely to represent fundamental materials.

This comprises the problem of automated endmember detection.

Unfortunately, the nature of hyperspectral devices complicates the process of

endmember determination. Most modern hyperspectral imaging devices are mounted on

aircraft or satellite systems [2, 6]. In these systems, the spatial resolution of each pixel is

on the order of tens of meters, and an area of such size is unlikely to be composed of a

single endmember material. Thus, the search for endmembers becomes the search for the

image pixels that are "pure," that is, having spectra comprised of one principal

endmember, with as little contamination from other endmembers as possible. It has been

shown that these pure pixels can be represented as vertices of a high dimensional convex

simplex that encloses the pixel spectra [7, 8]. Thus, most endmember determination

algorithms find the set of pixels in the image that form the largest possible simplex

enclosing the data, then treat the spectra of these pixels as the fundamental endmember

spectra [9, 10].

This thesis presents a new method for endmember determination using

morphological autoassociative memories. Given a pair of patterns {x, y}, an associative

memory M is a mapping that recalls the pattern y when presented with the pattern x. The

Hopfield net is arguably the best-known example of an associative memory [11]. An

autoassociative memory is a special case of the general associative memory where the

input and output patterns are identical; that is, the memory M should recall x when

presented with the input x. In order to be useful, the autoassociative memory must also

have the ability to recall x when presented with a corrupted version of the input, x .









Morphological memories are one member of a family of neural network models

based on lattice algebra [12-14]. The conventional vector algebra is a ring over the real

numbers with the operations of multiplication and addition, denoted by {9i, x, +}. Lattice

algebra replaces the operation of multiplication with the discrete maximum and minimum

operators to produce the new semi-ring {9i, +, v, A}. Because the maximum and

minimum operators are inherently nonlinear, neural network models based on lattice

algebra exhibit behaviors different from their vector algebra counterparts.

The networks are called morphological because of similarities to the operations of

erosion and dilation contained in the theory of mathematical morphology developed for

image processing [15]. Neural network models originally developed from image algebra

incorporated erosion and dilation operators [16]. These models were the forerunners of

current morphological neural network paradigms [17].

Autoassociative memories based on lattice algebra have several desirable qualities,

including theoretical maximum information storage capacity, convergence in one training

epoch, and robust performance in the presence of certain types of noise. For the purpose

of endmember detection, the memory is used to determine a set of extreme points from

the patterns it learns. These extreme points form a high-dimensional simplex, and are

therefore endmembers of the hyperspectral image pixels learned by the memory.

Experimental results reveal that using morphological autoassociative memories for

endmember determination is fast, easy to implement, and produces results competitive

with other endmember determination techniques. M. Grafia proposed the first methods

for endmember detection using lattice-based memories, first with Gallego [18] and then

with Sussner and Ritter [19, 20],






4


The remainder of this thesis is organized as follows: Chapter 2 introduces

morphological autoassociative memories, their construction, and relevant theoretical

properties. Chapter 3 discusses hyperspectral images, the linear mixing model for

hyperspectral pixels, and methods for spectral unmixing. Chapter 4 introduces an

algorithm for endmember determination using morphological memories, and Chapter 5

applies the new method to hyperspectral data and provides experimental results. Chapter

6 summarizes the topics of the thesis and presents conclusions.














CHAPTER 2
MORPHOLOGICAL AUTOASSOCIATIVE MEMORIES

This chapter discusses the model of a morphological autoassociative memory in

terms of its mathematical basis, construction, and relevant properties. Such memories

belong to a class of artificial neural network models that mimic the human mind's ability

to store and recall information on the basis of associated cues. For example, the name of

a friend recalls that person's face. Similarly, a picture of a person helps one recalls his

name [14]. Morphological memories differ from other artificial memories because they

are based on lattice algebra, giving them unique and useful properties.

2.1 Mathematical Basis for Morphological Memories

Most existing neural network models are based on a mathematical structure that

features operations of addition and multiplication performed over the real numbers.

Mathematically, this structure is known as a ring and is denoted by {9(, +, x }. In this

model, the neural computation, denoted by z, is expressed as the weighted sum of its

inputs:

N
7=i>x w (1)
=(1)


where x, denotes the ith input and w, denotes its corresponding weight.

The computations for morphological neural networks are carried out using lattice

algebra, which replaces the operation of multiplication with the maximum or minimum

operator, and is represented by {9, v, A, + }, where the symbols v and A represent the










discrete maximum and minimum operators. Using this mathematical foundation, the

morphological analogues of equation (1) are given by:

N
=Ax, + (2)
I=1

and

N
T=V x +w. (3)
1=1

Equations (2) and (3) respectively correspond to the operation of erosion and

dilation in the theory of image morphology [15]. Hence, they are known as

morphological neural networks.

Numerous models for morphological neural networks have been proposed. The

most prominent are the associative memories [14, 21, 22, 23], and morphological

perceptrons with dendritic structures [12, 13, 24, 25, 26, 27, 28]. The latter have the

ability to learn any compact set in 9jN while requiring only one epoch of training for

convergence and yielding perfect classification of the data set used for training.

2.2 Lattice-Based Matrix Operations

Before continuing the discussion of morphological memories, it is necessary to

define a group of matrix operations based on lattice algebra. For the most part, lattice-

based matrix operations are clearly related to their counterparts over {9(, +, x First, the

usual operation of matrix addition is replaced by the pairwise maximum or minimum of

two matrices. Suppose A and B are mx n real-valued matrices. The matrix maximum

operator C = A v B, is defined by c, = a, v b The definition is similar for the


minimum operator.









There is also a lattice-based operation that is analogous to matrix multiplication.

As with the previous definition, there are two types of lattice matrix multiplication, one

based on the maximum operation, denoted as 0 and one based on the minimum

operation, denoted by F[. Given an m x p matrix A and a p x n matrix B, the m x n

matrix C = A [M B is defined as

N
c, = Va +bk =( +b,)v(a, + b )v...v(ap +b b). (4)
k=1

This operation is known as the max product. Similarly, the minproduct C = A B is

defined by

N
c, = Aak + bk + + b1 )A (a + b,2)A... A (al + bp). (5)
k=1

Given a real-valued vector x, its conjugate transpose, x*, is defined by

x* = (-x ) where x' denotes the transpose of x. Further, given two real-valued vectors

x,y e =9", their minimax outer product is the matrix computed as

y, +x, 1 y, +x,
y+x = +: ". : (6)
'Y" +x, ... y +x l

Together, the conjugate transpose and outer product operations provide the mathematical

tools required for constructing morphological associative memories.

2.3 Lattice-Based Associative Memories

The goal of associative memory theory is to construct a mathematical

representation of the relationships between patterns, such that a memory can recall the

pattern y e 9Cm when presented with the pattern x e 9", where the pairing {x, y}









expresses some desirable pattern correlation [14]. More formally, let an associative

memory be denoted by M, and the let associated sets of patterns be denoted

byX = (x',...,xk} and Y = yl',...,yk}. Mis an associative memory if it recalls the

pattern y when presented with the pattern x for all j = 1...k. In order to be practically

useful, M should also recall y when presented with a corrupted or noisy version of x ,

denoted as x.

2.3.1 Early Associative Memories

Hopfield and Kohonen produced some of the first associative memories based on

linear neural network models [11, 29]. In these approaches, a memory M is constructed

as the sum of outer products of the paired x and y patterns:

M=k y .(x (7)
=1

where x' denotes the transpose of x. This produces perfect recall of each output pattern

y when the input patterns are orthonormal, that is, when


x x (8)
S0 i j

In this case, the reconstruction of pattern y is computed as


M-x =y x -x +) y y xy).x =y (9)


Unfortunately, the input patterns thus produced will not be orthonormal in most practical

cases. Filtering using activation functions must be performed to extract the desired

output pattern [11].










2.3.2 Constructing Lattice-Based Memories

Memories based on lattice algebra are similar to the aforementioned associative

memories. For a set of pattern associations (X, Y), define the two lattice-based (or

morphological) memories W, and M, as


k
w, = A [y+(x \ (10)
=1

and

k
My = V[y + x (11)
(=1

Individual elements of the min memory W, can be calculated using the formula
k
w= A -xf). (12)


The formula for individual elements of the max memory My is similar.

Recall of stored patterns is accomplished using the max and min matrix products

defined in Section 2.2. If the memories are capable of recalling the stored output pattern

y when presented with the input pattern x, then the following relationships are true:

y = W, 3x, (13a)

y = M El x. (13b)

Thus, patterns are recalled from the min memory, W, using the max product, and from the

max memory, M, using the min product.

If the set of pattern associations is given by (X, X), then the memories W, and

M, are called morphological autoassociative memories [14, 21, 22]. Notice that the

diagonal of an autoassociative memory matrix will be composed entirely of zeros, since









k k
w =A(x x)=O =\/(x -xf)==m,,. (14)

Further, the autoassociative memories W, and M, are related by the conjugate

transpose operator defined in 2.2, such that W, = MA and MA = W since


A (x, x )= V (x xf ). Thus, it is possible to derive both autoassociative memories
5=1 5=1


with only pass through the set X

2.4 Properties of Lattice-Based Memories

This section discusses salient properties of associative memories that are based on

lattice algebra. Particular attention is given to the conditions required for perfect recall of

stored patterns and the geometric interpretation of autoassociative memories, since these

properties are most important for designing and understanding the endmember detection

algorithm discussed in Chapter 4.

2.4.1 Conditions for Perfect Recall

It is reasonable to investigate the conditions necessary for perfect recall from a

morphological memory. The following theorem, proven by Ritter and Sussner, relates

perfect recall of stored patterns to the structure of the memory matrix [14].

Theorem 2.1. Wx is a perfect recall memory for the pattern association (x, yA)

if and only if each row of the matrix (y + (x ) )- W, contains a zero entry. Similarly,

M is a perfect recall memory for the pattern association (x", y ) if and only if each

row of the matrix M (yA + (x) ) contains a zero entry.









Recall that the autoassociative memories W, and M. have their major diagonals

composed entirely of zeros as a consequence of their definition. Thus, the conditions of

Theorem 2.1 are automatically satisfied for autoassociative memories, and the following

relationship is true:

Wr0E x =x ~=y=M Qx x (15)
for all xA e X [8].

Notice that this formulation does not place any constraint on the orthogonality of

the patterns in X, or on the maximum number of patterns stored in the memory. In fact, a

morphological autoassocaitive memory can store the theoretical maximum number of

patterns, while still giving perfect recall [14, 22]. Further, a morphological memory is

trained with only one pass through the stored data. This is in significant contrast to other

neural network models used for associative memories, such as the Hopfield net, which

utilizes a recurrent neural network model [11].

2.4.2 Fixed Point Sets and Lattice Independence

Given an autoassociative morphological memory W,, the set of fixed points is the

set of all possible patterns x such that x = W, 2 x. As previously established, this fact

is guaranteed to be true when x e X, but may also be true for infinitely many other

points that are not part of the set Xused to construct the memory. Ritter and Gader

proved that the two autoassociative memories formed from X, W,, and M,, share the

same fixed point set, denoted as F(X) [22].

Since the application domain for these methods is traditional pattern recognition,

the remainder of this section will assume that all patterns are represented as vectors in

9S", unless otherwise noted.









Similar to the notion of linear dependence in traditional vector-space algebra,

lattice algebra contains the notion of lattice dependence. Consider the set

X = {x',...,x k. A linear minimax combination of vectors from Xis any pattern x that

can be formed by the expression

k
x=S(x',...,x )= VA(= Vx (16)
jeJ =1

where J is a finite set of indices and a. e 9a, Vj e J and = 1,..., k [22]. This

expression is known as a near minimax sum. As an alternate definition, any finite

combination involving the maximum and minimum operators and vectors of the form

a + x", for a 9s and x" e X, is a linear minimax sum. The set of vectors that can be

formed by a linear minimax sum of the vectors in Xis called the linear minimax span of

X.

We are now able to define the notions of lattice dependence and independence. A

pattern y is said to be lattice dependent on X = {x',..., xk if and only if

y = S{x',...,xk for some linear minimax sum of the patterns in X. A pattern is lattice

independent if and only if it is not lattice dependent. The following theorem connects the

definition of the fixed point set F(X) with the definition of lattice dependence [22].

Theorem 2.2. If ye 9", then y is afixedpoint of W, if and only if y is lattice

dependent on X

Proof. Assume y = (y ,..., y,) is a fixed point of W,. For each j = 1,...,n and

each = 1,..., k set a, = y, x A linear minimax sum formed from the patterns in X

is given by









k k
S(x',...,x") = VA(a, +x) = VA((, -xf)+x) (17)
jeJ i=1 yeJ =1

Letting J = 1,..., n } and manipulating the placement of the maximum and minimum

operators, one obtains

n k
V y,+A(-xf +x) (18)
.=1 =1


Making the maximum operator in Eqn. (18) explicit, as

k k
Yl"+A(-x +x') v...v Yn+A(-x+x) (19)
=1 =1


We also can make the term xi + x explicit, and obtain

x x-x
k k -x

2=1 : =1
Y +A+A c V...V Yn+A (20)



This expression is simplified by applying the definition of W as follows

Wi1 +y I 'wIn + n win +Yn
+ Y Wn+ yn W2n + Yn
S v ...v v (21)

wnl+ + y n + yn Wnn + Yn

which is simply the definition of the max product W, 97 y. Since we assume that y is

a fixed point, we have y = W, [3 y. Thus, y is contained in the memory W, and is

also lattice dependent on X This proves the theorem.









The application of Theorem 2.2 gives a convenient way to check if a pattern y is

lattice independent on a set of patterns X. That is one simply forms the autoassociative

memory W,, and attempts to reconstruct y using the max product. If y # W, 1 y,

then y is not in the fixed point set of X, and is therefore lattice independent from the

patterns in X.

2.4.3 Strong Lattice Independence and Affine Independence

Traditional vector algebra includes the notion of a base, which is the smallest set of

vectors that span a given space. The idea of a base is also applicable to linear minimax

sums and autoassociative memories. Specifically, given a set of vectors X c ( ", does

there exist a smaller setB c 9i", such that the vectors in B generate the same linear

minimax span as the vectors in X and B is minimal in some sense? Note that if the two

sets B and Xhave the same linear minimax span, then the two memories W, and WB

must be equal as a consequence of Theorem 2.2. This section expands on this question

by examining a more rigorous kind of lattice independence relationship called strong

lattice independence and providing a method for computing a strong lattice independent

base of the set X c 9j" that contains only n or fewer vectors. The theory of strong

lattice independence is essential to the development of the endmember detection

algorithm discussed in Chapter 4.

Definition 2.1. A set of vectors X = {x',..., xk c 9" is said to be max dominant

if and only if for every Ae 1,..., k} there exists an index j, e {1,..., n} such that


k
xA -x, =V (x )) Vie {1,...,n}. (22)
A=1






15


Definition 2.2. A set of vectors X = {xl,...,xk c 9 is said to be min dominant

if and only if for every A2 (1,..., k there exists an index j, A (1,..., n such that


k
x, -x =A (xf -x ) Vie {1,...,}. (23)
=1

Definition 2.3. A set of lattice independent vectors X = {x',...,xkc 9n" is said

to be strong lattice independent if and only if X is max dominant, or min dominant, or

both.

Interestingly, the set of vectors Wformed from the columns of the autoassociative

memory W, is always max dominant. The following theorem verifies that a strong

lattice independent set can always be constructed from the set W.

Theorem 2.3. Let X = {x',...,xk}_ 9i and let W c (9" be the set of vectors

consisting of the columns of the matrix W 3V c W 3 V # 0, Vis strongly lattice

independent, and W, = W,.

The proof of the theorem requires the following lemma.

Lemma. If V c W is any non-empty subset of W, then Vis max dominant.

Proof of the Lemma. If card(V) = 1, then V satisfies the max dominance

condition vacuously. If card(V) 2 2, let u, v e V c W. Thus u = wJ e W and

v = w' e Wfor some 1, je {1,..., k with j Thus,

u, u, = W W = wj wj = -wj (24)

It was established in [22] that


-W >-Wj_-W, =W -W =Vj --V.
1i J1 fli


(25)









This relationship is true for Vi = 1,..., n, so u and v satisfy the conditions for max

dominance. This proves the lemma.

Proof of Theorem 2.3. It is always possible to construct a lattice independent set

from Wby eliminating lattice dependent patterns. Let W, = W \ {w1 If the memory

W, = W,, set V = W, otherwise, set V = W. That is, w' e V, if w' is lattice

independent and w' < V if w' is lattice dependent. In either case, Wl = W,. Now,

set V = V \{w2} if W = W,, otherwise set V2 = V. Again, in either case

W, = W,. Continue in this manner until Vk = Vk- \ {w if W = W or Vk = Vk

if the memories are unequal. Now let V = Vk. Note that Vk 0 and by construction,

W VI D V2 D ... D Vk = V = {w e W:w is lattice independent}. (26)

As previously shown, V satisfies the condition for max dominance. By the lemma V is

max dominant lattice independent by construction. Therefore Vis strongly lattice

independent. This proves the theorem.

Theorem 2.3 provides a straightforward method for deriving a strongly lattice

independent base for any set of pattern X c 9N Form the memory W, and let the set

W consist of the columns of the memory. Now remove any lattice dependent patterns

from Wusing the method described above to yield the strongly lattice independent set V.

Strong lattice independent sets are essential to endmember detection because of a

connection between strong lattice independence and affine independence. An affine

combination of a set of vectors X = x',..., xk is a linear combination, where all of the

combination coefficients sum to one:









k
a' -x' (27)
7=1

k
with a' e 9 0 < a' < 1, and a' = 1. The set Xis said to be affinely independent if no
7=1

vector x can be written as an affine combination of the remaining vectors, X \({x .

The following theorem, again proven by Ritter and Gader, connects strong lattice

independence to affine independence [22].

Theorem 2.4. If X = {x',...,xk } 9n" is strongly lattice independent, then Xis

affinely independent.

The proof utilizes a geometric description of the boundaries of the fixed point setF(X), a

subject discussed in [22], which is beyond the scope of this thesis.

The convex hull of a set of n +1 affinely independent points yields an n-

dimensional simplex, the simplest geometrical structure capable of enclosing n-

dimensional space. Chapters 3 and 4 expand on the importance of simplexes in

representing endmembers in hyperspectral image processing.














CHAPTER 3
HYPERSPECTRAL IMAGES AND THE LINEAR MIXING MODEL

Hyperspectral (HS) imaging devices are a special class of remote sensor capable of

simultaneously measuring and recording light at hundreds of different wavelengths.

Because hyperspectral imaging devices capture light energy outside the range visible to

the human eye, HS images can reveal information that is undetectable in conventional

monochromatic imaging systems.

3.1 Hyperspectral Imaging Devices

Hyperspectral imaging devices belong to a class of remote sensors called imaging

spectrometers. An imaging spectrometer is a sensor that measures light at multiple

wavelengths simultaneously, by collecting incoming light, then dividing it into many

adjacent frequency bands using a separating element such as a prism. Other elements in

the sensor measure the energy in each band [1].

Early imaging spectrometers had a spectral resolution on the order of tens of bands.

Today, these sensors are called multispectral, to distinguish them from hyperspectral

devices, which have spectral resolutions on the order of hundreds of bands. Since first

appearing in the 1980s, hyperspectral imaging spectrometers have grown in precision and

sophistication and are now a mainstream technology within the remote sensing

community [1].

3.1.1 Image Cubes

An individual HS image may be represented as a collection of monochromatic

images. In this framework, each pixel of an mx n pixel monochromatic image records









the light energy from a single spectral band. Therefore, if a hyperspectral sensor

measures k spectral bands, then the complete hyperspectral image is a collection ofk

mx n pixel monochromatic images, called an image cube of size mx nx k.

Image cubes produced by modern hyperspectral sensors require a significant

amount of memory. For example, consider the Airborne Visible/Infrared Imaging

Spectrometer (AVIRIS) operated by NASA's Jet Propulsion Laboratory. The AVIRIS

image cube has size 614 x 512 x 224 pixels [30]. Each entry in the image cube is stored

as a two byte unsigned integer, so the total storage space required for a single AVIRIS

image is computed as 2 614 512 224 = 140,836,864 bytes. Considering that an

AVIRIS experiment may require several images collected over the region of interest, it is

crucial that algorithms for processing HS images consider memory requirements and

image size.

In practice, the size of an image cube can often be significantly reduced by

applying techniques such as Principal Component Analysis or the Minimum Noise

Fraction Transform [31, 32]. Because adjacent spectral bands are likely to be highly

correlated, these techniques can reduce the size of an image cube from hundreds of

dimensions to a relatively small number of important components. There is, however, a

disadvantage to such dimensionality reduction techniques. Transforming the image

spectra can destroy their physical meaning, since the transformed spectra no longer

correspond to real-world physical materials, making expert analysis of transformed HS

images more difficult.









3.1.2 Spectroscopy

Spectroscopy is the study of light emitted by or reflected from different materials

and the variation in this light energy with respect to wavelength [33, 34]. Real-world

materials such as organic compounds, minerals, and inorganic chemicals reflect light in

different ways. Thus, the data collected by imaging spectrometers makes material

differentiation and identification possible. Though traditional spectrometers are lab-

based or hand-held, modern hyperspectral imaging devices are mounted on aircraft or

satellite platforms [2, 30] and can collect spectral data over a wide geographical area in a

relatively short amount of time.

As discussed in Section 3.1.1, an mxnxk HS image cube can be considered as a

collection of k mx n monochromatic images, where each image records the measured

intensities in a different spectral band. Corresponding pixels in multiple monochromatic

images represent different spectral measurements collected at the same geographical

location. For a given geographical location, the vector formed from corresponding pixels

in all k images of the HS cube is the spectrum associated with that location. Thus, an

m x nx k image cube contains m x n spectra, each represented as a k element vector.

Spectra can be visualized as continuous plots of wavelength vs. reflected intensity. Figs.

3.1 and 3.2 show spectra for the Juniper bush and the mineral Montmorillionite.

3.1.3 Endmembers

The fundamental materials that make up a scene are known as endmembers. In many HS

imaging applications, the endmembers are determined apriori using domain specific

knowledge. In this case, material is accomplished by matching pixel spectra in the

hyperspectral image to representative endmember spectra stored in a library. In general,

however, the endmember spectra are not known in advance, and matching pixel spectra











Clark et al., 1993 USGS
Digital Spectral Library

0 6 Juniper_Bush IH91-4B whol WIRlBa
10/03/1991 16:54
M splib04a r 5276
U


U0.4




0.2




0.0
0.5 1.0 1.5 2.0 2.5 3.0
WAVELENGTH (gim)



Figure 3.1: Reference spectra for Juniper bush. The large peak in the near infrared range
is characteristic of living vegetation.


0.5 1.0 1.5 2.0
WAVELENGTH (gm)


2.5 3.0


Figure 3.2: Reference spectra for the mineral Montmorillionite. The dips located at
approximately 1.4, 1.9, and 2.25 pm are known as absorption bands.
Characteristic absorption bands are one of features used to match recorded
spectra to physical materials.


1.0



0.8


U
40.6
U
r-1
0.4



0.2



0.0









to a library of reference spectra is not feasible, since any library suitable for general use

must contain a large enough number of spectra to be applicable to any application

domain. Most of these reference spectra will not match any pixels from the image,

leading to a great deal of wasted computation, or will produce partial matches, increasing

the difficulty of determining the true endmembers in the scene. Therefore, it is desirable

to determine endmembers from the image itself by identifying the pixel spectra that

represent fundamental materials [31, 32]. This comprises the problem of automated

endmember detection.

3.2 The Linear Mixing Model

The practical limitations of hyperspectral devices increase the challenge of

automated endmember detection. Many modern HS imaging systems are mounted on

airborne platforms [2, 30]. In these systems, the spatial resolution of each pixel is on the

order of tens of meters, and an area of such size is unlikely to be composed of only one

material. However, if the scene is dominated by a relatively small number of endmember

materials, it is reasonable to assume that mixtures of endmembers account for the spectra

observed in each pixel.

The dominant mixing model for HS images represents each pixel as a linear

combination of fundamental endmember materials [32]. Let S be the set of M

endmembers and x an observed pixel spectra. In the linear mixing model,

M
x= as, +w, (28)
1=1

where s, e S is an endmember, the scalar a, is the fractional abundance associated with

s,, and w is the additive observation noise vector [32]. In order to be physically









meaningful, the fractional abundance coefficients should satisfy the following

constraints:


a, = 1, a, [0,1]. (29)
=1

That is, the abundances must have values between zero and one, and all fractional

abundances for a given pixel sum to unity. If these constraints are satisfied then the

fractional abundance a, represents the percentage of endmember s, present in pixel x.

In the linear mixing model, the automated endmember detection process becomes

the search for "pure" pixels: those spectra composed of one principal material with as

little contamination of from other materials as possible [8, 31, 32]. Craig showed that

there is a connection between these pure pixels and the theory of convex sets [7, 8]. In

this formulation, the endmember pixels lie at the exterior of a high dimensional volume

that encloses all the pixel spectra. The mixed pixels occupy the hull's interior, and can be

represented as linear combinations of the extremal pixels.

The simplest model for this k-dimensional hull is a simplex, the convex hull of

k +1 affinely independent points. The simplex is the simplest polyhedron that can

enclose k-dimensional space. For example, a triangle and a tetrahedron are simplexes in

two and three dimensions, respectively. Further, if a mixed pixel x is interior to the

simplex, its fractional abundance coefficients will automatically satisfy the required

physical constraints.

3.3 Computing Fractional Abundances

After endmember detection is complete, the newly discovered fundamental

materials are used to compute fractional abundances for each pixel. This step is known

as inversion or unmixing.









The simplest technique approaches inversion as an unconstrained optimization

problem. For example, let x be the HS pixel to unmix, and S an L xM matrix with the

L-dimensional endmembers arranged as its columns. The goal of the optimization is to

find a vector of fractional abundances a such that the squared error i x S a 2 is

minimized [32]. The closed form solution to this problem is given by

a = (S'. S ) S-' x. (30)

Note that this solution is unconstrained and is not guaranteed to satisfy the full-additivity

and non-negativity constraints discussed in section 3.2.

It is possible to derive a closed form solution for a set of coefficients that satisfies

the full-additivity constraint. This approach uses the method of Lagrange multipliers to

M
constrain the vector a to lie on the hyperplane where a, = 1 [32]. The corresponding
'=1

closed form solution is given by

a=a" -(S'. S)1 Z'- [Z-(S'. S ) -. ZJ] (Z.-a -1), (31)

where Z is a IxM vector having ones as its entries, and aU is the unconstrained solution.

There is no known closed-form solution that produces abundances satisfying the

non-negativity constraint. The Non-Negative Least Squares (NNLS) Algorithm is an

iterative approximation that has been employed in practice [35]. This method iteratively

estimates the abundances a by finding least squares solutions at each step for the

coefficients of a that are negative. Unfortunately, coefficients produced by this technique

rarely satisfy the full-additivity constraint.

Because both constraints are difficult to satisfy in practice, there are a number of

hybrid methods that combine multiple approaches to find abundances. One such method,






25


proposed by Ramsey and Christenson, relies on the reasonable assumption that

individual pixels may be successfully unmixed using only a subset of the endmembers

[36]. This technique proceeds iteratively, computing the unconstrained solution for a at

each iteration, then culling any endmembers that produce negative abundances. After all

negative coefficients have been eliminated, the resulting vector is scaled so that its

elements sum to unity.














CHAPTER 4
HYPERSPECTRAL ENDMEMBER DETECTION WITH MORPHOLOGICAL
MEMORIES

This section describes an algorithm for detecting endmembers in hyperspectral

images using morphological autoassociative memories. The proposed technique relies on

the properties of strong lattice independent sets and the geometrical description of a set of

endmembers.

4.1 Motivation for the Algorithm

As established in Section 3.2, the set of endmembers can be interpreted as the

extreme points of a high-dimensional volume that encloses the hyperspectral image

pixels. The simplest model for this volume is an n-dimensional simplex, formed from the

convex hull of n +1 affinely independent points. As established in Section 2.4.3, the

columns of the memory W, can be reduced to a strong lattice independent set that is

affinely independent. After scaling and translating, this strong lattice independent set

yields a set of endmembers for the hyperspectral pixels in X.

Thus, the columns of the W, memory provide up to n affine independent vectors

that serve as extreme points of the simplex. The n +1 point of the simple is the shade or

dark point formed from the minimum of all vectors in X [32]. The dark point serves as

the apex of the simplex, and the minimum possible energy present in a pixel in the image.

The columns of W, may yield a strong lattice independent set with fewer than n

vectors. This is an acceptable result, and occurs if the hyperspectral pixels lie in a









simplex with dimensionality less than n. In practice, however, the size of hyperspectral

data sets implies that the W, matrix will likely have n lattice independent columns.

As established in Section 2.3.2, the max memory M. is equal to the conjugate

transpose of W,. Because M. also satisfies the theories relating to strong lattice

independence, this provides a convenient way to derive a second set of endmembers that

may yield information unavailable from only the W, memory.

4.2 Scaling and Positioning Endmembers

The strong lattice independent set V determined from the columns of W, yields a

set of candidate endmembers for the pixels in X. Before the algorithm completes, these

endmembers must be translated into a set more suitable for the linear unmixing process

described in Section 3.3. In particular, the W, memory may contain several negative

terms. Let Vbe the set of vectors formed from the columns of W,. For each v e V, set



v=v-AAvf, (32)
1=1 =

that is, subtract the overall minimum element in the set V from each v. Note that if v has

no negative elements, then


AAvf =0, (33)


since the diagonal of W, consists entirely of zeros and v remains unchanged.

Further, the vectors produced by this transform will still exhibit strong lattice

independence. Let Xbe a strong lattice independent set and x e X. It was shown in [22]

that the set { x + a {u X / x where a E 9R, will still be strong lattice independent.









The presence of zeros on the diagonal requires one additional manipulation to the

vectors of V. Let v' be a vector in V. By the definition of W,, v = 0 If v' is scaled

to eliminate negative entries, v\ will also be scaled upwards, and may appear as a spike

in the final endmember. This artifact is removed by setting


v 1 K = 1
v: = v :_ i #= k
v(_+v,+ /2oi l,i k


(34)


where k is the dimensionality of v' Because adjacent spectral bands tend to be highly

correlated, this smoothing improves the quality of the resulting endmembers by

eliminating discontinuities that might appear along the diagonal.

Endmembers produced from the max memory M. cannot contain negative

entries, but the zeros along the diagonal may still manifest themselves as downward

spikes or dips in the spectral plot. These are removed by a smoothing transform

analogous to the one used for W,.

4.3 The Algorithm

Now that the motivation for the method has been established, the algorithm for

deriving endmembers using morphological memories is presented in greater detail as

follows:

1. Input X, the set of hyperspectral pixels.

2. Form the memory W, by scanning the set X

3. Set SW equal to the strong lattice independent set formed from columns of W .

4. Scale and position the endmembers of the set S,.

5. Set MX,= (W, *)'.









6. Compute S,, the strong lattice independent set formed from the columns of M.

7. Scale and position the endmembers of the set S .

8. Return S, and S,.

The majority of execution time is spent on Step (2), scanning the hyperspectral

image to produce the morphological memory. The remaining steps take relatively little

time in comparison.

4.4 Advantages of the Algorithm

The algorithm has several properties that make it a desirable method for endmember detection. First, because of their size, HS images are stored on disk or dedicated tape devices with relatively slow access speeds; the algorithm requires only one pass through the image, reducing the time spent on memory access operations. Second, the memory can be constructed incrementally, eliminating the need to dedicate a large amount of main memory to storing and manipulating image pixels.

Third, the algorithm is based on lattice algebra, which does not include multiplication operations. Because hardware implementation of multiplication requires a relatively large number of clock cycles, lattice algebra based methods may outperform other algorithms of the same computational complexity. Finally, the addition and comparison operations required to execute lattice algebra algorithms are well suited for implementation in programmable logic devices or application-specific integrated circuits.

Chapter 5 presents the results of applying the algorithm to experimental

hyperspectral data collected over Cuprite, Nevada, an established test site for HS

imaging.














CHAPTER 5
EXPERIMENTAL RESULTS

This chapter details the results of applying the proposed method for endmember

detection to a real hyperspectral image collected over the area of Cuprite, Nevada. A

mining area located approximately 200 km northwest of Las Vegas, Cuprite has been extensively used for remote sensing tests since the 1980s and has been thoroughly mapped [2, 9, 37, 38]. The area chiefly consists of volcanic rocks modified by hydrothermal processes. Principal minerals of interest at the Cuprite site include Alunite,

Kaolinite, Calcite in the form of limestone, and Silica [38].

5.1 Data Characteristics

The data used for these experiments comes from images taken by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1997. The AVIRIS device is an aircraft-mounted 224-band spectrometer, measuring wavelengths in the range of 400 to 2500 nm with an approximate spectral resolution of 10 nm. The spatial resolution of AVIRIS images is approximately 20 meters per pixel [30].

The uppermost 51 bands of the AVIRIS data were selected for the endmember

detection and unmixing experiments, corresponding to infrared wavelengths from 2000 to

2500 nm. Other studies of the Cuprite area have identified this range of the spectrum as

most important for distinguishing between the area's characteristic minerals [9, 38]. The

pixel spectra were not subjected to dimensionality reduction using Principal Components

Analysis (PCA) or the Minimum Noise Fraction Transform (MNF). This approach

contrasts with other methods [9, 39] that use dimensionality reduction to make









endmember detection more computationally efficient. Though these techniques decrease

computational effort, they hinder spectral identification and mineral mapping, since the detected endmembers, having been transformed by PCA or MNF, bear no resemblance to the real, physical material spectra they represent, as described in Section 3.1.1.

5.2 Endmember Determination

A section of the AVIRIS image cube of Cuprite covering 1182 x 614 pixels with 51 spectral bands per pixel was selected for the experiment [30]. A MATLAB 7.0.4 implementation of the algorithm, running on a Dell PC with the Windows 2000 operating system, a 3.06 GHz Pentium 4 processor, and 1 GB of RAM, required 135 seconds to compute a morphological memory incorporating every pixel in the image and to scale the columns of the memory into the final endmembers. Figures 5-1, 5-2, and 5-3 show three endmembers produced by the technique compared to the U.S. Geological Survey reference spectra for the minerals Kaolinite, Alunite, and Calcite, respectively [40].

Two sets of endmembers were produced, one each from the W_XX and M_XX memories. The two sets each reveal different minerals of interest in the Cuprite scene.

5.3 Material Abundance Maps

Following endmember determination, a geologically interesting subset of the

Cuprite image covering 670 x 614 pixels was unmixed using the hybrid-unmixing

algorithm proposed by Ramsey and discussed in Section 3.3. This yields an abundance

map for each endmember, where greater pixel intensity corresponds to a higher

concentration of the endmember material in that pixel. For each map, any pixel









containing more than 50% of the given endmember was brightened to more clearly show

regions of interest.
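
For concreteness, a generic linear-unmixing sketch of how such abundance maps can be computed is given below. It is not Ramsey's hybrid algorithm from Section 3.3; it simply unmixes each pixel by non-negative least squares, normalizes the abundances, and flags the pixels that exceed the 50% threshold. The function name and the SciPy-based solver are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import nnls

def abundance_maps(pixels, endmembers, threshold=0.5):
    """pixels: (n_pixels x n_bands) array; endmembers: (n_endmembers x n_bands).
    Returns per-endmember abundance maps (fractions summing to one) and a
    boolean mask of pixels containing more than `threshold` of an endmember."""
    E = endmembers.T                          # n_bands x n_endmembers mixing matrix
    maps = np.zeros((pixels.shape[0], endmembers.shape[0]))
    for p, x in enumerate(pixels):
        a, _ = nnls(E, x)                     # non-negative abundance estimates
        s = a.sum()
        maps[p] = a / s if s > 0 else a       # enforce sum-to-one where possible
    return maps, maps > threshold             # mask marks regions to brighten
```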

Figure 5-4 is a multicolored reference map produced by the USGS covering the

region of interest [40]. The reference map is shifted slightly to the east relative to the experimental data set, but the correspondence between the USGS map and the abundance maps produced by unmixing is still readily apparent.

The first set of material abundance maps was produced from endmembers derived from the min memory W_XX. Figs. 5-5 and 5-6 show mineral abundance maps for two different varieties of Kaolinite, corresponding favorably to the distributions of the two varieties of that mineral shown in the USGS map. Fig. 5-7 shows the abundance map for the Alunite endmember previously shown in Fig. 5-2. The horseshoe-shaped formation shown in orange on the USGS map is clearly visible, as is the ring to the east. Fig. 5-8 shows an abundance map for the Calcite endmember. Again, this map matches the distribution shown in mauve on the USGS map. All of these maps correspond favorably to abundance maps produced by the NFINDR algorithm of Winter [9, 10], another geometrically based method for automated endmember detection. Fig. 5-9 shows the distribution of the mineral Muscovite. Though less visually striking than the previous examples, the key shape in the center of the map clearly matches the distribution shown in blue on the reference map.

The USGS reference map of Fig. 5-4 contains multiple varieties of Kaolinite and

Alunite, as well as Dickite, a mineral with a spectrum similar to Kaolinite. In the 51-

dimensional pixel space, these similar materials correspond to extreme points of the high-

dimensional enclosing volume that are clustered together. When using a simplex as the









model for the volume, this cluster may be represented by only a single detected

endmember. For this reason, the method has difficulty distinguishing between materials

with very similar spectra. Thus, the abundance maps shown in Figs. 5-5 and 5-6 each capture multiple varieties of Kaolinite shown in the reference map of Fig. 5-4.

For the most part, the endmembers derived from the max memory M_XX correspond to materials already detected in the W_XX endmembers. There is, however, one material of interest that appears in the M_XX endmembers but is undetectable in the first set. The

abundance map shown in Fig. 5-10 corresponds to the mineral Buddingtonite, which is

visible in only a few pink pixels in the USGS reference map. The small brightly lit

region of the abundance map corresponds to the distribution of this mineral. The

detection of a material present in such small quantities is a practical demonstration of the

usefulness of endmember detection with morphological memories.











[Figure 5-1 plot: "Endmember Spectra (Blue) and Kaolinite (Red)"; horizontal axis: wavelength, 2.05 to 2.5 µm; vertical axis: 0 to 10000.]

Figure 5-1: Plot of detected endmember spectra and reference spectra for the mineral Kaolinite.











[Figure 5-2 plot: "Endmember Spectra (Blue) and Alunite (Red)"; horizontal axis: wavelength, 2.05 to 2.5 µm; vertical axis: 0 to 10000.]

Figure 5-2: Plot of detected endmember spectra and reference spectra for the mineral Alunite.











[Figure 5-3 plot: "Endmember Spectra (Blue) and Calcite (Red)"; horizontal axis: wavelength, 2.05 to 2.5 µm; vertical axis: 0 to 10000.]

Figure 5-3: Plot of detected endmember spectra and reference spectra for the mineral Calcite.












[Figure 5-4 image: USGS Tricorder 2.3 mineral map of Cuprite, Nevada, produced from 1993 AVIRIS data. Legend classes include K-Alunite 250C, K-Alunite 450C, Na-Alunite 400C, Kaolinite wxl, Kaolinite pxl, Dickite, Alunite + Kaolinite and/or Muscovite, Kaolinite + smectite, Calcite, Calcite + Montmorillonite, and Na-Montmorillonite.]

Figure 5-4: USGS reference map for the Cuprite, Nevada, site. The map is shifted slightly to the east and south relative to the material abundance maps.
















































Figure 5-5: Abundance map for the first variety of the mineral Kaolinite.
















































Figure 5-6: Abundance map for a second variety of the mineral Kaolinite.
















































Figure 5-7: Abundance map for the mineral Alunite.
















































Figure 5-8: Abundance map for the mineral Calcite.


















































Figure 5-9: Abundance map for the mineral Muscovite.
















































Figure 5-10: Abundance map for the mineral Buddingtonite.














CHAPTER 6
CONCLUSIONS

The problem of automatically determining the fundamental material spectra present

in a hyperspectral image is of practical interest to researchers in diverse fields such as

geology, agriculture, computer science, and national defense. A new method for detecting endmember materials using morphological autoassociative memories has been presented. The morphological memory is used to determine a set of

spectra that lie at the exterior of a high-dimensional volume enclosing the image pixels.

These extreme spectra correspond to "pure" spectra that are composed principally of one

endmember material with as little contamination from others as possible.

Experimental results have shown that the proposed technique produces results

competitive with other automated endmember determination techniques. Hyperspectral

images obtained from the NASA AVIRIS sensor over Cuprite, Nevada were processed to

obtain endmembers and produce material abundance maps. Minerals detected in the

scene include kaolinite, alunite, calcite, muscovite, and buddingtonite. The distributions

of these minerals in the experimental material abundance maps correspond favorably to

their location in reference maps produced by the U.S. Geological Survey.

The proposed detection method has several desirable properties. First, it is fast,

requiring only one pass through the image pixels, and does not require the use of

dimensionality reduction techniques to be computationally feasible. Second, the memory

can be built incrementally by scanning the image. This eliminates the need to allocate a

large amount of memory to store and manipulate the image pixels. Third, the









implementations of algorithms based on lattice algebra do not use multiplication operations, making them well suited to dedicated hardware implementation in programmable logic devices or application-specific integrated circuits.

Future work will focus on extending the method to a wider array of hyperspectral processing problems. There are numerous real-world

problems where the technique is readily applicable, such as landmine detection, target

recognition, and agricultural monitoring. Future applications of morphological memories

and the theory of lattice independence include real-time target tracking, clustering, and

computational geometry. In general, neural computation based on lattice algebra is a new

and growing discipline of pattern recognition with many open problems and interesting

research opportunities.
















LIST OF REFERENCES


1. R.B. Smith, "Introduction to hyperspectral images," tutorial by Microimages, Inc.,
Lincoln, NE, 2001.

2. R.G. Resmini, M.E. Karpus, W.S. Aldrich, J.C. Harsanyi, and M. Anderson,
"Mineral mapping with Hyperspectral Digital Imagery Collection Experiment
(HYDICE) at Cuprite, Nevada, USA," Int. J of Remote Sensing, vol. 18, 1997.

3. D. Manolakis, D. Marden, and G.A. Shaw, "Hyperspectral image processing for
automatic target detection," Lincoln Laboratory Journal, vol.14, no. 1, 2003, pp.
79-116.

4. A. Hirano, M. Madden, and R. Welch, "Hyperspectral image data for mapping
wetland vegetation," Wetlands, vol. 23, 2003, pp. 436-48.

5. M. Lewis, V. Jooste, A. De Gasparis, "Hyperspectral discrimination of arid
vegetation," in Proc. 28th Int. Symposium on Remote Sensing of Environment, Cape
Town, South Africa, 2000, pp. 148-51.

6. G. Vane, R.O. Green, T.G. Chrien, H.T. Enmark, E.G. Hansen, and W.M. Porter,
"The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)," Remote Sensing
of the Environment, vol. 44, 1993, pp. 127-43.

7. M. Craig, "Unsupervised unmixing of remotely sensed images," in Proc. of 5th
Australasian Remote Sensing Conference, Perth, Australia, 1990, pp. 324-30.

8. M. Craig, "Minimum volume transforms for remotely sensed data," IEEE
Transactions on Geoscience and Remote Sensing, vol. 32, 1994, pp. 542-52.

9. M.E. Winter, "Fast autonomous spectral end-member determination in hyperspectral data," in Proc. of 13th Int. Conf. on Applied Geologic Remote Sensing, Vancouver, B.C., Canada, 1999, pp. 337-44.

10. M.E. Winter, "Autonomous hyperspectral end-member determination methods,"
Proc. SPIE, vol. 3870, pp. 150-58.

11. J.J. Hopfield, "Neural networks and physical systems with emergent collective
computational abilities," in Proc. of the National Academy of Sciences, 1982, pp.
2554-558.









12. G.X. Ritter and G. Urcid, "Lattice algebra approach to single neuron computation,"
IEEE Trans. on Neural Networks, vol. 14, pp. 282-95, 2003.

13. G.X. Ritter, L. Iancu, and G. Urcid, "Morphological perceptrons with dendritic
structure," in Proc. FUZZ-IEEE, St. Louis, Missouri, USA, 2003, pp. 1296-1301.

14. G.X. Ritter, P. Sussner, and J.L. Diaz de Leon, "Morphological associative
memories," IEEE Trans. on Neural Networks, vol. 9, no. 2, pp. 281-93.

15. J. Serra, Image Analysis and Mathematical Morphology. London: Academic Press,
1982.

16. G.X. Ritter and J.N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed., CRC Press, Boca Raton, FL, 2001.

17. J.L. Davidson and G.X. Ritter, "A theory of morphological neural networks," in
Proc. SPIE, vol. 1769, 1992, pp. 378-88.

18. M. Graña and J. Gallego, "Associative morphological memories for endmember induction," in Proc. IGARSS, Toulouse, France, 2003, pp. 3757-3759.

19. M. Graña, P. Sussner, and G.X. Ritter, "Associative morphological memories for endmember determinations in spectral unmixing," in Proc. FUZZ-IEEE, 2003, pp. 1285-1290.

20. M. Graña, P. Sussner, and G.X. Ritter, "Innovative applications of associative morphological memories for image processing and pattern recognition," Mathware and Soft Computing, vol. 7, 2003, pp. 155-168.

21. G.X. Ritter, G. Urcid, and L. Iancu, "Reconstruction of noisy patterns using morphological associative memories," J. of Mathematical Imaging and Vision, vol. 19, no. 2, pp. 95-111.

22. G.X. Ritter and P. Gader, "Fixed points of lattice transforms and lattice associative
memories," Advances in Imaging and Electron Physics, Academic Press, In press.

23. G.X. Ritter, L. Iancu, and M.S. Schmalz, "A new auto-associative memory based
on lattice algebra," in Proc. of 9th Iberoamerican Congress on Pattern Recognition,
Puebla, Mexico, 2004, pp. 148-55.

24. G. Urcid, G.X. Ritter, and L. Iancu, "Single layer morphological perceptron solution to the N-bit parity problem," in Proc. 9th Iberoamerican Congress on Pattern Recognition, Puebla, Mexico, 2004, pp. 171-78.

25. G.X. Ritter, G. Urcid, and R. Selfridge, "Minimax dendrite computation," in ASME
Proc. ANNIE, St. Louis, Missouri, USA, 2002, pp. 75-80.









26. G.X. Ritter and L. Iancu, "Lattice algebra approach to neural networks and pattern classification," in Proc. 6th Open German-Russian Workshop on Pattern Recognition and Image Understanding, Katun Village, Altai Region, Russian Federation, 2003, pp. 18-21.

27. G.X. Ritter, L. Iancu, and G. Urcid, "Neurons, dendrites, and pattern recognition,"
in Proc. 8th Iberoamerican Congress on Pattern Recognition, Havana, Cuba, 2003,
pp. 1296-1301.

28. G.X. Ritter and L. Iancu, "Lattice algebra, dendritic computing, and pattern
recognition," in 8th Iberoamerican Congress on Pattern Recognition, Havana,
Cuba, 2003, pp. 16-24.

29. T. Kohonen, "Correlation matrix memory," IEEE Trans. on Computers, vol. C-21,
pp. 353-59.

30. G. Vane, R.O. Green, T.G. Chrien, H.T. Enmark, E.G. Hansen, and W.M. Porter,
"The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)," Remote Sensing
of the Environment, vol. 44, 1993, pp. 127-43.

31. N. Keshava, "A survey of spectral unmixing algorithms," Lincoln Laboratory
Journal, vol.14, no. 1, 2003, pp. 55-78.

32. N. Keshava and J.F. Mustard, "Spectral unmixing", IEEE Signal Processing
Magazine, vol. 19, 2003, pp. 44-57.

33. G. Vane and A.F.H. Goetz, "Terrestrial imaging spectroscopy," Remote Sensing of
Environment, vol. 24, pp. 1-29.

34. R.N. Clark, "Spectroscopy of rocks and minerals," and "Principles of
spectroscopy," in A.N. Renz (ed.) Remote Sensing for the Earth Sciences: Manual
of Remote Sensing, 3rd ed., vol. 3, John Wiley & Sons, New York, pp. 3-58.

35. C.L. Lawson and R.J. Hanson, Solving Least Squares Problems. Englewood Cliffs,
NJ: Prentice Hall, 1974.

36. M.S. Ramsey and P.R. Christenson, "Mineral abundance determination:
Quantitative deconvolution of thermal emission spectra," J. Geophys. Res., vol.
103, no. B1, 1998, pp. 577-96.

37. A.F.H. Goetz and V. Srivastava, "Mineralogical mapping in the Cuprite mining
district," in Proceedings of the Airborne Imaging Spectrometer (AIS) Data Analysis
Workshop, JPL Publication 85-41, pp. 22-29.

38. F.A. Kruse, "Comparison of AVIRIS and Hyperion for hyperspectral mineral
mapping," in Proc. of 11th JPL Airborne Geoscience Workshop, Pasadena, CA,
2002, JPL Publication 03-4 (CD-ROM).








39. A. Ifarraguerri and C.-I Chang, "Multispectral and hyperspectral image analysis
with convex cones," IEEE Transactions on Geoscience and Remote Sensing, vol.
37, 1999, pp. 756-70.

40. R.N. Clark and G.A. Swayze, "Evolution in imaging spectroscopy and sensor
signal-to-noise: An examination of how far we have come," in Summaries of the 6th
Annual JPL Airborne Earth Science Workshop, Palo Alto, CA, 1996, pp. 49-53.














BIOGRAPHICAL SKETCH


I was born in the city of Kingsport, Tennessee, in 1982. In 1998, my family

relocated to Miami, Florida, where I graduated from high school. I remained in the state

to attend the University of Florida, majoring in computer engineering.

During my time as an undergraduate I participated in the University Scholars

program under the supervision of Dr. Mark Schmalz, researching acoustic modeling of

auditoriums and concert halls. Following the completion of that project, I continued

work as Dr. Schmalz's research assistant, principally focused on projects in automated

target detection, remote sensing, and pattern recognition.

I completed my bachelor's degree in 2004, graduating with highest honors. I

remained at the university to complete a master's degree, also in computer engineering.

As a graduate student, my principal interests are intelligent systems, pattern recognition,

programming languages, and interfaces between digital systems and music.

During my summers, I worked as an intern at two national laboratories: first at

NASA Langley Research Center, programming digital signal processors for aircraft noise

control, and second at Sandia National Laboratories, conducting research on applications

of pattern recognition methods to network intrusion detection. My work at Sandia led to

a technical advance paper for a swarm-based data simplification algorithm I developed. I

completed my third internship, also at Sandia, in the summer of 2005, working on

ground-based nuclear explosion monitoring. I will return to Sandia Labs as a permanent

employee in 2006.