Maxpolynomials and morphological template decomposition


Material Information

Physical Description:
v, 98 leaves : ill. ; 29 cm.
Crosby, Frank J., 1967-
Publication Date:



Thesis (Ph. D.)--University of Florida, 1995.
Includes bibliographical references (leaves 96-97).
Statement of Responsibility:
by Frank J. Crosby.
Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002045659
notis - AKN3588
oclc - 33399213
Full Text








I would first like to thank my parents for their continuous support and encouragement.

They have given me a belief in myself which is what I have needed most. I would also

like to thank my friends. They have stood by me, so that in spirit I was never alone.

The Florida Education Fund deserves special thanks, not only for its financial aid

but also for its moral support.

There have been many that I have met during my journey who have not been

supportive. I know that every obstacle that I am able to overcome will make me stronger,

so I thank them as well.


ACKNOWLEDGMENTS

ABSTRACT

1 INTRODUCTION

2 MINIMAX ALGEBRA
2.1 Introduction
2.2 Belts

3 IMAGE ALGEBRA
3.1 Introduction
3.2 Basic Definitions
3.3 Operations

4 MAXPOLYNOMIALS
4.1 Introduction
4.2 Basic Definitions

5 FACTORIZATION
5.1 Introduction
5.2 Basic Properties
5.3 Maxpolynomials over (R-∞, V, +)
5.4 Maxpolynomials over ({-∞, 0}, V, +)

6.1 Introduction
6.2 Basic Definitions
6.3 Matrix Decomposition

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School of the University of Florida
in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy



Frank J. Crosby

May 1995

Chairman: Dr. Gerhard X. Ritter
Major Department: Mathematics

Image algebra and combinatorial optimization have led to the consideration of

polynomials over lattice-ordered groups instead of over the usual structure of rings. These

polynomials are referred to as maxpolynomials. Maxpolynomials were first introduced to

solve combinatorial problems. This use is more restricted than their applications to image

algebra. Therefore, a general development of the concepts related to maxpolynomials

was in order.

A general definition of maxpolynomials is the starting point of this research. Max-

polynomials are defined for both the single-variable and the several-variable cases. These

new definitions allow for the complete classification of maxpolynomials by way of a uni-

versal mapping property.

Past research in image algebra has established that maxpolynomial factorization is

equivalent to morphological template decomposition. Several elements of maxpolynomial

factorization are also investigated. First a division algorithm is demonstrated. From

there, new factorization techniques are presented. Two lattice-ordered groups are the

central focus of the factorization techniques. The first is built around the real numbers

and negative infinity. This lattice-ordered group is used for gray-scale morphological

templates. The second is built around just zero and negative infinity. Its applications

are chiefly in binary morphology.

Another method of template decomposition is based on matrix analysis. A matrix

decomposition algorithm utilizing nonlinear operations and the definition of rank in terms

of minimax algebra is also developed.


The results presented here add to the expanding frontiers of image algebra. There

are many specific examples of algebraic structures, and the power of the abstract point of

view becomes apparent when results for an entire class are obtained by proving a single

result for an abstract structure. This is the goal of image algebra.

The history of image algebra begins with mathematical morphology. The term

morphology denotes a study of form. It is commonly used to describe a branch of

biology which studies the structure of plants and animals. In image processing there is

mathematical morphology. It is a tool which is used to rigorously quantify geometric

structure or texture within an image. Mathematical morphology views the image as a

collection of sets and then interprets how other sets interact with the image. It was

developed in the mid 1960s by G. Matheron and J. Serra at the Paris School of Mines in

Fontainebleau [1]. From a few basic operations they developed many different algorithms.

Two very important theorems about mathematical morphology were proved by

Hadwiger and Matheron. In 1975, Matheron proved that any increasing mapping on

Rⁿ is both a union of erosions and an intersection of dilations [2]. Hadwiger showed

that suitably well-behaved image functionals possess a similar property [3]. The beauty

of morphology lies in these two theorems. They show that a wide class of operators can

be represented by just a few morphological operations. Complete characterizations such

as these are some of the most powerful theorems in mathematics. They generally serve

to confirm a particular approach to a problem.


These observations led Serra and Sternberg to unify the concepts of morphology in

hopes of bringing together many different aspects of image processing. Sternberg began

to use the term image algebra to describe this unification [4].

Their attempt at generalization had a serious drawback. Many image operations

are not expressible in morphological terms. Some transformations such as the Fourier

transformation and histogram equalization are basic to digital image processing but cannot

be accomplished using purely morphological methods. To remedy this shortcoming, G.

X. Ritter set out to develop a universal system. The goals were to define a complete

algebra which would encompass all image processing techniques, and to define a simple

algebra whose operands and operators would be intelligible to those without an extensive

mathematical background [5].

Once a comprehensive framework was built, the relationships between image algebra

and other existing algebraic structures could be determined. This would turn out to be

a prolific means of enhancing image understanding.

J.L. Davidson and H.J. Heijmans independently discovered that mathematical mor-

phology could be formulated in terms of lattice algebra as well as the traditional set

theoretic approach [6, 7]. Davidson's results further showed that morphology, with this

reformulation, could be embedded into image algebra. They showed that morphologi-

cal operations can be computed using lattice convolutions. In fact, lattice convolutions

can do more than just morphology. The results further established the connection between mathematical morphology and minimax algebra. Lattice convolutions are based

on minimax algebra.

Minimax theory has long been used to solve problems in operations research, such as

machine scheduling and shortest-path problems. This theory is built around semilattice-

ordered semigroups, also known as belts. A belt is a set together with a lattice operation


and a binary operation which distributes over the lattice operation. It is typically denoted

by (F, V, +). In much the same way that one investigates structures over rings, one also

investigates homomorphisms, linear transformations, and matrices over belts. In fact,

minimax problems for piecewise linear functions led Cuninghame-Green and Meijer to

develop the theory of maxpolynomials, which are polynomials over belts [8].

Maxpolynomials have the additional property that, in much the same way that poly-

nomials can be used to calculate linear convolutions, they can be used to calculate lattice

convolutions. However, a major drawback of the original development of maxpoly-

nomials is that they were viewed as functional expressions. Unlike a polynomial, a

maxpolynomial is quite different when viewed alternately as a formal expression and as

a functional expression. For example, while it is true that for x ∈ R-∞,

(2 + 2x) V (1 + x) V 1 = (2 + 2x) V 1 ,

formally they differ. When calculating lattice convolutions, maxpolynomials are taken to

be formal expressions. All of the development given in this work will treat them as such.
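The functional equality in the example above can be checked numerically; the following Python sketch (ours, not the dissertation's; the names lhs and rhs are ours) compares both sides over sample points:

```python
# Functional check that the term (1 + x) is redundant:
# max(2 + 2*x, 1 + x, 1) agrees with max(2 + 2*x, 1) for every real x.

def lhs(x):
    return max(2 + 2*x, 1 + x, 1)

def rhs(x):
    return max(2 + 2*x, 1)

# Sample points across the regions where each term could dominate.
samples = [-1e6, -2.0, -1.0, -0.75, -0.5, 0.0, 0.5, 3.0, 1e6]
assert all(lhs(x) == rhs(x) for x in samples)
```

The check passes because 1 + x never exceeds both 2 + 2x and 1 simultaneously; formally, however, the two expressions remain distinct.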

The processing of images is a computationally intensive task. Convolutions require

a large number of operations, which is proportional to not only the size of the image,

but also to the size of the template. Template decomposition is one of the best ways to

reduce the computational complexity of an algorithm.

In their initial investigation Cuninghame-Green and Meijer presented a factorization

theorem for maxpolynomials. The necessary and sufficient condition for the application

of their result is that the maxpolynomial be irredundant. This means that when viewed

as a functional expression, it has no extraneous terms. In the above example, (1 + x) on

the left side of the equation is an extraneous term. Furthermore, when expanding their

factorization it was only guaranteed that the result would be functionally the same as the


original. Li expanded their theorem to give conditions under which the original and the

expansion of their factorization would be identical formal expressions [9].

The goal of this dissertation is to develop the theory of maxpolynomials beyond the

work of Li [9], and Cuninghame-Green and Meijer [8]. By solidifying the foundation

of maxpolynomials, we hope that they will become a valuable resource for many

applications. To ensure the usefulness of the factoring techniques presented in this

dissertation, all maxpolynomials are regarded as formal expressions. Hence, they are

directly applicable to lattice convolutions.

In order to develop the theory of maxpolynomials, this dissertation begins with a

review of some relevant minimax definitions. We present these axiomatics and basic

manipulative properties in Chapter 2. The names and definitions for several types of

belts are given. In addition, the concepts of homomorphism and duality are presented.

These concepts form the basis of maxpolynomials analogously to the way in which ring

theory is the basis of polynomial investigations.

Next, we present some of the foundations of image algebra. The focus of the third

chapter is to show some of the ways in which minimax algebra and image algebra interact.

The presentation is far from complete. However, it serves to familiarize the reader with

the basic concepts.

Chapter 4 begins a rigorous establishment of maxpolynomials. First maxpolynomials

are defined for a single indeterminate. Some elementary properties and notation are

then developed. The construction of maxpolynomials in n indeterminates is next and

is followed by some of their basic properties. In particular, we relate the structure of

maxpolynomials back to the structures mentioned in Chapter 2. The main result of

Chapter 4 is the complete classification of the belt of maxpolynomials using a universal

mapping property.


In Chapter 5 we explore various concepts associated with factorization. Among the

basic properties is the establishment of an analog to the division algorithm. From there,

particular factorization theorems are presented for the two most common belts used in

lattice convolutions.

Many of the considerations used in factoring maxpolynomials stem from those in

the work of Z. Manseur and D. Wilson [10]. They used conditions such as symmetry

and skew symmetry to aid in factoring polynomials. They also looked at how factoring

boundary polynomials affected factorization.

Section 5.3 focuses on (R-∞, V, +), which is used for gray-scale morphology.

Several techniques for the single-variable situation are developed. Then the two prin-

cipal techniques are applied to the two-variable case. The belt of Section 5.4 is

({-∞, 0}, V, +), which corresponds to binary image manipulations. The first part of

the section shows that factoring by grouping arises in three important cases. It is then

shown that when decomposing a binary restricted-convex template (see Section 5.4), only

decompositions of the boundary need be considered. We then prove that the boundary

involves only the three cases shown in the beginning. Once this is done, we have finished

classifying the problem of decomposing restricted-convex templates.

The main focus of this dissertation is the development of the theory of maxpolynomi-

als. Particular emphasis is placed on their use in morphological template decomposition.

There are other methods used in morphological template decomposition. One of those

methods is based on matrix analysis.

In the setting of linear algebra, D. O'Leary showed that if a 5 x 5 matrix has either

rank 1 or all of its nonzero terms are on a single diagonal, then it can be factored into the

product of two 3 x 3 matrices [11]. Z. Manseur and D. Wilson reduced the number of

factors implied by O'Leary's result for the decomposition of an arbitrary matrix by using


polynomial methods [10]. J. Davidson studied some nonlinear matrix decompositions

based on minimax algebra [12]. However, the work of Davidson did not utilize the rank

of a matrix. The goal of Section 6.3 is to prove a rank-based decomposition in terms

of minimax algebra.


2.1 Introduction

When solving problems chiefly of interest to the operational researcher, a number of

different authors discovered that these problems could be reformulated under a nonlinear

algebraic structure. This reformulation presented a unifying language and thus a mutual

strategy for solution. The language consists of an algebra. This algebra contains

the extended real numbers and two binary operations. The two binary operations are

maximum or minimum, and addition. We can denote this algebra by (R±∞, V, A, +).

Authors such as Giffler applied this structure to solve machine scheduling problems [13].

Others used it in shortest-path problems of graph theory [14, 15]. The properties of the

lattice-ordered group (R±∞, V, A, +) have been investigated over many years. However,

the study of spaces of n-tuples over this algebra led to an elemental connection between

operations research and linear algebra.

A unified account of this algebra and its connection to linear algebra was presented

by Cuninghame-Green in his book Minimax Algebra [16].

J. Davidson showed that minimax algebra could be embedded into image algebra

and that some of the basic results which had been obtained in the area of operations

research have applications in image processing [6]. Although it had already been formally

proven that image algebra was capable of representing any image transformation, the

isomorphism that Davidson developed showed that minimax theory could be applied to

image analysis.


The next section introduces some of the basic definitions and notation of minimax

algebra. This presentation does not aim at completeness. Only those concepts which

will be used are covered.

2.2 Belts

Let F be a set. We define on F two binary operations, V and *, having the following properties:

1. Associativity of V: x V (y V z) = (x V y) V z.

2. Commutativity of V: x V y = y V x.

3. Idempotent: x V x = x.

4. Associativity of *: x * (y * z) = (x * y) * z.

5. Right distributive: x * (y V z) = (x * y) V (x * z).

6. Left distributive: (y V z) * x = (y * x) V (z * x).

The ordered triple (F, V, *) is known as a belt.

The properties 1-3 define a semilattice structure, that is, an abelian semigroup in

which every element is idempotent. A semilattice is also referred to as a commutative

band in some literature. It is the basis for what follows, similar to the way a group is the

basis for the structure of a ring. In addition, the operation * is associative and satisfies

"distributive" laws. Due to the similarity between this structure and a ring we call the

structure (F, V, *) a belt. We also refer to V as addition and * as multiplication. A belt

is also known as a semilattice-ordered semigroup.

If we define V to be the maximum of two numbers and * to be the usual addition, then the set of real numbers with these operations, denoted by (R, V, +), is an example of a belt. Another example may be formed by taking the set F to be the positive real numbers, R+, and the binary operations to be maximum, V, and multiplication, ×. This belt is denoted by (R+, V, ×).

Any semilattice may be viewed as a belt when the multiplication is defined to be

identical to the semilattice operation. In this case, we say that the belt is a degenerate belt.
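As a concrete illustration, the belt axioms for (R, V, +) can be spot-checked numerically; the sketch below is our code (the names join and mult are ours), taking V = max and * = ordinary addition:

```python
# Spot-check the belt axioms 1-6 for (R, V, +) on a grid of sample values.
import itertools

join = max                       # the semilattice operation V
mult = lambda a, b: a + b        # the "multiplication" *

vals = [-3.5, -1.0, 0.0, 2.0, 7.25]
for x, y, z in itertools.product(vals, repeat=3):
    assert join(x, join(y, z)) == join(join(x, y), z)           # 1. associativity of V
    assert join(x, y) == join(y, x)                             # 2. commutativity of V
    assert join(x, x) == x                                      # 3. idempotency
    assert mult(x, mult(y, z)) == mult(mult(x, y), z)           # 4. associativity of *
    assert mult(x, join(y, z)) == join(mult(x, y), mult(x, z))  # 5. right distributivity
    assert mult(join(y, z), x) == join(mult(y, x), mult(z, x))  # 6. left distributivity
```

A finite check is of course no proof, but it makes the axioms concrete: addition distributes over maximum because adding a constant preserves order.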

Let (F₁, V, *) and (F₂, V, *') be belts. A function ψ : F₁ → F₂ is a belt homomorphism if

ψ(x V y) = ψ(x) V ψ(y)

and

ψ(x * y) = ψ(x) *' ψ(y) .

Similarly, we use the terms isomorphism, endomorphism, and automorphism.

For example, if ψ : R → R+ is defined by

ψ(x) = e^x ,

then it is evident that (R, V, +) is isomorphic to (R+, V, ×), where R+ denotes the positive real numbers.
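This isomorphism is easy to verify numerically; the following sketch is our illustration (psi standing for the map above), checking that the exponential carries V to V and + to ×:

```python
# psi(x) = e**x is order-preserving, so it carries max to max,
# and it carries addition to multiplication.
import math

def psi(x):
    return math.exp(x)

pairs = [(-2.0, 3.0), (0.0, 0.0), (1.5, -0.5)]
for x, y in pairs:
    assert psi(max(x, y)) == max(psi(x), psi(y))        # V is preserved
    assert math.isclose(psi(x + y), psi(x) * psi(y))    # + becomes x
```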

A particular belt may also satisfy

7. Commutativity of *: x * y = y * x.

Such a belt is called a commutative belt.

If there exists an element 1_F such that

8. Identity: 1_F * x = x * 1_F = x,

then the element 1_F is called the identity element and a belt satisfying axiom 8 is a belt with identity.

Suppose that for each x ∈ F there exists an element x' such that

9. Inverse: x * x' = 1_F .

Such an element is called the inverse of x.

It can be shown that the one-sided inverse of an element is its two-sided inverse and

that the inverse is unique. We denote the inverse of an element x by x-1. It is also

evident that (x⁻¹)⁻¹ = x and (1_F)⁻¹ = 1_F. If a belt satisfies both axioms 8 and 9,

then it is a division belt.

If there exists an element -oo such that

x V (-∞) = x

and

x * (-∞) = (-∞) * x = -∞ ,

then such an element is unique and termed the null element. The existence of a null

element is quite significant in the sequel. In fact, many of the derivations will depend on

its presence. Fortunately an arbitrary nondegenerate belt may be extended to include a

null element. The element -∞ can be adjoined to the set F and this new set is denoted by F-∞. This element serves as a lower bound for the semilattice, so we define

x V (-∞) = (-∞) V x = x .

The semigroup operation is extended by defining

x * (-∞) = (-∞) * x = -∞ .

The elements of F which are different from -∞ are called the finite elements of F. It

has been shown that, except in the trivial case where F = {1}, a partially ordered group

cannot have universal bounds [17]. Thus a division belt cannot have a null element.

Notice that (R, V, +) is in fact a division belt. We may adjoin -∞ to R. It follows that (R-∞, V, +) is a belt with a null element in which the finite elements form a division belt. The belt ({-∞, 0}, V, +) may be considered as a subbelt of (R-∞, V, +). The belt ({-∞, 0}, V, +) is again a belt with a null element in which the finite elements form a division belt.

Under the mapping ψ(x) = e^x, the belt (R, V, +) can be shown to be isomorphic to (R+, V, ×). If we extend the map ψ by defining ψ(-∞) = 0, then we have that (R-∞, V, +) is isomorphic to (R≥0, V, ×), where R≥0 denotes the set of all real numbers greater than or equal to zero. Note that the null element of (R≥0, V, ×) is zero. By the uniqueness of the null element,

(R≥0, V, ×) = ((R+)₀, V, ×) ,

where (R+)₀ denotes R+ with the null element 0 adjoined.

The possibility exists to expand the structure of an arbitrary belt to include dual

operations. That is, it would have the additional properties that for all x, y, z ∈ F,

1'. Associativity of A: x A (y A z) = (x A y) A z.

2'. Commutativity of A: x A y = y A x.

3'. Idempotent: x A x = x.

4'. Associativity of *': x *' (y *' z) = (x *' y) *' z.

5'. Right distributive: x *' (y A z) = (x *' y) A (x *' z).

6'. Left distributive: (y A z) *' x = (y *' x) A (z *' x).

If the two semilattice operations satisfy

10. Lattice absorption law: x V (y A x) = x A (y V x) = x,

then it is said that the two semilattice operations are consistent and that (F, A, *') is the

dual of (F, V, *) and vice versa. Thus, if it is possible to define these two additional


operations we say that (F, V, *) has duality. This is often represented by (F, V, A, *, *').

It is not assumed that * and *' are related. However, if they should coincide, then we say that the belt has a self-dual multiplication. If (F, V, *) is a division belt, then by defining x A y = (x⁻¹ V y⁻¹)⁻¹ we have introduced a dual semilattice operation and we get a division belt with self-dual multiplication.

The belt (R, V, +) may be expanded by the inclusion of a minimum operation. It is

easily checked that (R, A, +) is the dual of (R, V, +).

Let (F₁, V, *) and (F₂, V, *') be belts with duality. We shall say that (F₁, V, *) is conjugate to (F₂, V, *') if there exists a function ψ : F₁ → F₂ such that

ψ is bijective;

for all x, y ∈ F₁, ψ(x V y) = ψ(x) A ψ(y);

for all x, y ∈ F₁, ψ(x * y) = ψ(x) *' ψ(y).

In particular, if (F, V, *) is a belt with duality, then we say that it is self-conjugate if (F, V, *) is conjugate to (F, A, *').

If (F, V, *) has a conjugate, we denote by (F, V, *)* the image of the conjugate map ψ. If f ∈ F, we denote ψ(f) by f*. We call f* the conjugate of f. It is immediate that ((F, V, *)*)* = (F, V, *) and (f*)* = f.

We note that every division belt is self-conjugate under the map f → f⁻¹. Unless

otherwise noted, our reference to the dual of a given division belt shall be with respect

to this mapping.

If we again consider the belt (R, V, +), then under the map ψ(x) = -x, we see that

(R, V, +) is conjugate to (R, A, +).
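This conjugation can likewise be checked numerically; in the sketch below (our code), psi(x) = -x carries maxima to minima while preserving the group operation +:

```python
# The map psi(x) = -x is a bijection of R that exchanges max and min
# and respects addition, exhibiting (R, V, +) as conjugate to (R, A, +).
def psi(x):
    return -x

pairs = [(-4.0, 1.0), (2.5, 2.5), (0.0, -7.0)]
for x, y in pairs:
    assert psi(max(x, y)) == min(psi(x), psi(y))   # V becomes A
    assert psi(x + y) == psi(x) + psi(y)           # addition is preserved
```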


3.1 Introduction

Image algebra is a response to the need of the image processing community to have

an axiomatic development of the field of image processing. In an axiomatic, or abstract,

treatment of a given type of algebraic structure one assumes a small number of properties

as axioms and then deduces many other properties from those axioms. Thus, it is possible

to deal simultaneously with all the structures satisfying a given set of axioms instead of

with each structure individually.

The term image algebra was first used by Sternberg to describe morphological

operations [4]. Mathematical morphology is well suited for algebraic abstraction of its

properties. Many of its techniques are expressible as combinations of simple operations.

However, it lacked the generality to express many common image processing techniques.

Techniques such as histogram equalization and image rotation are not expressible in

terms of simple morphological operations.

The establishment of a general image algebra became the goal of G. X. Ritter at the

University of Florida. Objects such as value sets and images were defined in general

terms, with minimum specification. The result of Ritter's work has been shown to be

capable of expressing all image processing operations [5].

J. Davidson showed that minimax algebra could be embedded into image algebra

and that some of the basic results which had been obtained in the area of operations

research have applications in image processing [6]. Although it had already been formally


proven that image algebra was capable of representing any image transformation, the

isomorphism that Davidson developed showed that minimax theory could be applied to

image analysis. In particular, the use of lattice convolutions showed how morphology

is a subalgebra of image algebra.

Image algebra is a heterogeneous algebraic structure. That is, it consists of a number

of different operands and operators. This chapter presents some of the basic concepts

and notation of image algebra. Only those concepts which will be used in the sequel are

reviewed. An in-depth review may be found in Ritter et al. [18].

3.2 Basic Definitions

The value set is a homogeneous algebra. It is a set together with at least one binary

operation. Generally, our interest will be concentrated on the set consisting of the real

numbers along with negative infinity. Several different operations may be considered.

We denote this set by R-∞. An arbitrary value set will be denoted by F.

A spatial domain can be any topological space. Subsets of Rⁿ will be our main focus, with most applications being Zⁿ. The symbol Zⁿ represents the n-fold Cartesian product of the integers.

Let X be a spatial domain and F a value set. An F valued image on X is any map

from X to F. We denote the set of all F valued images on X by Fx.

We shall not distinguish between the graph of an image and the map. The graph of

an image is also referred to as the data structure representation of the image. Given the

data structure representation a = {(x, a(x)) : x ∈ X}, then an element (x, a(x)) of the

data structure is called a picture element or pixel. The first coordinate, x, of a pixel is

called the pixel location or image point, and the second coordinate, a(x), is the pixel

value or gray value of a at location x.


Let X and Y be spatial domains and F a value set. An F valued template from Y to X is a function t : Y → F^X.

Thus, a template is an image whose pixel values are images. We denote the set of all F valued templates from Y to X by (F^X)^Y. For notational convenience we define t_y ≡ t(y). The pixel values, t_y(x), of the image t_y are called the weights of the template at the target point y.

If t is a real or complex valued template from Y to X, then the support of t is defined as

S(t_y) = {x ∈ X : t_y(x) ≠ 0} .

For extended real-valued templates we also define the following supports at infinity:

S+∞(t_y) = {x ∈ X : t_y(x) ≠ +∞} ,

S-∞(t_y) = {x ∈ X : t_y(x) ≠ -∞} .

If X is a spatial domain with an operation +, then a template t ∈ (F^X)^X is said to be translation invariant (with respect to the operation +) if and only if for each x, y, x+z, y+z ∈ X we have that t_y(x) = t_{y+z}(x+z). Templates that are not translation invariant are called translation variant or simply variant. Often a translation invariant template can be represented pictorially. For example, let X = Z² and y = (x, y) be an arbitrary point of X. One may define t ∈ ((R-∞)^X)^X by assigning the weight t_y(y) = 2 at the target point, weights of 1 and 0 at certain points neighboring y, and t_y(x) = -∞ elsewhere. [The pictorial array of weights representing t is not reproduced in this transcription.]


3.3 Operations

The operations on and between F^X are naturally derived from the algebraic structure of the value set F. For example, if γ is a binary operation defined on F, then γ induces a binary operation on F^X defined as follows.

Let a, b ∈ F^X. Then

a γ b = {(x, c(x)) : c(x) = a(x) γ b(x), x ∈ X} .

For F valued images on a coordinate set X we have the following basic operations:

a + b = {(x, c(x)) : c(x) = a(x) + b(x), x ∈ X}

a · b = {(x, c(x)) : c(x) = a(x) · b(x), x ∈ X}

a V b = {(x, c(x)) : c(x) = a(x) V b(x), x ∈ X}

a A b = {(x, c(x)) : c(x) = a(x) A b(x), x ∈ X} .


Induced unary operations are defined in a similar fashion. Any unary operation

g : F → F induces a unary operation g : F^X → F^X defined by

g(a) = {(x, c(x)) : c(x) = g(a(x)), x ∈ X} .
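These induced pixelwise operations can be sketched directly. In the following fragment (our illustration, with images modeled as dictionaries from pixel locations to values), a binary and a unary operation on the value set lift to images:

```python
# Images as dicts from pixel locations to values (our representation).
a = {(0, 0): 1.0, (0, 1): -2.0, (1, 0): 4.0}
b = {(0, 0): 3.0, (0, 1): 5.0, (1, 0): -1.0}
X = a.keys()

# A binary operation on the value set induces a pixelwise operation on images.
a_join_b = {x: max(a[x], b[x]) for x in X}    # a V b
a_plus_b = {x: a[x] + b[x] for x in X}        # a + b

# A unary operation g on F induces g on images the same way.
g = lambda v: 2 * v
g_a = {x: g(a[x]) for x in X}
```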

Let F = R±∞. The additive dual of (R±∞, V, +) is denoted by (R±∞, A, +) and is determined by the map r → -r. For a ∈ (R±∞)^X, the additive dual a* is defined by

a*(x) = -a(x) if a(x) ∈ R ,  a*(x) = -∞ if a(x) = +∞ ,  a*(x) = +∞ if a(x) = -∞ .

Similarly, if a ∈ (R≥0 ∪ {+∞})^X, then the multiplicative dual is defined by

a*(x) = 1/a(x) if a(x) ∈ R+ ,  a*(x) = 0 if a(x) = +∞ ,  a*(x) = +∞ if a(x) = 0 .

Generalized convolutions are one of the most useful consequences of the concept of

a heterogeneous image product. They provide rules for combining images with templates

and templates with templates.

Let F₁, F₂, and F₃ be three value sets, and suppose ○ : F₁ × F₂ → F₃ is a binary operation. If a ∈ F₁^X, t ∈ (F₂^X)^Y, and γ is an associative binary operation on F₃, then for each y ∈ Y we have t_y ∈ F₂^X. Thus, a ○ t_y ∈ F₃^X and Γ(a ○ t_y) ∈ F₃, where Γ denotes the reduction of γ over X. It follows that the binary operations γ and ○ induce a binary operation under which

b = a ⊕ t ∈ F₃^Y

is defined by

b(y) = Γ(a ○ t_y) = Γ_{x∈X} (a(x) ○ t_y(x)) .

The expression a ⊕ t is called a generalized convolution, or the right convolution product of a with t.

Substitution of different value sets and specific binary operations for γ and ○ results in a wide variety of different image transforms. The main focus here will come from the belt (R-∞, V, +).

The bounded lattice ordered group (R±∞, V, A, +, +') provides for two lattice convolutions:

b = a ⊞ t ,

where

b(y) = V_{x ∈ X ∩ S-∞(t_y)} [a(x) + t_y(x)] ,

and

b = a ⊟ t ,

where

b(y) = A_{x ∈ X ∩ S+∞(t_y)} [a(x) + t_y(x)] .

We designate ⊞ as the additive maximum and ⊟ as the additive minimum.
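A minimal sketch of the additive maximum for a one-dimensional image and a translation-invariant template follows (our code; the template weights and image values are hypothetical, and points off the support act as -∞):

```python
# Additive maximum of an image a on Z with a translation-invariant template t.
NEG_INF = float("-inf")

def additive_max(a, t):
    """a: dict point -> value; t: dict offset -> weight (finite support)."""
    b = {}
    for y in a:
        # Maximize a(x) + t_y(x) over x in the support of t_y, where
        # t_y(x) is the weight at offset x - y from the target point.
        candidates = [a[y + dx] + w for dx, w in t.items() if (y + dx) in a]
        b[y] = max(candidates) if candidates else NEG_INF
    return b

a = {0: 1.0, 1: 3.0, 2: 0.0}
t = {-1: 0.0, 0: 2.0, 1: 0.0}       # hypothetical weights
b = additive_max(a, t)               # {0: 3.0, 1: 5.0, 2: 3.0}
```

The loop is the formula above specialized to Z: the maximum replaces the sum of linear convolution, and addition replaces multiplication.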

The bounded lattice ordered group (R≥0 ∪ {+∞}, V, A, ×, ×') provides for two lattice convolutions:

b = a ⊠ t ,

where

b(y) = V_{x∈X} [a(x) × t_y(x)] ,

and

b = a ⊡ t ,

where

b(y) = A_{x∈X} [a(x) × t_y(x)] .

We designate ⊠ as the multiplicative maximum and ⊡ as the multiplicative minimum.

The common unary and binary operations on templates correspond to those defined on images. For example, if g : F₁ → F₂ and t ∈ (F₁^X)^Y, then r = g ∘ t ∈ (F₂^X)^Y is defined by

r_y = g(t_y) ,

where g is applied point-wise to the image t_y.

Let t ∈ (F^X)^Y. The transpose of t is a template t' ∈ (F^Y)^X defined by

t'_x(y) = t_y(x) .

For t ∈ ((R±∞)^X)^Y, the additive dual of t is the template t* defined by

t*_x(y) = -t_y(x) if t_y(x) ∈ R ,  t*_x(y) = -∞ if t_y(x) = +∞ ,  t*_x(y) = +∞ if t_y(x) = -∞ .

For t ∈ ((R≥0 ∪ {+∞})^X)^Y, the multiplicative dual of t is the template t* defined by

t*_x(y) = 1/t_y(x) if t_y(x) ∈ R+ ,  t*_x(y) = 0 if t_y(x) = +∞ ,  t*_x(y) = +∞ if t_y(x) = 0 .

We saw previously how two binary operations, γ and ○, could be combined to induce a convolution operator. This notion extends to templates as well.

Suppose that s ∈ (F₁^Z)^X, t ∈ (F₂^X)^Y, ○ : F₁ × F₂ → F₃, (F₃, γ) is a commutative semigroup, and X is a finite point set. The generalized convolution product r = s ⊕ t, where r ∈ (F₃^Z)^Y, is defined as

r_y(z) = Γ_{x∈X} (s_x(z) ○ t_y(x)) .


Let s ∈ ((R-∞)^Z)^X and t ∈ ((R-∞)^X)^Y. Then r = s ⊞ t is defined by the formula

r_y(z) = V_{x∈X} [s_x(z) + t_y(x)] .

If s ∈ ((R≥0)^Z)^X and t ∈ ((R≥0)^X)^Y, then r = s ⊠ t is defined by the formula

r_y(z) = V_{x∈X} [s_x(z) × t_y(x)] .
Many other image and template operations are described in Ritter et al. [18].

In the subsequent discussion, we assume that X = Z² and that t ∈ (F^X)^X is a shift-invariant template with finite support at a point y ∈ X. If x = (x₁, x₂) ∈ X, then define p₁(x) = x₁ and p₂(x) = x₂. We then have that S-∞(t_y) is finite and the following are well defined:

i(y)_min = inf{p₁(x) : x ∈ S-∞(t_y)} ,  i(y)_max = sup{p₁(x) : x ∈ S-∞(t_y)} ,

j(y)_min = inf{p₂(x) : x ∈ S-∞(t_y)} ,  j(y)_max = sup{p₂(x) : x ∈ S-∞(t_y)} .

Let

m(y) = i(y)_max − i(y)_min ,

n(y) = j(y)_max − j(y)_min ,

and define

R(t_y) = {(i(y)_min + i, j(y)_min + j) : 0 ≤ i ≤ m(y), 0 ≤ j ≤ n(y), i, j ∈ N} .

By definition R(t_y) is a rectangular array, and it is the smallest rectangular array containing S-∞(t_y).


A template, t, with finite support is called a rectangular m × n template if R(t_y) is of size m × n.

Example. Let a morphological template, t, be given by a pictorial array of weights, and consider the corresponding set R(t_y). [The pictorial arrays for t and R(t_y) are not reproduced in this transcription; in the original figure, a diamond designates the origin.]
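The construction of R(t_y) from a finite support can be sketched as follows (our code; the support set S is hypothetical):

```python
# Smallest rectangular array containing a finite support, following the
# inf/sup construction of R(t_y).
def rect(support):
    """support: set of (i, j) points where t_y is finite."""
    i_min = min(p[0] for p in support)
    i_max = max(p[0] for p in support)
    j_min = min(p[1] for p in support)
    j_max = max(p[1] for p in support)
    return {(i, j)
            for i in range(i_min, i_max + 1)
            for j in range(j_min, j_max + 1)}

S = {(0, 0), (1, 2), (2, 1)}        # a hypothetical support
R = rect(S)                          # a 3 x 3 rectangle containing S
```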


4.1 Introduction

The algebraic structure of a belt can be applied to the solution of minimax problems

for piecewise linear functions. Cuninghame-Green and Meijer noted that certain combi-

natorial problems can be expressed using maxpolynomials [8]. These problems involve

using maxpolynomials as functional expressions.

Maxpolynomials have a different use when they are considered as formal expressions.

One use is the calculation of lattice convolutions such as M or @. To illustrate the

similarities and differences between linear convolutions and lattice convolutions, suppose

that two finite, discrete, one-dimensional signals are given. These signals may be regarded

as functions from the set of integers into some set, the real numbers for example. Their

convolution results in a finite, discrete signal, and so it too may be represented as a polynomial. Let

f = a0 + a1x + ··· + anx^n ,

g = b0 + b1x + ··· + bmx^m ,

and

f * g = c0 + c1x + ··· + c_{m+n}x^{m+n} .

The coefficients of the polynomials are the discrete values of the signal. The powers of

the variable x serve to preserve the order of the coefficients.


The convolution of f and g is given by

(f * g)(j) = Σ_m f(m) g(j - m)   for j = 0, 1, 2, ... .

If f, g, and f * g are replaced with their polynomial representations, the convolution formula becomes

cj = Σ_m a_m b_{j-m} .

Taking into account where the coefficients of f and g are nonzero, the formula reduces to

cj = Σ_{m=0}^{j} a_m b_{j-m} ,

which is just the product of the polynomials.

Image algebra has the capability to represent generalized convolutions. These are

convolutions where different binary operations are used, instead of the usual operations

of addition and multiplication. For example, there is the generalized convolution called

the additive maximum. The additive maximum of two finite, discrete, one-dimensional

signals is represented in image algebra as M and is calculated by the formula

(f M g)(j) = V_m (f(m) + g(j - m)) .
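The parallel between the linear and lattice convolutions can be made concrete with a short sketch (illustrative code only, not part of the original development): the loop structure is identical, and only the pair of binary operations changes.

```python
NEG_INF = float("-inf")

def generalized_convolve(f, g, times, gamma, identity):
    """Convolve the finite signals f and g (coefficient lists), using
    'times' in place of multiplication and 'gamma' to combine terms."""
    r = [identity] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            r[i + j] = gamma(r[i + j], times(fi, gj))
    return r

f, g = [1, 2, 3], [4, 5]

# Linear convolution: (times, gamma) = (x, +) gives the polynomial product.
print(generalized_convolve(f, g, lambda a, b: a * b, lambda a, b: a + b, 0))
# -> [4, 13, 22, 15]

# Additive maximum: (times, gamma) = (+, max) gives (f M g)(j).
print(generalized_convolve(f, g, lambda a, b: a + b, max, NEG_INF))
# -> [5, 6, 7, 8]
```

Swapping the operation pair, not the loop, is exactly the sense in which the additive maximum is a "generalized convolution."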

One may now be led to believe that it is possible to define a certain kind of

"polynomial" whose product corresponds to this convolution. In the linear convolution

we had

f = a0 + (a1 · x) + (a2 · x^2) + (a3 · x^3) + ··· .

The two operations were addition and multiplication. In a lattice convolution, the two

operations are maximum and addition. To separate the coefficients we will now use

V, and to preserve the order of the coefficients instead of powers of a variable we use

multiples and write

f = a0 V (a1 + x) V (a2 + 2x) V (a3 + 3x) V ··· .

Dong Li noted the connection between maxpolynomials and the additive maximum

convolution [9]. All of these observations may be extended to signals in two (images)

or more dimensions.

The aim of this chapter is to classify maxpolynomials; that is, they will be identified as members of an algebraic structure. By doing so, the investigation is not limited to this specific setting, and known results about the general structure may be applied to maxpolynomials.

4.2 Basic Definitions

Let (F-oo, V, *) be a belt with lower bound -oo.

Definition. All sequences of elements of F-oo which have only finitely many elements that are not -oo are called maxpolynomials over F-oo.

The set of maxpolynomials over F-oo is denoted by F-oo[x].

Theorem 4.2.1. Let (F-oo, V, *) be a belt.

(i) F-oo[x] is a belt with V and * defined by

(a0, a1, ...) V (b0, b1, ...) = (a0 V b0, a1 V b1, ...)

and

(a0, a1, ...) * (b0, b1, ...) = (c0, c1, ...) ,

where

cn = V_{i=0}^{n} (a_{n-i} * bi) .

(ii) If (F-oo, V, *) is a commutative belt [resp. a belt with identity], then so is F-oo[x].

(iii) The map ψ : F-oo → F-oo[x] given by ψ(f) = (f, -oo, -oo, -oo, ...) is a monomorphism of belts.

Proof: If a, b, c ∈ F-oo[x], write a = (a0, a1, ...), b = (b0, b1, ...), and c = (c0, c1, ...). Then

a V (b V c) = a V (b0 V c0, b1 V c1, ...)
            = (a0 V b0 V c0, a1 V b1 V c1, ...)
            = (a0 V b0, a1 V b1, ...) V c
            = (a V b) V c ,

a V b = (a0 V b0, a1 V b1, ...)
      = (b0 V a0, b1 V a1, ...)
      = b V a ,

a V a = (a0 V a0, a1 V a1, ...)
      = (a0, a1, ...)
      = a .



For the product, the nth coefficient satisfies

((a * b) * c)n = V_{j=0}^{n} ( V_{i=0}^{n-j} a_{n-j-i} * bi ) * cj
              = V_{j=0}^{n} V_{i=0}^{n-j} a_{n-j-i} * bi * cj
              = V_{k=0}^{n} a_{n-k} * ( V_{j=0}^{k} b_{k-j} * cj )
              = V_{k=0}^{n} a_{n-k} * (b * c)k
              = (a * (b * c))n .
Let dn be the nth coefficient of a * (b V c). By calculation,

dn = V_{i=0}^{n} a_{n-i} * (bi V ci)
   = V_{i=0}^{n} (a_{n-i} * bi) V (a_{n-i} * ci)
   = [ V_{i=0}^{n} (a_{n-i} * bi) ] V [ V_{i=0}^{n} (a_{n-i} * ci) ] .

Hence, dn is also the nth coefficient of (a * b) V (a * c). So

a * (b V c) = (a * b) V (a * c) .

Next, let dn be the nth coefficient of (b V c) * a. Again by calculation,

dn = V_{i=0}^{n} (b_{n-i} V c_{n-i}) * ai
   = V_{i=0}^{n} (b_{n-i} * ai) V (c_{n-i} * ai)
   = [ V_{i=0}^{n} (b_{n-i} * ai) ] V [ V_{i=0}^{n} (c_{n-i} * ai) ] .

Hence, dn is the nth coefficient of (b * a) V (c * a). So

(b V c) * a = (b * a) V (c * a) .

If F-oo is commutative, then

(a * b)n = V_{i=0}^{n} a_{n-i} * bi
         = V_{i=0}^{n} bi * a_{n-i}
         = (b * a)n ,

which shows that F-oo[x] is also commutative.

If F-oo has an identity 1F, then the element (1F, -oo, -oo, ...) ∈ F-oo[x] acts as an identity in F-oo[x]. By calculation,

(1F, -oo, -oo, ...) * (a0, a1, a2, ...) = (a0, a1, a2, ...) .

To show that the mapping is a belt monomorphism, let f1, f2 ∈ F-oo. It follows that

ψ(f1 V f2) = (f1 V f2, -oo, -oo, ...) = (f1, -oo, -oo, ...) V (f2, -oo, -oo, ...)

and

ψ(f1 * f2) = (f1 * f2, -oo, -oo, ...) = (f1, -oo, -oo, ...) * (f2, -oo, -oo, ...) .

So the map is a belt homomorphism. Suppose that

(f1, -oo, -oo, ...) = (f2, -oo, -oo, ...) ;

then clearly f1 = f2. So the map is also a monomorphism.



In view of part (iii) of the previous theorem, F-oo may be identified with its isomorphic image in F-oo[x], and we will write (f, -oo, -oo, ...) as simply f. By calculation, we have that f * (a0, a1, ...) = (f * a0, f * a1, ...).

The next theorem develops a notation which makes the connection between polynomials and maxpolynomials easier to see.

Theorem 4.2.2. Let (F-oo, V, *) be a belt with identity and denote by x the element (-oo, 1F, -oo, -oo, ...) of F-oo[x].

(i) nx = (-oo, -oo, ..., 1F, -oo, ...), where 1F is in the (n + 1)st coordinate.

(ii) If f ∈ F-oo, then for each n ≥ 0, f * nx = nx * f = (-oo, ..., -oo, f, -oo, ...), where f is in the (n + 1)st coordinate.

(iii) For every non negative-infinity maxpolynomial g (that is, a maxpolynomial with some element which is not -oo) in F-oo[x], there exist an integer n and elements a0, a1, ..., an ∈ F-oo such that g = a0 V (a1 * x) V ··· V (an * nx). The integer n and the elements ai are unique.

Proof: (i) By definition, the formula is true for n = 1. Suppose that (n-1)x = (-oo, -oo, ..., 1F, -oo, ...), where 1F is in the nth coordinate. It follows that

nx = x * (n-1)x = (-oo, 1F, -oo, -oo, ...) * (-oo, ..., -oo, 1F, -oo, ...) = (c0, c1, ...) .

If j = n, then cj = 1F * 1F = 1F. If j ≠ n, then cj = -oo.

(ii) f * nx = (f, -oo, -oo, ...) * (-oo, ..., -oo, 1F, -oo, ...). Straightforward computation shows that (f, -oo, -oo, ...) * (-oo, ..., -oo, 1F, -oo, ...) = (-oo, ..., -oo, f, -oo, ...). Similarly for nx * f.


(iii) If g = (a0, a1, ...), there must be a largest index n such that an ≠ -oo. It follows that a0, a1, ..., an ∈ F-oo are the desired elements. If g = b0 V (b1 * x) V ··· V (bm * mx), then

(b0, b1, ..., bm, -oo, -oo, ...) = (a0, a1, ..., an, -oo, -oo, ...)

and ai = bi.


If F-oo has an identity, then 0x = 1F and we may write the maxpolynomial (a0 * 0x) V (a1 * 1x) V ··· V (an * nx) as a0 V (a1 * x) V ··· V (an * nx). An important difference between the two cases is that when there is an identity element, x is an element of the belt F-oo[x]. Hereafter, a maxpolynomial f over a belt with identity will always be written in the form f = a0 V (a1 * x) V ··· V (an * nx). In this notation, maximum and addition are given by the following analogs of the familiar rules:

V_{i=0}^{n} (ai * ix) V V_{i=0}^{n} (bi * ix) = V_{i=0}^{n} ((ai V bi) * ix) ,

( V_{i=0}^{n} (ai * ix) ) * ( V_{j=0}^{m} (bj * jx) ) = V_{k=0}^{m+n} (ck * kx),  where  ck = V_{i+j=k} (ai * bj) .

If P = V (ai * ix) ∈ F-oo[x], then the elements ai are called the coefficients of P. The coefficient a0 is called the constant term. Elements of F-oo, which all have the form f = (f, -oo, -oo, ...), are called the constant maxpolynomials. If P = V (ai * ix) has an ≠ -oo, then an is called the leading coefficient. If F-oo has an identity and the leading coefficient of P is 1F, then P is said to be a monic maxpolynomial. It shall be the convention here that when writing P = V (ai * ix), we have an ≠ -oo.


The next step is to define maxpolynomials in several variables. The starting point is the observation that a sequence is a function defined on the natural numbers. Let N be the natural numbers and N^n = N × N × ··· × N (n factors).

Theorem 4.2.3. Let (F-oo, V, *) be a belt and denote by F-oo[x1, ..., xn] the set of all functions g : N^n → F-oo such that g(u) ≠ -oo for at most a finite number of elements u of N^n.

(i) F-oo[x1, ..., xn] is a belt with V and * defined by

(g V h)(u) = g(u) V h(u)

and

(g * h)(u) = V_{v+w=u} g(v) * h(w) .
(ii) If F-oo is commutative (resp. a belt with identity), then so is F-oo[x1, ..., xn].

(iii) The map

ψ : F-oo → F-oo[x1, ..., xn] ,

given by ψ(f) = gf, where gf(0, ..., 0) = f and

gf(u) = -oo

for all other u ∈ N^n, is a monomorphism of belts.

Proof: (i)

(h V g)(u) = h(u) V g(u) = g(u) V h(u) = (g V h)(u) ,

[(f V g) V h](u) = (f V g)(u) V h(u) = f(u) V g(u) V h(u) = f(u) V (g(u) V h(u)) = [f V (g V h)](u) ,

(g V g)(u) = g(u) V g(u) = g(u) .

For the product,

((f * g) * h)(u) = V_{w+z=u} ( V_{v+y=w} f(v) * g(y) ) * h(z)
                = V_{w+z=u} V_{v+y=w} f(v) * g(y) * h(z)
                = V_{v+t=u} f(v) * ( V_{y+z=t} g(y) * h(z) )
                = (f * (g * h))(u) .


For distributivity,

[g * (h V s)](u) = V_{v+w=u} g(v) * (h(w) V s(w))
                = V_{v+w=u} (g(v) * h(w)) V (g(v) * s(w))
                = [(g * h) V (g * s)](u) ,

[(h V s) * g](u) = V_{v+w=u} (h(v) V s(v)) * g(w)
                = V_{v+w=u} (h(v) * g(w)) V (s(v) * g(w))
                = [(h * g) V (s * g)](u) .

If F-oo is commutative, then

(g * h)(u) = V_{v+w=u} g(v) * h(w) = V_{v+w=u} h(w) * g(v) = (h * g)(u) .
Let 1F be the identity of F-oo. Define I : N^n → F-oo by I(u) = 1F if u = (0, 0, ..., 0), and I(u) = -oo otherwise. We then have

(g * I)(u) = V_{v+w=u} g(v) * I(w) = g(u) * I(0, 0, ..., 0) = g(u) * 1F = g(u) .

A similar computation shows (I * g)(u) = g(u).

(iii) First, ψ(f1 V f2) = g_{f1 V f2}.

If u = (0, ..., 0), then

g_{f1 V f2}(u) = f1 V f2 = g_{f1}(u) V g_{f2}(u) .

If u ≠ (0, ..., 0), then

g_{f1 V f2}(u) = -oo = g_{f1}(u) V g_{f2}(u) .

Next, ψ(f1 * f2) = g_{f1 * f2}.

If u = (0, ..., 0), then in order for v + w = u, it must be that v = (0, ..., 0) and w = (0, ..., 0) simultaneously. So,

(g_{f1} * g_{f2})(u) = V_{v+w=u} g_{f1}(v) * g_{f2}(w)
                    = g_{f1}(0, ..., 0) * g_{f2}(0, ..., 0)
                    = f1 * f2
                    = g_{f1 * f2}(u) .

If u ≠ (0, ..., 0), then it is not possible to have v = (0, ..., 0) and w = (0, ..., 0) simultaneously. Hence,

(g_{f1} * g_{f2})(u) = V_{v+w=u} g_{f1}(v) * g_{f2}(w) = -oo = g_{f1 * f2}(u) .

So, ψ is a homomorphism.

If ψ(f1) = ψ(f2), then g_{f1}(u) = g_{f2}(u) for all u. In particular, taking u = (0, ..., 0), we see that f1 = f2.


The belt of the previous theorem is called the belt of maxpolynomials in n indeterminates over F-oo. If n = 1, then F-oo[x] is the belt of maxpolynomials. As in the previous case, there is a more familiar notation.

Let n be a positive integer and for each i = 1, 2, ..., n let

ei = (0, ..., 0, 1, 0, ..., 0) ∈ N^n ,

where 1 is in the ith coordinate of ei. If k ∈ N, let kei = (0, ..., 0, k, 0, ..., 0); then every element of N^n may be written in the form k1e1 + k2e2 + ··· + knen.


Theorem 4.2.4. Let (F-oo, V, *) be a belt with identity and n a positive integer. For each i = 1, 2, ..., n, let xi ∈ F-oo[x1, ..., xn] be defined by xi(ei) = 1F and xi(u) = -oo for u ≠ ei.

(i) For each integer k ∈ N, xi^k(kei) = 1F and xi^k(u) = -oo for u ≠ kei;

(ii) For each (k1, ..., kn) ∈ N^n, x1^{k1} x2^{k2} ··· xn^{kn}(k1e1 + ··· + knen) = 1F and x1^{k1} x2^{k2} ··· xn^{kn}(u) = -oo for u ≠ (k1e1 + ··· + knen);

(iii) xi^s xj^t = xj^t xi^s for all i, j = 1, 2, ..., n and all s, t ∈ N;

(iv) xi^t * f = f * xi^t for all f ∈ F-oo and all t ∈ N;

(v) for every maxpolynomial g in F-oo[x1, ..., xn] there exist unique elements a_{k1,...,kn} ∈ F-oo, indexed by all (k1, ..., kn) ∈ N^n and non -oo for at most a finite number of (k1, ..., kn) ∈ N^n, such that

g = V a_{k1,...,kn} x1^{k1} x2^{k2} ··· xn^{kn} ,

where the maximum is taken over all (k1, ..., kn) ∈ N^n.

Proof: (i) The case k = 1 is given by definition. When k = 2, we have

xi^2(2ei) = V_{v+w=2ei} xi(v) * xi(w) = xi(ei) * xi(ei) = 1F ,

since if v and w are not simultaneously ei, then xi(v) * xi(w) = -oo. Assume that the formula holds for k - 1. It follows that

xi^k(kei) = xi^{k-1}((k - 1)ei) * xi(ei) = 1F * 1F = 1F .

(ii) By calculation,

x1^{k1} x2^{k2} ··· xn^{kn}(k1e1 + ··· + knen)
    = V_{v1+v2+···+vn = k1e1+···+knen} x1^{k1}(v1) * x2^{k2}(v2) * ··· * xn^{kn}(vn)
    = x1^{k1}(k1e1) * x2^{k2}(k2e2) * ··· * xn^{kn}(knen)
    = 1F * 1F * ··· * 1F
    = 1F .

If u ≠ (k1e1 + ··· + knen), then it is not possible for v1 = k1e1, v2 = k2e2, ..., vn = knen simultaneously. Hence, x1^{k1} ··· xn^{kn}(u) = -oo.


(iii)

xi^s xj^t(u) = 1F if and only if u = sei + tej ,

but sei + tej = tej + sei, and

xj^t xi^s(u) = 1F if and only if u = tej + sei .

Hence, xi^s xj^t = xj^t xi^s.

(iv) Identifying f with ψ(f), both (xi^t * f)(u) and (f * xi^t)(u) equal -oo unless u = tei, in which case

(xi^t * f)(tei) = xi^t(tei) * f = 1F * f = f * 1F = f * xi^t(tei) = (f * xi^t)(tei) .

Hence xi^t * f = f * xi^t.

(v) Let a_{k1,...,kn} = g(k1, ..., kn). The a_{k1,...,kn} are the desired elements. To show uniqueness, if

V a_{k1,...,kn} x1^{k1} ··· xn^{kn} = V b_{k1,...,kn} x1^{k1} ··· xn^{kn} ,

then evaluating both sides at any (k1, ..., kn) ∈ N^n gives a_{k1,...,kn} = b_{k1,...,kn}.


If (F-oo, V, *) is any belt, then the map F-oo[x] → F-oo[x1, ..., xn], defined by

V_{i=0}^{m} (ai * ix) ↦ V_{i=0}^{m} (ai * ix1) ,

is easily seen to be a monomorphism of belts. Similarly, for any subset {i1, ..., ik} of {1, 2, ..., n} there is a monomorphism F-oo[xi1, ..., xik] → F-oo[x1, ..., xn]. The belt F-oo[xi1, ..., xik] will be identified with its isomorphic image and considered to be a subbelt of F-oo[x1, ..., xn].

For the purposes of the next theorem, we shall need the following definitions and well-known theorems [19].


Definition. A category is a class C of objects together with

(i) a class of disjoint sets, denoted hom(A, B), one for each pair of objects in C (an element f of hom(A, B) is called a morphism from A to B and is denoted f : A → B);

(ii) for each triple (A, B, C) of objects of C a function

hom(B, C) × hom(A, B) → hom(A, C)

(for morphisms f : A → B, g : B → C, this function is written (g, f) ↦ g ∘ f, and g ∘ f : A → C is called the composite of f and g); all subject to the two axioms:

(I) Associativity. If g : A → B, h : B → C, s : C → D are morphisms of C, then

s ∘ (h ∘ g) = (s ∘ h) ∘ g .

(II) Identity. For each object B of C there exists a morphism 1B : B → B such that for any g : A → B, h : B → C,

1B ∘ g = g   and   h ∘ 1B = h .

In a category C, a morphism g : A → B is called an equivalence if there is in C a morphism h : B → A such that h ∘ g = 1A and g ∘ h = 1B. If g : A → B is an equivalence, A and B are said to be equivalent.

Definition. An object I in a category C is said to be universal if for each object D of C there exists one and only one morphism I → D.

Theorem 4.2.5. Any two universal objects in a category C are equivalent.

Theorem 4.2.6. Let (F-oo, V, *) and (S-oo, V, *) be commutative belts with identity and φ : F-oo → S-oo a homomorphism of belts such that φ(1F) = 1S. If s1, s2, ..., sn ∈ S-oo, then there is a unique homomorphism of belts Φ : F-oo[x1, ..., xn] → S-oo such that Φ | F-oo = φ and Φ(xi) = si for i = 1, 2, ..., n. This property completely determines the polynomial belt F-oo[x1, ..., xn] up to isomorphism.

Proof: If g ∈ F-oo[x1, ..., xn], then

g = V a_{k1,...,kn} x1^{k1} ··· xn^{kn}   (a_{k1,...,kn} ∈ F-oo)

by Theorem 4.2.4. Let φg denote the maxpolynomial obtained by applying φ to each coefficient of g. The map Φ given by Φ(g) = (φg)(s1, ..., sn) is a well-defined map such that Φ | F-oo = φ and Φ(xi) = si. We use the fact that φ is a homomorphism to show that Φ is a homomorphism. If g, h ∈ F-oo[x1, ..., xn] have coefficients a_{k1,...,kn} and b_{k1,...,kn}, then

Φ(g V h) = V φ(a_{k1,...,kn} V b_{k1,...,kn}) s1^{k1} ··· sn^{kn}
         = V [φ(a_{k1,...,kn}) V φ(b_{k1,...,kn})] s1^{k1} ··· sn^{kn}
         = [V φ(a_{k1,...,kn}) s1^{k1} ··· sn^{kn}] V [V φ(b_{k1,...,kn}) s1^{k1} ··· sn^{kn}]
         = Φ(g) V Φ(h) .
Similarly, the coefficient of x1^{k1} ··· xn^{kn} in g * h is V_{i+j=k} a_{i1,...,in} * b_{j1,...,jn}, where i + j = k abbreviates i1 + j1 = k1, ..., in + jn = kn. Hence

Φ(g * h) = V_k [ V_{i+j=k} φ(a_{i1,...,in}) * φ(b_{j1,...,jn}) ] s1^{k1} ··· sn^{kn}
         = [V φ(a_{i1,...,in}) s1^{i1} ··· sn^{in}] * [V φ(b_{j1,...,jn}) s1^{j1} ··· sn^{jn}]
         = Φ(g) * Φ(h) ,

using the commutativity of S-oo to collect the powers of the si.

Suppose that θ : F-oo[x1, ..., xn] → S-oo is a homomorphism such that θ | F-oo = φ and θ(xi) = si for each i. Computing θ(g), we have

θ(g) = θ( V a_{k1,...,kn} x1^{k1} ··· xn^{kn} ) = V φ(a_{k1,...,kn}) s1^{k1} ··· sn^{kn} = Φ(g) .

So θ = Φ, and Φ is unique. Category theory is now employed to show that this property completely determines the belt F-oo[x1, ..., xn]. Define a category C whose

property completely determines the belt F_,[xi,...,,r]. Define a category C whose

objects are all (n + 2)-tuples, (0,K_,,i,...s), where K_,o is a commutative belt

with identity, si e K and (/ : F_o -- K_, is a homomorphism with V'(1F) = 1K. Our

aim is to show that the object (t,F-o[xi, ..., Xj, x1,...x?1) is universal in this category.

Define a morphism in C from (0, Ko, ,s. .... ) to ((, G_,, at, ..., an) as a homo-

morphism of belts p : Ko, Go such that

p(K) = 1G

p,' = 0


P(S) I

for i = 1,2,...,n. p : K_ G_- is an equivalence in C if and only if p is an

isomorphism of belts. If F : F_, -+ Fo[.ri, ..., x,,] is the inclusion map, then the first

part of the proof shows that (L,F-_o[x, ...,. ,], ;X,...., rx) is a universal object in C.

Any other object which is universal is equivalent and so will be isomorphic. Therefore,

F-oo[x1,..., xr] is completely determined up to isomorphism by Theorem 4.2.5


Corollary 4.2.7. Let (F-oo, V, *) be a commutative belt with identity and n a positive integer. For each k (1 ≤ k < n) there are isomorphisms of belts

F-oo[x1, ..., xk][xk+1, ..., xn] ≅ F-oo[x1, ..., xn] ≅ F-oo[xk+1, ..., xn][x1, ..., xk] .

Proof: The universal mapping property established in Theorem 4.2.6 is invoked to prove the corollary. Given a homomorphism φ : F-oo → S-oo of commutative belts with identity and elements s1, ..., sn ∈ S-oo, there exists a homomorphism Φ : F-oo[x1, ..., xk] → S-oo such that Φ | F-oo = φ and Φ(xi) = si for i = 1, 2, ..., k, by Theorem 4.2.6. Applying Theorem 4.2.6 with F-oo[x1, ..., xk] in place of F-oo yields a homomorphism Ψ : F-oo[x1, ..., xk][xk+1, ..., xn] → S-oo such that Ψ | F-oo[x1, ..., xk] = Φ and Ψ(xi) = si for i = k + 1, ..., n. By construction, Ψ | F-oo = φ and Ψ(xi) = si for i = 1, 2, ..., n. Suppose that θ : F-oo[x1, ..., xk][xk+1, ..., xn] → S-oo is a homomorphism such that θ | F-oo = φ and θ(xi) = si for i = 1, 2, ..., n. The same argument used in the proof of the uniqueness statement of Theorem 4.2.6 shows that θ | F-oo[x1, ..., xk] = Φ. Therefore, the uniqueness statement of Theorem 4.2.6 implies that θ = Ψ. Consequently, F-oo[x1, ..., xk][xk+1, ..., xn] has the desired universal mapping property, whence F-oo[x1, ..., xk][xk+1, ..., xn] ≅ F-oo[x1, ..., xn] by Theorem 4.2.6. The other isomorphism is proved similarly.



5.1 Introduction

On the forefront of mathematical morphology research is the area of template decomposition. The area consists of taking a template with a large support and reducing it to a number of templates with smaller supports. The fundamental property which gives rise to such a study is the fact that convolutions are associative. So, if t is a template which has the following decomposition,

t = (r1 M r2 M ··· M rh) V (s1 M s2 M ··· M sl) ,

then the convolution of an image a with t is given by

a M t = [a M (r1 M r2 M ··· M rh)] V [a M (s1 M s2 M ··· M sl)]
      = [(···((a M r1) M r2)···) M rh] V [(···((a M s1) M s2)···) M sl] .

Similarly, we may use a template's decomposition to rewrite a template-template convolution.

One of the goals of any algorithm is to reduce computational complexity. Template

decomposition is one of the best tools for achieving this end. A template may be

represented as a maxpolynomial.

To represent a two-dimensional template as a maxpolynomial, let the coefficients aij be defined by aij = t(0,0)(i, j) for all (i, j) ∈ Z^2 [9]. Next eliminate any negative multiples of the indeterminates from the expression

V_{i∈Z} V_{j∈Z} (aij + ix + jy) ,

where aij ≠ -oo, by adding the absolute values of the lowest negative multiples of x and y which are present in the expression.

Adding these multiples of the indeterminates amounts to a shift of the template so that its support lies in the first quadrant. Care must be taken to keep this shift in mind when translating from maxpolynomials back to templates.
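A minimal sketch of this translation (illustrative representation only: the template as a Python dict mapping (i, j) offsets in Z^2 to finite weights, the maxpolynomial as a dict mapping nonnegative exponent pairs of x and y to coefficients; missing entries stand for -oo):

```python
def template_to_maxpoly(t):
    """Shift a finite-support template so its support lies in the first
    quadrant, returning (coefficients, shift).  The shift must be undone
    when translating the maxpolynomial back to a template."""
    imin = min(i for i, _ in t)
    jmin = min(j for _, j in t)
    coeffs = {(i - imin, j - jmin): a for (i, j), a in t.items()}
    return coeffs, (imin, jmin)

# A 1 x 3 template with weights 0, 1, 0 at offsets (-1,0), (0,0), (1,0):
coeffs, shift = template_to_maxpoly({(-1, 0): 0, (0, 0): 1, (1, 0): 0})
print(coeffs)  # {(0, 0): 0, (1, 0): 1, (2, 0): 0}
print(shift)   # (-1, 0)
```

Keeping the shift alongside the coefficients is what makes the translation back to a template unambiguous.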

Since maxpolynomials can represent templates, factoring the maxpolynomials is one way of reducing a large template into smaller ones. Maxpolynomials may be applied to the four lattice convolutions: the additive max M, the additive min E, and the multiplicative max @ and its dual min.

The relationship between the additive max, M, and the additive min, E, is given in terms of lattice duality by

a E t = (t* M a*)* ,

where the image a* is defined by a*(x) = [a(x)]* and the conjugate of t ∈ ((R±oo)X)Y is the template t* ∈ ((R±oo)Y)X defined by t*x(y) = [ty(x)]*. Similarly, there is a duality relation between the multiplicative max @ and the multiplicative min: the multiplicative min of a and t is given by (t* @ a*)*. Here, however, t takes its values in the nonnegative extended reals.

From these relations it is clear that any results obtained for M and @ are also results for E and the multiplicative min.

The convolution @ is often computed over (R≥0, V, ×). But under the map ψ(x) = e^x, (R-oo, V, +) is isomorphic to (R≥0, V, ×). Therefore, it suffices to consider only the M convolution.
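The isomorphism can be checked numerically (a sketch under the stated identification: exp carries (R-oo, V, +) into the nonnegative reals with (V, ×), sending -oo to 0):

```python
import math

def additive_max(f, g):
    """Additive maximum convolution of coefficient lists."""
    r = [float("-inf")] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = max(r[i + j], a + b)
    return r

def multiplicative_max(f, g):
    """Multiplicative maximum convolution over the nonnegative reals."""
    r = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = max(r[i + j], a * b)
    return r

f, g = [0.0, 4.0, 2.0], [3.0, 2.0]
lhs = [math.exp(c) for c in additive_max(f, g)]
rhs = multiplicative_max([math.exp(c) for c in f], [math.exp(c) for c in g])
print(all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs)))  # True
```

Exponentiating before a multiplicative-max convolution thus agrees with exponentiating after an additive-max one, which is why only M need be studied.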


Two common value sets used in the M convolution are R-oo and {-oo, 0}. Section 5.3 is devoted to the former and Section 5.4 to the latter case.

5.2 Basic Properties

In this section we mention a few properties which can be applied to maxpolynomials

over general belts.

Definition. If P(x) is a maxpolynomial over the belt (F-oo, V, *), then P(x) is a factor of a maxpolynomial Q(x) if there exists a maxpolynomial R(x) such that

R(x) * P(x) = Q(x) .

The degree of a maxpolynomial is defined in the same manner as for ordinary polynomials. That is, if a x1^{d1} x2^{d2} ··· xn^{dn} is a monomial, then the exponent di is called the degree in xi. The sum d = d1 + d2 + ··· + dn is called the degree of the monomial. The ordered n-tuple (d1, d2, ..., dn) is the multi-degree of the monomial. The degree of a maxpolynomial is the largest degree of any of its monomial terms. There is one notable exception to these familiar rules: the degree of the -oo maxpolynomial is defined to be -oo, and the degree of the zero maxpolynomial is 0. Additionally, we have the following observations about the degree of a maxpolynomial:

Theorem 5.2.1. Let P, Q ∈ F-oo[x]. Then

(i) deg(Q V P) = max(deg(Q), deg(P))

(ii) deg(Q * P) = deg(Q) + deg(P) .

For the traditional polynomial, the way to check whether Q(x) divides P(x) is to apply the Division Algorithm and see if there is a nonvanishing remainder. The Division Algorithm is usually stated as follows [20].

Theorem 5.2.2. If R is a field and f, g ∈ R[x] with g ≠ 0, then there exist q, r ∈ R[x] such that f = g · q + r and deg(r) < deg(g).

The proof of this theorem relies on the group structure of R. In the case of a belt,

there is not as strong a condition on F-oo. Hence, a strict translation of the division

algorithm is not possible. The next example demonstrates this shortcoming.

Example. Let (F-oo, V, *) = (R-oo, V, +). Consider

f = 0 V (4 + x) V (2 + 2x)

and

g = 3 V (2 + x) ,

and suppose f = (q + g) V r with deg(r) < deg(g). Since deg(r) < deg(g), deg(r) = 0, so r must be a constant. Also, deg(f) = 2 and, since deg(g) = 1, it must be true that deg(q) = 1. Let q = a0 V (a1 + x). Then

q + g = (a0 + 3) V (a0 + 2 + x) V (a1 + 3 + x) V (a1 + 2 + 2x) .

Since r is a constant, we must have a1 + 2 = 2. So, a1 = 0. This implies that a0 + 2 = 4, so a0 = 2, and the constant term of q + g is a0 + 3 = 5. Matching the constant term of f now requires r V 5 = 0. However, there does not exist r ∈ R-oo such that r V 5 = 0.

This does not mean that there is not some analogue to the division algorithm. It is given next. Let P(x) = a0 V (a1 + x) V ··· and Q(x) = b0 V (b1 + x) V ··· be any two maxpolynomials. The notation P(x) ≥ Q(x) means that a0 ≥ b0, a1 ≥ b1, ....


Theorem 5.2.3. Let (F-oo, V, *) be a belt with duality such that the finite elements form a division belt, and let P, Q ∈ F-oo[x]. Suppose deg(P) = n and deg(Q) = m with n ≥ m. Let P(x) = a0 V (a1 + x) V ··· V (an + nx) and Q(x) = b0 V (b1 + x) V ··· V (bm + mx). Let K be the set of indices k such that bk ≠ -oo. For each k ∈ K, let

hk = V_{j=0}^{n-m} ((a_{j+k} - bk) + jx) .

If H is defined by

H = A_{k∈K} hk ,

then H satisfies H(x) * Q(x) ≤ P(x). Furthermore, if R(x) is any other maxpolynomial such that R(x) * Q(x) ≤ P(x), then H(x) ≥ R(x).

Proof: If Hj is the jth coefficient of H, then

Hj = A_{k∈K} (a_{j+k} - bk) ,

and

(H * Q)j = V_{i=0}^{j} (H_{j-i} + bi) .

If there exists k ∈ K with k ≤ j, then

(H * Q)j = V_{k∈K, k≤j} ( A_{i∈K} (a_{j-k+i} - bi) ) + bk
         ≤ V_{k∈K, k≤j} (aj - bk) + bk
         = aj .

Otherwise, all the bi, i = 0, 1, ..., j, are -oo, and so (H * Q)j = -oo.

Suppose that there exists an R(x) such that R(x) > H(x) and R(x) * Q(x) ≤ P(x). Let j be an index with Rj > Hj. Since m ∈ K, K ≠ ∅. Hence, there exists k ∈ K with Rj > a_{j+k} - bk. This gives

(R * Q)_{j+k} = V_i (R_{j+k-i} + bi) ≥ Rj + bk > (a_{j+k} - bk) + bk = a_{j+k} ,

and this is a contradiction.


Corollary 5.2.4. Let P, Q, and H be as in Theorem 5.2.3. Then Q(x) is a factor of P(x) if and only if H(x) * Q(x) = P(x).

Proof: If Q is a factor of P, then there exists R(x) such that Q(x) * R(x) = P(x). Then

P(x) = R(x) * Q(x) ≤ H(x) * Q(x) ≤ P(x) .

The other direction is clear.


We define the division of two maxpolynomials as P/Q = H.

In the example before Theorem 5.2.3, we saw how the Division Algorithm can break down. However, we can apply Theorem 5.2.3 to the example in a well-defined manner.

Example. Again, let

f = 0 V (4 + x) V (2 + 2x)

and

q = 3 V (2 + x) .

The quotient, f/q, is calculated by first finding

h1 = (0 - 3) V ((4 - 3) + x)

and

h2 = (4 - 2) V ((2 - 2) + x) .

Then

f/q = h1 A h2 = -3 V (0 + x) .
Notice that f/q + q ≠ f, which shows that q is not a factor of f.
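Theorem 5.2.3 and Corollary 5.2.4 translate directly into a mechanical factor test over (R-oo, V, +) (a sketch; coefficient lists with float('-inf') standing for -oo, and the divisor's coefficients here all finite):

```python
NEG_INF = float("-inf")

def maxplus_divide(p, q):
    """Greatest H with H + Q <= P coefficientwise:
    H_j = min over k of (p[j+k] - q[k]), over indices k with q[k] finite."""
    n, m = len(p) - 1, len(q) - 1
    return [min(p[j + k] - q[k] for k in range(m + 1) if q[k] != NEG_INF)
            for j in range(n - m + 1)]

def maxplus_mul(f, g):
    """Additive maximum convolution (the maxpolynomial product)."""
    r = [NEG_INF] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = max(r[i + j], a + b)
    return r

f = [0, 4, 2]   # 0 v (4+x) v (2+2x)
q = [3, 2]      # 3 v (2+x)
h = maxplus_divide(f, q)
print(h)                       # [-3, 0], i.e. -3 v (0+x)
print(maxplus_mul(h, q) == f)  # False: q is not a factor of f
```

The min in `maxplus_divide` is the meet A of the hk, and the final comparison is exactly the criterion of Corollary 5.2.4.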

In the next two sections extensive use is made of the fact that the finite elements of the belts under consideration form a division belt. To include the most general of possibilities, we note a convention for when an element under consideration is -oo. For all subsequent discussions, if x ∈ F, then x - (-oo) = x + oo = +oo; however, -oo - (-oo) = -oo.

5.3 Maxpolynomials over (R-oo, V, +)

Keeping in mind the structure (R-oo, V, +), the following is noted.

Remark. A maxpolynomial P(x) is a factor of the maxpolynomial Q(x) if there exists a maxpolynomial R(x) such that

R(x) + P(x) = Q(x) .

Theorem 5.3.1. Let P(x) = a0 V (a1 + x) V ··· V (an + nx) be a maxpolynomial. If the first degree term (b V (0 + x)) is a factor of P, then b must satisfy

a0 - a1 ≤ b ≤ a_{n-1} - an .

Proof: Let

P(x)/(b V (0 + x)) = γ0 V (γ1 + x) V ··· V (γ_{n-1} + (n - 1)x) .

By computation, γ0 = (a0 - b) A (a1 - 0). Since (b V (0 + x)) is a factor of P(x), b + γ0 = a0. So, γ0 = a0 - b. Therefore, a0 - b ≤ a1 - 0, that is, b ≥ a0 - a1.

Looking at γ_{n-1} reveals that γ_{n-1} = an A (a_{n-1} - b). In a similar manner, it may be deduced that an ≤ a_{n-1} - b, that is, b ≤ a_{n-1} - an.


In certain cases Theorem 5.3.1 can be strengthened. Symmetries of various types have often aided in the factorization of polynomials [10]. In maxpolynomials as well, these properties can be exploited. We shall need the next definition.

Definition. A maxpolynomial P(x) = a0 V (a1 + x) V ··· V (an + nx) is said to be skew symmetric if ai = -a_{n-i} for all i = 0, 1, ..., n/2. Note that this implies that if n is even then the center term is zero.

Theorem 5.3.1 can be particularly useful when dealing with a skew symmetric

maxpolynomial. If it is applied to this case, the following result is obtained:

Corollary 5.3.2. Let P ∈ R-oo[x] be skew symmetric. If the first degree term (b V (0 + x)) is a factor of P, then

b = a0 - a1 .

It can be shown that for skew symmetric maxpolynomials of degrees 2, 3, and 4 the term (b V (0 + x)), with b = a0 - a1, is always a factor. The three cases are shown in the following results.

Let P = a0 V (0 + x) V (-a0 + 2x). The first step is to divide P by a0 V (0 + x), resulting in 0 V (-a0 + x). By adding back the term it can be seen that

[0 V (-a0 + x)] + [a0 V (0 + x)] = P .

Thus, (b V (0 + x)) is a factor in this case.

If P = a0 V (a1 + x) V (-a1 + 2x) V (-a0 + 3x), then there are two possibilities for P/((a0 - a1) V (0 + x)). If -a1 ≤ -a0 + 2a1, then

P/((a0 - a1) V (0 + x)) = a1 V (-a1 + x) V (-a0 + 2x)

and again

[a1 V (-a1 + x) V (-a0 + 2x)] + [(a0 - a1) V (0 + x)] = P .

On the other hand, if -a1 > -a0 + 2a1, then

P/((a0 - a1) V (0 + x)) = a1 V ((-a0 + 2a1) + x) V (-a0 + 2x)

and it is still true that

[a1 V ((-a0 + 2a1) + x) V (-a0 + 2x)] + [(a0 - a1) V (0 + x)] = P .

If P = a0 V (a1 + x) V (0 + 2x) V (-a1 + 3x) V (-a0 + 4x), then there are still just two possibilities for P/((a0 - a1) V (0 + x)). If -a0 + 2a1 ≤ 0, then

P/((a0 - a1) V (0 + x)) = a1 V ((-a0 + 2a1) + x) V ((-a0 + a1) + 2x) V (-a0 + 3x) .

Adding ((a0 - a1) V (0 + x)) to this recovers P. If -a0 + 2a1 > 0, then

P/((a0 - a1) V (0 + x)) = a1 V (0 + x) V (-a1 + 2x) V (-a0 + 3x) .

Adding back ((a0 - a1) V (0 + x)) again gives us P.

Of course it is not always true that (b V (0 + x)) is a factor. A counterexample is of degree 5. If

Q(x) = 1 V (-2 + x) V (-1 + 2x) V (1 + 3x) V (2 + 4x) V (-1 + 5x) ,

then

Q(x)/(3 V (0 + x)) = -2 V (-5 + x) V (-4 + 2x) V (-2 + 3x) V (-1 + 4x) .

Now, by adding back (3 V (0 + x)), we see that Q is not recovered.


Theorem 5.3.3. Let P(x) = a0 V (a1 + x) V ··· V (an + nx) be a maxpolynomial with ai ≠ -oo for i = 0, 1, ..., n. Compute the numbers b1 = a0 - a1, b2 = a1 - a2, ..., bn = a_{n-1} - an. If there exists a number j such that

max_{i=1,...,j} bi ≤ min_{i=j+1,...,n} bi ,

then P(x) can be factored into a maxpolynomial of degree j and a maxpolynomial of degree n - j.

Proof: Define

P0 = a0 V (a1 + x) V ··· V (aj + jx)

and

P1 = 0 V ((a_{j+1} - aj) + x) V ((a_{j+2} - aj) + 2x) V ··· V ((an - aj) + (n - j)x) .

Let P0 + P1 = c0 V (c1 + x) V ··· V (cn + nx). If k ≤ j, then for i = 0, 1, ..., k - 1,

a_{j+l} - a_{j+l+1} ≥ a_{i+l} - a_{i+l+1} ,   l = 0, ..., k - i - 1 ,

so that

Σ_{l=0}^{k-i-1} (a_{j+l} - a_{j+l+1}) ≥ Σ_{l=0}^{k-i-1} (a_{i+l} - a_{i+l+1}) ,

that is, aj - a_{j+k-i} ≥ ai - ak. This gives

ck = ak V [ V_{i=0}^{k-1} (ai + a_{j+k-i} - aj) ] = ak .

If k > j, then for i = 0, 1, ..., j - 1,

a_{k+l} - a_{k+l+1} ≥ a_{i+l} - a_{i+l+1} ,   l = 0, ..., j - i - 1 ,

so that

Σ_{l=0}^{j-i-1} (a_{k+l} - a_{k+l+1}) ≥ Σ_{l=0}^{j-i-1} (a_{i+l} - a_{i+l+1}) ,

that is, ak - a_{k+j-i} ≥ ai - aj, which gives

ck = ak V [ V_{i=0}^{j-1} (ai + a_{j+k-i} - aj) ] = ak .

Hence ck = ak for all k, and so P0 + P1 = P.

This theorem can be applied to some cases in which some of the coefficients are -oo.

The next corollary shows that a strict inequality on the differences of the coefficients is

all that is needed.

Corollary 5.3.4. Let P(x) = a0 V (a1 + x) V ··· V (an + nx) be a maxpolynomial. Compute the numbers b1 = a0 - a1, b2 = a1 - a2, ..., bn = a_{n-1} - an. If there exists a number j such that

max_{i=1,...,j} bi < min_{i=j+1,...,n} bi ,

then P(x) can be factored into a maxpolynomial of degree j and a maxpolynomial of degree n - j.

Proof: The strict inequality means that aj ≠ -oo. The proof is the same as that of the previous theorem.


Example. If

P = (0 + x) V (2 + 2x) V (0 + 3x) ,

then

b1 = -oo - 0 = -oo ,
b2 = 0 - 2 = -2 ,
b3 = 2 - 0 = 2 .

With j = 2, Corollary 5.3.4 says that one possible factorization is

P = [(0 + x) V (2 + 2x)] + [0 V (-2 + x)] .
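The construction in the proof of Theorem 5.3.3 is easy to mechanize (a sketch over (R-oo, V, +), reusing the coefficient-list representation; split_factor assumes the condition on the differences bi holds at index j):

```python
NEG_INF = float("-inf")

def maxplus_mul(f, g):
    """Additive maximum convolution (the maxpolynomial product)."""
    r = [NEG_INF] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = max(r[i + j], a + b)
    return r

def split_factor(p, j):
    """Split p into P0 of degree j and P1 of degree n - j as in the proof
    of Theorem 5.3.3 (valid when max(b_1..b_j) <= min(b_{j+1}..b_n))."""
    p0 = p[: j + 1]
    p1 = [0] + [a - p[j] for a in p[j + 1:]]
    return p0, p1

# P = (0+x) v (2+2x) v (0+3x), split at j = 2 as in the example:
p = [NEG_INF, 0, 2, 0]
p0, p1 = split_factor(p, 2)
print(p0, p1)                    # [-inf, 0, 2] [0, -2]
print(maxplus_mul(p0, p1) == p)  # True
```

P0 is simply a truncation of P and P1 is the tail normalized by aj, mirroring the proof exactly.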

Example. This example shows that the conditions of Theorem 5.3.3 are only sufficient

conditions. Let

P = 5 V (3 + x) V (5 + 2x) V (4 + 3x) V (4 + 4x) V (4 + 5x).

This maxpolynomial may be factored as

P = [2 V (0 + x) V (2 + 2x)]

+[3 V (1 + x) V (2 + 2x) V (2 + 3x)].

However, it does not meet the conditions of Theorem 5.3.3.

One class of maxpolynomials which is common in template representation is the symmetric maxpolynomials. Symmetric polynomials were studied by Manseur [10]. We follow that definition for symmetric maxpolynomials.


Definition. A maxpolynomial P(x) = a0 V (a1 + x) V ··· V (an + nx) is symmetric with respect to n if ai = a_{n-i} for all i = 1, 2, ..., n.

When a maxpolynomial is said to be symmetric, we shall always mean with respect

to the degree of the maxpolynomial.

Corollary 5.3.5. If P is a symmetric maxpolynomial of even degree n such that the coefficients increase from a0 to a_{n/2}, then P factors into two maxpolynomials of degree n/2.

Proof: The conditions on P imply that the numbers bi are less than or equal to 0 for i = 1, 2, ..., n/2 and greater than or equal to 0 for i = n/2 + 1, ..., n. Hence, Theorem 5.3.3 applies with j = n/2.


When the conditions of the corollary are met and a_{n/2} is even, a_{n/2}/2 may be subtracted from P0 and added to P1. Doing so results in a factorization which shall be shown to be valuable in the decomposition of two-variable maxpolynomials. This corollary will be used in Theorem 5.3.14.


Theorem 5.3.6. If P(x) is a symmetric maxpolynomial of even degree and P factors into first degree terms, then all the factors appear in conjugate pairs.

Proof: Let P2(x) = 0 V (a1 + x) V (0 + 2x). Since P2 factors, the factors must have constant terms which add to give the constant term of P2, and the coefficients of the highest terms must add to give the highest term. Therefore, if (c0 V (c1 + x)) is a factor, then the other factor must be (-c0 V (-c1 + x)).

Next, assume that the result holds for a maxpolynomial of degree n. Given P_{n+2}, the reducibility criterion provides that

P_{n+2} = Pn + (b0 V (b1 + x)) + (b0' V (b1' + x)) .

The constant term of Pn is 0. Therefore, Pn + (b0 V (b1 + x)) has b0 as the constant term. Also, P_{n+2} has a constant term of 0. Hence, b0' must equal -b0. Similarly, it is shown that b1' = -b1.


Theorem 5.3.7. Let P = 0 V (a1 + x) V ··· V (a1 + (n - 1)x) V (0 + nx) be a symmetric maxpolynomial of even degree. If (b V (0 + x)) is a factor of P, then b ≤ a1.

Proof: Suppose that b > a1 and (b V (0 + x)) is a factor of P. The division theorem is used to calculate P/(b V (0 + x)). The candidates for the coefficient of (n - 1)x are a1 - b and 0. In order for

[P/(b V (0 + x))] + (b V (0 + x)) = P ,

it must be true that a1 - b ≥ 0. Thus, there is a contradiction.

Theorem 5.3.8. Let P = 0 V (a1 + x) V (a2 + 2x) V ... V (a1 + (n-1)x) V (0 + nx) be a symmetric maxpolynomial of even degree. Define c1 = a1 and ci = ai - a_{i-1} for i = 2, 3, ..., n/2. The maxpolynomial P factors into first degree terms if and only if

P = (c1 V (0 + x)) + (-c1 V (0 + x))
  + (c2 V (0 + x)) + (-c2 V (0 + x))
  + ... + (c_{n/2} V (0 + x)) + (-c_{n/2} V (0 + x)).

Proof: Suppose that P factors into first degree terms. By Theorem 5.3.6,

P = (d1 V (0 + x)) + (-d1 V (0 + x))
  + ... + (d_{n/2} V (0 + x)) + (-d_{n/2} V (0 + x)).

An ordering on the di, such that d1 ≥ d2 ≥ ... ≥ d_{n/2}, may be assumed. Combining conjugates first yields

P = (0 V (d1 + x) V (0 + 2x))
  + (0 V (d2 + x) V (0 + 2x))
  + ... + (0 V (d_{n/2} + x) V (0 + 2x)).

Using the ordering on the di, we begin combining more terms. The first step yields

P = (0 V (d1 + x) V (d1 + d2 + 2x) V (d1 + 3x) V (0 + 4x))
  + (0 V (d3 + x) V (0 + 2x)) + (0 V (d4 + x) V (0 + 2x))
  + ... + (0 V (d_{n/2} + x) V (0 + 2x)).

Continuing in this way results in

P = 0 V (d1 + x) V (d1 + d2 + 2x) V (d1 + d2 + d3 + 3x)
  V ... V (d1 + d2 + ... + d_{n/2} + (n/2)x) V ...
  V (d1 + (n-1)x) V (0 + nx).

Thus, d1 = a1 and di = ai - a_{i-1}.


Example. Consider the template

p = 0 3λ 4λ 3λ 0

where λ is a free parameter. The corresponding maxpolynomial is

0 V (3λ + x) V (4λ + 2x) V (3λ + 3x) V (0 + 4x).

According to Theorem 5.3.8, this factors as

(-λ V (0 + x)) + (λ V (0 + x))
+ (-3λ V (0 + x)) + (3λ V (0 + x)).

The corresponding templates are

p1 = -λ 0 ,  p2 = λ 0 ,  p3 = -3λ 0 ,  p4 = 3λ 0 .
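The factorization test of Theorem 5.3.8 is easy to check by machine. The sketch below (the list-of-coefficients encoding is mine, not the thesis's) represents a one-variable maxpolynomial as a list of coefficients indexed by degree, multiplies the factors in the max-plus sense, and reproduces the example above with λ = 2.

```python
NEG_INF = float("-inf")

def maxplus_mul(p, q):
    # Max-plus product of maxpolynomials given as coefficient lists
    # (index = degree): r[k] = max over i+j=k of (p[i] + q[j]).
    r = [NEG_INF] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = max(r[i + j], a + b)
    return r

def factor_symmetric(coeffs):
    # Candidate factorization of Theorem 5.3.8 for a symmetric
    # maxpolynomial 0 v (a1+x) v ... v (0+nx) of even degree n:
    # the conjugate pairs (ci v (0+x)), (-ci v (0+x)), ci = ai - a(i-1).
    n = len(coeffs) - 1
    factors = []
    for i in range(1, n // 2 + 1):
        c = coeffs[i] - coeffs[i - 1]
        factors.extend([[c, 0], [-c, 0]])
    return factors

lam = 2                                # free parameter λ from the example
p = [0, 3 * lam, 4 * lam, 3 * lam, 0]  # 0 v (3λ+x) v (4λ+2x) v (3λ+3x) v (0+4x)
prod = [0]
for f in factor_symmetric(p):
    prod = maxplus_mul(prod, f)
print(prod)  # → [0, 6, 8, 6, 0], the original coefficients
```

Multiplying the candidate factors back together and comparing with the original coefficients is exactly the "if and only if" test of the theorem.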

Theorem 5.3.8 leads immediately to several observations. One is that a symmetric maxpolynomial can only factor completely if all of its coefficients are nonnegative. Other observations are recorded in the corollaries that follow.

Corollary 5.3.9. If P = a0 V (a1 + x) V ... V (a0 + nx) is symmetric and factors into first degree terms, then ak ≤ a_{k+1} for k = 0, 1, ..., n/2 - 1.

Corollary 5.3.10. If P = a0 V (a1 + x) V ... V (a0 + nx) is symmetric and factors into first degree terms, then a_{i+2} - a_{i+1} ≤ a_{i+1} - ai for i = 0, 1, ..., n/2 - 2.

Theorem 5.3.11. If P is symmetric of odd degree n and the coefficients increase from a1 to a_{(n-1)/2}, then there exists Q, symmetric of even degree, such that

Q + (0 V (0 + x)) = P.

Proof: Let

P = 0 V (a1 + x) V ... V (a_{(n-1)/2} + ((n-1)/2)x)
  V (a_{(n-1)/2} + ((n+1)/2)x) V ... V (a1 + (n-1)x) V (0 + nx).

Next, divide P by (0 V (0 + x)). Recall that P/(0 V (0 + x)) = h1 ∧ h2, where the coefficients of h1 are

(0, a1, a2, ..., a_{(n-1)/2}, a_{(n-1)/2}, ..., a2, a1)

and the coefficients of h2 are

(a1, a2, ..., a_{(n-1)/2}, a_{(n-1)/2}, ..., a2, a1, 0).

Thus, the coefficients of P/(0 V (0 + x)) are

(0, a1, a2, ..., a_{(n-1)/2}, ..., a2, a1, 0).

By calculation, P/(0 V (0 + x)) + (0 V (0 + x)) = P.


We now begin the consideration of two variable maxpolynomials. One of the most desirable factorizations of a two variable maxpolynomial is a decomposition into two one variable maxpolynomials. We consider this special case first. Note that the next theorem is an extension of the result for templates given by Li [21].

Theorem 5.3.12. Let T(x, y) = V_{i=0}^{m} V_{j=0}^{n} (tij + ix + jy) be a maxpolynomial in two variables with tmn ≠ -oo. Then T(x, y) = P(x) + Q(y) if and only if tij = tin + tmj - tmn for 0 ≤ i ≤ m and 0 ≤ j ≤ n.

Proof: Suppose that T = P + Q. Let

P = a0 V (a1 + x) V ... V (am + mx)

and

Q = b0 V (b1 + y) V ... V (bn + ny),

where am ≠ -oo and bn ≠ -oo. It may be assumed that am = 0, and thus that tmj = bj for j = 0, 1, ..., n. In particular, note that tmn = bn. The relation tij + ix + jy = (ai + ix) + (bj + jy) also holds. However, bj may be calculated by bj = tmj, and ai = tin - tmn. Hence, tij = ai + bj = tin + tmj - tmn.

If T satisfies tij = tin + tmj - tmn for 0 ≤ i ≤ m and 0 ≤ j ≤ n, then define

P = V_{i=0}^{m} ((tin - tmn) + ix)

and

Q = V_{j=0}^{n} (tmj + jy).

Calculation shows that P + Q = T.


Maxpolynomials, or corresponding templates, which satisfy the conditions of this

theorem are referred to as separable.

Example. A parabolic structuring element can be used to bring out texture information and suppress both point noise and white noise [22]. In the following parabolic template, t, the parameter λ is free.

t = 0  3λ 4λ 3λ 0
    3λ 6λ 7λ 6λ 3λ
    4λ 7λ 8λ 7λ 4λ
    3λ 6λ 7λ 6λ 3λ
    0  3λ 4λ 3λ 0

According to Theorem 5.3.12, this template is separable. Hence, it may be decomposed into a row template and a column template. So t = p ⊡ q, where

p = 0 3λ 4λ 3λ 0

and q is the column template with the same entries 0, 3λ, 4λ, 3λ, 0.
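The separability criterion of Theorem 5.3.12 is a pointwise check on the weight matrix, and the proof gives the row and column factors explicitly. A minimal sketch (the list-of-lists encoding is mine; all entries are assumed finite):

```python
def is_separable(t):
    # Theorem 5.3.12 test for a weight matrix t with t[m][n] finite:
    # t is separable iff t[i][j] = t[i][n] + t[m][j] - t[m][n] for all i, j.
    m, n = len(t) - 1, len(t[0]) - 1
    return all(t[i][j] == t[i][n] + t[m][j] - t[m][n]
               for i in range(m + 1) for j in range(n + 1))

def split(t):
    # Factors from the proof: p[i] = t[i][n] - t[m][n], q[j] = t[m][j].
    m, n = len(t) - 1, len(t[0]) - 1
    p = [t[i][n] - t[m][n] for i in range(m + 1)]
    q = [t[m][j] for j in range(n + 1)]
    return p, q

lam = 1  # free parameter λ of the parabolic template
parabolic = [[c * lam for c in row] for row in
             [[0, 3, 4, 3, 0],
              [3, 6, 7, 6, 3],
              [4, 7, 8, 7, 4],
              [3, 6, 7, 6, 3],
              [0, 3, 4, 3, 0]]]
assert is_separable(parabolic)
p, q = split(parabolic)
print(p, q)  # → [0, 3, 4, 3, 0] [0, 3, 4, 3, 0]
```

Every entry is then the max-plus product p[i] + q[j], which is the separability statement t = p ⊡ q.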


Recall that a rectangular template is one whose support is a subset of a rectangle. The previous results on the separability of templates were limited to templates whose support was identical to the smallest rectangle containing the support [21]. Theorem 5.3.12 applies to a wider class of templates, as the following example shows.

Example. Let

t = 0   -oo 0
    -oo -oo -oo
    0   -oo 0 .

The corresponding maxpolynomial is given by

0 V (0 + 2x) V (0 + 2y) V (0 + 2x + 2y).

This factors as

[0 V (0 + 2x)] + [0 V (0 + 2y)],

so t decomposes into a row template with entries 0, -oo, 0 and a column template with entries 0, -oo, 0.

There are often cases when a two variable maxpolynomial is not separable. In such

cases, it may be possible to apply the one variable theorems already presented to reduce

the two variable maxpolynomial.

For the next definition, let t be a translation invariant rectangular template with maxpolynomial representation T(x, y) = V_{i=0}^{m} V_{j=0}^{n} (tij + ix + jy).

Definition. The boundary maxpolynomials of a rectangular translation invariant template are the maxpolynomials P1 = V_{i=0}^{m} (ti0 + ix), P2 = V_{j=0}^{n} (t0j + jy), P3 = V_{i=0}^{m} (tin + ix + ny), and P4 = V_{j=0}^{n} (tmj + mx + jy).
If t is a rectangular template, then the boundary maxpolynomials may be obtained by first finding the maxpolynomial that corresponds to t and then isolating certain coefficients. The coefficients to isolate are from the terms which have the highest degree in each variable and the lowest degree in each variable. This gives the four boundary maxpolynomials.


Example. Let

t = -oo -oo 0   -oo -oo
    -oo 0   1   0   -oo
    0   1   2   1   0
    -oo 0   1   0   -oo
    -oo -oo 0   -oo -oo

The boundary maxpolynomials for this template are

P1 = 0 + 2x

P2 = 0 + 2y

P3 = 0 + 2x + 4y

P4 = 0 + 4x + 2y
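Reading off boundary maxpolynomials can be sketched directly from a weight matrix. The encoding below is mine (rows index the x degree, columns the y degree, and -inf stands for -oo outside the support); the diamond values mirror the example above.

```python
NEG_INF = float("-inf")

def boundary_maxpolys(t):
    # The four boundary maxpolynomials of a rectangular weight matrix t,
    # returned as coefficient lists:
    #   P1 = v_i (t[i][0] + i x),          P2 = v_j (t[0][j] + j y),
    #   P3 = v_i (t[i][n] + i x + n y),    P4 = v_j (t[m][j] + m x + j y).
    # (The fixed offsets n y and m x of P3 and P4 are implicit.)
    m, n = len(t) - 1, len(t[0]) - 1
    P1 = [t[i][0] for i in range(m + 1)]
    P2 = [t[0][j] for j in range(n + 1)]
    P3 = [t[i][n] for i in range(m + 1)]
    P4 = [t[m][j] for j in range(n + 1)]
    return P1, P2, P3, P4

# Diamond template from the example; -inf marks cells outside the support.
d = NEG_INF
t = [[d, d, 0, d, d],
     [d, 0, 1, 0, d],
     [0, 1, 2, 1, 0],
     [d, 0, 1, 0, d],
     [d, d, 0, d, d]]
print(boundary_maxpolys(t))
```

Each returned list has a single finite entry at degree 2, which is exactly the monomials 0 + 2x, 0 + 2y, 0 + 2x + 4y, and 0 + 4x + 2y of the example.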

Suppose s and t are two rectangular templates. To compute a boundary maxpolynomial of their convolution, it is only necessary to add the corresponding boundary maxpolynomials of the two templates. This is obvious when one considers that, for example, the terms with lowest degree in x from s ⊡ t are obtained by adding the terms with the lowest degree in x from s with those of t.

These observations are recorded in the next proposition.

Proposition 5.3.13. Suppose that t is a rectangular template and A^1(x, y), A^2(x, y), A^3(x, y), and A^4(x, y) correspond to a counterclockwise representation of the boundary of t, where any A^i(x, y) could be a monomial. If t is reducible into the convolution of two rectangular templates, then there exist factorizations of A^1(x, y), ..., A^4(x, y),

A^1 = A^1_1 + A^1_2
...
A^4 = A^4_1 + A^4_2,

such that A^1_1(x, y), A^2_1(x, y), A^3_1(x, y), A^4_1(x, y) and A^1_2(x, y), A^2_2(x, y), A^3_2(x, y), A^4_2(x, y) correspond to counterclockwise representations of the boundaries of two templates.

Proof: Suppose that t = s ⊡ r. Let A^1_1(x, y), ..., A^4_1(x, y) correspond to the boundary of s and A^1_2(x, y), ..., A^4_2(x, y) correspond to the boundary of r.


Definition. A maxpolynomial in two variables P(x, y) = V_{i=0}^{m} V_{j=0}^{n} (tij + ix + jy) = V_{i=0}^{m} (ix + Pi(y)) is symmetric with respect to y if each Pi(y) is symmetric with respect to the degree n.

A similar definition can be given for the variable x.

Definition. A maxpolynomial in two variables is symmetric if it is symmetric with respect to both x and y.

Theorem 5.3.14. Suppose T(x, y) = V_{i=0}^{m} V_{j=0}^{n} (aij + ix + jy) corresponds to a rectangular template, and T is symmetric with both m and n even. If

a00 ≤ a10 ≤ ... ≤ a_{m/2,0},

a00 ≤ a01 ≤ ... ≤ a_{0,n/2},

a_{m/2,0} = a_{0,n/2}, a_{0,n/2} is even, and aij ≥ a_{0,n/2} for 1 ≤ i ≤ m-1 and 1 ≤ j ≤ n-1, then there exist maxpolynomials P(x, y), Q(x, y), and R(x, y) such that

T(x, y) = [P(x, y) + Q(x, y)] V R(x, y),

R(x, y) = V_{j=1}^{n-1} V_{i=1}^{m-1} (aij + ix + jy).

Proof: Since the support of the template may not be rectangular, several of the coefficients of T may be -oo. The boundary maxpolynomials are symmetric with a center coefficient that is even. Even with certain coefficients equal to -oo, Corollary 5.3.5, together with the procedure in the comments that follow it, may be applied to each of the boundary maxpolynomials. The resulting factorizations of the four boundary maxpolynomials are

V_{i=0}^{m} (ai0 + ix) = A1 + A2,

V_{i=0}^{m} (ain + ix + ny) = B1 + B2,

V_{j=0}^{n} (a0j + jy) = C1 + C2,

V_{j=0}^{n} (amj + mx + jy) = D1 + D2,

where in each case the two factors have degree m/2 (respectively n/2) and the largest coefficient of each factor is half the center coefficient of the boundary maxpolynomial. Set

P(x, y) = A1 V D2 V B2 V C1

and

Q(x, y) = A2 V D1 V B1 V C2.

Then

P + Q = (A1 + A2) V [A2 + (D2 V B2 V C1)]
      V (D1 + D2) V [D1 + (A1 V B2 V C1)]
      V (B1 + B2) V [B1 + (A1 V D2 V C1)]
      V (C1 + C2) V [C2 + (A1 V D2 V B2)].


Thus, P + Q gives back the boundary maxpolynomials of T. The terms from

[A2 + (D2 V B2 V C1)]

[D1 + (A1 V B2 V C1)]

[B1 + (A1 V D2 V C1)]

[C2 + (A1 V D2 V B2)]

form the interior of P + Q. The largest terms of P added to the largest terms of Q naturally give the largest terms of P + Q. The largest coefficient appearing in P and the largest appearing in Q are each (1/2)a_{0,n/2}, and in P + Q the terms they produce together lie in the boundary maxpolynomials. Hence, the condition aij ≥ a_{0,n/2} for 1 ≤ i ≤ m-1 and 1 ≤ j ≤ n-1 insures that the interior coefficients produced by P + Q are not larger than the corresponding coefficients of T. Thus, it is possible to define

R(x, y) = V_{j=1}^{n-1} V_{i=1}^{m-1} (aij + ix + jy).


Example. This example demonstrates the use of Theorem 5.3.14. The following template is used for location determination [23]. Let

t = -oo -oo 0   -oo -oo
    -oo 0   1   0   -oo
    0   1   2   1   0
    -oo 0   1   0   -oo
    -oo -oo 0   -oo -oo

The maxpolynomial which corresponds to this template is

T = (0 + 2y) V (0 + x + y) V (1 + x + 2y) V (0 + x + 3y)
  V (0 + 2x) V (1 + 2x + y) V (2 + 2x + 2y) V (1 + 2x + 3y) V (0 + 2x + 4y)
  V (0 + 3x + y) V (1 + 3x + 2y) V (0 + 3x + 3y) V (0 + 4x + 2y).
This factors according to Theorem 5.3.14. The result is T = [P + Q] V R, where

P = 0 V (0 + 2x + 2y),

Q = (0 + 2y) V (0 + 2x),

R = (0 + x + y) V (1 + x + 2y) V (0 + x + 3y)
  V (1 + 2x + y) V (2 + 2x + 2y) V (1 + 2x + 3y)
  V (0 + 3x + y) V (1 + 3x + 2y) V (0 + 3x + 3y).

The template representation is t = (p ⊡ q) V r, where

p = 0   -oo -oo      q = -oo -oo 0
    -oo -oo -oo          -oo -oo -oo
    -oo -oo 0            0   -oo -oo

and

r = 0 1 0
    1 2 1
    0 1 0
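A factorization of this kind can be verified mechanically. In the sketch below (the dict-of-terms encoding is mine: a term (i, j) -> c stands for c + ix + jy), the interior R is taken literally as in Theorem 5.3.14, and the identity T = (P + Q) V R is checked for the diamond example.

```python
NEG_INF = float("-inf")

def mp_mul2(p, q):
    # Max-plus product of two-variable maxpolynomials stored as dicts.
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            key = (i + k, j + l)
            r[key] = max(r.get(key, NEG_INF), a + b)
    return r

def mp_max2(p, q):
    # Pointwise maximum (the V operation) of two maxpolynomials.
    r = dict(p)
    for key, b in q.items():
        r[key] = max(r.get(key, NEG_INF), b)
    return r

T = {(0, 2): 0, (1, 1): 0, (1, 2): 1, (1, 3): 0,
     (2, 0): 0, (2, 1): 1, (2, 2): 2, (2, 3): 1, (2, 4): 0,
     (3, 1): 0, (3, 2): 1, (3, 3): 0, (4, 2): 0}
P = {(0, 0): 0, (2, 2): 0}       # 0 v (0 + 2x + 2y)
Q = {(0, 2): 0, (2, 0): 0}       # (0 + 2y) v (0 + 2x)
R = {k: v for k, v in T.items() if 1 <= k[0] <= 3 and 1 <= k[1] <= 3}
assert mp_max2(mp_mul2(P, Q), R) == T
print("T = (P + Q) v R holds")
```

Here P + Q reproduces exactly the four boundary terms (0 + 2y), (0 + 2x), (0 + 2x + 4y), and (0 + 4x + 2y), and R supplies the interior.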

Theorem 5.3.15. Suppose that T(x, y) = V_{i=0}^{m} V_{j=0}^{n} (aij + ix + jy) is a symmetric maxpolynomial such that the boundary maxpolynomials factor into first degree terms, with m, n ≥ 4. If

aij ≥ (a02 - a01 + a10) V (a20 - a10 + a01)

for 1 ≤ j ≤ n-1 and 1 ≤ i ≤ m-1, then there exist maxpolynomials P(x, y), Q(x, y), and R(x, y) such that

T(x, y) = [P(x, y) + Q(x, y)] V R(x, y),

R(x, y) = V_{j=1}^{n-1} V_{i=1}^{m-1} (aij + ix + jy).

Proof: If

V_{i=0}^{m} (ai0 + ix) = A,

V_{i=0}^{m} (ain + ix + ny) = B,

V_{j=0}^{n} (a0j + jy) = C,

V_{j=0}^{n} (amj + mx + jy) = D,

then, since each boundary maxpolynomial factors into first degree terms, A can be written as

(0 V (a10 + x) V (0 + 2x)) + A2,

B as

((0 + ny) V (a1n + x + ny) V (0 + 2x + ny)) + B2,

C as

(0 V (a01 + y) V (0 + 2y)) + C2,

and D as

((0 + mx) V (am1 + mx + y) V (0 + mx + 2y)) + D2.

Set

P(x, y) = (0 V (a10 + x) V (0 + 2x))
        V ((0 + mx) V (am1 + mx + y) V (0 + mx + 2y))
        V ((0 + ny) V (a1n + x + ny) V (0 + 2x + ny))
        V (0 V (a01 + y) V (0 + 2y))

and

Q(x, y) = A2 V B2 V C2 V D2.

The proof proceeds as before, noting that the highest interior term of P + Q is the maximum of the largest terms of P added to the largest terms of Q. This is given by

(a02 - a01 + a10) V (a20 - a10 + a01).



Example. To demonstrate Theorem 5.3.15, we again look at a template which is used for location determination [23]. Let

t = -2 -2 -2 -2 -2
    -2 -1 -1 -1 -2
    -2 -1 0  -1 -2
    -2 -1 -1 -1 -2
    -2 -2 -2 -2 -2

The template decomposition is given by t = (p ⊡ q) V r, where

p = q = -1 -1  -1
        -1 -oo -1
        -1 -1  -1

and

r = -1 -1 -1
    -1 0  -1
    -1 -1 -1

Factorization methods for polynomials are often recursive. If a symmetric polynomial is factored as T = P·Q + R, then R is symmetric and can usually be factored by the same theorem which led to the factorization of T [10, Corollary 2 to Theorem 3.1]. However, the same is not true for maxpolynomials. As is demonstrated in the next example, there may exist a factorization T = (P + Q) V R, but R does not satisfy either the hypotheses of Theorem 5.3.14 or Theorem 5.3.15.

Example. Let

The template t may be factored by either Theorem 5.3.14 or Theorem 5.3.15. However, in both cases, the residual template r cannot be factored again
by either Theorem 5.3.14 or Theorem 5.3.15.

4 5

5 5

r= 7 4

5 5

4 5






The template r does not satisfy the hypothesis of either theorem.

To show that r can not be decomposed into symmetric templates, suppose that such templates exist. Let r = (s1 ⊡ s2) V r2, where s1 has entries a1, b1, c1, s2 has entries a2, b2, c2, and

r2 = r11 r12 r13
     r21 r22 r23
     r31 r32 r33

By simple computation of s1 ⊡ s2, we know that a1 + a2 = 4. Since r = (s1 ⊡ s2) V r2, we also have that

3 = max {a1 + a2, b1 + b2, c1 + c2, r22}.

This contradiction shows that r can not be decomposed into symmetric templates.

5.4 Maxpolynomials over ({-oo, 0}, V, +)

When binary images are involved, the templates used in the ⊡ convolution often have values in {-oo, 0}. The principal tool in the factorization of maxpolynomials over the belt ({-oo, 0}, V, +) is factoring by grouping. Here are three special cases when factoring by grouping is easily done.

Theorem 5.4.1. Let k be any real number. If

P(x, y) = V_{j=m}^{m+n} (0 + jx + ky)

is a maxpolynomial in two variables, then P(x, y) = (mx + ky) + n(0 V (0 + x)).

Proof:

P(x, y) = V_{j=m}^{m+n} (0 + jx + ky)
        = (0 + mx + ky) V (0 + (m+1)x + ky) V ... V (0 + (m+n)x + ky)
        = mx + ky + (0 V (0 + x) V ... V (0 + nx))
        = mx + ky + n(0 V (0 + x)).


Theorem 5.4.2. Let k be any real number. If P(x, y) = V_{j=m}^{m+n} (0 + jx + (j + k)y) is a maxpolynomial in two variables, then P(x, y) = mx + (m + k)y + n(0 V (0 + x + y)).

Proof:

P(x, y) = V_{j=m}^{m+n} (0 + jx + (j + k)y)
        = (0 + mx + (m + k)y) V (0 + (m+1)x + (m + 1 + k)y) V ...
          V (0 + (m + n)x + (m + n + k)y)
        = mx + (m + k)y + (0 V (0 + x + y) V ... V (0 + nx + ny))
        = mx + (m + k)y + n(0 V (0 + x + y)).


Theorem 5.4.3. Let k be any real number. If P(x, y) = V_{j=m}^{m+n} (0 + jx + (k - j)y) is a maxpolynomial in two variables, then P(x, y) = mx + (k - m - n)y + n((0 + y) V (0 + x)).

Proof:

P(x, y) = V_{j=m}^{m+n} (0 + jx + (k - j)y)
        = (0 + mx + (k - m)y) V (0 + (m+1)x + (k - m - 1)y) V ...
          V (0 + (m + n)x + (k - m - n)y)
        = mx + (k - m - n)y + ((0 + ny) V (0 + x + (n-1)y) V ... V (0 + nx))
        = mx + (k - m - n)y + n((0 + y) V (0 + x)).
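Each grouping identity can be checked by expanding the right-hand side with max-plus arithmetic. The sketch below verifies Theorem 5.4.3 for small concrete values of m, n, and k; the dict-of-terms encoding (term (i, j) -> coefficient of ix + jy) is my own.

```python
NEG_INF = float("-inf")

def mp_mul2(p, q):
    # Max-plus product of two-variable maxpolynomials stored as dicts.
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            key = (i + k, j + l)
            r[key] = max(r.get(key, NEG_INF), a + b)
    return r

def mp_pow(p, n):
    # n-fold max-plus power, i.e. the maxpolynomial n * p.
    r = {(0, 0): 0}          # the maxpolynomial 0
    for _ in range(n):
        r = mp_mul2(r, p)
    return r

# Theorem 5.4.3 with m = 1, n = 3, k = 5:
#   v_{j=m}^{m+n} (0 + jx + (k - j)y)  =  mx + (k-m-n)y + n((0+y) v (0+x))
m, n, k = 1, 3, 5
lhs = {(j, k - j): 0 for j in range(m, m + n + 1)}
rhs = mp_mul2({(m, k - m - n): 0}, mp_pow({(0, 1): 0, (1, 0): 0}, n))
assert lhs == rhs
print(sorted(lhs))  # → [(1, 4), (2, 3), (3, 2), (4, 1)]
```

Theorems 5.4.1 and 5.4.2 can be checked the same way by swapping in the monomial and the grouped factor from their statements.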


Although there are many other cases when factoring by grouping can be applied to

reduce a maxpolynomial, these three cases play a special role in the decomposition of a

certain class of convex binary templates.

Let X ⊆ Z x Z. Define its convex hull, C(X), as the intersection of the half planes H(a, k) which contain X:

C(X) = ∩ {H(a, k) : H(a, k) ⊇ X}.

Definition. We say that X is a convex set in Z x Z when it is identical with its convex hull. Note that when X is bounded this definition is equivalent to the following: X is a convex set if and only if, whenever xi ∈ X and real numbers λi ≥ 0 with Σ λi = 1 are such that x = Σ λi xi lies in Z x Z, then x ∈ X. This second approach is known as the barycentric approach.

Definition. A restricted convex shape is defined as a convex 4-connected component whose convex hull has boundary lines oriented only at angles 0°, 45°, 90°, and 135° with respect to the positive x-axis [1].


Definition. We say that a template is a convex (or restricted convex) template if its support is a convex (or restricted convex) subset of Z x Z.

If t is a restricted convex template, then its support forms a polygon in R^2 with at most eight sides. A maxpolynomial may be associated with each of those eight sides.

Theorem 5.4.4. A set of eight maxpolynomials corresponds to the boundary of a restricted convex template if and only if there are two of the form

P(x, y) = V_j (0 + jx + (k - j)y),

two of the form

P(x, y) = V_j (0 + jx + (k + j)y),

two of the form

P(x, y) = V_j (0 + jx + ky),

two of the form

P(x, y) = V_j (0 + kx + jy),

and each one has its first term and last term in common with another maxpolynomial in the set.

Proof: Each of the forms represents two of the possible sides, and every side shares two vertices.


In the case of a convolution of binary templates, the effect of the boundary maxpolynomials on the interior is no longer a concern. Hence, Proposition 5.3.13 may be strengthened in the following way:


Theorem 5.4.5. Suppose that t is a restricted convex template and A^1(x, y), A^2(x, y), ..., A^8(x, y) correspond to a counterclockwise representation of the boundary of t, where any A^i(x, y) could be a monomial. The template t is reducible into the convolution of two restricted convex templates if and only if there exist factorizations of A^1(x, y), A^2(x, y), ..., A^8(x, y),

A^1 = A^1_1 + A^1_2

A^2 = A^2_1 + A^2_2

...

A^8 = A^8_1 + A^8_2,

such that A^1_1(x, y), A^2_1(x, y), ..., A^8_1(x, y) and A^1_2(x, y), A^2_2(x, y), ..., A^8_2(x, y) correspond to counterclockwise representations of the boundaries of restricted convex templates.

Proof: We have already proved one direction in Proposition 5.3.13. Now suppose that such a factorization of A^1(x, y), A^2(x, y), ..., A^8(x, y) exists.

We know that each of the A^i_1 and A^i_2 is of the correct form, since the factors of each A^i must be of the correct form.

All that remains to show is that A^1_2(x, y), A^2_2(x, y), ..., A^8_2(x, y) corresponds to a counterclockwise representation of the boundary of a restricted convex template. This is equivalent to showing that if A^i_1(x, y) and A^j_1(x, y) have a common term, then A^i_2(x, y) and A^j_2(x, y) have a common term. Suppose that A^i(x, y) and A^j(x, y) are adjacent. Let the common term of A^i(x, y), A^j(x, y) be denoted by α and that of A^i_1(x, y), A^j_1(x, y) be denoted by β. Consider α/β; this term is in both A^i/A^i_1 = A^i_2 and A^j/A^j_1 = A^j_2. Hence, it is a common term for them.

Thus, both A^1_1(x, y), ..., A^8_1(x, y) and A^1_2(x, y), ..., A^8_2(x, y) correspond to counterclockwise representations of the boundaries of restricted convex templates.

It is well known that the convolution of two restricted convex templates is again a restricted convex template. If such a factorization exists, then the maxpolynomials give the correct boundary, and since this is the boundary of a restricted convex template, the proof is done.


There is an important note to keep in mind when applying Theorem 5.4.5. When looking for a factorization of a boundary maxpolynomial, we may only be looking for a monomial, and that monomial may be 0.

We have now proved the following theorem.

Theorem 5.4.6. Factoring by grouping can be used to decompose a restricted convex

template into a combination of irreducible templates.

Proof: By Theorem 5.4.5 we only need consider the boundary maxpolynomials and by

Theorem 5.4.4 we know their form. Theorems 5.4.1, 5.4.2, and 5.4.3 show how factoring

by grouping can be applied to these forms.



6.1 Introduction

Another method of template decomposition is based on matrix analysis. A rectangular

shift-invariant template can be represented as a matrix. This representation of a two-

dimensional rectangular shift-invariant template is achieved by letting the matrix entries,

aij, be defined by aij = t(0,0)(i, j) for all (i, j) ∈ R(t(0,0)), where R(t_y) is defined

in Chapter 3. This matrix representation of the template is called the centered weight

matrix associated with t. By representing templates in this way, we get a one-to-one

correspondence between shift-invariant templates and these matrices [24].

An image algebra computation of ⊡ involves the operations V and +. Hence, the usual matrix operations do not suffice for template decomposition. Instead, one must consider minimax matrix operations.

The Ph.D. dissertation by J. Davidson showed that minimax algebra can be embedded into image algebra [25]. An important implication of this embedding is that all the tools of minimax algebra are directly applicable to solving problems in image processing whenever any image algebra operation isomorphic or dual to ⊡ is used.

In the setting of linear algebra, D. O'Leary showed that if a 5 x 5 matrix has either

rank 1 or all of its nonzero terms are on a single diagonal, then it can be factored into the

product of two 3 x 3 matrices [11]. Z. Manseur and D. Wilson reduced the number of

factors implied by O'Leary's result for the decomposition of an arbitrary matrix by using

polynomial methods [10]. J. Davidson studied some nonlinear matrix decompositions


based on minimax algebra [12]. However, the work of Davidson did not utilize the rank of a matrix. The goal of Section 6.3 is to prove a rank-based decomposition in terms of minimax algebra.

Two common belts used in the ⊡ convolution are (R_{-oo}, V, +) and ({-oo, 0}, V, +). In the second section, we shall extend an arbitrary belt to create a bounded lattice-ordered group. Since (R_{-oo}, V, +) and ({-oo, 0}, V, +) are commutative, many of the theorems of Cuninghame-Green are only stated for commutative belts and commutative bounded lattice-ordered groups [16].

The presentation of the definition of the rank of a matrix as defined by Cuninghame-

Green requires several preliminary definitions and theorems [16]. If one were to read

the definition of rank without referring to the associated theorems, one would have the

impression that the definition is too limited to encompass the most general of cases,

especially with regard to matrix decompositions. However, the main decomposition

method presented only depends on the number of dependent columns in a matrix.

Since the definition of rank is more restrictive than that of independence, rank based

decompositions follow as a corollary to the main technique.

6.2 Basic Definitions

Let (F, V, *) be a division belt. We now progressively extend (F, V, *) as follows. First, we introduce the dual to V by defining, for all x, y ∈ F,

x ∧ y = (x^{-1} V y^{-1})^{-1}.

So then, F becomes a lattice-ordered group, or l-group.

Next, adjoin universal bounds to F. The elements +oo and -oo are the adjoined elements, and the result is denoted by F_oo.


The group operation is extended in the following manner. If x, y ∈ F, then x * y is already defined. Let *' be the self-dual multiplication on elements of F, that is, x *' y = x * y for all x, y ∈ F. Otherwise, define for all x ∈ F:

x * (-oo) = (-oo) * x = -oo

x * (+oo) = (+oo) * x = +oo

x *' (-oo) = (-oo) *' x = -oo

x *' (+oo) = (+oo) *' x = +oo

(-oo) * (+oo) = (+oo) * (-oo) = -oo

(-oo) *' (+oo) = (+oo) *' (-oo) = +oo.

Hence, the element -oo acts as a null element in the system (F_oo, V, *) and the element +oo acts as a null element in the system (F_oo, ∧, *'). The resultant structure (F_oo, V, ∧, *, *') is called a bounded lattice-ordered group, or bounded l-group. We refer to F as the group of the bounded l-group (F_oo, V, ∧, *, *'). Reference to F_oo as a bounded l-group shall be with respect to (F_oo, V, ∧, *, *').

Two familiar examples of bounded l-groups are (R_oo, V, ∧, +, +') and (R_oo^{>0}, V, ∧, x, x'). Note that (R, V, ∧, +) is isomorphic to (R^{>0}, V, ∧, x) both as a group and as a lattice, and hence their extensions to bounded l-groups will be isomorphic as well.

In recent years, lattice based matrix operations have found widespread applications

in engineering sciences. In these applications, the usual matrix operations of addition

and multiplication are replaced by corresponding lattice operations. For example, let (F_oo, V, *) be a bounded l-group and A = (aij), B = (bij) two m x n matrices with entries in F_oo.

Definition. The pointwise maximum, A V B, of A and B is the m x n matrix C defined by

A V B = C, where cij = aij V bij.

Suppose that A is m x p and B is p x n.

Definition. The product of A and B, denoted by A * B, is the m x n matrix C = A * B, where

cij = V_{k=1}^{p} (aik * bkj).

Definition. The dual product of A and B, denoted by A *' B, is the m x n matrix C = A *' B, where

cij = ∧_{k=1}^{p} (aik *' bkj).

The set of all m x n matrices over F_oo will be denoted by Mmn.
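For the belt (R_{-oo}, V, +), these two products are ordinary matrix products with (max, +) and (min, +) in place of (+, x). A minimal sketch (using Python's float infinities; note that mixing -oo and +oo entries in one sum would produce NaN in floating point, so the sketch assumes at most one kind of infinity appears):

```python
NEG_INF = float("-inf")

def mp_matmul(A, B):
    # Max-plus product: C[i][j] = max_k (A[i][k] + B[k][j]).
    m, p, n = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

def mp_dual_matmul(A, B):
    # Dual (min-plus) product: C[i][j] = min_k (A[i][k] + B[k][j]).
    m, p, n = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[0, 1], [2, 3]]
B = [[0, NEG_INF], [1, 0]]
print(mp_matmul(A, B))  # → [[2, 1], [4, 3]]
```

Here -oo is the null element for the max-plus product, just as the extended belt operations above prescribe.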

Recall from the theory of probability that a row-stochastic matrix is a (nonnegative)

matrix in which the sum of the elements in each row is unity. A column-stochastic matrix

has the sum of the elements in each column equal to unity, and a doubly stochastic matrix

is both row and column-stochastic.

Let (F_oo, V, ∧, *, *') be a belt with duality and (σ_oo, V, ∧, *, *') a sub-belt of F_oo with duality. We shall say that a finite subset S ⊆ F_oo is σ-astic if it is true that

V_{x∈S} x ∈ σ.

Let 1F be the identity with respect to *. If σ is just {1F}, then a σ-astic set satisfies

V_{x∈S} x = 1F.

A matrix over F_oo will be called row-σ-astic (respectively column-σ-astic, or doubly σ-astic) if the elements in each row (respectively each column, or each row and each column) form a σ-astic set.

Definition. A square matrix A ∈ Mnn is strictly doubly 1F-astic if it satisfies the following two requirements.

(i) Aij ≤ 1F for all i = 1, ..., n and j = 1, ..., n.

(ii) In each row and in each column of A, we can find one and only one element equal to 1F.

If A ∈ Mmn, then A has n columns, a(1), a(2), ..., a(n), each of which is an m-tuple. For notational purposes, let a(j)i = aij, i = 1, 2, ..., m, so that a(j) is the j-th column. Let X ∈ Mn1 and B ∈ Mm1. The equation A * X = B may then be written V_{j=1}^{n} (a(j) * xj) = B.

Definition. The relation V_{j=1}^{n} (a(j) * λj) = B expresses the linear dependence (over F_oo) of B on a(1), ..., a(n). We shall also say that B is a linear combination of a(1), ..., a(n) (even when n = 1).

Let F_oo be a bounded l-group. Suppose that we are given m-tuples a(j), j = 1, ..., n, and we wish to determine, for each of them, whether or not it is linearly dependent on the other (n - 1) m-tuples. The next theorem gives a convenient mechanical procedure.

Let A ∈ Mmn be the matrix having a(j) as its j-th column. Let A* be defined by (A*)ij = (Aji)*, where (Aji)* is the conjugate of Aji as defined in Chapter 2. Define a matrix Ā ∈ Mnn as follows. Let

Āii = -oo, i = 1, ..., n,

and

Āij = (A* *' A)ij, i = 1, ..., n, j = 1, ..., n, i ≠ j.

In other words, Ā is the matrix A* *' A with its diagonal elements overwritten by -oo. We now compare each column of A * Ā with the corresponding column of A ∈ Mmn and make use of the following theorem.

Theorem 6.2.1. (Cuninghame-Green, Theorem 16.2) Let F_oo be a commutative bounded l-group. Let the matrix A ∈ Mmn have columns a(j) ∈ Mm1, j = 1, ..., n ≥ 2, not necessarily all different. For each j = 1, ..., n, the j-th column of A * Ā is identical with a(j) if and only if a(j) is linearly dependent on the other columns of A. The elements of the j-th column of Ā then give suitable coefficients to express the linear dependence.

Note that the proof of this theorem shows that if the d-th column is dependent, then Ājd is the coefficient corresponding to the column a(j).

Example. Let

A = 1 4 3 4
    3 4 2 1
    2 5 5 3

To compute Ā, first find

A* *' A =  0  1 -1 -2
          -3  0 -2 -3
          -3  0  0 -2
          -3  0 -1  0

Overwriting the diagonal with -oo gives

Ā = -oo  1  -1  -2
    -3  -oo -2  -3
    -3   0  -oo -2
    -3   0  -1  -oo

and then

A * Ā = 1 4 3 1
        1 4 2 1
        2 5 3 3

Applying Theorem 6.2.1, it can be seen that the second column is linearly dependent on the other three. However, note that column one is not linearly dependent on the other columns. This is a major difference between conventional linear algebra and minimax algebra. In conventional linear algebra, the equation

c1 a1 + c2 a2 + ... + cn an = b

would also imply that aj is linearly dependent on {b} ∪ {ai : i ≠ j}.

There are situations, particularly if the matrix is symmetric, where minimax linear dependence mimics conventional linear algebra in this regard. In those situations, one way to effectively apply the methods of Theorem 6.2.1 is to analyze the columns inductively. If a linearly dependent column is found, disregard it in the next step of the analysis. If it is not dependent, keep it in the next step. So, if a(j), j = 1, ..., n - 1, are not linearly dependent, apply Theorem 6.2.1 to a(j), j = 1, ..., n. If a(n) is dependent on a(j), j = 1, ..., n - 1, then next apply the theorem to a(j), j = 1, ..., n - 1, n + 1, leaving out a(n). If a(n) is not dependent on a(j), j = 1, ..., n - 1, then next apply the theorem to a(j), j = 1, ..., n + 1, including a(n).
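The mechanical test of Theorem 6.2.1 is easy to script. The sketch below (the encoding is mine) builds Ā as the min-plus product A* *' A with the diagonal overwritten by -oo, then flags the columns of A that A * Ā reproduces, using a small 3 x 4 matrix.

```python
NEG_INF = float("-inf")

def mp_matmul(A, B):
    # Max-plus matrix product: C[i][j] = max_k (A[i][k] + B[k][j]).
    return [[max(a + b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def dependence_matrix(A):
    # Ā of Theorem 6.2.1: the dual (min-plus) product A* *' A, where
    # (A*)[i][j] = -A[j][i], with the diagonal overwritten by -oo.
    m, n = len(A), len(A[0])
    Abar = [[min(-A[k][i] + A[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]
    for i in range(n):
        Abar[i][i] = NEG_INF
    return Abar

A = [[1, 4, 3, 4],
     [3, 4, 2, 1],
     [2, 5, 5, 3]]
Abar = dependence_matrix(A)
prod = mp_matmul(A, Abar)
dependent = [j for j in range(4)
             if all(prod[i][j] == A[i][j] for i in range(3))]
print(dependent)  # → [1]: only the second column is linearly dependent
```

A column is flagged exactly when the corresponding column of A * Ā equals that column of A, which is the "convenient mechanical procedure" described above (indices here are 0-based).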

The purpose of the next two theorems is to show some of the anomalies associated

with linear dependence as it may lead to the definition of rank.

Theorem 6.2.2. (Cuninghame-Green, Theorem 16.4) Suppose that F_oo is a commutative bounded l-group other than ({-oo, 0, +oo}, V, ∧, +, +'). Let m ≥ 2 and k ≥ 1 be arbitrary integers. We can always find k finite m-tuples, no one of which is linearly dependent on the others.

Theorem 6.2.3. (Cuninghame-Green, Theorem 16.5) Suppose that F_oo = ({-oo, 0, +oo}, V, ∧, +, +'). Let m ≥ 2. We can always find (at least) m^2 m-tuples, no one of which is linearly dependent on the others.

In conventional linear algebra, a number of different, but logically equivalent, definitions are possible for the notion of linear independence of a set of elements of a vector space. However, Cuninghame-Green formulated analogous minimax algebra definitions of various alternative forms of linear independence of elements of a band-space, and showed that they are not logically equivalent, although certain logical implications may be demonstrated among them [16]. These considerations led to the following definition.

Definition. Let F_oo be a bounded l-group and let a(1), ..., a(k) ∈ Mn1. We shall say that a(1), ..., a(k) are strongly linearly independent if there is at least one finite n-tuple B ∈ Mn1 which has a unique expression in the form

(1) B = V_{r=1}^{t} (a(jr) + λjr)

with λjr ∈ F, 1 ≤ jr ≤ k (r = 1, ..., t), and jr < js if r < s (r = 1, ..., t; s = 1, ..., t). We shall abbreviate "strongly linearly independent" by SLI.

For a given belt, F_oo, define linear independence as the negation of linear dependence.

Definition. a(1), ..., a(k) ∈ F_oo^n are linearly independent exactly when no one of them is linearly dependent on the others.
The next theorem relates the definitions of SLI and linear independence.

Theorem 6.2.4. (Cuninghame-Green, Theorem 16.10) Let F_oo be a commutative bounded l-group and a(1), ..., a(k) ∈ Mn1. For a(1), ..., a(k) to be linearly independent it is sufficient, but not necessary, that a(1), ..., a(k) be SLI.

Definition. Let F_oo be any bounded l-group and let A ∈ Mmn. Suppose that we can find r columns (1 ≤ r ≤ n) of A, but no more, which are SLI. We shall say that A has column-rank equal to r. We define the row-rank of A as the column-rank of the transpose of A.

Before proving relationships among these ranks, we need one more definition.

Definition. A given matrix A ∈ Mmn has 1F-astic rank equal to r if the following is true for k = r but not for k > r.

(i) There are X ∈ Mn1 and Y ∈ Mm1, both finite, such that the matrix B ∈ Mmn defined by

Bij = yi * Aij * xj  (i = 1, ..., m; j = 1, ..., n)

is doubly 1F-astic and contains a k x k strictly doubly 1F-astic submatrix.

Theorem 6.2.5. (Cuninghame-Green, Theorem 17.7) Let F_oo be a linear commutative bounded l-group with group F and let A ∈ Mmn be doubly 1F-astic. The following statements are then equivalent.

(i) A has 1F-astic rank equal to r.

(ii) A has column-rank equal to r.

(iii) A has row-rank equal to r.

(iv) A* has dual column-rank equal to r.

(v) A* has dual row-rank equal to r.

In view of Theorem 6.2.5, we may (for doubly 1F-astic A) simply use the expression rank of A.

In the foregoing results, the equality of the various ranks of a matrix has been demonstrated, provided they exist. We have not yet discussed whether a matrix necessarily has such ranks. The next theorem answers this question.

Theorem 6.2.6. (Cuninghame-Green, Theorem 17.9) Let F_oo be a linear commutative bounded l-group with group F and let A ∈ Mmn. There exists an integer r such that A has 1F-astic rank r if and only if A is doubly 1F-astic. The integer r satisfies 1 ≤ r ≤ min{m, n}.

6.3 Matrix Decomposition

We begin with the weaker condition of linear independence.

Theorem 6.3.1. If A Mm is a matrix with r linearly independent columns, then

A = Ai v A2 V -.. V Ar,

where each Ai is of size m x n and has one linearly independent column.

Proof: Let D denote the set of indices of the dependent columns.

For each independent column a(j), define Aj as follows.

Let the j-th column of Aj be a(j). For each d ∈ D, let the d-th column of

Aj be λjd ⊗ a(j), where λjd is from Theorem 6.2.1. According to Theorem 6.2.1,

A = A1 ∨ A2 ∨ ... ∨ Ar. Since each Aj consists of the single non-(−∞) column a(j)

together with its multiples λjd ⊗ a(j), each Aj has exactly one linearly independent column.
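The construction in this proof can be carried out numerically. The following Python sketch is our own illustration: it assumes the independent and dependent column indices are known, that the independent columns are finite, and that λjd is given by the usual max-plus residuation λjd = min_i (A[i,d] − A[i,j]), the largest scalar with λ ⊗ a(j) ≤ a(d).

```python
import numpy as np

def decompose(A, indep, dep):
    """Build the matrices A_j of Theorem 6.3.1: A_j carries the
    independent column a(j) in position j, the scaled column
    lam_jd + a(j) in each dependent position d, and -inf elsewhere."""
    A = np.asarray(A, dtype=float)
    parts = []
    for j in indep:
        Aj = np.full_like(A, -np.inf)
        Aj[:, j] = A[:, j]
        for d in dep:
            lam = np.min(A[:, d] - A[:, j])  # max-plus residuation
            Aj[:, d] = lam + A[:, j]
        parts.append(Aj)
    return parts

# A 5x5 chessboard-distance weight matrix, T[i][j] = -max(|i-2|, |j-2|);
# columns 0, 1, 2 are independent, columns 3 and 4 duplicate columns 1 and 0.
T = np.array([[-2, -2, -2, -2, -2],
              [-2, -1, -1, -1, -2],
              [-2, -1,  0, -1, -2],
              [-2, -1, -1, -1, -2],
              [-2, -2, -2, -2, -2]], dtype=float)
parts = decompose(T, indep=[0, 1, 2], dep=[3, 4])
print(np.array_equal(np.maximum.reduce(parts), T))  # True: the parts rejoin to T
```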


Example. Consider a matrix A whose second column a(2) is linearly dependent on the

remaining columns a(1), a(3), and a(4), with multipliers λ12 = 1, λ32 = 0, and λ42 = 0

obtained, as in Theorem 6.2.1, from the product A* ⊗ A. According to Theorem 6.3.1,

A is then the maximum of three matrices, each consisting of one independent column

a(j) in position j, the column λj2 ⊗ a(j) in the second position, and −∞ entries

elsewhere.

Corollary 6.3.2. If A ∈ Mmn is a matrix with rank r, then

A = A1 ∨ A2 ∨ ... ∨ Ar,

where each Ai is of size m × n and has one linearly independent column.



Proof: If A ∈ Mmn is a matrix with rank r, then by Theorem 6.2.5, A has r columns

which are SLI. By Theorem 6.2.4, r SLI columns imply r linearly independent columns.

The decomposition now follows from Theorem 6.3.1.


Thus, if the centered weight matrix, A, corresponding to a template, t, has r

linearly independent columns, then we can write t as

t = t1 ∨ t2 ∨ ... ∨ tr,

where ti is a separable template for each i = 1, 2, ..., r. A separable template can then be

decomposed into a row and a column template, namely ti = ri ⊞ si. Therefore,

t = (r1 ⊞ s1) ∨ (r2 ⊞ s2) ∨ ... ∨ (rr ⊞ sr).
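The computational payoff of such a decomposition is that each separable piece can be applied as a one-dimensional column pass followed by a one-dimensional row pass, with the r pieces combined by a pointwise maximum. Below is a minimal Python sketch of one separable pass, assuming centered templates, correlation-style indexing, and −∞ padding outside the image; the helper names are ours.

```python
import numpy as np

def dilate_1d(x, w):
    """Gray-scale (max-plus) dilation of a 1-D signal x by a centered
    weight vector w, with -inf assumed outside the signal."""
    k = len(w) // 2
    out = np.full(len(x), -np.inf)
    for i in range(len(x)):
        for j, wj in enumerate(w):
            s = i + j - k
            if 0 <= s < len(x) and np.isfinite(wj):
                out[i] = max(out[i], x[s] + wj)
    return out

def dilate_separable(img, col_w, row_w):
    """Apply the separable template col_w ⊞ row_w as a vertical pass
    followed by a horizontal pass."""
    tmp = np.apply_along_axis(dilate_1d, 0, img, col_w)
    return np.apply_along_axis(dilate_1d, 1, tmp, row_w)

# A single bright pixel dilated by r = s = (-1, 0, -1) yields the
# city-block cone -(|di| + |dj|): 0 center, -1 edges, -2 corners.
img = np.full((3, 3), -np.inf)
img[1, 1] = 0.0
print(dilate_separable(img, np.array([-1.0, 0.0, -1.0]),
                       np.array([-1.0, 0.0, -1.0])))
```

For an n × n image and a k × k separable template, the two passes cost O(n²k) instead of O(n²k²); the full template t then costs r such pass pairs plus pointwise maxima.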

Example. Let t be the template whose centered weight matrix is

       −2  −2  −2  −2  −2
       −2  −1  −1  −1  −2
T  =   −2  −1   0  −1  −2
       −2  −1  −1  −1  −2
       −2  −2  −2  −2  −2
According to Theorem 6.3.1, we may write T = T1 ∨ T2 ∨ T3, where

        −2  −∞  −∞  −2  −2
        −2  −∞  −∞  −2  −2
T1  =   −2  −∞  −∞  −2  −2
        −2  −∞  −∞  −2  −2
        −2  −∞  −∞  −2  −2

        −∞  −2  −∞  −2  −3
        −∞  −1  −∞  −1  −2
T2  =   −∞  −1  −∞  −1  −2
        −∞  −1  −∞  −1  −2
        −∞  −2  −∞  −2  −3

        −∞  −∞  −2  −3  −4
        −∞  −∞  −1  −2  −3
T3  =   −∞  −∞   0  −1  −2
        −∞  −∞  −1  −2  −3
        −∞  −∞  −2  −3  −4

If we take ti to be the template corresponding to the centered weight matrix Ti, then

each ti is separable. Thus, t = (r1 ⊞ s1) ∨ (r2 ⊞ s2) ∨ (r3 ⊞ s3), where

r1 = (−2, −2, −2, −2, −2)ᵀ,  r2 = (−2, −1, −1, −1, −2)ᵀ,  r3 = (−2, −1, 0, −1, −2)ᵀ,

s1 = (0, −∞, −∞, 0, 0),  s2 = (−∞, 0, −∞, 0, −1),  s3 = (−∞, −∞, 0, −1, −2).
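These values can be checked numerically. Reading ⊞ as the max-plus outer product (r ⊞ s)ij = ri + sj, the three outer products rejoin to T under the pointwise maximum. A small verification sketch (our own, using the ri and si values as read from this example):

```python
import numpy as np

ninf = -np.inf
r1 = np.array([-2, -2, -2, -2, -2.0])
r2 = np.array([-2, -1, -1, -1, -2.0])
r3 = np.array([-2, -1,  0, -1, -2.0])
s1 = np.array([0, ninf, ninf, 0, 0])
s2 = np.array([ninf, 0, ninf, 0, -1])
s3 = np.array([ninf, ninf, 0, -1, -2])

def outer_maxplus(r, s):
    """Max-plus outer product: (r ⊞ s)[i, j] = r[i] + s[j]."""
    return r[:, None] + s[None, :]

T = np.maximum.reduce([outer_maxplus(r, s)
                       for r, s in ((r1, s1), (r2, s2), (r3, s3))])
print(T.astype(int))  # the 5x5 chessboard-distance matrix, 0 at the center
```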


The converse of Theorem 6.3.1 is not true. Specifically, if a matrix A has a

decomposition of the form A = A1 ∨ A2, where each Ai has one linearly independent

column, it need not be true that A has two linearly independent columns.

The next example shows how this can happen.

Example. If

       1   5   7                 −∞   7  −∞
A1 =   2   6   8      and A2 =   −∞   5  −∞
       4   8  10                 −∞   9  −∞

then

                1   7   7
A = A1 ∨ A2 =   2   6   8
                4   9  10

The matrix A has three linearly independent columns.
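Dependence of a column on the others can be tested by residuation: λj = min_i (aid − aij) is the largest scalar with λj ⊗ a(j) ≤ a(d), and a(d) is dependent exactly when the maximum of the scaled columns attains a(d). A sketch of this test (our own, assuming finite entries), applied to A1 above, each of whose columns is a shift of the others:

```python
import numpy as np

def is_dependent(A, d):
    """Residuation test: column d of A is a max-plus combination of the
    remaining columns iff max_j (lam_j + a(j)) equals a(d) exactly,
    where lam_j = min_i (A[i, d] - A[i, j])."""
    A = np.asarray(A, dtype=float)
    target = A[:, d]
    best = np.full_like(target, -np.inf)
    for j in range(A.shape[1]):
        if j != d:
            lam = np.min(target - A[:, j])
            best = np.maximum(best, lam + A[:, j])
    return bool(np.array_equal(best, target))

A1 = np.array([[1, 5, 7], [2, 6, 8], [4, 8, 10.0]])
# Every column of A1 is a finite shift of any other, so each column is
# dependent on the rest; A1 has one linearly independent column.
print([is_dependent(A1, d) for d in range(3)])  # [True, True, True]
```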


This dissertation has developed the theory of maxpolynomials. A particular emphasis

has been placed on using their factorization as a method of decomposing morphological

structuring elements.

The steps in the development were:

1) A definition of maxpolynomials was given in terms of sequences of elements. This

definition allows for the complete classification of their algebraic structure,

based on existing minimax theory.

2) A counterexample showed that a division algorithm does not hold for maxpolyno-

mials. However, we developed a division procedure for the one-variable case which

can be applied in most practical cases.

3) Several sufficient conditions for the factorization of one-variable

maxpolynomials were presented. Particular emphasis was placed on those exhibiting symmetry, due

to their frequency of use in image processing.

4) The necessary and sufficient conditions were established under which a two-variable

maxpolynomial can be decomposed into two one-variable maxpolynomials. The previous result in this

area applied only to maxpolynomials corresponding to rectangular templates.


5) A necessary condition for the decomposition of two-dimensional templates is the

decomposition of their boundaries. The one-variable techniques were extended to the

two-variable case. Since most templates are two-dimensional, these results should be

the most useful.

6) A rank-based matrix decomposition in terms of minimax algebra was proven.

The following are suggestions for further research:

The primary theoretical results on polynomial factorization and irreducibility are

derived from the algebraic structure of the coefficients. The theorems of Chapter 4 lead

to the investigation of such possibilities for maxpolynomials. We may now consider

conditions on the belt of coefficients. Do notions such as divisibility and irreducibility

exist in belts? Are there properties of certain belts which aid in the factorization of

maxpolynomials?

The algebraic closure of the real numbers is the field of complex numbers. Is there an extension

of (ℝ−∞, ∨, +) which leads to an equivalent form of the fundamental theorem of algebra?

Since there is no such fundamental theorem at this time, many more factorization techniques

for specific maxpolynomials need to be developed.

We considered methods for decomposing two-variable maxpolynomials based on their

boundaries. The arrangement of the boundary factorization has a substantial effect on the

interior. Is there a minimal configuration for the boundary factorization? Extensions of

the factorization results presented here could include algorithms that determine the interior

of the decompositions so as to optimize any remainder which may exist.