Lattice structures in the image algebra and applications to image processing


Material Information

Title: Lattice structures in the image algebra and applications to image processing
Creator: Davidson, Jennifer L.
Physical Description: ix, 161 leaves : ill. ; 28 cm.
Publication Date: 1989

Subjects / Keywords:
Image processing (lcsh)
Lattice theory (lcsh)
Nonlinear theories (lcsh)
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes:
Thesis (Ph. D.)--University of Florida, 1989.
Includes bibliographical references (leaves 157-160).
Statement of Responsibility: by Jennifer L. Davidson.

Record Information

Source Institution: University of Florida
Rights Management: All applicable rights reserved by the source institution and holding location.
Resource Identifiers: aleph 001504685; notis AHB7503; oclc 21508940
Full Text







Copyright 1989


Jennifer L. Davidson


I would like to thank my advisor, Dr. Gerhard X. Ritter, for teaching me so much

about doing research, for allowing me the opportunity to perform independent research on

his contract, and for giving me the chance to work in an exciting area of applied mathemat-

ics. Without his constant encouragement, I would not have seen the beauty of mathematics,

nor would I have succeeded in mathematics the way I did. I thank Dr. David C. Wilson for

providing the opportunity of working with him during a summer, and all the help he has

given since then. To Dr. Joseph Wilson I extend my deepest gratitude for helping me with

questions in computer science. I am also indebted to all the members of my committee for

the help and encouragement that they have given me. To my parents go a debt that I can

only repay in love: providing me the opportunity to attend a small, private and very good

school for my undergraduate education, which was a critical turning point in my life. I

would also like to thank Dr. Sam Lambert and Mr. Neal Urquhart of the Air Force Arma-

ment Laboratory and Dr. Jasper Lupo of DARPA for partial support of this research under

Contract F08635-84-C-0295.

Finally, I acknowledge my debt to the American taxpayers who provided the financial

support for the U.S. Fellowship programs, loans, research assistantships and state teaching

assistantships which supported me through most of my time in graduate school. I hope to

contribute to society so that this support will be justified.



LIST OF SYMBOLS

ABSTRACT

Background of Image Algebra
Parallel Image Processing
Summary of Results

RESEARCH

1. THE TWO ALGEBRAS
1.1. Image Algebra: Basic Definitions and Notation
1.2. Minimax Algebra

2. THE ISOMORPHISM

3.1. Basic Definitions and Properties
3.2. Systems of Equations
3.3. Rank of Templates
3.4. The Eigenproblem in the Image Algebra

5. TRANSFORM DECOMPOSITION
5.1. New Matrix Decomposition Results
5.2. Decomposition of Templates
5.3. Applications to Rectangular Templates

6.1. A Division Algorithm in a Non-Euclidean Domain
6.2. An Image Algebra Division Algorithm

7. TWO EXAMPLES
7.1. An Operations Research Problem Stated in Image Algebra Notation
7.2. An Image Complexity Measure

REFERENCES



Symbol — Explanation

Z — the set of integers
R — the set of real numbers
R+ — the set of non-negative real numbers
F — an arbitrary value set
0 — the identity element of F under its group operation
Fn — the n-fold Cartesian product of F
2S — the power set of S (the set of all subsets of S)
∅ — the empty set
∈, ∉, ⊆ — is an element of, is not an element of, is a subset of
∪, ∩ — set union, set intersection
f : X → Y — f is a function from X to Y
f−1 — the inverse of the function f
F−∞ — the set F ∪ {−∞}
F±∞ — the set F ∪ {−∞, +∞}
F+∞ — the set F ∪ {+∞}
∨, ∧ — maximum, minimum
X\Y — the set difference of X and Y
W, X, Y — coordinate sets
w, x, y — pixel locations
FX — the set of all functions from X to F
a, b, c — images
1 ∈ FX — the constant image on X with value 1 at each coordinate
0 ∈ FX — the constant image on X with value 0 at each coordinate
t ∈ (FX)X, t ∈ (FX)Y — generalized templates
r, s, t — templates
1 — the one-point template from X to X with 1y(x) = 0 if x = y, and 1y(x) = −∞ otherwise
ν — the null template, with νy(x) = −∞ for all y ∈ Y, x ∈ X
χS(a) — the characteristic function over the set S of the image a
f(a) — the function f induced pointwise over the image a
S(ty) — the support of the template t ∈ (RX)Y
S−∞(ty) — the infinite support of the template t ∈ ((F−∞)X)Y
S+∞(ty) — the positive infinite support of the template t ∈ ((F+∞)X)Y
ty — the image function of the template t at location y
(FX)Y — the set of all F valued templates from Y to X
⊕ — generalized convolution
⊙, ⊘ — multiplicative maximum, multiplicative minimum
⊞, ⊟ — additive maximum, additive minimum
card(S) — the cardinality function, counting the number of elements in the set S
Σa — the sum of all pixel values of the image a
∨a — the maximum pixel value in the image a
a* — the additive dual of the image a ∈ (R±∞)X
ã — the multiplicative dual of the image a ∈ ((R+)±∞)X
t* — the additive dual of the template t ∈ ((R±∞)X)Y
t̃ — the multiplicative dual of the template t ∈ (((R+)±∞)X)Y
t — an m × n matrix
t(·, i) — the i-th column of the matrix t
t(i, ·) — the i-th row of the matrix t
t′ — the transpose of the matrix t, or the transpose of the template t
blog — a bounded lattice-ordered group with group F
S−∞(ti) — the infinite support of the matrix t ∈ Mmn at row i
S+∞(ti) — the infinite positive support of the matrix t ∈ Mmn at row i
χ∞ — the extended characteristic function
iff — if and only if


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



Jennifer L. Davidson

August 1989

Chairman: Dr. Gerhard X. Ritter
Major Department: Mathematics

The research for this dissertation is concerned with the investigation of an algebraic structure, known as image algebra, which is used for expressing algorithms in image processing. The major result of this research is the establishment of a rigorous and coherent mathematical foundation for the subalgebra of the image algebra involving non-linear image transformations. In particular, a classification in the image algebra of a set of non-linear image transformations called lattice transforms is presented, using minimax matrix algebra as a tool. Several applications to image processing problems are discussed. Specifically, in addition to describing several non-linear transform decomposition techniques, the subalgebra is used as a model and a tool for the development of methods to compute lattice transforms.

The basic operands and operations of the image algebra and minimax algebra are defined, as well as the relationships between the two algebras. Properties of the minimax algebra, including the lattice eigenvalue problem, are mapped to the image algebra. Mathematical morphology is shown to be embedded in the image algebra as a special subclass of lattice transforms. Networks of processors are modeled as graphs, and images are represented as functions defined on the nodes of the graph. It is shown that every lattice image-to-image transform can be weakly factored into a product of lattice transformations, each of which is implementable on the network, if and only if the graph corresponding to the network is strongly connected. Necessary and sufficient conditions are given to decompose a rectangular template into two strip templates. A division algorithm is given which is a generalization of a boolean skeletonizing technique. The transportation problem from linear programming is expressed in the image algebra. A method to produce an image complexity measure is discussed. Most results are given in both image algebra and matrix algebra notation.


Background of the Image Algebra

The results presented in this dissertation reflect the ongoing investigation of the structure of the Air Force image algebra, an algebraic structure specifically designed for use in image processing. The idea of establishing a unifying theory for concepts and operations encountered in image and signal processing has been pursued for a number of years now. It was the 1950's work of von Neumann that inspired Unger to propose a "cellular array" machine on which to implement, in parallel, many algorithms for image processing and analysis [1,2]. Among the machines embodying the original automaton envisioned by von Neumann are NASA's massively parallel processor, or MPP [3], and the CLIP series of computers developed by M.J.B. Duff and his colleagues [4,5]. More general classes of cellular array computers are pyramids [6] and the Connection Machine, by Thinking Machines Corporation [7].

Many of the operations that cellular array machines perform can be expressed by a set of primitives, or simple elementary operations. One opinion of researchers who design parallel image processing architectures is that a wide class of image transformations can be represented by a small set of basic operations that induce these architectures. G. Matheron and J. Serra developed a set of two primitives that formed the basis for the initial development of a theoretical formalism capable of expressing a large number of algorithms for image processing and analysis. Special purpose parallel architectures were then designed to implement these ideas. Several systems in use today are Matheron and Serra's Texture Analyzer [8], the Cytocomputer at the Environmental Research Institute of Michigan (ERIM) [9,10], and Martin Marietta's GAPP [11].

The basic mathematical formalism associated with the above cellular architectures comprises the concepts of pixel neighborhood arithmetic and mathematical morphology. Mathematical morphology is a mathematical structure used in image processing to express image transformations by the use of structuring elements, which are related to the shape of the objects to be analyzed. The origins of mathematical morphology lie in work done by H. Minkowski and H. Hadwiger on geometric measure theory and integral geometry [12,13,14]. It was Matheron and Serra who used a few of Minkowski's operations as a basis for describing morphological image transformations [15,16], and then implemented their ideas by building the Texture Analyzer System. Some recent research papers on morphological image processing are Crimmins and Brown [17]; Haralick, Lee and Shapiro [18]; Haralick, Sternberg and Zhuang [19]; and Maragos and Schafer [20,21,22].

It was Serra and Sternberg who first unified morphological concepts into an algebraic theory specifically focusing on image processing and image analysis. The first to use the term "Image Algebra" was, in fact, Sternberg [23,24]. Recently, a new theory encompassing a large class of linear and nonlinear systems was put forth by P. Maragos [25]. However, despite these profound accomplishments, morphological methods have some well known limitations. They cannot, with the exception of a few simple cases, express some fairly common image processing techniques such as Fourier-like transformations, feature extraction based on convolution, histogram equalization transforms, chain-coding, and image rotation. At Perkin-Elmer, P. Miller demonstrated that a straightforward and uncomplicated target detection algorithm, furnished by the U.S. Government, could not be expressed using a morphologically based image algebra [26].

The morphological image algebra is built on the Minkowski addition and subtraction of sets [14], and it is this set-theoretic formulation of its basic operations which prevents mathematical morphology from being used as a basis for a general purpose algebra-based language for digital image processing. These operations ignore the linear domain, transformations between different domains (spaces of different dimensionalities), and transformations between different value sets, e.g. sets consisting of real, complex, or vector valued numbers. The image algebra which was developed at the University of Florida includes these concepts and also incorporates and extends the morphological operations.

Parallel Image Processing

The processing of images on digital computers requires enormous amounts of time and memory. With the advent of Very Large Scale Integrated (VLSI) circuits, the cellular array of von Neumann became a reality. There are many types of parallel architectures in existence [27], and various ways of categorizing them have been attempted [28]. One popular type of parallel processor consists of many processing elements, or small processors with limited memory, interconnected by communication links. Each processing element can communicate directly with a single controller as well as with a very small number of its neighbors, usually 1 to 8. When the controller gives a signal, all processing elements simultaneously perform some arithmetic and/or logic operation using the values of the neighbors to which they are connected. This type of parallel processor is called a neighborhood array processor, as communication links connect the center processor to a small subset of its spatially nearest neighbors. Two typical neighborhood configurations for local interconnection links are given below. The box with the x represents the center processor, and the four (or eight) boxes immediately adjacent to x represent the four (or eight) processors with which x can send and receive information via the communication links. The set of pixel locations relative to the center pixel location, x, forms the local neighborhood of x.

Figure 1. Two Neighborhood Configurations.
(a) The von Neumann Configuration; (b) The Moore Configuration.

Some of the parallel processors that have been built to implement this type of connection scheme are the MPP, the Distributed Array Processor (ICL DAP) [27,29], the Geometric Arithmetic Parallel Processor (GAPP), and the CLIP4. There are other types of parallel architectures, such as pipeline computers [23] and systolic arrays [30], which differ in construction and implementation of the neighborhood functions. However, the key feature in most of these architectures is that they have a large number of processing elements, each of which communicates directly with only a small subset of the others.

If every value of a transformed image at location x involves arithmetically or logically manipulating information only from pixel locations in the local neighborhood of x, then the transform is called a local transform. Assuming that a transform can be described in a local manner, the amount of time to perform a local operation globally on a neighborhood array processor is the amount of time it takes one processor to perform it, often a single clock cycle. Certain image transforms which were previously too computationally intensive can now be implemented on parallel and distributed processors.

In general, image transforms are not local; that is, the calculation of a transformed value may depend on input values which are spatially very distant from the processing element. In order to use parallel processors, the transform must first be decomposed into a product of local transforms. The existence of local decompositions is of theoretical and practical interest, and as such provides the main thrust behind the research in this dissertation.

While such parallel architectures are attractive for use in image processing, much research still needs to be done, and implementation techniques developed, in order to use the architectures most efficiently.

Summary of Results

The results in this dissertation stem from an investigation into the image algebra operations ⊕, ⊞, and ⊙. A brief description of the image algebra and its use as a model for image processing is presented. A full discussion of the entire image algebra is presented by Ritter et al. [31]. The results given here focus mainly on two non-linear image transform operations whose underlying values have the structure of a lattice. In particular, it is shown that a previously determined, well-defined mathematical structure called the minimax algebra can be used to place the study of a wide class of non-linear, lattice-based image transforms on a solid mathematical foundation. We also discuss the mapping of these transforms to certain types of parallel architectures.

It has been well established that the image algebra is capable of expressing all linear transformations [32]. The embedding of linear algebra into the image algebra makes this possible. The major contributions of this thesis are the development of two isomorphisms between the minimax algebra and the image algebra, which refine the lattice subalgebra of the image algebra, and the development of new and useful mathematical tools which are of practical use in the area of image processing.

The dissertation is divided into two parts. Part I gives an introduction to the two algebras, the image algebra and minimax algebra. Part II is devoted to presenting new matrix theoretical results which have applications to solving image processing problems. Specifically, Chapter 1 is of an introductory nature, presenting a historical background of the image algebra and a brief discussion of where lattice structures appear to be useful in mathematically characterizing problems in image processing and operations research. Chapter 1 also presents a brief introduction to the image algebra as well as to the minimax algebra. We mention that although vector lattices are contained within the image algebra, they have been investigated [33] and will not be discussed here. The isomorphisms which embed the minimax algebra into the image algebra are given in Chapter 2, and mappings of the minimax algebra properties into image algebra notation are presented in Chapter 3. In Chapter 4 we give the relationship of mathematical morphology to image algebra. In Chapter 5 we present new matrix theoretical results which have applications to template decomposition. An algorithm similar to the division algorithm for integers is given in both minimax algebra and image algebra notation in Chapter 6. In Chapter 7 we present the formulation of an operations research problem in image algebra notation, and give an image complexity algorithm. We then present the conclusions and give suggestions for future research after Chapter 7.


The algebraic structures of early image processing languages such as mathematical morphology had no obvious connection with a lattice structure. Those algebras were developed to express binary image manipulation. As the extension to gray valued images developed, the notions of performing maximums and minimums over a set of numbers emerged. Formal links to lattice structures were not developed until very recently [34], including this dissertation. We present a little background in this area, showing how the lattice properties were inherent in the structures being developed.

The algebraic operations developed by Serra and Sternberg are equivalent and based on the operations of Minkowski addition and Minkowski subtraction of sets in Rn. Given A ⊆ Rn and B ⊆ Rn, Minkowski addition is defined by

A + B = { a + b : a ∈ A, b ∈ B }

and Minkowski subtraction is defined by

A − B = (Ac + B)c,

where the superscript c denotes set complementation. Mathematical morphology was initially developed for boolean image processing, that is, for processing images that have only two values, say 0 and 1. It was eventually extended to include gray-level image processing, that is, images that take on more than two values. The value set underlying the gray value mathematical morphology structure was the set R−∞ = R ∪ {−∞}, the real numbers with −∞ adjoined. Sternberg's functional notation is most often used to express the two morphological operations, as it is simply stated and easy to implement in computer code. The gray value operations of dilation and erosion, corresponding to Minkowski addition and subtraction, respectively, are

D(x,y) = max over (i,j) of [ A(x−i, y−j) + B(i,j) ]

E(x,y) = min over (i,j) of [ A(x−i, y−j) − B(−i,−j) ],

respectively, where A and B are real valued functions on R2.
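Rendered in code, the two formulas above read almost verbatim. The following is an illustrative, unoptimized Python sketch, not code from the dissertation; it assumes finite rectangular arrays and simply skips terms that fall outside the image:

```python
NEG, POS = float('-inf'), float('inf')

def dilate(A, B):
    """Gray-value dilation: D(x,y) = max_{i,j} [A(x-i, y-j) + B(i,j)]."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    D = [[NEG] * n for _ in range(m)]
    for x in range(m):
        for y in range(n):
            for i in range(p):
                for j in range(q):
                    if 0 <= x - i < m and 0 <= y - j < n:
                        D[x][y] = max(D[x][y], A[x - i][y - j] + B[i][j])
    return D

def erode(A, B):
    """Gray-value erosion: E(x,y) = min_{i,j} [A(x+i, y+j) - B(i,j)],
    an index shift of the B(-i,-j) form above."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    E = [[POS] * n for _ in range(m)]
    for x in range(m):
        for y in range(n):
            for i in range(p):
                for j in range(q):
                    if 0 <= x + i < m and 0 <= y + j < n:
                        E[x][y] = min(E[x][y], A[x + i][y + j] - B[i][j])
    return E
```

With a single-point structuring element B = [[c]] these reduce to adding and subtracting the constant c, a quick sanity check on the formulas.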

As will be shown, mathematical morphology, which uses the lattice R−∞, is actually a very special subalgebra of the full image algebra. It is well known that R±∞ = R ∪ {+∞, −∞} is a complete lattice [35]. The lattice structure provides the basis for categorizing certain classes of image processing problems, which is the main subject of this dissertation.

Operations research has long been known for its class of problems in optimization. A certain type of non-linear operations research problem has been the focus of Cuninghame-Green during his research [36,37]. The types of optimization problems considered by this author used arithmetic operations different from the usual multiplication and summation. Some machine scheduling and shortest path problems, for example, could be best characterized by a non-linear system utilizing additions and maximums. A monograph entitled Minimax Algebra [38] describes a matrix calculus which uses a special case of what is called a generalized matrix product [39], where matrices and vectors assume values from a lattice. A few more conditions, such as a group operation on the lattice and the self-duality of the resulting structure, allow Cuninghame-Green to develop a solid mathematical foundation in which to pose a wide variety of operations research questions. It is an interesting and natural link between matrices with values in a lattice and templates in the image algebra which provides the foundation of this dissertation.
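The flavor of this calculus is easy to see in a few lines: over the lattice of reals with +∞ adjoined, repeated min-plus matrix "squaring" of a weighted adjacency matrix solves the all-pairs shortest path problem. The three-node graph below is a made-up example for illustration:

```python
INF = float('inf')

def min_plus(A, B):
    """Min-plus matrix product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# weighted adjacency matrix: 0 on the diagonal, inf where there is no edge
W = [[0,   3,   INF],
     [INF, 0,   1  ],
     [2,   INF, 0  ]]

D = min_plus(W, W)   # least-weight paths of at most two edges
D = min_plus(D, D)   # at most four edges: already converged for three nodes
```

Here D[i][j] is the least total weight of any path from node i to node j; replacing (min, +) by (max, +) gives the machine scheduling problems Cuninghame-Green studied.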


1.1. Image Algebra: Basic Definitions and Notation

This section provides the basic definitions and notation that will be used for the image algebra throughout the dissertation. We will define only those image algebra concepts necessary to describe ideas in this document. For a full discourse on all image algebra operands and operations, we refer the reader to a recent publication [31].

The image algebra is a heterogeneous algebra, in the sense of Birkhoff [40], and is capable of describing image manipulations involving not only single valued images, but multivalued images. In fact, it has been formally proven that the set of operations is sufficient for expressing any image-to-image transformation defined in terms of a finite algorithmic procedure, and also that the set of operations is sufficient for expressing any image-to-image transformation for an image which has a finite number of gray values [41,42]. We limit our discussion to single valued images in this document, and refer the reader to other publications on multi-valued images [31].

We will present the six basic operands, some of the finitary operations defined between the operands, and also give a few examples.

1.1.1. The Operands of the Image Algebra

The six basic operands are coordinate sets, elements of coordinate sets, value sets, elements of value sets, images, and generalized templates. They are defined as follows.

1. A coordinate set X is a subset of Rk for some k. Two familiar coordinate sets, the rectangular and toroidal coordinate sets, are shown in Figure 2.

Figure 2. Two Coordinate Sets.
(a) Toroidal Lattice X ⊂ R3; (b) A Finite Rectangular Array in R2.

2. A value set F is a semi-group. Some value sets we are interested in are the real numbers, the rational numbers, integers, positive reals, positive rationals, and positive integers. These are denoted by R, Q, Z, R+, Q+, and Z+, respectively. We will also be strongly interested in some of the extended number systems. If F ∈ {R, Q, Z, R+, Q+}, then F−∞ denotes F ∪ {−∞}, F+∞ denotes F ∪ {+∞}, and F±∞ denotes F ∪ {−∞, +∞}. We denote an arbitrary value set by F.

3. An F valued image a on a coordinate set X is an element of FX. Thus, an image a ∈ FX is of the form

a = { (x, a(x)) : x ∈ X, a(x) ∈ F }.

4. Let X and Y be coordinate sets. An F-valued template t from Y to X is an element of (FX)Y. For each y ∈ Y, t(y) is an image on X. Denoting t(y) by ty, we have

ty = { (x, ty(x)) : x ∈ X, ty(x) ∈ F } for all y ∈ Y.

We give a pictorial representation of a generalized template t ∈ (FX)Y in Figure 3. Templates are discussed in detail in the section below on generalized templates.

Figure 3. A Pictorial Representation of a Generalized Template.
(The weights ty(x) over the source configuration S(ty) in the source array X are attached to the target pixel y in the target array Y.)

The set X is called the set of image coordinates of a ∈ FX, and the range of the function a is called the image values of a. Thus, the image values are a subset of F. The pair (x, a(x)) is called a picture element, or pixel, and x is the pixel location of the pixel value, or gray value, a(x). We shall use bold lower case letters, x, to represent a vector in Rn, and lower case letters (not bold) for the components of the vector. Thus x = (x1, ..., xn) ∈ Rn, where xi ∈ R for all i. The set of all F valued images on X is denoted by FX, and the set of all F valued templates from Y to X is denoted by (FX)Y.

As we will not be using any of the operations concerning coordinate sets or value sets, we refer the reader to other publications discussing this topic [31].

1.1.2. Operations on Images

The basic operations on and between F valued images are the ones induced by the algebraic structure of the value set F. The remaining operations can be defined in terms of these basic ones. In particular, if F = R, then the basic operations for a, b ∈ RX are

a + b = { (x, c(x)) : c(x) = a(x) + b(x), x ∈ X }
a · b = { (x, c(x)) : c(x) = a(x) · b(x), x ∈ X }
a ∨ b = { (x, c(x)) : c(x) = a(x) ∨ b(x), x ∈ X }.

If X is finite, then we define the dot product of two images a, b ∈ RX by

a • b = Σ a(x) · b(x), the sum taken over x ∈ X.

We say an image a ∈ FX is a constant image if its gray value at every pixel location is the same. Thus, a constant image a ∈ FX has the form a(x) = k ∈ F for all x ∈ X. In this case we write k for the image a. There are two constant images of importance in the image algebra. One is the zero image, defined by 0 = { (x, 0) : x ∈ X }, and the other is the unit image, defined by 1 = { (x, 1) : x ∈ X }. These images have the following properties:

a + 0 = 0 + a = a
a · 1 = 1 · a = a.
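These pointwise operations are easy to mimic with images stored as dictionaries from pixel locations to gray values. The coordinate set, sample image, and helper names below are ours, for illustration only:

```python
X = [(i, j) for i in range(2) for j in range(2)]       # a 2x2 coordinate set

def induce(op, a, b):
    """Lift a binary operation on the value set F pointwise to images in F^X."""
    return {x: op(a[x], b[x]) for x in X}

def constant(k):
    """The constant image with gray value k at every pixel location."""
    return {x: k for x in X}

a = {x: x[0] + 2 * x[1] for x in X}                    # a sample image
zero, one = constant(0), constant(1)

assert induce(lambda r, s: r + s, a, zero) == a        # a + 0 = a
assert induce(lambda r, s: r * s, a, one) == a         # a . 1 = a
sup = induce(max, a, zero)                             # a v 0 (here just a)
dot = sum(a[x] * a[x] for x in X)                      # dot product a . a
```

The same `induce` helper serves for +, ·, and ∨ alike, which is exactly the sense in which the value set's structure induces the image operations.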

Suppose f : F → F is given. Then f induces a function from FX to FX, also called f, defined by

f(a) = { (x, b(x)) : b(x) = f(a(x)) }.

For example, the function f : R\{0} → R\{0} where f(r) = r−1 induces a function f : RX → RX, where f(a) = b, and b(x) = 1/a(x) if a(x) ≠ 0, otherwise b(x) = 0. The image b so described is denoted by a−1. It is obvious that a · a−1 ≠ 1 for every a. But it is true that a · a−1 · a = a. For this reason a−1 is called the pseudo inverse of a.

If the value set F = R±∞, then the additive dual of a ∈ (R±∞)X is denoted by a* and defined by

a*(x) = −a(x) if a(x) ∈ R, a*(x) = −∞ if a(x) = +∞, and a*(x) = +∞ if a(x) = −∞.

Thus we have (a*)* = a.

If F = (R+)±∞, then the multiplicative dual of a ∈ ((R+)±∞)X is denoted by ã and defined by

ã(x) = 1/a(x) if a(x) ∈ R+, ã(x) = −∞ if a(x) = +∞, and ã(x) = +∞ if a(x) = −∞.

It follows that (ã)~ = a.
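The involution (a*)* = a is easy to check concretely; the small image and helper name below are illustrative:

```python
NEG, POS = float('-inf'), float('inf')

def additive_dual(a):
    """a*(x) = -a(x) on finite values; -inf and +inf trade places."""
    swap = {POS: NEG, NEG: POS}
    return {x: swap.get(v, -v) for x, v in a.items()}

a = {0: 2.0, 1: NEG, 2: -5.0, 3: POS}
astar = additive_dual(a)
assert astar == {0: -2.0, 1: POS, 2: 5.0, 3: NEG}
assert additive_dual(astar) == a                       # (a*)* = a
```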

Another useful induced function is the characteristic function. Let χT denote the usual characteristic function with respect to an arbitrary set T. Here

χT(x) = 1 if x ∈ T, and χT(x) = 0 if x ∉ T.

We now define the generalized characteristic function of an image a ∈ RX. Let a ∈ RX and S ∈ (2R)X. Then the generalized characteristic function of an image a is defined as χS(a) = c ∈ RX, where

c = { (x, c(x)) : c(x) = 1 if a(x) ∈ S(x), and c(x) = 0 otherwise }.

Note that the usual characteristic function above is a special case of the generalized characteristic function, where T ⊆ F and S(x) = T for all x ∈ X. The typical thresholding function applied to an image is a simple example of the generalized characteristic function. Fix b ∈ RX and let S(x) = { r ∈ R : r ≤ b(x) }. To simplify notation, we define χ≤b(a) = c, where

c(x) = 1 if a(x) ≤ b(x), and c(x) = 0 otherwise.

If we now consider the characteristic function on R−∞, we find that we would like our binary output image to have the values −∞ and 0 instead of 0's and 1's, respectively. We define the extended characteristic function as the function induced by

χ∞S(x) = 0 if x ∈ S, and χ∞S(x) = −∞ otherwise.

Thus, χ∞≤b(a) is defined by

c(x) = 0 if a(x) ≤ b(x), and c(x) = −∞ otherwise.
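The thresholding function and its extended (−∞/0 valued) variant can be sketched as follows; the helper names are ours, not the dissertation's notation:

```python
NEG = float('-inf')

def threshold(a, b):
    """Thresholding as a characteristic function: 1 where a(x) <= b(x), 0 elsewhere."""
    return {x: (1 if a[x] <= b[x] else 0) for x in a}

def ext_threshold(a, b):
    """Extended variant: 0 where a(x) <= b(x), -inf elsewhere."""
    return {x: (0 if a[x] <= b[x] else NEG) for x in a}

a = {0: 1.0, 1: 5.0, 2: 3.0}
b = {0: 3.0, 1: 3.0, 2: 3.0}
assert threshold(a, b) == {0: 1, 1: 0, 2: 1}
assert ext_threshold(a, b) == {0: 0, 1: NEG, 2: 0}
```

The extended variant is the one suited to the lattice operations: adding it to an image masks pixels out with −∞ rather than zeroing them.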
One unary operation on images is the sum operation, which we will use in Chapter 7. Let X be a finite coordinate set. Then the sum of a ∈ RX is defined to be

Σa = a • 1 = Σ a(x), the sum taken over x ∈ X.

In the context of the lattice structures of R−∞ and R±∞ we make the following definition. Let a ∈ (R±∞)X. The maximum of a is the scalar determined by

∨a = ∨ a(x), the maximum taken over x ∈ X.

1.1.3. Generalized Templates

For a generalized template t ∈ (FX)Y, the coordinate set Y is called the target domain or the domain of t, and X is called the range space of t. The pixel location y ∈ Y at which a template ty is evaluated is called a target point of the template t, and the values ty(x) are called the weights of the template t at y.

If F ∈ {R, R±∞, C}, then for t ∈ (FX)Y the set

S(ty) = { x ∈ X : ty(x) ≠ 0 }

is called the support of ty. If F ∈ {R−∞, R+∞, R±∞}, then for t ∈ (FX)Y we define

S−∞(ty) = { x ∈ X : ty(x) ≠ −∞ }
S+∞(ty) = { x ∈ X : ty(x) ≠ +∞ }

to be the (negative) infinite support and the positive infinite support, respectively.

If t ∈ (FX)X and for all triples x, y, z ∈ X with y + z ∈ X and x + z ∈ X we have ty(x) = ty+z(x + z), then t is called translation invariant. A template which is not translation invariant is called translation variant, or simply variant. Translation invariant templates have the nice property that they may be represented pictorially in a concise manner. The following translation invariant template is presented pictorially in Figure 4. Let X = Y = Z2, and let y = (i,j) ∈ Z2. Let x1 = (i,j), x2 = (i+1,j), x3 = (i,j−1), and x4 = (i+1,j−1). Define the weights by ty(x) = k if x = xk, k = 1,...,4, and ty(x) = 0 otherwise. Then

S(ty) = { x1, ..., x4 }.

Figure 4. Example of a Translation Invariant Template.
(The weights 1 and 2 occupy row j, and 3 and 4 occupy row j−1, in columns i and i+1.)

The cell with the hash marks in the pictorial representation of t indicates the location of the target point y.
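As the example suggests, a translation invariant template is fully determined by a table from displacements to weights. A small illustrative sketch (the helper names are ours):

```python
def invariant(weights):
    """Build t from a displacement->weight table: t_y(x) = weights[x - y], 0 elsewhere."""
    def t(y):
        return lambda x: weights.get((x[0] - y[0], x[1] - y[1]), 0)
    return t

# the Figure 4 template: weights 1, 2 in row j and 3, 4 in row j-1
t = invariant({(0, 0): 1, (1, 0): 2, (0, -1): 3, (1, -1): 4})

y, z, x = (5, 7), (2, -3), (6, 7)
lhs = t(y)(x)                                                     # t_y(x)
rhs = t((y[0] + z[0], y[1] + z[1]))((x[0] + z[0], x[1] + z[1]))   # t_{y+z}(x+z)
assert lhs == rhs == 2                                            # translation invariance
```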

There are several representations of a template that we will be concerned with. One is the transpose of a template. Let t ∈ (FX)Y. Then the transpose of t is the template t′ ∈ (FY)X defined by t′x(y) = ty(x). If F ∈ {R±∞, (R+)±∞}, then we can introduce a dual template. For t ∈ ((R±∞)X)Y, the additive dual of t is the template t* ∈ ((R±∞)Y)X defined by

t*x(y) = −ty(x) if ty(x) ∈ R, t*x(y) = −∞ if ty(x) = +∞, and t*x(y) = +∞ if ty(x) = −∞.

Similarly, if t ∈ (((R+)±∞)X)Y, the multiplicative dual of t is the template t̃ ∈ (((R+)±∞)Y)X defined by

t̃x(y) = 1/ty(x) if ty(x) ∈ R+, t̃x(y) = −∞ if ty(x) = +∞, and t̃x(y) = +∞ if ty(x) = −∞.

1.1.4. Operations Between Images and Templates

One common use of templates is to describe some transformation of an input image based on its image values within a subset of the coordinate set X. We first introduce the generalized product between an image and a template. Let X ⊂ Rn be finite, X = { x1, ..., xm }. Let γ be an associative binary operation on the value set F. Then the global reduce operation Γ on FX induced by γ is defined by

Γ(a) = γx∈X a(x) = a(x1) γ a(x2) γ ... γ a(xm),

where a ∈ FX. Thus, Γ : FX → F.

Images and templates are combined by combining appropriate binary operations. Let F1, F2, and F be three value sets, and suppose ∘ : F1 × F2 → F and ∘′ : F2 × F1 → F are binary operations. If γ is an associative binary operation on F, a ∈ F1X, and t ∈ (F2X)Y, then the generalized backward template operation of a with t (induced by γ and ∘) is the binary operation ⊛ : F1X × (F2X)Y → FY defined by

a ⊛ t = { (y, b(y)) : b(y) = Γx∈X a(x) ∘ ty(x), y ∈ Y }.

If t ∈ (F2Y)X, then the generalized forward template operation of a with t is defined as

t ⊛ a = { (y, b(y)) : b(y) = Γx∈X tx(y) ∘′ a(x), y ∈ Y }.

Note that the input image a is an F_1-valued image on the coordinate set X, and the
output image b is an F-valued image on the coordinate set Y, regardless of which template
operation, forward or backward, is used. Templates can therefore be used to transform an
image on one coordinate set, with values in one value set, into an image on a completely different
coordinate set, whose values may be entirely different from the original image's.
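As a sketch of how the generalized backward operation might be computed, the reduce operation γ and the combining operation ∘ can simply be passed as parameters; the function name below is ours and hypothetical.

```python
from functools import reduce

def backward_product(a, t, gamma, circ):
    """b(y) = Gamma_{x} a(x) o t_y(x): combine pixel and weight with circ,
    then reduce with the associative operation gamma over the points of t_y."""
    return {y: reduce(gamma, [circ(a[x], ty[x]) for x in ty])
            for y, ty in t.items()}

a = {'x1': 1.0, 'x2': 4.0}
t = {'y1': {'x1': 2.0, 'x2': -2.0}}

b_max = backward_product(a, t, max, lambda u, v: u + v)                 # additive max
b_lin = backward_product(a, t, lambda u, v: u + v, lambda u, v: u * v)  # linear product
```

The same driver specializes to all three operations discussed below by varying γ and ∘.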

Only three special cases of the above generalized operation have been investigated in
detail, one by Gader [32] and the other two in this dissertation. Future research will certainly
discover other useful combinations. These three operations are denoted by ⊕, ⊞, and
⊗. The operation ⊕ is a linear one, and we refer the interested reader to other references
for recent research in this area [32,43,44]. The other two operations, ⊞ and ⊗, are non-linear,
and investigation of the structure they induce on images and templates is the focus of
this dissertation.

Since our main interest concerns the operations ⊞ and ⊗, we will omit the definition of
⊕ and refer the interested reader to another reference [31]. Let X ⊂ R^n be finite and
Y ⊂ R^m. Let a ∈ (R_{±∞})^X and t ∈ ((R_{±∞})^X)^Y. Then the backward additive max is defined as

  a ⊞ t = {(y, b(y)) : b(y) = ∨_{x∈X} a(x) + t_y(x), y ∈ Y},

where ∨_{x∈X} a(x) + t_y(x) = max{ a(x) + t_y(x) : x ∈ X }.

For t ∈ ((R_{±∞})^Y)^X we define the forward additive max transform by

  t ⊞ a = {(y, b(y)) : b(y) = ∨_{x∈X} t_x(y) + a(x), y ∈ Y}.

We use the usual extended arithmetic addition, r + (-∞) = -∞ + r = -∞ for all r ∈ R_{±∞}, to
define a(x) + t_y(x) everywhere.

For a ∈ (R^+_{±∞})^X and t ∈ ((R^+_{±∞})^X)^Y we define the backward multiplicative max

  a ⊗ t = {(y, b(y)) : b(y) = ∨_{x∈X} a(x)·t_y(x), y ∈ Y}.

The forward multiplicative max transform is given by

  t ⊗ a = {(y, b(y)) : b(y) = ∨_{x∈X} t_x(y)·a(x), y ∈ Y},

where t ∈ ((R^+_{±∞})^Y)^X.
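For concreteness, here is a minimal sketch of the backward additive max and multiplicative max on dictionary-valued images. We assume, as is typical of real image data, that no pixel value is +∞, so that Python's float arithmetic (r + -inf = -inf) agrees with the extended arithmetic above; all names are ours.

```python
from math import inf

def add_max(a, t):
    """Backward additive max: b(y) = max over x of a(x) + t_y(x)."""
    return {y: max(a[x] + ty[x] for x in a) for y, ty in t.items()}

def mult_max(a, t):
    """Backward multiplicative max: b(y) = max over x of a(x) * t_y(x)."""
    return {y: max(a[x] * ty[x] for x in a) for y, ty in t.items()}

a  = {'x1': 3.0, 'x2': 1.0}
t  = {'y1': {'x1': 0.0, 'x2': 2.0},
      'y2': {'x1': -inf, 'x2': 1.0}}   # -inf: x1 contributes nothing at y2
tm = {'y1': {'x1': 2.0, 'x2': 0.5}}    # positive weights, multiplicative case
```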

Recall that a lattice-ordered group, or l-group, is a group which is also a lattice. The
operation of addition (multiplication) on the l-group R (R^+) can be extended in a well-defined
manner to addition (multiplication) on R_{±∞} (R^+_{±∞}) by defining

  x × (-∞) = (-∞) × x = -∞,   x ∈ G ∪ {-∞}
  x × (+∞) = (+∞) × x = +∞,   x ∈ G ∪ {+∞}
  (-∞) × (+∞) = (+∞) × (-∞) = -∞,

where × ∈ {+, ·}, depending on whether G = R or G = R^+, respectively. Of course, the
elements +∞ and -∞ have no inverse under the operation + or ·, and hence R_{±∞} (or
R^+_{±∞}) is no longer a group. This is discussed in detail in section 1.2, where the notion of a
bounded lattice-ordered group, an extension of a lattice-ordered group with extended
arithmetic, is introduced. This provides for the value set R_{±∞} to be used in the definition of
the image-template operation ⊞, for example, and the value set R^+_{±∞} to be used in the
definition of the image-template operation ⊗.

We remark that for computational as well as theoretical purposes, we can restate the
above two convolutions with the new pixel value calculated only over the support of the
template t. If S_{-∞}(t_y) ≠ ∅, then ∨_{x∈X} a(x) + t_y(x) = ∨_{x∈S_{-∞}(t_y)} a(x) + t_y(x), and we have

  a ⊞ t = {(y, b(y)) : b(y) = ∨_{x∈S_{-∞}(t_y)} a(x) + t_y(x), y ∈ Y}.

Similarly, if S_{-∞}(t_y) ≠ ∅, then ∨_{x∈X} a(x)·t_y(x) = ∨_{x∈S_{-∞}(t_y)} a(x)·t_y(x), and

  a ⊗ t = {(y, b(y)) : b(y) = ∨_{x∈S_{-∞}(t_y)} a(x)·t_y(x), y ∈ Y}.

If in either case S_{-∞}(t_y) = ∅, then we define

  ∨_{x∈S_{-∞}(t_y)} a(x) + t_y(x) = -∞  and  ∨_{x∈S_{-∞}(t_y)} a(x)·t_y(x) = -∞.

We may therefore restrict our computation of the new pixel value to the infinite support of
t_y. This becomes particularly important when considering the mapping of transforms to certain
types of parallel architectures, as will be discussed in the introductory remarks to Part II
and in Chapter 5.
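In code, this restriction amounts to storing only the support of each weight function and supplying -∞ as the value of an empty maximum; a sketch under the same assumptions as before (names ours):

```python
from math import inf

def add_max_support(a, t):
    """b(y) = max over the support S_{-inf}(t_y) only; -inf when empty."""
    b = {}
    for y, ty in t.items():
        support = [x for x, v in ty.items() if v != -inf]
        b[y] = max((a[x] + ty[x] for x in support), default=-inf)
    return b

a = {'x1': 5.0, 'x2': 2.0}
t = {'y1': {'x1': 1.0, 'x2': -inf},   # support = {x1}
     'y2': {}}                        # empty support: b(y2) = -inf
```

Only the stored (finite) weights are visited, which is the point of the remark above.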

Because of the duality inherent in the two structures R_{±∞} and R^+_{±∞}, the operations ⊞
and ⊗ induce dual image-template operations, called additive minimum and multiplicative
minimum, respectively. They are defined by

  a ⊟ t = (t* ⊞ a*)*

and

  a ⊘ t = (t* ⊗ a*)*.

Equivalently, we have

  a ⊟ t = {(y, b(y)) : b(y) = ∧_{x∈X} a(x) +' t_y(x), y ∈ Y}
        = {(y, b(y)) : b(y) = ∧_{x∈S_{+∞}(t_y)} a(x) +' t_y(x), y ∈ Y}

  t ⊘ a = {(y, b(y)) : b(y) = ∧_{x∈X} a(x) ·' t_x(y), y ∈ Y}
        = {(y, b(y)) : b(y) = ∧_{x∈S_{+∞}(t_x)} a(x) ·' t_x(y), y ∈ Y},

where the dual operations +' and ·' are presented in section 1.2. As before, if S_{+∞}(t_y) = ∅,
we define

  ∧_{x∈S_{+∞}(t_y)} a(x) +' t_y(x) = +∞  and  ∧_{x∈S_{+∞}(t_x)} a(x) ·' t_x(y) = +∞.

The above definitions assume that the support S_{-∞}(t_y) is finite for each y ∈ Y. We
may extend the above definitions to continuous functions a and t_y on a compact set S_{-∞}(t_y).
This is well defined, since the sum or product of two continuous functions on a compact subset
of R^n is again continuous and therefore always attains a maximum. Extending the basic properties of
the image algebra operations involving ⊞ and ⊗ from the discrete case to the continuous
case should present little difficulty, and remains an open problem at this time.

1.1.5. Operations Between Generalized Templates

The pointwise operations of the value set F can also be extended to operations
between templates. For example, if F = R, then we have

  s + t = r, where r_y = s_y + t_y
  s · t = r, where r_y = s_y · t_y
  s ∨ t = r, where r_y = s_y ∨ t_y.

If F = R_{±∞} then we define s + t = r, where

  r_y(x) = s_y(x) + t_y(x)  if x ∈ S_{-∞}(s_y) ∩ S_{-∞}(t_y),
           s_y(x)           if x ∈ S_{-∞}(s_y) \ S_{-∞}(t_y),
           t_y(x)           if x ∈ S_{-∞}(t_y) \ S_{-∞}(s_y),
           -∞               otherwise.

Note that in the case where s and t have no values of -∞ or +∞ anywhere, the
definition of s + t on the value set R_{±∞} degenerates to the definition of s + t on the value
set R.

The generalized image-template operation ⊛ generalizes to a generalized template-template
product. Let W ⊂ R^n be finite, W = {w_1, ..., w_k}, and let γ be an associative
binary operation on the value set F with global reduce operation Γ on F^W. Let
F_1, F_2, and F be three value sets, and suppose ∘ : F_1 × F_2 → F is a binary operation. If
t ∈ ((F_1)^X)^W and s ∈ ((F_2)^W)^Y, then the generalized
template operation of t with s (induced by γ and ∘) is the binary operation
⊛ : ((F_1)^X)^W × ((F_2)^W)^Y → (F^X)^Y defined by

  t ⊛ s = r ∈ (F^X)^Y, where

  r_y(x) = Γ_{w∈W} t_w(x) ∘ s_y(w),  y ∈ Y, x ∈ X.

Note that if |X| = 1, then the definition of the generalized template operation of t and s
degenerates to the definition of the generalized backward template operation of the image
t ∈ (F_1)^W with the template s ∈ ((F_2)^W)^Y, and r ∈ F^Y. If |Y| = 1, then the definition of the
generalized template operation of t and s degenerates to the definition of the forward template
operation of the image s ∈ (F_2)^W with the template t ∈ ((F_1)^X)^W, where r ∈ F^X.

The specific cases ⊛ = ⊕, ⊞, or ⊗ thus generalize to operations between templates.
We give the definitions for ⊞ and ⊗, and refer the reader to another reference for
the definition of ⊕ [31]. Let t ∈ ((R_{±∞})^X)^Y and s ∈ ((R_{±∞})^W)^X. Then s ⊞ t = r ∈ ((R_{±∞})^W)^Y is
defined by

  r_y(w) = ∨_{x∈X} t_y(x) + s_x(w), where w ∈ W.

Again, as in the image-template operations, we may restrict our computation to a subset of
X. In particular, for y ∈ Y, we define the set

  S^y_{-∞}(w) = {x ∈ X : x ∈ S_{-∞}(t_y) and w ∈ S_{-∞}(s_x)}.

Then r = s ⊞ t ∈ ((R_{±∞})^W)^Y is defined by

  r_y(w) = ∨_{x∈S^y_{-∞}(w)} t_y(x) + s_x(w),

where we define ∨_{x∈S^y_{-∞}(w)} t_y(x) + s_x(w) = -∞ whenever S^y_{-∞}(w) = ∅.

The operation ⊗ has a similar situation. We have r = s ⊗ t ∈ ((R^+_{±∞})^W)^Y, which is
defined by

  r_y(w) = ∨_{x∈S^y_{-∞}(w)} t_y(x)·s_x(w),

where we define ∨_{x∈S^y_{-∞}(w)} t_y(x)·s_x(w) = -∞ whenever S^y_{-∞}(w) = ∅.

It follows from these definitions that the infinite support of the template r is

  S_{-∞}(r_y) = {w ∈ W : S^y_{-∞}(w) ≠ ∅}.
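A sketch of the template-template additive max product, with templates stored sparsely so that the reduce runs only over the set S^y_{-∞}(w); the helper names are ours.

```python
from math import inf

def template_add_max(s, t):
    """r_y(w) = max over x of t_y(x) + s_x(w), the max taken over the x for
    which both t_y(x) and s_x(w) are stored (the set S^y_{-inf}(w))."""
    r = {}
    for y, ty in t.items():
        r[y] = {}
        targets = {w for x in ty for w in s.get(x, {})}
        for w in targets:
            vals = [ty[x] + s[x][w] for x in ty if w in s.get(x, {})]
            r[y][w] = max(vals) if vals else -inf
    return r

t = {'y1': {'x1': 1.0, 'x2': 3.0}}
s = {'x1': {'w1': 0.0},
     'x2': {'w1': -2.0, 'w2': 4.0}}
```

This is exactly the composition that, under the functions Ψ and ν introduced later, becomes a max-plus matrix product.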

The definitions given in this section are the elemental ones. Further definitions that

play important parts in the theoretical development of the lattice structure of the image

algebra will be presented as needed.

We define the complementary operations of ⊞ and ⊗ for templates in the natural
way. Let t ∈ ((R_{±∞})^X)^Y and s ∈ ((R_{±∞})^W)^X. Then s ⊟ t ∈ ((R_{±∞})^W)^Y is defined by

  s ⊟ t = (t* ⊞ s*)*.

Similarly, for t ∈ ((R^+_{±∞})^X)^Y and s ∈ ((R^+_{±∞})^W)^X, s ⊘ t ∈ ((R^+_{±∞})^W)^Y is defined by

  s ⊘ t = (t* ⊗ s*)*.

We would like to remark upon one notational deviation between the Overview's [31]
definition for the ⊗ operations and the one presented here. Let R_{0+} =
{r ∈ R : r ≥ 0} ∪ {+∞}. In the Overview, for a ∈ (R_{0+})^X and t ∈ ((R_{0+})^X)^Y, the
backward multiplicative max transform is defined as

  a ⊗ t = {(y, b(y)) : b(y) = ∨_{x∈X} a(x)·t_y(x), y ∈ Y},

which is equivalent to

  a ⊗ t = {(y, b(y)) : b(y) = ∨_{x∈S(t_y)} a(x)·t_y(x), y ∈ Y},

with b(y) = 0 if S(t_y) = ∅. The difference between the definition given earlier and this one
is the value set, namely R^+_{±∞} in this document and R_{0+} in the Overview. The number 0 acts
as a lower bound in R_{0+} exactly as -∞ acts as a lower bound in R^+_{±∞}. Multiplication of the
element 0 with the element ∞ follows the same rules as multiplication of the element -∞
with the element ∞ as given on page 18. In other words, the element 0 can symbolically replace
the element -∞. The main advantage of using the number 0 instead of the symbol
-∞ is ease of machine and software implementation. Most real image processing data
will have no values corresponding to the symbol +∞, and quite often have non-negative
values, including 0's. Using 0 as the bottom element enables that value to be represented
easily in the computer, while special programming methods would have to be considered to
represent the symbol -∞. For purposes which will become clear in the course of this document,
we have remained with the notation R^+_{±∞}. In implementing any of the ideas in this
dissertation, if the value set at hand is R^+_{±∞}, it should be clear that the symbol -∞ can be
replaced with a 0 and S_{-∞}(t_y) replaced by S(t_y), so that representation in computers may
be more easily accomplished.
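The interchangeability of the two bottom elements can be checked numerically: the exponential maps -∞ to 0 (in Python, exp(-inf) returns 0.0), carrying the additive max over R_{±∞} to the multiplicative max over nonnegative values. A small sketch of our own:

```python
from math import exp, inf, isclose

def add_max(a, ty):
    """Additive max with bottom element -inf."""
    return max(av + tv for av, tv in zip(a, ty))

def mult_max(a, ty):
    """Multiplicative max with bottom element 0."""
    return max(av * tv for av, tv in zip(a, ty))

a  = [2.0, -inf, 1.0]   # -inf marks "no value"
ty = [0.5, 3.0, -inf]

lhs = exp(add_max(a, ty))                                   # work in (R, +, max)
rhs = mult_max([exp(v) for v in a], [exp(v) for v in ty])   # exp(-inf) == 0.0
```

The two sides agree, so a program may store 0 wherever the theory says -∞.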

1.2 Minimax Algebra

The last 40 years have seen a number of different authors discover, apparently independently,
a non-linear algebraic structure, which each has used to solve a different type of problem.
The operands of this algebra are the real numbers, with -∞ (or +∞) adjoined, and
the two binary operations are addition and maximum (or minimum). The extension of this

structure to matrices was formalized mathematically, in the environment in which the above

problems were posed, by Cuninghame-Green in his book Minimax Algebra [38]. It is well

known that the structure of R with the operations + and ∨ is a semi-lattice-ordered
group, and that (R, ∨, ∧, +) is a lattice-ordered group, or an l-group [35]. Viewing R_{±∞} as
a set with the two binary operations + and ∨, and then investigating the structure of the
set of all n × n matrices with values in R_{±∞}, leads to an entirely different perspective of a

class of non-linear operators. These ideas were applied by Shimbel [45] to communications

networks. Two authors, Cuninghame-Green [36,37] and Giffler [46] applied them to the

problem of machine-scheduling. Others [47,48,49,50] have discussed their usefulness in appli-

cations to shortest path problems in graphs. Cuninghame-Green gives several examples

throughout his book [38], primarily in the field of operations research. Another useful appli-

cation, to image algebra, was again independently developed by G. X. Ritter et al. [51].

In fact, the notion of a matrix product can be generalized to what is called the general-

ized matrix product [39], whose definition is given below.

Let F denote a set of numbers. Let f and g be functions from F × F into F. For
simplicity, assume the binary operation f to be associative. Let F^{mp} denote the set of
all m × p matrices with values in F, and let (a_ij) = A ∈ F^{mp} and (b_jk) = B ∈ F^{pn}.
Define f·g to be the function from F^{mp} × F^{pn} into F^{mn} given by

  (f·g)(A, B) = C,

where c_ik = (a_i1 g b_1k) f (a_i2 g b_2k) f ··· f (a_ip g b_pk), for i = 1, ..., m, k = 1, ..., n, and f
and g are viewed as binary operations.

Thus, if f denotes addition and g multiplication, then (f·g)(A, B) is the ordinary matrix product
of matrices A and B. Cuninghame-Green develops the setting for a formal matrix calculus
based on the two binary operations + and ∨ of the extended real numbers, analogous to
linear algebra, which uses the two operations of multiplication and arithmetic sum. He terms
this matrix theory minimax matrix theory. The development of the theory is performed in

the abstract, with an eye towards applications for matrices with values in the set R_{±∞}. The
importance of Cuninghame-Green's work to the image algebra is that the
minimax matrix algebra is embedded in the image algebra not only for the set R_{±∞}, but also for the set
R^+_{±∞}. The set (R^+, ∨, ∧, ·) is an l-group also. An image algebra transform using either ⊞
or ⊗ can thus be viewed as a matrix transform in the minimax algebra for the respective
case of R_{±∞} or R^+_{±∞}. This completes the mathematical identification of the three main
subalgebras in the image algebra. The linear transforms were classified by Gader [32], who
showed that linear algebra is embedded into image algebra. As a result of each embedding
above, the full power of the respective mathematical theory can be applied to solving problems
in image processing, as long as the image processing problem can be formulated using the
image algebra operations ⊕, ⊞, or ⊗. Since it has been formally proven that the image
algebra can represent all image-to-image transforms (see section 1.1), the embeddings are
very useful to have.
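The generalized matrix product transcribes directly into code: with f = + and g = · it yields the ordinary matrix product, and with f = max and g = + it yields the minimax (max-plus) product. A sketch of our own (the function name is hypothetical):

```python
from functools import reduce

def general_product(f, g, A, B):
    """C[i][k] = (A[i][0] g B[0][k]) f (A[i][1] g B[1][k]) f ...,
    with f and g supplied as binary operations."""
    p = len(B)                       # inner dimension
    return [[reduce(f, [g(A[i][j], B[j][k]) for j in range(p)])
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

ordinary = general_product(lambda u, v: u + v, lambda u, v: u * v, A, B)
maxplus  = general_product(max,               lambda u, v: u + v, A, B)
```

Only f and g change between the two algebras; the looping structure is identical, which is the content of the definition above.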

The rest of this section is devoted to introducing the basic notions, structure, and properties
of the minimax algebra.

1.2.1. Basic Definitions and Notation

Let F be a semi-lattice-ordered semi-group with semi-lattice operation ∨ and semi-group
operation ×. Thus, F satisfies

  x ∨ (y ∨ z) = (x ∨ y) ∨ z    (A1)
  x ∨ y = y ∨ x                (A2)
  x ∨ x = x                    (A3)

as it is a semi-lattice, as well as

  x × (y × z) = (x × y) × z    (A4)

as it has an associative semi-group operation ×, and

  x × (y ∨ z) = (x × y) ∨ (x × z)    (A5)
  (y ∨ z) × x = (y × x) ∨ (z × x)    (A6)

as it is an ordered semi-group. We call this structure a belt, in the vein of rings. The operation
∨ is called an addition, and the operation × a multiplication. We shall also call a semi-lattice
an s-lattice.

Suppose the belt F also satisfies the duals of axioms A1 through A6, where ×' is another
binary multiplication:

  x ∧ (y ∧ z) = (x ∧ y) ∧ z                (A1')
  x ∧ y = y ∧ x                            (A2')
  x ∧ x = x                                (A3')
  x ×' (y ×' z) = (x ×' y) ×' z            (A4')
  x ×' (y ∧ z) = (x ×' y) ∧ (x ×' z)       (A5')
  (y ∧ z) ×' x = (y ×' x) ∧ (z ×' x).      (A6')

Here, ×' is called a dual multiplication, and ∧ is called a dual addition. The multiplication
and dual multiplication are not assumed to be commutative.

If in addition to the above 12 axioms F satisfies the absorption axiom

  x ∨ (y ∧ x) = x ∧ (y ∨ x) = x,

then F is a belt with duality. If the multiplication × and dual multiplication ×' coincide,
then we call the multiplication self-dual. A belt with duality and self-dual multiplication
corresponds to a lattice-ordered semi-group, or l-semi-group, in lattice theory.

Let (F_1, ∨) and (F_2, ∨) be two s-lattices. A function f : F_1 → F_2 is an s-lattice
homomorphism if

  f(x ∨ y) = f(x) ∨ f(y)

for all x, y ∈ F_1. If F_1 and F_2 are belts and f : F_1 → F_2 is an s-lattice homomorphism which
also satisfies

  f(x × y) = f(x) × f(y)

for all x, y ∈ F_1, then we say that f is a belt homomorphism. The following is an example of a
belt isomorphism. Define f : R → R^+ by

  f(x) = e^x.

Then f(x ∨ y) = f(x) ∨ f(y), and f(x + y) = f(x)·f(y). It is trivial to show that f is a belt
isomorphism.

The belts R and R^+ are commutative belts; that is, the multiplication × commutes.
Each also has an identity element under its multiplication, namely 0 for R and 1 for R^+.
Because they are groups, each element r ∈ F has a unique multiplicative inverse; we call
such a belt a division belt, by analogy with division rings. A belt has a null element if there
exists an element θ ∈ F such that

  ∀ x ∈ F,  x ∨ θ = x  and  x × θ = θ × x = θ.

The belts (R_{-∞}, ∨, +) and (R^+_{-∞}, ∨, ·) each have the element -∞ as null element.

A division belt with distinct operations × and ∨ and with duality corresponds to a
lattice-ordered group, or l-group. In fact, if (F, ∨, ×) is a division belt with distinct operations ∨
and ×, then by defining

  x ∧ y = (x^{-1} ∨ y^{-1})^{-1},  ∀ x, y ∈ F,    (1-2)

we have introduced a second (dual) s-lattice operation ∧ such that (F, ∨, ∧) becomes a (distributive)
lattice [35]. In our terms, the division belt F acquires a duality with a self-dual multiplication.
Our main interest will be the l-groups (F, ∨, ×, ∧, ×') = (R, ∨, +, ∧, +)
and (R^+, ∨, ·, ∧, ·), with · representing real multiplication. From the above discussion, it
follows that (R, ∨, +, ∧, +) and (R^+, ∨, ·, ∧, ·) are isomorphic as l-groups.
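Equation (1-2) is easy to verify numerically for the two l-groups of interest: in (R, ∨, +) the group inverse of x is -x, and in (R^+, ∨, ·) it is 1/x; in both cases the induced dual addition is the ordinary minimum. A sketch (names ours):

```python
def dual_add_additive(x, y):
    """x ^ y = (x^{-1} v y^{-1})^{-1} in (R, v, +): the inverse is negation."""
    return -max(-x, -y)

def dual_add_multiplicative(x, y):
    """x ^ y = (x^{-1} v y^{-1})^{-1} in (R^+, v, *): the inverse is the reciprocal."""
    return 1.0 / max(1.0 / x, 1.0 / y)
```

Both functions return min(x, y) on their respective domains, exhibiting the self-dual multiplication.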

An arbitrary l-group F having two distinct binary operations ∨ and × can be extended
in the following way. We adjoin the elements +∞ and -∞ to the set F and denote this
new set by F_{±∞}, where -∞ < x < +∞, ∀ x ∈ F. We define a multiplication and a dual
multiplication in F_{±∞} by: if x, y ∈ F, then x × y is already defined. Otherwise,

  x × (-∞) = (-∞) × x = -∞,    x ∈ F ∪ {-∞}
  x × (+∞) = (+∞) × x = +∞,    x ∈ F ∪ {+∞}
  x ×' (-∞) = (-∞) ×' x = -∞,  x ∈ F ∪ {-∞}
  x ×' (+∞) = (+∞) ×' x = +∞,  x ∈ F ∪ {+∞}
  (-∞) × (+∞) = (+∞) × (-∞) = -∞
  (-∞) ×' (+∞) = (+∞) ×' (-∞) = +∞.

The element -∞ acts as a null element in the entire system (F_{±∞}, ∨, ×) and the element
+∞ acts as a null element in the entire system (F_{±∞}, ∧, ×'). However, the multiplications
× and ×' are asymmetric between the elements -∞ and +∞. The elements in F are called
the finite elements.

We call such a system (F_{±∞}, ∨, ×, ∧, ×') a bounded l-group, and F is called the group of
the bounded l-group F_{±∞}.

The two bounded l-groups (R_{±∞}, ∨, +, ∧, +') and (R^+_{±∞}, ∨, ·, ∧, ·') will be our main
concern. Another bounded l-group of interest is the 3-element bounded l-group with group
{0}, denoted by F_3. Note that the boolean algebra ({-∞, 0}, ∨, ∧) is embedded in F_3, with
OR = ∨ (maximum), AND = ∧ (minimum), FALSE = -∞, and TRUE = 0. It is simple to
check that the familiar truth tables hold.
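Checking the truth tables is a one-line exercise; here FALSE = -∞, TRUE = 0, OR = ∨, and AND = ∧ (a sketch of our own):

```python
from math import inf

FALSE, TRUE = -inf, 0.0   # the two-element boolean algebra inside F3

def OR(p, q):             # OR is the maximum
    return max(p, q)

def AND(p, q):            # AND is the minimum
    return min(p, q)
```

For example, OR(FALSE, TRUE) gives TRUE and AND(FALSE, TRUE) gives FALSE, exactly as in the familiar tables.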

Let (F, ∨, ×) be a belt, and let (T, ∨) be an s-lattice. Suppose we have a right multiplication
of elements of T by elements of F:

  x × λ ∈ T,  ∀ pairs x, λ with x ∈ T, λ ∈ F.

We call (T, ∨) a right s-lattice space over (F, ∨, ×), or just say T is a space over F, if the
following axioms are satisfied for all x, y ∈ T and for all λ, μ ∈ F:

  (T, ∨) is an s-lattice
  (x × λ) × μ = x × (λ × μ)
  (x ∨ y) × λ = (x × λ) ∨ (y × λ)
  x × (λ ∨ μ) = (x × λ) ∨ (x × μ)

and, if F has an identity element φ,

  x × φ = x.

Such spaces play the role of vector spaces in the minimax theory. If T and F are
known, then we shall simply say that T is a space.

A subspace is a subset of a space which is itself a space over the belt F.

Let (S, ∨) and (T, ∨) be given spaces over a belt (F, ∨, ×). An s-lattice homomorphism
g : (S, ∨) → (T, ∨) is called right linear (over F) if

  g(x × λ) = g(x) × λ,  ∀ x ∈ S, ∀ λ ∈ F.

We denote the set of all right-linear homomorphisms from S to T over F by
Hom_F(S, T). That is,

  Hom_F(S, T) = {g : S → T : g is an s-lattice homomorphism and g(x × λ) = g(x) × λ, ∀ x ∈ S, ∀ λ ∈ F}.

Let (F, ∨, ×) be a belt and (T, ∨) an s-lattice, and suppose we have defined a left
multiplication of elements of T by elements of F:

  λ × x ∈ T,  ∀ pairs λ, x with x ∈ T, λ ∈ F.

The left variants of the above five axioms are easily stated. We call a system satisfying
those left axioms a left space over F. This allows us to define a two-sided space. A two-sided
space is a triple (L, T, R) such that

  L is a belt and T is a left space over L;
  R is a belt and T is a right space over R;
  ∀ λ ∈ L, ∀ x ∈ T, and ∀ ρ ∈ R:  λ × (x × ρ) = (λ × x) × ρ.

Let (F, ∨, ×) be a belt. An important class of spaces over F is the class of function
spaces. Here, the s-lattice (T, ∨) is (F^U, ∨). Such spaces are naturally two-sided. We shall
only be interested in the case where |U| = n ∈ Z^+. A space (T, ∨) is then of the form (F^n, ∨), and
hence our spaces F^n are spaces of n-tuples.

When discussing conjugacy in linear operator theory, two approaches are commonly
used. One defines the conjugate of a given space S as a special set S* of linear, scalar-valued
functions defined on S. The other involves defining an involution taking x ∈ S to x* ∈ S*
which satisfies certain axioms. (Recall that a function f is an involution if f(f(x)) = x.) The
situation is slightly more complicated in the case of lattice transforms.

Let (S, ∨, ×) and (T, ∧, ×') be given belts. We say that (T, ∧, ×') is conjugate to
(S, ∨, ×) if there is a function g : S → T such that

  g is bijective                              (C1)
  ∀ x, y ∈ S, g(x ∨ y) = g(x) ∧ g(y)          (C2)
  ∀ x, y ∈ S, g(x × y) = g(y) ×' g(x).        (C3)

In lattice theory, g is called a dual isomorphism. Note that conjugacy is a symmetric
relation. If (S, ∨, ∧) is an s-lattice with duality satisfying the first two axioms, then we say
that S is self-conjugate. If (S, ∨, ×, ∧, ×') is a belt with duality, we say that (S, ∨, ×, ∧, ×') is
self-conjugate if (S, ∧, ×') is conjugate to (S, ∨, ×).

In particular, every division belt is self-conjugate under the bijection x* = x^{-1}, and
every bounded l-group is self-conjugate under the bijection (-∞)* = +∞, (+∞)* = -∞, and
x* = x^{-1} if x is finite.

1.2.2. Matrix Algebra

We now present the extension of the belt operations to matrices. Let (F, ∨, ×) be a
belt. Let M_{mn} be the set of all m × n matrices with values in the set F, and let
s = (s_ij), t = (t_ij) ∈ M_{mn}. Then we define

  (s_ij) ∨ (t_ij) = (s_ij ∨ t_ij),

and for (s_ij) ∈ M_{mh}, (t_jk) ∈ M_{hn}, we have

  (s_ij) × (t_jk) = (∨_{j=1}^{h} s_ij × t_jk) ∈ M_{mn}.

Suppose s ∈ M_{mn} and t ∈ M_{hq}. We say that s and t are conformable for addition
whenever both m = h and n = q, and conformable for multiplication whenever n = h. For
the remainder of this presentation, we use the notation F^n and M_{mn} as defined above. Also,
we call an n-tuple or a matrix finite if all its elements are finite, i.e., not equal to either
+∞ or -∞.

If (F, ∨, ×, ∧, ×') is a belt with duality, then we say that a space (T, ∨) over F has a
duality if

  a dual addition ∧ is defined such that (T, ∨, ∧) is an s-lattice with duality;
  (T, ∧) is a space over the belt (F, ∧, ×').

We also have a dual matrix addition and a dual matrix multiplication defined for matrices over a belt
with duality:

  (s_ij) ∧ (t_ij) = (s_ij ∧ t_ij),

and for (s_ij) ∈ M_{mh}, (t_jk) ∈ M_{hn}, we have

  (s_ij) ×' (t_jk) = (∧_{j=1}^{h} s_ij ×' t_jk) ∈ M_{mn},

with the expressions conformable for dual addition ∧ and conformable for dual multiplication
×' used in the obvious way.

Let (F, ∨, ×) be a belt and let M_{pq} denote the set of p × q matrices with values in F.
The following are some basic properties that are proven in [38].

(1) (M_{mn}, ∨) is an s-lattice and (M_{np}, ∨) is a function space over (F, ∨, ×);

(2) (M_{nn}, ∨, ×) is a belt;

(3) (M_{np}, ∨) is a left space over the belt (M_{nn}, ∨, ×);

(4) M_{np} is a right space over the belt F;

(5) Scalar multiplication of a matrix s by an element λ ∈ F is defined by

  (s_ij) × λ = (s_ij × λ)
  λ × (s_ij) = (λ × s_ij)

for all (s_ij) ∈ M_{np}, λ ∈ F;

(6) For all s ∈ M_{mn}, t, u ∈ M_{np}, λ ∈ F,

  s × (t ∨ u) = (s × t) ∨ (s × u)
  s × (t × λ) = (s × t) × λ.

Since the s-lattice (M_{1n}, ∨) is isomorphic to the s-lattice F^n, we have that F^n is a function
space over F as well as a space over M_{nn}. This mimics the classical role of matrices as linear
transformations of spaces of n-tuples.

Two important matrices in our present setting are the identity matrix and the null
matrix. Suppose the belt F has identity and null elements φ and -∞, respectively. We
define the identity matrix e ∈ M_{nn} by e = (e_ij), where e_ij = φ if i = j and e_ij = -∞ if
i ≠ j, and the null matrix Φ ∈ M_{mn} by Φ_ij = -∞ for all i, j. Thus we have, ∀ s ∈ M_{nn}
and for Φ ∈ M_{nn},

  e × s = s × e = s

and

  s × Φ = Φ × s = Φ.

In the bounded l-group R_{±∞} the identity matrix e has 0 in each diagonal entry and -∞
elsewhere, and in R^+_{±∞} it has 1 in each diagonal entry and -∞ elsewhere.
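In the max-plus case these matrices, and the identities they satisfy, can be checked directly; a sketch of our own, with lists of lists as matrices:

```python
from math import inf

def maxplus(A, B):
    """Max-plus matrix product: C[i][k] = max_j A[i][j] + B[j][k]."""
    return [[max(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    """The max-plus identity e: 0 on the diagonal, -inf elsewhere."""
    return [[0.0 if i == j else -inf for j in range(n)] for i in range(n)]

def null(n):
    """The null matrix: -inf everywhere."""
    return [[-inf] * n for _ in range(n)]

s = [[1.0, 2.0],
     [3.0, 4.0]]
```

Then maxplus(identity(2), s) equals s and maxplus(s, null(2)) equals null(2), mirroring e × s = s and s × Φ = Φ.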

Conjugacy extends to matrices if the underlying value set is itself a self-conjugate belt.
This is stated in the next proposition.

Proposition 1.1 [38]. If (F, ∨, ×, ∧, ×') is a self-conjugate belt, then (M_{nn}, ∨, ×, ∧, ×') is
a self-conjugate belt.

In linear algebra, we characterize linear transformations of vector spaces entirely in
terms of matrices. Are we able to do a similar classification here? The following results give
necessary and sufficient conditions for this to be the case.

Theorem 1.2 [38]. Let F be a belt which has an identity element φ with respect to × and a
null element θ with respect to ∨. Then for all integers m, n ≥ 1, M_{mn} is isomorphic to
Hom_F(F^n, F^m).

Corollary 1.3 [38]. Let F be a belt, and let n ≥ 1 be a given integer. Then a necessary
and sufficient condition that M_{mn} be isomorphic to Hom_F(F^n, F^m) for all integers n, m ≥ 1 is
that F have an identity element φ with respect to × and a null element θ with respect to ∨.

We call a matrix s ∈ M_{mn} a lattice transform.

Many of the results that were stated in Cuninghame-Green's book can be viewed in the context

of a dual lattice-ordered semi-group, which has been extensively researched [35]. However,

we wish to study the structure from a different perspective. The extension of the belt opera-

tions to matrices allows us to view matrices as operators on spaces of n-tuples, in a way simi-

lar to vector-space transformations. These operators are non-linear due to the lattice


structure of the underlying set F. Thus, we may study this particular class of non-linear

transforms in a mathematically rigorous setting, and, since an image can be viewed as a vec-

tor and a template as a matrix (as will be shown in Chapter 2), apply results from the

minimax matrix theory directly to solve image processing problems. For example, decompo-

sition of matrices corresponds to decomposition of templates. This particular application is

discussed in Chapter 5.


In his Ph.D. dissertation, P. Gader showed that linear algebra can be embedded into
the image algebra [32]. One very powerful implication of this is that all the tools of linear
algebra are directly applicable to solving problems in image processing whenever the image
algebra operation ⊕ is involved. We now show an embedding of the minimax algebra into
image algebra for the two cases where the belts are R_{±∞} and R^+_{±∞}. We employ the same
functions Ψ and ν as used by Gader in his dissertation.

Let X and Y be finite arrays, with |X| = m and |Y| = n. Assume the points of X are
labelled lexicographically x_1, x_2, ..., x_m. Assume a similar labelling for Y: Y = {y_1, y_2, ..., y_n}.
Let R_{±∞} have its usual meaning. Let R^{1m}_{±∞} = {(x_1, ..., x_m) : x_i ∈ R_{±∞}}. That is, R^{1m}_{±∞} is
the set of row vectors of m-tuples with values in R_{±∞}. Let a ∈ (R_{±∞})^X, let M_{mn} denote the set of
m × n matrices with values in R_{±∞}, and define ν : (R_{±∞})^X → R^{1m}_{±∞} by

  ν(a) = (a(x_1), ..., a(x_m)).

Define Ψ : ((R_{±∞})^X)^Y → M_{mn} by

  Ψ(t) = M_t = (p_ij), where p_ij = t_{y_j}(x_i).

Note that the j-th column of M_t is simply (ν(t_{y_j}))', the prime denoting transpose.

In the following lemmas, we assume that |X| = m, |Y| = n, and |W| = l. We claim
the following:

Lemma 2.1. ν(a ⊞ t) = ν(a) × Ψ(t), for t ∈ ((R_{±∞})^X)^Y, a ∈ (R_{±∞})^X.

Lemma 2.2. ν(a ∨ b) = ν(a) ∨ ν(b), a, b ∈ (F_{±∞})^X, F ∈ {R, R^+}.

Lemma 2.3. Ψ(s ⊞ t) = Ψ(s) × Ψ(t), for s ∈ ((R_{±∞})^X)^W, t ∈ ((R_{±∞})^W)^Y.

Lemma 2.4. Ψ(s ∨ t) = Ψ(s) ∨ Ψ(t), s, t ∈ ((F_{±∞})^X)^Y, F ∈ {R, R^+}.

The proofs are given below.
The proofs are given below.

Proof to Lemma 2.1.

We must show that

(a lI t)(yk) = (v(a)X 'k(t))k.

First note that v(a M t) is a 1 x n row vector, as is v(a) X *k(t). We have

(a 1 t)(Yk) = V a(x) + ty(x) = a(xi) + ty(xi)-
xEX k i-1

Also, (v(a)X '(t))k (v(a))j +( (t))jk = a(xj)+tyk().


Proof to Lemma 2.2.

At location Xk, the image a V b has value a(xk) V b(Xk). At location k, the row vec-

tor v(a) V v(b) has value (v(a))k V (v(b))k = a(xk) V b(Xk).


Proof to Lemma 2.3.

Here, s E (RX)w and t (RW)' implies

s l t = rE (Rx)',

r(xi) = V ty(w) + s (Xi) Vty(wk) + wk(Xi)
WNow, let() (t) = have
Now, let *(s) x *(t) = u EMm. We have

uij = ((s))k + ((t))kj =V swkX ) + ,(W = V tj(Wk) + Swk(X)


Proof to Lemma 2.4.

Here, s, t C (R Then

(s V t),(xi) = s,(xi) V t(xi),

(V(s)v *(t))ij = ( P(s))ij V(, (t))ij = 9s(xi)V t(xi).
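Lemma 2.1 can be verified on a small example of our own devising: enumerate X and Y lexicographically, build ν(a) and Ψ(t) with p_ij = t_{y_j}(x_i), and compare the direct evaluation of a ⊞ t with the max-plus vector-matrix product.

```python
X = ['x1', 'x2']
Y = ['y1', 'y2', 'y3']

a = {'x1': 1.0, 'x2': 4.0}
t = {'y1': {'x1': 0.0, 'x2': -2.0},
     'y2': {'x1': 3.0, 'x2': 1.0},
     'y3': {'x1': -1.0, 'x2': 0.0}}

nu_a  = [a[x] for x in X]                    # the row vector nu(a)
Psi_t = [[t[y][x] for y in Y] for x in X]    # p_ij = t_{y_j}(x_i), an m x n matrix

# Direct evaluation of the additive max a [+] t:
direct = [max(a[x] + t[y][x] for x in X) for y in Y]

# The max-plus product nu(a) x Psi(t):
via_matrix = [max(nu_a[i] + Psi_t[i][k] for i in range(len(X)))
              for k in range(len(Y))]
```

The two computations produce the same row vector, which is exactly the content of the lemma.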
In order to prove the isomorphism theorem, we will use the following lemma.

Lemma 2.5. Ψ(t*) = (Ψ(t))*, t ∈ ((F_{±∞})^X)^Y, where F denotes either R or R^+. In this
particular instance we let t* denote the conjugate template of t ∈ ((F_{±∞})^X)^Y.

Proof: Let s = t*. Then s ∈ ((F_{±∞})^Y)^X, and

  Ψ(t*) = Ψ(s) = M_s = (p_ij), where p_ij = s_{x_j}(y_i) = (t*)_{x_j}(y_i) = [t_{y_i}(x_j)]*,

  Ψ(t) = M_t = (q_ij), where q_ij = t_{y_j}(x_i).

Thus

  p_ij = [t_{y_i}(x_j)]* = [q_ji]*,

  M_s = (p_ij) = ([q_ji]*) = (q_ij)* = (M_t)*,

and we have Ψ(t*) = M_s = (M_t)* = (Ψ(t))*.

The following theorem, along with Lemmas 2.1 through 2.4, shows how the embedding
of the minimax algebra into the image algebra is accomplished.

Theorem 2.6. For a finite array X, with |X| = m,

  {(R_{±∞})^X, ∨, ∧; ((R_{±∞})^X)^X, ∨, ⊞, ∧, ⊟; ⊞, ⊟} is isomorphic to

  {R^{1m}_{±∞}, ∨, ∧; M_{mm}, ∨, ×, ∧, ×'; ×, ×'},

where M_{mm} is the set of all m × m matrices with entries in the bounded l-group R_{±∞}.

Proof: By Lemma 2.1, ν preserves image-template multiplication, and by Lemma 2.2, ν
preserves the image-image pointwise maximum operation. By Lemmas 2.3 and 2.4,
for X = Y = W, Ψ preserves the operations ⊞ and ∨ between templates. Let
1 ∈ ((R_{±∞})^X)^X denote the identity template defined by

  1_y(x) = 0 if y = x, and 1_y(x) = -∞ otherwise.

It is trivial to show that Ψ(1) = e ∈ M_{mm}, the identity matrix in M_{mm}.

We now show that the operations ⊟ and ∧ are also preserved under Ψ. It is not
difficult to show that Ψ(s ∧ t) = Ψ(s) ∧ Ψ(t). Let r = s ∧ t. Then
Ψ(r) = M_r = (m_ij) = (r_{y_j}(x_i)), where r_{y_j}(x_i) = s_{y_j}(x_i) ∧ t_{y_j}(x_i). Thus,

  Ψ(s) ∧ Ψ(t) = M_s ∧ M_t = (s_{y_j}(x_i)) ∧ (t_{y_j}(x_i)) = (s_{y_j}(x_i) ∧ t_{y_j}(x_i)) = (r_{y_j}(x_i)).

By definition, s ⊟ t = (t* ⊞ s*)*, and, using Lemma 2.5 with F = R, Lemma 2.3,
and property C3, we have

  Ψ(s ⊟ t) = Ψ((t* ⊞ s*)*) = [Ψ(t* ⊞ s*)]* = [Ψ(t*) × Ψ(s*)]*
           = (Ψ(s*))* ×' (Ψ(t*))* = Ψ(s) ×' Ψ(t).

Thus, Ψ(s ⊟ t) = Ψ(s) ×' Ψ(t).

It is straightforward to see that ν is one-to-one and onto R^{1m}_{±∞}. To show that Ψ is
one-to-one and onto M_{mn}, let s, t ∈ ((R_{±∞})^X)^Y and suppose that Ψ(s) = Ψ(t). Then
(Ψ(s))_{ij} = (M_s)_{ij} = s_{y_j}(x_i) = t_{y_j}(x_i) = (M_t)_{ij} = (Ψ(t))_{ij}, and, thus,
s_{y_j}(x_i) = t_{y_j}(x_i) for all j = 1, ..., n and for all i = 1, ..., m. So Ψ is one-to-one,
as s = t. Let M = (m_ij) ∈ M_{mn}. Define t ∈ ((R_{±∞})^X)^Y by
t_{y_j}(x_i) = m_ij. Then Ψ(t) = M. Setting m = n, we see that Ψ is one-to-one and onto
M_{mm}.
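The identity Ψ(s ⊟ t) = Ψ(s) ×' Ψ(t) says that the dual (min-plus) matrix product is the conjugate of the max-plus product. For finite matrices, where conjugation is negation followed by transposition, this is checkable directly (a sketch of our own):

```python
def maxplus(A, B):
    """Max-plus product: C[i][k] = max_j A[i][j] + B[j][k]."""
    return [[max(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def minplus(A, B):
    """Dual (min-plus) product: C[i][k] = min_j A[i][j] + B[j][k]."""
    return [[min(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def conj(A):
    """A*: negate and transpose (finite entries only)."""
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

A = [[1.0, -2.0], [0.0, 3.0]]
B = [[2.0, 1.0], [4.0, 0.0]]
```

One finds minplus(A, B) equal to conj(maxplus(conj(B), conj(A))), the matrix form of s ⊟ t = (t* ⊞ s*)*.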



Thus, the minimax algebra with the bounded l-group R_{±∞} is embedded into image algebra
by the functions Ψ^{-1} and ν^{-1}. As the bounded l-group R^+_{±∞} is isomorphic to the
bounded l-group R_{±∞}, the minimax algebra with the bounded l-group R^+_{±∞} is also embedded
into the image algebra. In this case, the matrix operation × corresponds to the image algebra
operation ⊗. The isomorphism result is stated in Theorem 2.9.

Let X and Y be finite arrays as before. Let R^+_{±∞} have its usual meaning, let a ∈ (R^+_{±∞})^X,
let M_{mn} denote the set of m × n matrices with values in R^+_{±∞}, and let (R^+_{±∞})^{1m} =
{(x_1, x_2, ..., x_m) : x_i ∈ R^+_{±∞}}. Define ν : (R^+_{±∞})^X → (R^+_{±∞})^{1m} in the usual way by

  ν(a) = (a(x_1), ..., a(x_m)).

Define Ψ : ((R^+_{±∞})^X)^Y → M_{mn} as before by

  Ψ(t) = M_t = (p_ij), where p_ij = t_{y_j}(x_i).

In the following lemmas, we assume that |X| = m, |Y| = n, and |W| = l. We claim
the following, for a, b ∈ (R^+_{±∞})^X:

Lemma 2.7. ν(a ⊗ t) = ν(a) × Ψ(t), for t ∈ ((R^+_{±∞})^X)^Y.

Lemma 2.8. Ψ(s ⊗ t) = Ψ(s) × Ψ(t), for s ∈ ((R^+_{±∞})^X)^W, t ∈ ((R^+_{±∞})^W)^Y.

Proof of Lemma 2.7.

We must show that

  (a ⊗ t)(y_k) = (ν(a) × Ψ(t))_k.

We have

  (a ⊗ t)(y_k) = ∨_{x∈X} a(x)·t_{y_k}(x) = ∨_{i=1}^{m} a(x_i)·t_{y_k}(x_i).

Also, (ν(a) × Ψ(t))_k = ∨_{j=1}^{m} (ν(a))_j·(Ψ(t))_{jk} = ∨_{j=1}^{m} a(x_j)·t_{y_k}(x_j).


Proof of Lemma 2.8.

Here, s ∈ ((R^+_{±∞})^X)^W and t ∈ ((R^+_{±∞})^W)^Y implies

  s ⊗ t = r ∈ ((R^+_{±∞})^X)^Y,

  r_{y_j}(x_i) = ∨_{w∈W} t_{y_j}(w)·s_w(x_i) = ∨_{k=1}^{l} t_{y_j}(w_k)·s_{w_k}(x_i).

Now, let Ψ(s) × Ψ(t) = u ∈ M_{mn}. We have

  u_ij = ∨_{k=1}^{l} (Ψ(s))_{ik}·(Ψ(t))_{kj} = ∨_{k=1}^{l} s_{w_k}(x_i)·t_{y_j}(w_k) = ∨_{k=1}^{l} t_{y_j}(w_k)·s_{w_k}(x_i).

Theorem 2.9. For a finite array X, with |X| = m,

  {(R^+_{±∞})^X, ∨, ∧; ((R^+_{±∞})^X)^X, ∨, ⊗, ∧, ⊘; ⊗, ⊘} is isomorphic to

  {(R^+_{±∞})^{1m}, ∨, ∧; M_{mm}, ∨, ×, ∧, ×'; ×, ×'},

where M_{mm} is the set of all m × m matrices with entries in the bounded l-group R^+_{±∞}.

Proof: By Lemma 2.7, ν preserves image-template multiplication, and by Lemma 2.2, ν
preserves the image-image pointwise maximum operation. By Lemmas 2.8 and 2.4,
for X = Y = W, Ψ preserves the operations ⊗ and ∨ between templates. Let
1 ∈ ((R^+_{±∞})^X)^X denote the identity template defined by

  1_y(x) = 1 if y = x, and 1_y(x) = -∞ otherwise.

It is trivial to show that Ψ(1) = e ∈ M_{mm}, the identity matrix in M_{mm} over R^+_{±∞}.

In Theorem 2.6, the proof that Ψ(s ∧ t) = Ψ(s) ∧ Ψ(t) was not dependent on the
value set R_{±∞}, and hence it is true also for templates s, t ∈ ((R^+_{±∞})^X)^Y. We now
show that the operation ⊘ is also preserved under Ψ. By definition, s ⊘ t =
(t* ⊗ s*)*, and, using Lemma 2.5 with F = R^+, Lemma 2.8, and property C3, we have

  Ψ(s ⊘ t) = Ψ((t* ⊗ s*)*) = [Ψ(t* ⊗ s*)]* = [Ψ(t*) × Ψ(s*)]*
           = (Ψ(s*))* ×' (Ψ(t*))* = Ψ(s) ×' Ψ(t).

Thus, Ψ(s ⊘ t) = Ψ(s) ×' Ψ(t).

We use the fact that Theorem 2.6 showed Ψ and ν to be one-to-one and onto, and also
that R_{±∞} and R^+_{±∞} are isomorphic as bounded l-groups, and we are done.


We have shown that the minimax algebra with two different interpretations of the
bounded l-group F_{±∞} with group F, namely F = R and F = R^+, is embedded in the image
algebra. Using the notation R^+_{±∞} instead of R_{0+} allows the reader to regard the value sets
R^+_{±∞} and R_{±∞} as basically the same (they are isomorphic as belts), without shifting gears
from using 0 in one as the bottom element and -∞ in the other. All minimax properties
stated in Cuninghame-Green's book will be valid in the correct context of image algebra.

In using the minimax algebra results, we would like to point out that matrix-vector multiplication, that is, multiplication of a matrix by a vector from the right, is used almost exclusively throughout Cuninghame-Green's book. Left multiplication is mentioned at various places, and in fact most left variants of the right multiplication results will hold. However, for the most part in our applications to image algebra, we will be using the right multiplication form in the development of our theory. The functions Ψ and ν map the image algebra expression a ⊞ t = b to the matrix algebra expression ν(a) × Ψ(t) = ν(b), the left multiplication form, which we have omitted in our presentation of Cuninghame-Green's material. The diagram in Figure 5 explains how we will take advantage of the minimax algebra results.

a ⊞ t  ──(ν, Ψ)──►  ν(a) × Ψ(t)  ──T──►  (Ψ(t))′ × (ν(a))′
  ▲                                              │
  └────── ν⁻¹ or Ψ⁻¹ ◄──── T ◄───────────────────┘
          (after applying the minimax algebra theorems)

Figure 5. How the Transpose is Used in Conjunction with the Isomorphism.

Let T denote the function that takes a matrix to its transpose, as well as the function that takes a template to its transpose. Thus, T: M_{mn} → M_{nm} is defined by

T(a) = a′,

the prime denoting as usual the transpose of a matrix, and T: (F^X)^Y → (F^Y)^X is defined by

T(t) = t′.

Obviously, Ψ(T(t)) = T(Ψ(t)). In a clockwise manner, the functions ν and Ψ take the product ν(a ⊞ t) to ν(a) × Ψ(t), which is the matrix Ψ(t) multiplied on the left by the row vector ν(a). Applying the transpose to ν(a) × Ψ(t), we get T[ν(a) × Ψ(t)] = [Ψ(t)]′ × [ν(a)]′, which is the matrix [Ψ(t)]′ ∈ M_{nm} multiplied on the right by the column vector [ν(a)]′. We now use our minimax algebra theorems, where matrix-vector multiplication is the matrix multiplied on its right by a column vector. After obtaining the desired results, we continue around the diagram clockwise, mapping back by the transpose T again and then by ν⁻¹ or Ψ⁻¹. Formally, if d represents the column vector which is the result of applications of minimax algebra theorems to the initial column vector (Ψ(t))′ × (ν(a))′, then ν⁻¹(T(d)) will be an image. A similar situation holds for templates.

The minimax algebra results are stated in the usual matrix-vector multiplication order, and the isomorphisms Ψ and ν are used along with the transpose T to apply the matrix results. When the word isomorphism is used in this context, it will mean the above functions Ψ and ν explicitly (not composed with the transpose) unless otherwise stated, with images as row vectors and templates as matrices whose columns are the images t_y.
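The transpose bookkeeping above can be sketched in the additive (max-plus) interpretation. This is a minimal illustration with invented helper names; it checks that the row-vector form ν(a) × Ψ(t) and the transposed column-vector form (Ψ(t))′ × (ν(a))′ produce the same numbers:

```python
NEG_INF = float("-inf")

def maxplus_vecmat(a, T):
    """Row vector times matrix: (a x T)_k = max_j (a[j] + T[j][k])."""
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

def maxplus_matvec(T, x):
    """Matrix times column vector: (T x x)_i = max_j (T[i][j] + x[j])."""
    return [max(T[i][j] + x[j] for j in range(len(x))) for i in range(len(T))]

def transpose(T):
    return [list(col) for col in zip(*T)]

a = [3, 0, 1]
T = [[0, NEG_INF], [2, 1], [NEG_INF, 4]]
# The left multiplication form and its transposed right multiplication form agree:
assert maxplus_vecmat(a, T) == maxplus_matvec(transpose(T), a)
```

This is exactly why results stated for matrix-times-column-vector can be applied to image algebra expressions: transpose, apply the theorem, and transpose back.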


The objective of the chapters in Part II is to show how the minimax algebra can be

used to extend basic matrix algebraic results in such a way as to have applications in image

processing. The tool that makes the minimax algebra useful in image processing is the iso-

morphism between the image algebra and the minimax algebra. Before the research

presented in this dissertation was conducted, the relationship between the image algebra and

the minimax algebra had not been established. The power of the isomorphism is that it

makes all results in the minimax algebra applicable to solving image processing problems,

just as linear algebra results are applicable to solving image processing problems. For example, template decomposition is presently a very active area of research. The problem of mapping transforms to some types of parallel architectures is equivalent to decomposing a transform t into a product of transforms t = t¹ ⊞ t² ⊞ ⋯ ⊞ tᵏ, where each factor tⁱ is directly implementable on the parallel architecture. Since decomposing templates is the

same as decomposing matrices, matrix decomposition techniques can be applied to template

decomposition problems. Thus far, there exist no decomposition techniques for matrices

under the matrix operation X as presented in section 1.2. Hence, the methods developed in

Chapter 5 that decompose matrices are new results. They were developed mainly for solving

the problem of mapping of transforms to particular parallel architectures, though they stand

by themselves as a new theoretical result in the minimax algebra.

While some other areas of minimax algebra may seem to have no current applications

to image processing, such as the eigenproblem, we present them in their image algebra form

due to their interesting mathematical results.


This chapter is devoted to describing algebraic properties of the substructures {(F^X)^Y, F^X; ∨, ∧, ⊞, ⊠}, where F is a subbelt of R_{±∞} or R⁺_{±∞}. During the investigation of these properties, and before the discovery of the link to minimax algebra, many basic properties, such as the associativity of the ⊞ operation, were proven within the context of the image algebra. Many theorems had excessive notational overhead, and often the proofs were laborious. Most of these same properties were found to have been stated and proven in the context of the minimax algebra [38]. Using the matrix calculus makes some proofs less tedious, and in some cases makes them less cumbersome notationally. Thus, in order to place the presentation in a more elegant mathematical environment, we are omitting proofs that were done in the image algebra notation, and shall make use of the isomorphisms given in the previous chapter. Most of the theorems presented here are mapped into image algebra notation using the isomorphisms, and the proofs will be omitted. The results will be stated for both bounded l-groups, using the operations ⊞ and ⊠.

3.1. Basic Definitions and Properties

Unless otherwise stated, we shall assume that X, Y, and W are finite coordinate sets, with |X| = m, |Y| = n, |W| = k, with the pixel locations lexicographically ordered as in Chapter 2. The belt F with duality is a subbelt of either R_{±∞} or R⁺_{±∞}. The templates s and t will be F-valued templates on appropriate domains, and a, b will be F-valued images. For the appropriate subbelt F of R_{±∞} or R⁺_{±∞}, according to the operation ⊞ or ⊠, respectively, we have the following basic properties.

(1) (F^X, ∨) is an s-lattice and ((F^X)^Y, ∨) is a function space over (F, ∨, ×);

(2) {(F^X)^X, ∨, ⊞} is a belt; {(F^X)^X, ∨, ⊠} is a belt;

(3) ((F^X)^Y, ∨) is a left space over the belt ((F^X)^X, ∨, ⊞); ((F^X)^Y, ∨) is a left space over the belt ((F^X)^X, ∨, ⊠);

(4) (F^X)^Y is a right space over the belt F;

(5) We define scalar multiplication of a template t ∈ (F^X)^Y by a scalar λ ∈ F as multiplication by the one-point template λ ∈ ((F_{±∞})^X)^X or λ ∈ ((F_{±∞})^Y)^Y, depending on whether the template λ multiplies from the left or from the right, respectively (adjoining -∞ to F if necessary), as

t ⊞ λ = λ ⊞ t = s ∈ (F^X)^Y, where s_y(x) = t_y(x) + λ;
t ⊠ λ = λ ⊠ t = s ∈ (F^X)^Y, where s_y(x) = t_y(x) · λ.

Here, λ_y(x) = { λ if x = y
              { -∞ otherwise.

Next we state the distributive properties of ⊞ and ⊠ with respect to ∨.

(6) a ⊞ (t ∨ s) = (a ⊞ t) ∨ (a ⊞ s)        a ⊠ (t ∨ s) = (a ⊠ t) ∨ (a ⊠ s)
    a ⊞ (t ⊞ s) = (a ⊞ t) ⊞ s              a ⊠ (t ⊠ s) = (a ⊠ t) ⊠ s
    (a ∨ b) ⊞ t = (a ⊞ t) ∨ (b ⊞ t)        (a ∨ b) ⊠ t = (a ⊠ t) ∨ (b ⊠ t)
    (s ∨ t) ⊞ u = (s ⊞ u) ∨ (t ⊞ u)        (s ∨ t) ⊠ u = (s ⊠ u) ∨ (t ⊠ u)
    u ⊞ (s ∨ t) = (u ⊞ s) ∨ (u ⊞ t)        u ⊠ (s ∨ t) = (u ⊠ s) ∨ (u ⊠ t)
    s ⊞ (t ⊞ u) = (s ⊞ t) ⊞ u              s ⊠ (t ⊠ u) = (s ⊠ t) ⊠ u.
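The distributive laws in (6) can be spot-checked numerically in the additive interpretation. The sketch below, with illustrative helper names, verifies a ⊞ (s ∨ t) = (a ⊞ s) ∨ (a ⊞ t) on random finite data:

```python
import random

def maxplus_vecmat(a, T):
    """(a ⊞ t) rendered as a max-plus row-vector-times-matrix product."""
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

def vmax(u, v):                      # pointwise maximum of two images
    return [max(x, y) for x, y in zip(u, v)]

def tmax(S, T):                      # pointwise maximum of two templates
    return [[max(x, y) for x, y in zip(rs, rt)] for rs, rt in zip(S, T)]

random.seed(0)
a = [random.randint(-5, 5) for _ in range(4)]
S = [[random.randint(-5, 5) for _ in range(3)] for _ in range(4)]
T = [[random.randint(-5, 5) for _ in range(3)] for _ in range(4)]

lhs = maxplus_vecmat(a, tmax(S, T))                          # a ⊞ (s ∨ t)
rhs = vmax(maxplus_vecmat(a, S), maxplus_vecmat(a, T))       # (a ⊞ s) ∨ (a ⊞ t)
assert lhs == rhs
```

The identity holds because addition distributes over binary maximum: a_j + max(s, t) = max(a_j + s, a_j + t).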

The duals of properties 1 through 6 also hold, as both belts R_{±∞} and R⁺_{±∞} have duality.

(7) (F^X, ∧) is an s-lattice and ((F^X)^Y, ∧) is a function space over (F, ∧, ×′);

(8) {(F^X)^X, ∧, ⊞′} is a belt. {(F^X)^X, ∧, ⊠′} is a belt.

Now let F be a subbelt of R or R⁺, and F_{±∞} the bounded l-group with group F. Corresponding to the identity matrix and the null matrix we have the identity template 1 ∈ ((F_{±∞})^X)^X, defined by

1_y(x) = { e if x = y
         { -∞ otherwise,

and the null template Φ ∈ ((F_{±∞})^X)^Y, defined by

Φ_y(x) = -∞, for all y ∈ Y, x ∈ X.

For the belt R_{±∞}, e = 0, and for the belt R⁺_{±∞}, e = 1. Thus we have

a ⊞ 1 = a,  t ⊞ 1 = 1 ⊞ t = t,  ∀ a ∈ (R_{±∞})^X, ∀ t ∈ ((R_{±∞})^X)^X.

For Φ ∈ ((F_{±∞})^X)^X,

t ∨ Φ = t,  t ⊞ Φ = Φ ⊞ t = Φ,  a ⊞ Φ = -∞, the null image,  ∀ a ∈ (R_{±∞})^X, ∀ t ∈ ((R_{±∞})^X)^X.

Similar properties hold for the operation ⊠.

3.1.1. Homomorphisms

We now discuss homomorphisms in the context of the image algebra. Let |X| = m. Since the s-lattice {F_{±∞}^X, ∨} is isomorphic (via ν) to the s-lattice {(F_{±∞})_m, ∨}, {F_{±∞}^X, ∨} is a space. For λ ∈ F^X the constant image, we have

a ∨ λ = λ ∨ a = b ∈ F_{±∞}^X, where b(x) = a(x) ∨ λ,

and for the one-point template λ ∈ ((R_{±∞})^X)^X,

a ⊞ λ = λ ⊞ a = b ∈ F^X, where b(x) = a(x) + λ,

if F = R, and

a ⊠ λ = λ ⊠ a = b ∈ F^X, where b(x) = a(x) · λ,

if F = R⁺. Let F ∈ {R_{±∞}, R⁺_{±∞}}. Since {F^X, ∨} is an s-lattice, an s-lattice homomorphism from F^X to F^Y is a function f: F^X → F^Y satisfying

f(a ∨ b) = f(a) ∨ f(b).

A right linear homomorphism g: F^X → F^Y is an s-lattice homomorphism satisfying

g(a ⊞ λ) = g(a) ⊞ λ.

Thus, the set of all right linear homomorphisms from F^X to F^Y is denoted by

Hom(F^X, F^Y) = {g : F^X → F^Y, and g satisfies g(a ∨ b) = g(a) ∨ g(b), g(a ⊞ λ) = g(a) ⊞ λ},

or, if F is R⁺_{±∞}, then

Hom(F^X, F^Y) = {g : F^X → F^Y, and g satisfies g(a ∨ b) = g(a) ∨ g(b), g(a ⊠ λ) = g(a) ⊠ λ}.
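A template transform is the canonical instance of a right linear homomorphism. The sketch below (illustrative names, additive interpretation) checks both defining conditions for the map a ↦ a ⊞ t, where scalar multiplication by λ adds λ to every pixel:

```python
NEG_INF = float("-inf")

def maxplus_vecmat(a, T):
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

T = [[0, 2, NEG_INF], [1, 0, 3], [NEG_INF, 4, 0], [2, NEG_INF, 1]]
f = lambda a: maxplus_vecmat(a, T)        # the transform a |-> a ⊞ t

a, b, lam = [2, 0, -1, 3], [0, 1, 4, -2], 5
join = lambda u, v: [max(x, y) for x, y in zip(u, v)]

assert f(join(a, b)) == join(f(a), f(b))                     # f(a ∨ b) = f(a) ∨ f(b)
assert f([x + lam for x in a]) == [x + lam for x in f(a)]    # f(a ⊞ λ) = f(a) ⊞ λ
```

Right linearity holds because the scalar λ commutes with the maximum: max_j(a_j + λ + t_jk) = λ + max_j(a_j + t_jk).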

3.1.2. Classification of Homomorphisms in the Image Algebra

Right linear transformations can be characterized entirely in terms of template transformations, and we give necessary and sufficient conditions for (F^X)^Y to be isomorphic to Hom_F(F^X, F^Y).

Theorem 3.1. Let F be a belt with identity and null element. Then for all non-empty finite coordinate sets X, Y, (F^X)^Y is isomorphic to Hom_F(F^X, F^Y).

Corollary 3.2. Let F be a belt, and let X ≠ ∅ be a finite coordinate set with |X| > 1. Then a necessary and sufficient condition that (F^X)^Y be isomorphic to Hom_F(F^X, F^Y), for all non-empty finite coordinate sets Y, is that F have an identity element with respect to × and a null element with respect to ∨.

We call a template t ∈ (F^X)^Y used with the operation ∨, ∧, ⊞, or ⊠ a lattice transform. We will present an example of a transformation which is not right linear in section 6.1.

3.1.3. Inequalities

Some useful inequalities are stated in the next theorem.

Theorem 3.3. Let F be a subbelt of R_{±∞} or R⁺_{±∞}. Then the following inequalities hold for images and templates with the appropriate domains, having values in F.

a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c)
a ∧ (b ∨ c) ≥ (a ∧ b) ∨ (a ∧ c)

(a ∧ b) ⊞ t ≤ (a ⊞ t) ∧ (b ⊞ t)          (a ∧ b) ⊠ t ≤ (a ⊠ t) ∧ (b ⊠ t)
a ⊞ (t ∧ s) ≤ (a ⊞ t) ∧ (a ⊞ s)          a ⊠ (t ∧ s) ≤ (a ⊠ t) ∧ (a ⊠ s)
(a ∨ b) ⊞′ t ≥ (a ⊞′ t) ∨ (b ⊞′ t)       (a ∨ b) ⊠′ t ≥ (a ⊠′ t) ∨ (b ⊠′ t)
a ⊞′ (t ∨ s) ≥ (a ⊞′ t) ∨ (a ⊞′ s)       a ⊠′ (t ∨ s) ≥ (a ⊠′ t) ∨ (a ⊠′ s)

s ∨ (t ∧ r) ≤ (s ∨ t) ∧ (s ∨ r)
s ∧ (t ∨ r) ≥ (s ∧ t) ∨ (s ∧ r)

t ⊞ (s ∧ r) ≤ (t ⊞ s) ∧ (t ⊞ r)          t ⊠ (s ∧ r) ≤ (t ⊠ s) ∧ (t ⊠ r)
(s ∧ r) ⊞ t ≤ (s ⊞ t) ∧ (r ⊞ t)          (s ∧ r) ⊠ t ≤ (s ⊠ t) ∧ (r ⊠ t)
t ⊞′ (s ∨ r) ≥ (t ⊞′ s) ∨ (t ⊞′ r)       t ⊠′ (s ∨ r) ≥ (t ⊠′ s) ∨ (t ⊠′ r)
(s ∨ r) ⊞′ t ≥ (s ⊞′ t) ∨ (r ⊞′ t)       (s ∨ r) ⊠′ t ≥ (s ⊠′ t) ∨ (r ⊠′ t)

a ⊞ (s ⊞′ r) ≤ (a ⊞ s) ⊞′ r   and   a ⊞′ (s ⊞ r) ≥ (a ⊞′ s) ⊞ r
t ⊞ (s ⊞′ r) ≤ (t ⊞ s) ⊞′ r   and   t ⊞′ (s ⊞ r) ≥ (t ⊞′ s) ⊞ r
a ⊠ (s ⊠′ r) ≤ (a ⊠ s) ⊠′ r   and   a ⊠′ (s ⊠ r) ≥ (a ⊠′ s) ⊠ r
t ⊠ (s ⊠′ r) ≤ (t ⊠ s) ⊠′ r   and   t ⊠′ (s ⊠ r) ≥ (t ⊠′ s) ⊠ r.

We remark that the properties corresponding to the forward multiplications of an image by a template, as defined in Chapter 1, are also valid; namely,

t ⊞ (a ∧ b) ≤ (t ⊞ a) ∧ (t ⊞ b), etc.
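One of these inequalities, (a ∧ b) ⊞ t ≤ (a ⊞ t) ∧ (b ⊞ t), can be checked on random data in the additive interpretation; the helper names below are illustrative only:

```python
import random

def maxplus_vecmat(a, T):
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

random.seed(1)
a = [random.randint(-9, 9) for _ in range(5)]
b = [random.randint(-9, 9) for _ in range(5)]
T = [[random.randint(-9, 9) for _ in range(4)] for _ in range(5)]

meet = lambda u, v: [min(x, y) for x, y in zip(u, v)]
lhs = maxplus_vecmat(meet(a, b), T)                      # (a ∧ b) ⊞ t
rhs = meet(maxplus_vecmat(a, T), maxplus_vecmat(b, T))   # (a ⊞ t) ∧ (b ⊞ t)
assert all(x <= y for x, y in zip(lhs, rhs))
```

The inequality is generally strict: the maximizing index j on the left must be shared by a and b, while each term on the right may pick its own.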

3.1.4. Conjugacy

The notion of conjugacy as discussed in section 1.2 extends to templates as well. Suppose that F and F* are conjugate. Then for t ∈ (F^X)^Y, the conjugate t* ∈ ((F*)^Y)^X is defined by

t*_x(y) = (t_y(x))*.

The conjugate of t ∈ ((R_{±∞})^X)^Y is the additive dual t*, and the conjugate of t ∈ ((R⁺_{±∞})^X)^Y is the multiplicative dual, both of which are defined in section 1.1.

Let P be any set of F-valued templates from Y to X, with F and F* as conjugate systems. Define P* by

P* = {t* : t ∈ P}.

Here, the star symbol denotes the dual template for either value set R_{±∞} or R⁺_{±∞}. Note that P* ⊆ ((F*)^Y)^X. We have

Theorem 3.4. Let (F, ∨) and (F*, ∧) be conjugate. Then ((F^X)^Y, ∨, ⊞) and (((F*)^Y)^X, ∧, ⊞′) are conjugate, where F is a sub-bounded l-group of R_{±∞}, and ((F^X)^Y, ∨, ⊠) and (((F*)^Y)^X, ∧, ⊠′) are conjugate, where F is a sub-bounded l-group of R⁺_{±∞}, for any non-empty finite coordinate sets X, Y. In all cases the conjugate of a given template t is the dual template t* of the respective bounded l-group as defined in Chapter 1.

Proposition 3.5. If (F, ∨, ×, ∧, ×′) is a self-conjugate belt, then ((F*)^X)^Y = (F^X)^Y for all non-empty finite coordinate sets X, Y. Also, (((R_{±∞})^X)^X, ∨, ⊞, ∧, ⊞′) is a self-conjugate belt, and (((R⁺_{±∞})^X)^X, ∨, ⊠, ∧, ⊠′) is a self-conjugate belt.

An example. In this section we give an application to a scheduling problem, showing the use of the conjugate of a template. In particular, this example provides a physical interpretation of the conjugate of a template.

Suppose we have n tasks, or activities, or subroutines, labelled 1,...,n. Let a(x_i) denote the starting time of task i, and assume without loss of generality that task 1 is the starting activity, task n is the finishing activity, and that tasks 2 through n-1 are intermediate activities. Suppose we are given the time of the starting activity, and we wish to know the soonest time at which each subsequent activity can be started. In particular, what is the earliest time that task n can start, or, what is the earliest expected time of completion of the collection of tasks?

The relation of the tasks to one another can be described by a partial order R on the set of tasks {1,...,n}:

j R i if and only if task j is to be completed before task i can start.

Let d_ij denote the minimum amount of time by which the start of activity j must precede the start of activity i. That is, d_ij is the duration time of activity j, or the processing time of task j, which must pass before activity i can start. Define w ∈ ((R_{±∞})^X)^X by

w_{x_i}(x_j) = { d_ij if j R i
              { -∞ otherwise.

There is an obvious relationship between the weighted digraph associated with the partial order relation R and the template w. For example, suppose we have 5 tasks or activities, or subroutines of a program, which have the following relation or partial order:

(1,2) (1,3) (2,4) (2,5) (3,4) (3,5) (4,5).

Here, activity 1 is the start activity, activity 5 is the end activity, and tasks 2, 3, 4 are intermediate tasks or subroutines. Suppose the duration times d_ij of the activities are:

d_21 = 1   d_31 = 6   d_42 = 2
d_43 = 1   d_52 = 1   d_53 = 3   d_54 = 3

and d_ii = 0 for each i = 1,...,5. This is consistent with a meaningful physical interpretation of the definition of duration time for a task.

The corresponding weighted digraph is given in Figure 6.


Figure 6. A Scheduling Network.

The nodes represent the activities, and the duration times are given as numbers on the

directed edges linking the nodes.

In determining a(x_4), for example, note that a(x_4) must satisfy

a(x_4) = max{ d_42 + a(x_2), d_43 + a(x_3), d_44 + a(x_4) },

or equivalently

a(x_4) = max_{1≤j≤n} { w_{x_4}(x_j) + a(x_j) }.

This last equality follows from the fact that w_{x_4}(x_j) = -∞ if j is not related to 4. In the general setting, we must solve, for each i = 1,...,n:

a(x_i) = max_{1≤j≤n} { w_{x_i}(x_j) + a(x_j) },

or, writing the problem as an image algebra expression, we must solve for a in

a ⊞ w = a.     (3-1)

Here, a is an image on X where |X| = n.

An analysis of a network in this manner is called backward recursion analysis.
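The backward recursion on the 5-task network above can be sketched directly. In this sketch (variable names are illustrative), one forward sweep suffices because the tasks happen to be topologically ordered; for a general partial order one would iterate a ⊞ w until a fixed point is reached. The self-loop terms d_ii = 0 are omitted since they do not change the maxima:

```python
NEG_INF = float("-inf")

# Duration data d_ij for the 5-task network: the start of task j must
# precede the start of task i by d[(i, j)] time units.
d = {(2, 1): 1, (3, 1): 6, (4, 2): 2, (4, 3): 1,
     (5, 2): 1, (5, 3): 3, (5, 4): 3}

a = {1: 0.0}                       # task 1 starts at time 0
for i in range(2, 6):              # earliest start a(x_i) = max_j (d_ij + a(x_j))
    a[i] = max(d[(i, j)] + a[j]
               for j in range(1, i) if (i, j) in d)
```

Here a[5] is the earliest completion time of the whole project: the longest weighted path from node 1 to node 5 in the digraph of Figure 6.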

Under forward recursion, suppose we have n tasks with duration times f_ij, where f_ij is the minimum amount of time by which the start of activity i must precede the start of activity j, if the activities are so related. Otherwise, let f_ij have value -∞. Define w ∈ ((R_{±∞})^X)^X by

w_{x_i}(x_j) = { f_ij if i R j
              { -∞ otherwise.

As before, f_ii = 0 gives a consistent physical interpretation.

Let τ be the planned completion date of the project, which is given, and define a(x_i) to be the latest allowable starting time for activity i. We wish to determine a(x_1),...,a(x_{n-1}) such that a(x_n) = τ. Thus, we desire to solve for a in

a(x_i) = min_{1≤j≤n} ( -w_{x_i}(x_j) + a(x_j) )

for i = 1,...,n. For example, for 5 nodes, suppose we have the following relations:

(1,2) (1,3) (2,4) (2,5) (3,4) (3,5) (4,5).

Here we write (i,j) if task i must precede task j. Suppose the times f_ij of the activities are:

f_12 = 1   f_13 = 6   f_24 = 2
f_34 = 1   f_25 = 1   f_35 = 3   f_45 = 3.

Suppose we would like to find a(x_4), say, satisfying

a(x_4) = min_{1≤j≤n} ( -w_{x_4}(x_j) + a(x_j) ).

The value -w_{x_4}(x_5) + a(x_5) is the latest allowable time to start task 5 minus the minimum amount of time activity 4 must precede activity 5, and the time to start task 4 must be at least as small as this number. Thus, the time to start task 4 must be at least as small as -3 + a(x_5). The value a(x_4) = min_j { -w_{x_4}(x_j) + a(x_j) } = -3 + τ. (All other values -w_{x_4}(x_j) + a(x_j) = +∞, as -w_{x_4}(x_j) = +∞ for j ≠ 5.) Since τ is given, this quantity can be explicitly determined. The remaining equations can be solved similarly.

If we define u ∈ ((R_{±∞})^X)^X by

u_{x_i}(x_j) = { -w_{x_i}(x_j) if i R j
              { +∞ otherwise,

then it is obvious that in general we must solve for a the following:

a ⊞′ u = a.     (3-2)

It is clear that the template u in equation (3-2) is the conjugate of the template w in Equation (3-1). That is,

u = w*.

We can say that the templates w and w* define the structure of the network as we analyze it

backward or forward in time, respectively.

3.1.5. Alternating tt* Products

This section discusses the concept of an alternating tt* product of a template t and its conjugate under the operation ⊞ or ⊠, respectively. We shall state the results only for the sub-bounded l-groups of R_{±∞} and the operations ⊞ and ⊞′, with the understanding that, unless otherwise stated, an arbitrary sub-bounded l-group of R⁺_{±∞} and the operations ⊠ and ⊠′ may be substituted in the appropriate places.

Theorem 3.6. Let F_{±∞} be a sub-bounded l-group of R_{±∞}, where F denotes the group of the bounded l-group F_{±∞}, and t ∈ ((F_{±∞})^X)^Y. Then we have

t ⊞ (t* ⊞′ t) = t ⊞′ (t* ⊞ t) = (t ⊞ t*) ⊞′ t = (t ⊞′ t*) ⊞ t = t

and

t* ⊞ (t ⊞′ t*) = t* ⊞′ (t ⊞ t*) = (t* ⊞ t) ⊞′ t* = (t* ⊞′ t) ⊞ t* = t*.
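One instance of Theorem 3.6 can be verified on a small finite template, rendered via the isomorphism as a matrix. In this sketch (helper names are illustrative), ⊞ is the max-plus matrix product, ⊞′ its min-plus dual, and the conjugate t* is the negated transpose:

```python
def maxplus_mm(A, B):
    """Template product under ⊞: max-plus matrix multiplication."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def minplus_mm(A, B):
    """The dual product ⊞′: min-plus matrix multiplication."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj(A):
    """Conjugate template t*: negated transpose of the matrix."""
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

A = [[0, 2], [1, 0]]
# t ⊞ (t* ⊞′ t) = t, one of the four products equal to t in Theorem 3.6:
assert maxplus_mm(A, minplus_mm(conj(A), A)) == A
```

The product t* ⊞′ t acts like a (generally non-trivial) idempotent that t absorbs, which is what makes the alternating product theorems below possible.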

We now define an alternating tt* product. Write a word consisting of the letters t and t* in an alternating sequence. A single letter t or t* is allowed. If we have k > 1 letters, now insert k-1 symbols ⊞ and ⊞′, in an alternating manner. For example, the following sequences are allowed:

t* ⊞ t

t ⊞ t* ⊞′ t

t* ⊞ t ⊞′ t* ⊞ t ⊞′ t* ⊞ t.

Now insert brackets in an arbitrary way so that the resulting expression is not ambiguous. For example,

t* ⊞ t

t ⊞ (t* ⊞′ t)

(t* ⊞ ((t ⊞′ t*) ⊞ t)) ⊞′ (t* ⊞ t).

Any algebraic expression so constructed is called an alternating tt* product.

Suppose an alternating tt* product has an odd number of letters t and/or t*. Then we say it is of type t if it begins and ends with a t, and that it is of type t* if it begins and ends with a t*. If it has an even number of letters, we say that it is of type

t ⊞ t*, t ⊞′ t*, t* ⊞ t, or t* ⊞′ t,

exactly according to its first two letters with their separating operator, regardless of how the brackets lie in the entire expression. As an example:

t* ⊞ t is of type t* ⊞ t

t ⊞ (t* ⊞′ t) is of type t

(t* ⊞ ((t ⊞′ t*) ⊞ t)) ⊞′ (t* ⊞ t) is of type t* ⊞ t.

Theorem 3.7. Let F_{±∞} be a sub-bounded l-group of R_{±∞}, and t an arbitrary template in ((F_{±∞})^X)^Y. Then every alternating tt* product P is well-defined, and if P is of type Q, then P = Q.

If a product P has more than one letter, then we define P(z) to be the formal product obtained when the last (rightmost) letter, t or t*, is replaced by z, where z is an F-valued template on the appropriate coordinate sets X and Y.

Theorem 3.8. Let F_{±∞} be a sub-bounded l-group of R_{±∞}, and t, z arbitrary templates over F. If P is an alternating tt* product containing four letters and P is of type Q, then P(z) = Q(z).

3.2. Systems of Equations

We now discuss the problem of finding solutions to the problem:

Given t ∈ ((R_{±∞})^X)^Y and b ∈ (R_{±∞})^Y, find a ∈ (R_{±∞})^X such that a ⊞ t = b.  (3-3)

Similarly, we also wish to solve:

Given t ∈ ((R⁺_{±∞})^X)^Y and b ∈ (R⁺_{±∞})^Y, find a ∈ (R⁺_{±∞})^X such that a ⊠ t = b.

Here, |X| = m, |Y| = n.

3.2.1. F-asticity and l-solutions

If F_{±∞} is a bounded l-group and x, y ∈ F_{±∞}, we say that the products x × y and x ×′ y are l-undefined if one of x, y is -∞ and the other is +∞. We say that a template product is l-undefined if the evaluation of t_y(x) requires the formation of an l-undefined product of elements of the bounded l-group F_{±∞}. Otherwise, we say that a template product is l-defined, or l-exists. Some mathematical models require solutions which avoid the formation of l-undefined products, as in practical cases these often correspond to unrelated activities. We state these results for both bounded l-groups where appropriate, with the results for the multiplicative case in parentheses. As usual, the sub-bounded l-group F_{±∞} depends on which operation, ⊞ or ⊠, is used.

Lemma 3.9. Let F_{±∞} be a subbelt of R_{±∞} (R⁺_{±∞}). Let X and Y be non-empty, finite arrays, and t ∈ ((F_{±∞})^X)^Y. Then the set of all images a ∈ F_{±∞}^X such that a ⊞ t (a ⊠ t) is l-defined is a sub-s-lattice of F_{±∞}^X. Hence the set of solutions a of statement (3-3) such that a ⊞ t (a ⊠ t) l-exists is either empty or is a sub-s-lattice of F_{±∞}^X.

Lemma 3.10. Let X, Y, and W be non-empty, finite arrays, and t ∈ ((F_{±∞})^X)^Y. Then the set of templates s ∈ ((F_{±∞})^W)^X such that s ⊞ t (s ⊠ t) is l-defined is a sub-s-lattice of ((F_{±∞})^W)^X.

Any solution a of statement (3-3) such that a ⊞ t (a ⊠ t) l-exists is called an l-solution of (3-3).

Theorem 3.11. Let F_{±∞} be a sub-bounded l-group of R_{±∞} (R⁺_{±∞}). Then (3-3) has at least one solution if and only if a = b ⊞′ t* (a = b ⊠′ t*) is a solution. In this case, a = b ⊞′ t* (a = b ⊠′ t*) is the greatest solution.
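The candidate a = b ⊞′ t* of Theorem 3.11 has a simple entrywise form in the additive interpretation: a_j = min_i (b_i - t_{y_i}(x_j)). The sketch below (illustrative names, finite entries assumed so no l-undefined products arise) constructs a soluble instance and recovers its greatest solution:

```python
def maxplus_vecmat(a, T):
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

def principal_solution(b, T):
    """b ⊞′ t* computed entrywise: a_j = min_i (b_i - T[j][i])."""
    return [min(b[i] - T[j][i] for i in range(len(b))) for j in range(len(T))]

T = [[0, 2], [1, 0]]
b = maxplus_vecmat([1, 0], T)      # b built from a known solution, so (3-3) is soluble
a = principal_solution(b, T)       # the greatest solution
assert maxplus_vecmat(a, T) == b
```

Here b = [1, 3] and the recovered a = [1, 0] happens to coincide with the seed; in general the principal solution dominates every other solution pointwise.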

Recall from probability theory that a row-stochastic matrix is a non-negative matrix in which the sum of the elements in each row is equal to 1. We will make analogous definitions, where the operation + is replaced by the operation ∨, and the unity element by -∞.

Let P ⊆ F_{±∞}, where F_{±∞} is an arbitrary sub-bounded l-group of R_{±∞} (R⁺_{±∞}). A template t ∈ ((F_{±∞})^X)^Y is called row-P-astic if ⋁_{j=1}^{m} t_{y_i}(x_j) ∈ P for all i = 1,...,n, and column-P-astic if ⋁_{i=1}^{n} t_{y_i}(x_j) ∈ P for all j = 1,...,m. The template t is called doubly P-astic if t is both row- and column-P-astic. Note that if t is column-P-astic, then t′ is row-P-astic.

Theorem 3.12. Let F_{±∞} be a sub-bounded l-group of R_{±∞} (R⁺_{±∞}), and let t ∈ ((F_{±∞})^X)^Y and b ∈ (F_{±∞})^Y be such that (3-3) is soluble. Then a = b ⊞′ t* (a = b ⊠′ t*) l-exists and is an l-solution of (3-3) if and only if one of the following cases is satisfied:

(i) t ∈ (F^X)^Y, and b = +∞, the constant image with +∞ everywhere.
(ii) t ∈ (F^X)^Y, and b = -∞.
(iii) t ∈ ((F_{±∞})^X)^Y is doubly F-astic, and b ∈ F^Y.

Moreover, every solution of (3-3) is then an l-solution, and b ⊞′ t* (b ⊠′ t*) is equal to +∞, -∞, or is finite, respectively according as case (i), (ii), or (iii) holds.

In the following theorem, we state the dual and left-right generalizations of Theorems 3.11 and 3.12.

Corollary 3.13. Let F_{±∞} be a sub-bounded l-group of R_{±∞} (R⁺_{±∞}), and let t ∈ ((F_{±∞})^X)^Y, b ∈ (F_{±∞})^Y. Then for all combinations of c, d, and δ given in Table 1, the following statement is true:

The image algebra equation c has at least one solution if and only if the product d is a solution; and the product d is then the δ solution. Furthermore, if the product d is l-defined, and equation c is l-defined when a = d, then equation c is l-defined when a is any solution of equation c. If F_{±∞} is a sub-bounded l-group of R⁺_{±∞}, then the results in Table 1 hold with ⊠ replacing ⊞ and ⊠′ replacing ⊞′ everywhere.

Table 1.

c              d           δ

a ⊞ t = b     b ⊞′ t*     greatest
a ⊞ t* = b    b ⊞′ t      greatest
a ⊞′ t = b    b ⊞ t*      least
a ⊞′ t* = b   b ⊞ t       least
t ⊞ a = b     t* ⊞′ b     greatest
t* ⊞ a = b    t ⊞′ b      greatest
t ⊞′ a = b    t* ⊞ b      least
t* ⊞′ a = b   t ⊞ b       least

If d is a solution to c in Table 1, then d is called a principal solution.

We can also restate the last three theorems as a solubility criterion:

Problem (3-3) is soluble if and only if (b ⊞′ t*) ⊞ t = b [(b ⊠′ t*) ⊠ t = b]; and every solution is an l-solution if (b ⊞′ t*) ⊞ t [(b ⊠′ t*) ⊠ t] l-exists.
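The solubility criterion is directly computable: form the principal candidate and substitute it back. A minimal additive sketch (illustrative names, finite data assumed):

```python
def maxplus_vecmat(a, T):
    return [max(a[j] + T[j][k] for j in range(len(T))) for k in range(len(T[0]))]

def principal_solution(b, T):
    """b ⊞′ t* computed entrywise: a_j = min_i (b_i - T[j][i])."""
    return [min(b[i] - T[j][i] for i in range(len(b))) for j in range(len(T))]

def soluble(b, T):
    """(b ⊞′ t*) ⊞ t = b  if and only if  a ⊞ t = b has a solution."""
    return maxplus_vecmat(principal_solution(b, T), T) == b

T = [[0, 2], [1, 0]]
assert soluble([1, 3], T)       # the principal candidate reproduces b
assert not soluble([0, 3], T)   # substitution falls short of b: no solution exists
```

When the criterion fails, the substituted candidate is strictly below b in at least one pixel, and since the candidate is the greatest possible product value, no other image can do better.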

Note that Theorem 3.12 identifies the cases in which (3-3) has an l-defined l-solution. All solutions are then l-solutions. The next question to ask is: can we find all solutions? We now focus on the following problem.

Given that F_{±∞} is R_{±∞} (R⁺_{±∞}) and that (b ⊞′ t*) ⊞ t [(b ⊠′ t*) ⊠ t]   (3-4)
l-exists and equals b, find all solutions of (3-3).

For cases (i) and (ii) of Theorem 3.12, we note that t is finite. The next proposition gives solutions for these two cases.

Proposition 3.14. Let F_{±∞} be a sub-bounded l-group of R_{±∞} (R⁺_{±∞}). If b = -∞ (the constant image), then Problem (3-4) has b as its unique solution. If b = +∞, then Problem (3-4) has as its solutions exactly those images of F_{±∞}^X which have at least one pixel value equal to +∞.

To determine solutions for case (iii), we need to consider the particular case in which F_{±∞} is the 3-element bounded l-group F₃. Here b is finite, with all elements having value 0.

Lemma 3.15. Let F_{±∞} be the 3-element bounded l-group F₃. Let t be doubly F-astic and b be finite. Then (3-3) is soluble, having as principal l-solution a = 1, where 1(x_i) = 0 for all i. Hence, no solution to (3-3) contains +∞ for any pixel value, and all solutions are l-solutions.

3.2.2. All Solutions to a ⊞ t = b and a ⊠ t = b

We now give some criteria for finding all solutions to problem (3-3) for the case where the template t is doubly F-astic and b is finite. We discuss the general case where F is the belt R_{±∞} or R⁺_{±∞}.

If a template t ∈ ((F_{±∞})^X)^X has the form

t_{x_i}(x_i) = a_i, and t_{x_i}(x_j) = -∞, j ≠ i,

we write t = diag(a_1, a_2, ..., a_m).

For b ∈ F^Y finite, define the template d ∈ ((F_{±∞})^Y)^Y by

d = diag([b(y_1)]*, [b(y_2)]*, ..., [b(y_n)]*).

Since b is finite, so is d_{y_i}(y_i), and d_{y_i}(y_i) = -b(y_i) (or 1/b(y_i)) for all i = 1,...,n. Thus, solving (3-3) is equivalent to solving

a ⊞ s = 1,     (3-5)

(a ⊠ s = 1),

where s = t ⊞ d (s = t ⊠ d) ∈ ((F_{±∞})^X)^Y and 1 is the constant image with the identity value at every pixel. Note that

s_{y_k}(x_j) = t_{y_k}(x_j) - b(y_k)   (s_{y_k}(x_j) = t_{y_k}(x_j) · 1/b(y_k)).

Now, for each image s′_{x_j} ∈ F_{±∞}^Y, let

W^j = {(x_j, y_i) : s′_{x_j}(y_i) = ⋁_{k=1}^{n} s′_{x_j}(y_k)}.

Note that W^j ⊆ X × Y for every j. The elements s′_{x_j}(y_i) corresponding to (x_j, y_i) ∈ W^j are called marked values of W^j. Notice that every image s′_{x_j} will have at least one marked value, as d, t, and s are doubly F-astic. Our next theorem gives conditions under which there is no solution.

Lemma 3.16. Let F_{±∞} be a bounded l-group, t ∈ ((F_{±∞})^X)^Y where t is doubly F-astic, and b ∈ F^Y. Define s ∈ ((F_{±∞})^X)^Y by

s = t ⊞ d (or s = t ⊠ d),

depending on whether the group F is R or R⁺, respectively, where d is as above. Suppose there exists an i such that for no j is s_{y_i}(x_j) a marked value. That is, suppose there exists y_i ∈ Y such that s_{y_i}(x_j) is not a marked value for any j. Then there does not exist a ∈ F^X such that a ⊞ t = b (a ⊠ t = b).

There now remains the case in which, for every i, there is at least one j such that s_{y_i}(x_j) is a marked value. We transform the question into a boolean problem, where it can be shown that the following procedure will give the set of all solutions to equation (3-5) [38].

Step 1. For the bounded l-group F_{±∞} = F₃, define g ∈ ((F₃)^X)^Y by

g_{y_i}(x_j) = { 0 if s′_{x_j}(y_i) is marked
             { -∞ otherwise.

Letting f ∈ F₃^X, now solve the boolean system

f ⊞ g = 1 (or f ⊠ g = 1).     (3-6)

As in the case for matrices [38], each solution to equation (3-6) consists of an assignment of one of the values -∞ or 0 to each f(x_j).

Let f = (f(x_1),...,f(x_m)) be a solution to equation (3-6).

Step 2. For each j = 1,...,m: if f(x_j) = 0, then set a(x_j) to be the value -(⋁ s′_{x_j}) (resp. 1/(⋁ s′_{x_j})). If f(x_j) = -∞, then a(x_j) is given an arbitrary value such that a(x_j) < -(⋁ s′_{x_j}) (resp. a(x_j) < 1/(⋁ s′_{x_j})).

For the boolean case, we have

Proposition 3.17. The solutions of equation (3-6) are exactly the assignments of the values 0 or -∞ to the variables f(x_j) such that, for every i = 1,...,n, there holds f(x_j) = 0 for at least one j such that s_{y_i}(x_j) is a marked value.

Theorem 3.18. Let F_{±∞} be a bounded l-group. Then the above two-step procedure yields all solutions to equation (3-5) without repetition.

3.2.3. Existence and Uniqueness

This section discusses some existence and uniqueness theorems concerning solutions to Problem (3-3).

Theorem 3.19. Let F_{±∞} be a bounded l-group, and let t ∈ ((F_{±∞})^X)^Y be doubly F-astic and b ∈ F^Y be finite. Then a necessary and sufficient condition that the equation a ⊞ t = b (a ⊠ t = b) shall have at least one solution is that, for every y_i ∈ Y, there exists at least one j such that, for the template s = t ⊞ d (s = t ⊠ d), where d is as defined above,

s_{y_i}(x_j) is a marked value.

We remark that the solution a(x_j) = -(⋁ s′_{x_j}) (resp. 1/(⋁ s′_{x_j})) gives exactly the principal solution.

Theorem 3.20. Let F0 be a bounded 1-group, let t E (FX)Y be doubly F-astic, and let

b E FY be finite. Then a necessary and sufficient condition that the equation a E3 t = b

(a t = b) shall have exactly one solution is that for all xi E X, there exists at least one j

such that

syl(xj) is a marked value,

and for each j = 1,...,n, there exists an i, 1 < i < m such that I W' = 1.

Define a template t ∈ ((F_{±∞})^X)^X to be strictly doubly 0-astic if it satisfies the following two conditions:

(i) t_{y_i}(x_j) ≤ 0, i, j = 1,...,n;

(ii) for each i = 1,...,n, there exists a unique index j ∈ {1,2,...,n} such that t_{y_i}(x_j) has value 0.

If t ∈ ((F_{±∞})^X)^Y, |X| = m, |Y| = n, then we say that t contains a template s ∈ ((F_{±∞})^{W_2})^{W_1} if the matrix Ψ(t) contains the matrix Ψ(s) of size h × k, where |W_2| = h, |W_1| = k, and both h, k ≤ min(m, n). We say that a template t ∈ ((F_{±∞})^X)^Y contains an image a ∈ F^X if a = t_y for some y ∈ Y.

Theorem 3.21. Let F_{±∞} be a bounded l-group, let t ∈ ((F_{±∞})^X)^Y be doubly F-astic, and let b ∈ F^Y be finite. Then a necessary and sufficient condition that the equation a ⊞ t = b (a ⊠ t = b) shall have exactly one solution is that we can find k finite elements a_1, ..., a_k such that the template d defined by

d_{y_i}(x_j) = -b(y_i) + t_{y_i}(x_j) + a_j   (or d_{y_i}(x_j) = b(y_i)⁻¹ · t_{y_i}(x_j) · a_j)

is doubly 0-astic and d contains a strictly doubly 0-astic template s ∈ ((F_{±∞})^W)^W, |W| = k.

3.2.4. A Linear Programming Criterion

Since one of our interests is the case where the bounded l-group is R_{±∞}, we now show that the problem can be stated as a linear programming problem for this bounded l-group.

Theorem 3.22. Let t ∈ ((R_{±∞})^X)^Y be doubly F-astic and b ∈ F^Y be finite. Let I be the set of index pairs (i,j) such that t_{y_i}(x_j) is finite, 1 ≤ i ≤ n, 1 ≤ j ≤ m. Then a sufficient condition that the equation a ⊞ t = b be soluble is that some solution {z_{ij} : (i,j) ∈ I} of the following optimization problem in the variables z_{ij}, for (i,j) ∈ I:

Minimize    Σ_{(i,j)∈I} (b(y_i) - t_{y_i}(x_j)) z_{ij}

Subject to  Σ_{i : (i,j)∈I} z_{ij} = 1,  j = 1,...,m,

and         z_{ij} ≥ 0,  (i,j) ∈ I,

shall also satisfy: Σ_{j : (i,j)∈I} z_{ij} > 0,  i = 1,...,n.

We now make a definition which will be used in the next section. Let F_{±∞} be a belt, and let t ∈ ((F_{±∞})^X)^Y be arbitrary. The right column space of t is the set of all b ∈ (F_{±∞})^Y for which the equation

a ⊞ t = b (or a ⊠ t = b)

is soluble for a.

3.2.5. Linear Dependence

Linear dependence over a bounded l-group. We can consider the equation a ⊞ t = b (or a ⊠ t = b) in another way. For the images t′_{x_j}, rewrite a ⊞ t = b as

⋁_{j=1}^{m} [t′_{x_j} ⊞ a(x_j)] = b,     (3-7)

where a(x_j) ∈ ((F_{±∞})^Y)^Y is the one-point template with target pixel value a(x_j). In this case, we say that b is a linear combination of {t′_{x_1}, t′_{x_2}, ..., t′_{x_m}}, or that b is (right) linearly dependent on the set {t′_{x_1}, ..., t′_{x_m}}. We can make analogous definitions for the case of ⊠. While in linear algebra the concept of linear dependence provides a foundation for a theory of rank and dimension, the situation in the minimax algebra is more complicated. The notion of strong linear independence is introduced to give us a similar construct.

Theorem 3.23. Let F_{±∞} be a bounded l-group other than F₃. Let X be a coordinate set such that |X| ≥ 2, and let k ≥ 1 be an arbitrary integer. Then we can always find k finite images on X, no one of which is linearly dependent on the others.

If F_{±∞} = F₃, then we can produce a dimensional anomaly.

Theorem 3.24. Suppose F_{±∞} = F₃, and let X be a coordinate set such that |X| = m ≥ 2. Then we can always find at least (m² - m) images on X, no one of which is linearly dependent on the others.

Since every bounded l-group contains a copy of F₃, the dimensional anomaly in Theorem 3.24 extends to any arbitrary bounded l-group.

Let |X| = m, |Y| = n, and t ∈ (F_∞^X)^Y, where F is an arbitrary bounded l-group. We would like to define the rank of t in terms of linear independence, and for it to equal the number of linearly independent images t'_x of t. Suppose we were to define linear independence as the negation of linear dependence; that is, a set of k images a_1, ..., a_k on X is linearly independent if and only if no one of the a_i is linearly dependent on any subset of the others. Then applying Theorem 3.23 for |X| = n and k > n, we could find k finite images which are linearly independent. If we defined rank as the number of linearly independent images t_y of t, then every template would have rank k > n, which is not a useful definition in this context.

Strong linear independence. As for the matrix algebra, we define the concept of strong linear independence [38].

Let F_∞ be a bounded l-group and let a(1),...,a(k) ∈ F^X, k ≥ 1. We say that the set {a(1),...,a(k)} is strongly linearly independent, or simply SLI, if there is at least one finite image b ∈ F^X which has a unique expression of the form

b = ∨_{p=1}^{h} a(j_p) ⊞ λ_{j_p}   (or b = ∨_{p=1}^{h} a(j_p) ⊠ λ_{j_p})   (3-8)

with λ_{j_p} ∈ F, p = 1,...,h, 1 ≤ j_p ≤ k, p = 1,...,h, and j_p < j_q if p < q.

If A = {a_1, a_2, ..., a_k} is a set of k images where each a_i ∈ F^X, |X| = n, then we define the template based on the set A in the following way. For the integer k, we find a coordinate set W which has k pixel locations, that is, |W| = k. To this end, choose a positive integer p such that k = p·q + r, where 0 ≤ r < p (by the division algorithm for integers). Let W denote the set {(i,j) : 0 ≤ i ≤ p−1, 0 ≤ j ≤ q−1} ∪ {(−1,j) : 0 ≤ j ≤ r−1}, which is a subset of Z² that is almost rectangular. There is an additional row in the fourth quadrant corresponding to the r left-over pixel locations that do not quite make a full row. Of course, there are other selections that can be made for W. Define the template t based on A by t ∈ (F_∞^X)^W, where

t'_{w_i} = a_i, i = 1,...,k.
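The construction of the almost rectangular coordinate set W can be sketched in a few lines. The choice p = ⌊√k⌋ below is only one of the possible selections mentioned above.

```python
import math

def almost_rectangular(k, p=None):
    """Coordinate set W in Z^2 with |W| = k: a p-by-q block of pixel
    locations plus a partial row of r leftover locations, where
    k = p*q + r with 0 <= r < p (division algorithm for integers)."""
    if p is None:
        p = max(1, math.isqrt(k))  # one possible selection of p
    q, r = divmod(k, p)            # k = p*q + r, 0 <= r < p
    W = {(i, j) for i in range(p) for j in range(q)}
    W |= {(-1, j) for j in range(r)}   # the extra partial row
    return W
```

For any k the returned set has exactly k pixel locations, as required for the template B(A).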

To clarify notation, we will denote the template based on the set A = {a_1, a_2, ..., a_k} by t = B(A). Hence, if t ∈ (F_∞^X)^Y, then for A = {t'_{x_1}, t'_{x_2}, ..., t'_{x_m}}, we have B(A) = t. If D = {a_1, a_2, ..., a_h} is a set of h F-valued images on X, we denote the right column space of B(D) by < a_1, a_2, ..., a_h >. Thus, for t ∈ (F_∞^X)^Y, < t'_{x_1}, t'_{x_2}, ..., t'_{x_m} > is the right column space of t. The set < a_1, a_2, ..., a_h > is also called the space generated by the set {a_1, a_2, ..., a_h}.

Lemma 3.25. Let F_∞ be a bounded l-group with group F. Let c_1, ..., c_k, b ∈ F^X, k ≥ 1, be such that b is finite and has a unique expression of the form (3-8). Then h = k; j_1 = 1, ..., j_h = k; λ_{j_p} ∈ F, p = 1,...,h; and t is doubly F-astic, where t ∈ (F_∞^X)^Y is the template based on the set C = {c_1, ..., c_k}. Here, |Y| = k.

We also have

Corollary 3.26. Let F_∞ be a bounded l-group and let c_1, ..., c_n ∈ F^X for an integer n ≥ 1. Then {c_1, ..., c_n} is SLI if and only if there exists a finite image b ∈ F^X such that the equation a ⊞ t = b (a ⊠ t = b) is uniquely soluble for a, where t ∈ (F_∞^X)^Y is the template based on the set C = {c_1, ..., c_n}, t = B(C), |Y| = n.

We can now define linear independence. Let F_∞ be a given belt. Then linear independence is the negation of linear dependence: c_1, ..., c_n ∈ F^X are linearly independent when no one of them is linearly dependent on the others. How is linear dependence related to strong linear independence?

Theorem 3.27. Let F_∞ be a bounded l-group, and c_1, ..., c_k ∈ F^X. For c_1, ..., c_k to be linearly independent it is sufficient, but not necessary, that c_1, ..., c_k be SLI.

We may call the above definition of SLI right SLI. If, in the definition of SLI, we were to multiply by the scalars λ_{j_p} from the left, we define the concept of left SLI. If formula (3-8) is replaced by

b = ∧_{p=1}^{h} a(j_p) ⊟ λ_{j_p}   (or the analogous dual expression for ⊠)

then we have the concept of right dual SLI. We define in an analogous way the concept of left dual SLI.

3.3. Rank of Templates

Template rank over a bounded l-group. Let F_∞ be a bounded l-group and t ∈ (F_∞^X)^X be arbitrary. We call the template t (right) or left column regular if the set of images {t'_x}_{x∈X} is (right) or left SLI, respectively. We say t is right or left row regular if the template t' is right or left column regular, respectively.

Now suppose that F_∞ is a bounded l-group and t ∈ (F_∞^X)^Y. Suppose r is the maximum number of images t'_x of t that are SLI. In this case we say that t has column rank equal to r. The row rank of t is the column rank of t'. For a template t ∈ (F_∞^X)^Y, we say that t has 0-astic rank equal to r ∈ Z⁺ if the following is true for k = r but not for k > r:

Let W be a coordinate set, |W| = k ≤ min(m,n). There exist a ∈ F^X and b ∈ F^Y, both finite, such that the template s ∈ (F_∞^X)^Y is doubly 0-astic and s contains a strictly doubly 0-astic template u ∈ (F_∞^W)^W, where

s_{y_i}(x_j) = b(y_i) + t_{y_i}(x_j) + a(x_j), ∀ i = 1,...,n and j = 1,...,m

if F = R, and

s_{y_i}(x_j) = b(y_i) · t_{y_i}(x_j) · a(x_j), ∀ i = 1,...,n and j = 1,...,m

if F = R₊.

Lemma 3.28. Let F_∞ be a bounded l-group with group F ∈ {R, R₊}, and suppose that t ∈ (F_∞^X)^Y has 0-astic rank equal to r. Then t is doubly F-astic and t' contains a set of at least r images, t'_{x_k}, k = 1,...,r, which are SLI.

Lemma 3.29. Let F ∈ {R, R₊}, and suppose that t ∈ (F_∞^X)^Y is doubly F-astic and consists of a set of r images which are SLI. Then t has 0-astic rank equal to at least r.

Accordingly, we have

Theorem 3.30. Let F ∈ {R, R₊}, and suppose that t ∈ (F_∞^X)^Y is doubly F-astic. Then the following statements are all equivalent:

(i) t has 0-astic rank equal to r
(ii) t has right column rank equal to r
(iii) t has left row rank equal to r
(iv) t* has dual right column rank equal to r
(v) t* has dual left row rank equal to r.

If t is doubly F-astic, then we can apply Theorem 3.30 and simply use the term rank of t for ranks (i) to (iii), and the term dual rank of t for ranks (iv) and (v). If the bounded l-group F_∞ is commutative, as in both our cases, we have the following

Corollary 3.31. Let F ∈ {R, R₊}, and let t ∈ (F_∞^X)^Y be doubly F-astic. Then the following statements are all equivalent:

(i) t has left column rank equal to r
(ii) t has right row rank equal to r
(iii) t* has dual left column rank equal to r
(iv) t* has dual right row rank equal to r.

3.3.1. Existence of Rank and Relation to SLI

We now discuss the existence of the rank of a template and the relationship of rank to SLI.

Theorem 3.32. Let F ∈ {R, R₊}, and let t ∈ (F_∞^X)^Y. Then there is an integer r such that t has 0-astic rank r if and only if t is doubly F-astic. In this case, r satisfies 1 ≤ r ≤ min(m,n), where m = |X|, n = |Y|.

We now have the tools to show that the previous dimension anomalies are avoided in the context of strong linear independence.

Theorem 3.33. Let F ∈ {R, R₊}, and let X be an arbitrary non-empty, finite coordinate set with |X| = m. Then for each integer n, 1 ≤ n ≤ m, we can find n images on X, a_j ∈ F^X, j = 1,...,n, which are SLI. This is impossible for n > m.

3.3.2. Permanents and Inverses

As in linear algebra, if t is a matrix all of whose eigenvalues satisfy |λ| < 1, then the expansion

(e − t)⁻¹ = e + t + t² + ⋯

is valid, where e denotes the identity matrix. We state an analogous case in the image algebra.

For a bounded l-group F_∞, a template t ∈ (F_∞^X)^X is called increasing if

a ⊞ t ≥ a for all a ∈ F^X and s ⊞ t ≥ s for all s ∈ (F_∞^X)^Y,

where Y is any arbitrary coordinate set.

We have

Lemma 3.34. Let F_∞ be a bounded l-group, and let t ∈ (F_∞^X)^X. Then t is increasing if and only if t_x(x) ≥ 0 ∀ x ∈ X.

Let t ∈ (R_∞^X)^X be a template, |X| = m. We define the permanent of t to be the scalar Perm(t) ∈ R_∞ given by

Perm(t) = ∨_{σ∈S_m} ( Σ_{i=1}^{m} t_{x_i}(x_{σ(i)}) ),

where the maximum is taken over all permutations σ in the symmetric group S_m of order m!.

For the bounded l-group R₊∞, let t ∈ ((R₊)_∞^X)^X be a template, |X| = m. We define the permanent of t to be the scalar Perm(t) ∈ R₊∞ given by

Perm(t) = ∨_{σ∈S_m} ( Π_{i=1}^{m} t_{x_i}(x_{σ(i)}) ),

where again the maximum is taken over all permutations σ in the symmetric group S_m.
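Over R_∞ the permanent is a maximization over m! diagonal sums. A brute-force sketch (exponential in m, for illustration only), with the template stored as an m × m matrix of values:

```python
from itertools import permutations

NEG = float('-inf')

def maxplus_permanent(t):
    """Perm(t) over (R u {-inf}, max, +): the maximum over all
    permutations s in S_m of the sum of the values t[i][s(i)]."""
    m = len(t)
    return max(sum(t[i][s[i]] for i in range(m))
               for s in permutations(range(m)))
```

For the multiplicative belt R₊∞ one would replace the inner sum by a product.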

The adjugate template of t ∈ (F_∞^X)^X is the template Adj(t) defined by

[Adj(t)]_{y_i}(x_j) = Cofactor[t]_{x_j}(y_i),

where Cofactor[t]_{x_j}(y_i) is the permanent of the template s defined by

s_{y_k}(x_h) = t_{y_k}(x_h),

h = 1,...,j−1,j+1,...,m and k = 1,...,i−1,i+1,...,m.

Here, s ∈ (F_∞^W)^W, where |W| = m−1. For m = 1, we define Adj(t) = 1, the identity template.

3.3.3. Graph Theory

We now present some graph theoretic tools which will be used later.

A digraph or directed graph is a pair D = {V,E}, where V is a finite set of vertices {1,...,n} and E ⊆ V × V. The set E is called the set of edges of D. An edge (i,j) is directed from i to j, and can be represented by a vector with tail at node i and head at node j.

A graph is a pair G = {V,E} where V is a finite set of vertices {1,...,n} and E ⊆ V × V such that (i,j) ∈ E if and only if (j,i) ∈ E.

A u-v path in a digraph or graph is a finite sequence of vertices u = y_0, y_1, ..., y_m = v such that (y_j, y_{j+1}) ∈ E for all j = 0,...,m−1. A circuit is a path with the property that y_0 = y_m. A simple path y_0, y_1, ..., y_m is a path with distinct vertices except possibly for y_0 and y_m. A simple circuit is a circuit which is a simple path.

A weighted digraph (graph) is a digraph (graph) in which every edge (i,j) is uniquely assigned a value in F_∞. We denote the weight of the edge (i,j) by t(i,j) or t_{ij}. Note that the value t_{ij} is not necessarily equal to the value t_{ji}.

We remark that if G = {V,E} is a graph, then if there exists a u-v path, there exists a v-u path.

With each path (circuit) α = y_0, y_1, ..., y_m of a weighted graph G, there is an associated path (circuit) product p(α), defined by

p(α) = t_{y_0 y_1} × t_{y_1 y_2} × ⋯ × t_{y_{m−1} y_m}.
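In the belt (R_∞, ∨, +) the × of the belt is ordinary addition, so a path product is simply the sum of the edge weights along the path. A minimal sketch, with the weights stored as a matrix (the example matrix is hypothetical):

```python
NEG = float('-inf')

def path_product(t, path):
    """p(alpha) for a path given as a vertex sequence: the max-plus
    product (i.e., the sum) of the consecutive edge weights; a missing
    edge has the null weight -inf and makes the whole product -inf."""
    return sum(t[path[i]][path[i + 1]] for i in range(len(path) - 1))
```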

For each template t ∈ (F_∞^X)^X, where |X| = n, we can associate a weighted graph A(t) in the following way. The associated graph A(t) is the weighted graph G = (V,E), where V = {1,2,...,n}, and whose weights are t_{x_j}(x_i), for the pairs (i,j) such that x_i ∈ S(t_{x_j}). The pair (i,j) is then considered an edge. If t_{x_j}(x_i) = −∞, then we can extend E to all of V × V by stating that (i,j) ∈ E with null weight −∞. An example of a template t and its associated weighted graph A(t) is given below in Figure 7. We have omitted listing the values of −∞ on A(t). Here, |X| = 3.


Figure 7. A Template and its Associated Graph.
(a) A Template t; (b) Associated Graph A(t).

For the belt F₋∞, the correspondence is one-one. We note this in the next lemma.

Lemma 2.48. Let F₋∞ be a belt, where +∞ ∉ F. Let σ : (F₋∞^X)^X → { G : G is a weighted graph with n nodes } be defined by σ(t) = A(t). Then σ is one-one and onto.

Proof: Suppose σ(t) = σ(s). Let { t_{x_j}(x_i) } be the weights for A(t) and { s_{x_j}(x_i) } be the weights for A(s). By definition, t_{x_j}(x_i) = s_{x_j}(x_i) for all i,j, and hence t = s.

Now suppose that G = (V,E) is a weighted graph with weights { w_{ij} }. Define t ∈ (F₋∞^X)^X by t_{x_j}(x_i) = w_{ij} if (i,j) ∈ E, and t_{x_j}(x_i) = −∞ otherwise. Then σ(t) = G. ∎

Let t ∈ (F_∞^X)^X. If for each circuit α in A(t) we have p(α) ≤ 0, and there exists at least one circuit α such that p(α) = 0, then we call t a definite template.

Lemma 3.35. A template t ∈ (F_∞^X)^X is definite if and only if for all simple circuits α in A(t), p(α) ≤ 0, and there exists at least one such simple circuit α such that p(α) = 0.

Theorem 3.36. Let t ∈ (F_∞^X)^X be either row-0-astic or column-0-astic. Then t is definite.

Theorem 3.37. Let t ∈ (F_∞^X)^X. If t is definite then so is t^r, for any integer r > 0.

Let t ∈ (F_∞^X)^X, where |X| = n. The metric template generated by t is

Γ(t) = t ∨ t² ∨ ⋯ ∨ tⁿ.

The dual metric template is

Γ*(t) = t* ∧ (t²)* ∧ ⋯ ∧ (tⁿ)*.

The name metric originates from the application of the minimax algebra to transportation networks. If for the bounded l-group R_∞ the value t_{x_j}(x_i) represents the direct distance from node i to node j of a transportation network, with t_{x_j}(x_i) = +∞ if there is no direct route, then (Γ(t))* represents the shortest distance matrix; that is, ((Γ(t))*)_{x_j}(x_i) is the shortest path possible from node i to node j over all possible paths. A description of a transportation problem concerning shortest paths is discussed in Cuninghame-Green's book [38].
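A direct way to compute Γ(t) is to accumulate the pointwise maximum of the first n max-plus powers of t. A sketch with templates represented as n × n weight matrices (for the shortest-distance interpretation one would work dually, with minima):

```python
NEG = float('-inf')

def maxplus_mult(s, t):
    """Template product over (max, +): result[i][j] = max_k (s[i][k] + t[k][j])."""
    n = len(s)
    return [[max(s[i][k] + t[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def metric_template(t):
    """Gamma(t) = t v t^2 v ... v t^n, the pointwise maximum of the
    first n max-plus powers of t."""
    n = len(t)
    power = t
    gamma = [row[:] for row in t]
    for _ in range(n - 1):
        power = maxplus_mult(power, t)
        gamma = [[max(a, b) for a, b in zip(ra, rb)]
                 for ra, rb in zip(gamma, power)]
    return gamma
```

Entry (i,j) of Γ(t) is then the greatest weight of any path of length 1 to n from i to j.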

Lemmas 3.38 and 3.39 and Theorem 3.40 are used to prove Theorem 3.41.

Lemma 3.38. Let t ∈ (F_∞^X)^X. Then

Γ(t) = (1 ∨ t)^{n−1} ⊞ t.

Lemma 3.39. (t ∨ 1)^{n−1} = 1 ∨ t ∨ ⋯ ∨ t^{n−1}, t ∈ (F_∞^X)^X.

Theorem 3.40. Let t ∈ (F_∞^X)^X be definite. Then

t^r ≤ Γ(t), r = 1,2,....

Theorem 3.41. Let t ∈ (F_∞^X)^X be definite. Then

Γ(t) = (1 ∨ t)^r ⊞ t for every r ≥ n−1.

Using the adjugate of a template, we have

Theorem 3.42 [52]. Let F_∞ be a commutative bounded l-group and t ∈ (F_∞^X)^X be definite and increasing. Then Adj(t) = Γ(t).

Now we define the inverse of a template. For t ∈ (F_∞^X)^X, we define

Inv(t) = (Perm(t))⁻¹ ⊞ Adj(t)   (or Inv(t) = (Perm(t))⁻¹ ⊠ Adj(t))

by direct analogy with elementary linear algebra.

We note that the template Inv(t) is not necessarily an inverse in the sense that Inv(t) ⊞ t = 1, for example.

3.3.4. Invertibility

In order to define an invertible template, that is, a template t ∈ (F_∞^X)^X that has the property that there exists a unique template s satisfying t ⊞ s = s ⊞ t = 1 (t ⊠ s = s ⊠ t = 1), we need to introduce the concept of equivalent templates.

Let F_∞ be a subbelt of R_∞ or R₊∞. A template p ∈ (F_∞^X)^X is said to be invertible if there exists a template q ∈ (F_∞^X)^X such that p ⊞ q = q ⊞ p = 1 (p ⊠ q = q ⊠ p = 1).

These templates can be described in close detail. Let us define a strictly doubly F-astic template over a bounded l-group F_∞ to be an element t of (F_∞^X)^X satisfying

(i) t_{y_i}(x_j) < +∞, i,j = 1,...,m
(ii) for each index i there exists a unique index j_i ∈ {1,2,...,m} such that t_{y_i}(x_{j_i}) is finite.

Theorem 3.43. Let F_∞ be a bounded l-group with group F and let p ∈ (F_∞^X)^X be given. Then p is invertible if and only if p is strictly doubly F-astic.

As is usual, if p is invertible, then the template q above is written as p⁻¹.

The intersection of the set of strictly doubly 0-astic templates and the set of strictly doubly F-astic templates we call the permutation templates. It is not difficult to show

Proposition 3.44. Let F_∞ be a bounded l-group. Then the set of invertible templates from X to X, where |X| = m, forms a group under the multiplication ⊞ (⊠), containing 1 as the identity element and having the permutation templates as a subgroup isomorphic to the symmetric group S_m on m letters.

Pre- or post-multiplication of a template t by a permutation template p will permute the images t'_x or the images t_y of t, respectively, and these permutation templates play a role exactly like their counterparts in linear algebra.

3.3.5. Equivalence of Templates

Let F_∞ be a bounded l-group, and let t, s ∈ (F_∞^X)^Y be given. We say that t and s are equivalent, written t ≡ s, if there exist invertible templates p ∈ (F_∞^Y)^Y and q ∈ (F_∞^X)^X such that p ⊞ t ⊞ q = s (p ⊠ t ⊠ q = s).

Now we define elementary templates. An elementary template p ∈ (F_∞^X)^X over a bounded l-group with group F is one of the following:

(i) a permutation template
(ii) a diagonal template of the form diag(0, ..., 0, a, 0, ..., 0), where a ∈ F.

Elementary templates correspond to matrices which perform elementary operations on matrices [38]. A permutation template

1. permutes the images t'_x of t; or
2. permutes the images t_y of t,

depending on whether the multiplication is from the left or right, respectively. Diagonal templates of the type listed in (ii) above have the effect of multiplying some image t'_x of t by a finite constant a, or multiplying some image t_y of t by a finite constant a, depending on whether the multiplication of t is from the left or right, respectively.

Lemma 3.45. Let F_∞ be a bounded l-group, and let X and Y be given coordinate sets, |X| = m, |Y| = n. Then the relation ≡ is an equivalence relation on (F_∞^X)^Y. If t, s ∈ (F_∞^X)^Y, then t ≡ s if and only if there is a sequence of templates u_0, u_1, ..., u_j such that u_0 = t and u_j = s, and u_p is obtained by an elementary operation on u_{p−1}, p = 1,...,j.

Permutation and diagonal templates of this form will play an important role in the discussion on local template decompositions, as will the following result.

Lemma 3.46. Let F_∞ be a bounded l-group with group F and let t ∈ (F_∞^X)^Y be given. If a given image of t' (or t) is F-astic, then t is equivalent to a template in which that image of t' (or t) is 0-astic and all other images in t' (or t) are identical with the corresponding image in t' (or t). Hence if t is (row-, column-, or doubly) F-astic then t is equivalent to a template which is (respectively row-, column-, or doubly) 0-astic.

Equivalence and rank. The following results show the relation between equivalence and rank.

Proposition 3.47. Let F_∞ be a bounded l-group, and let t ∈ (F_∞^X)^Y. Then t has 0-astic rank equal to r if and only if the following statement is true for j = r but not for j > r:

t is equivalent to a doubly 0-astic template d which contains a strictly doubly 0-astic template u ∈ (F_∞^W)^W, where |W| = j.

Corollary 3.48. Let F_∞ be a bounded l-group with group F and let t, s ∈ (F_∞^X)^Y be equivalent. Then if either t or s has a rank, then so does the other, and the ranks are equal.

3.4. The Eigenproblem in the Image Algebra

Using the isomorphism, we can discuss the eigenproblem, which is presented in its matrix form in [38], in the context of the image algebra. In this section we present the eigenproblem and its solution in image algebra notation.

3.4.1. The Statement in Image Algebra

Unless otherwise stated, we assume that F is a subbelt of either R or R₊, and let F_∞, F₋∞, and F₊∞ have their usual meanings. The coordinate sets X and Y are assumed to be non-empty, finite arrays, with |X| = m and |Y| = n.

Let λ ∈ F_∞. Let λ ∈ (F_∞^X)^X be the one-point template defined in the usual way by

λ_y(x) = λ if x = y, and λ_y(x) = −∞ otherwise.

Suppose F is a subbelt of R, and t ∈ (F_∞^X)^X. Then the eigenproblem is to find a ∈ F^X and λ ∈ F_∞ such that

a ⊞ t = a ⊞ λ.

Similarly, for the operation ⊠ we need to find a ∈ F^X and λ ∈ F₊∞ such that

a ⊠ t = a ⊠ λ.

For either belt, if such a and λ exist, then a is called an eigenimage of t, and λ a corresponding eigenvalue. The eigenproblem is called finitely soluble if both a and λ are finite.

As mentioned before, all results of this section are applicable for F a subbelt of R or R₊. Hence, to avoid stating all results for both belts separately, we will state the results for ⊞ with the understanding that in all theorems, definitions, etc. in this section of Chapter 3, with the exception of Theorem 3.57, ⊞ can be replaced by ⊠ everywhere and the theorems and results will still hold.

Theorem 3.49. Let t ∈ (F_∞^X)^Y. Then there exists s ∈ (F_∞^X)^X such that if b is in the column space of t, then b is an eigenimage of s with corresponding eigenvalue 0. Here, s = t* ⊞ t ∈ (F_∞^X)^X. Hence, b ⊞ s = b ⊞ 0 = b.

Theorem 3.50. Let t ∈ (F_∞^X)^X. If the eigenproblem for t is finitely soluble, t must be row-F-astic. In particular, if t is row-0-astic, then the eigenproblem for t is finitely soluble, in which case λ = 0.

Let t ∈ (F_∞^X)^X be definite. We know that A(t) has at least one circuit α such that p(α) = 0. An eigennode of A(t) is any node on such a circuit. Two eigennodes are equivalent if they are both on any one such circuit.

Lemma 3.51. Let t ∈ (F_∞^X)^X be definite. Then Γ(t) is definite, and if j is an eigennode of A(t), then

(Γ(t))_{x_j}(x_j) = 0.

Conversely, if (Γ(t))_{x_j}(x_j) = 0 for some x_j ∈ X, then j is an eigennode of A(t).

Lemma 3.52. Let t ∈ (F_∞^X)^X be definite. If j is an eigennode of A(t) then

a^j ⊞ t = a^j ⊞ 0 = a^j,

where a^j is the image [Γ(t)]'_{x_j}.

Thus, images [Γ(t)]'_{x_j}, where j is an eigennode, give us eigenimages for the template t, with corresponding eigenvalue 0. For a given t, the set of all such images are called the fundamental eigenimages for t. Just as in the case for matrices, two fundamental eigenimages a^j and a^h are called equivalent if nodes j and h are equivalent; otherwise the eigenimages are non-equivalent.

Theorem 3.53. Let t ∈ (F_∞^X)^X be definite. If a^j, a^k ∈ F^X are fundamental eigenimages of t corresponding to equivalent eigennodes j and k, respectively, then

a^j = a^k ⊞ α,

where α ∈ F, and α ∈ (F_∞^X)^X is the one-point template.
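Lemmas 3.51 and 3.52 give a concrete recipe for fundamental eigenimages: form Γ(t), read the eigennodes off its zero diagonal entries, and take the corresponding images. A sketch on a small definite template, with the (assumed) orientation convention that images act as row vectors and (a ⊞ t)(j) = max_i [a(i) + t[i][j]]:

```python
NEG = float('-inf')

def maxplus_mult(s, t):
    n = len(s)
    return [[max(s[i][k] + t[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def gamma(t):
    """Gamma(t) = t v t^2 v ... v t^n."""
    n = len(t)
    p, g = t, [row[:] for row in t]
    for _ in range(n - 1):
        p = maxplus_mult(p, t)
        g = [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(g, p)]
    return g

# A definite template: every circuit product is <= 0 and the two
# diagonal (one-node) circuits attain 0.
t = [[0, -1], [-2, 0]]
G = gamma(t)
eigennodes = [j for j in range(len(t)) if G[j][j] == 0]

# the fundamental eigenimage attached to the first eigennode, and its
# image under the template operation
a = G[eigennodes[0]]
image = [max(a[i] + t[i][j] for i in range(len(t))) for j in range(len(t))]
```

Here `image == a`, verifying a ⊞ t = a with eigenvalue 0, as Lemma 3.52 asserts.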

3.4.2. Eigenspaces

If t ∈ (F_∞^X)^X is definite, let {a^{j_1}, ..., a^{j_k}} be a maximal set of non-equivalent fundamental eigenimages of t. The space < a^{j_1}, ..., a^{j_k} > generated by these eigenimages is called the eigenspace of t.

Theorem 3.54. Let t ∈ (F_∞^X)^X be given. If the eigenproblem for t is finitely soluble then every finite eigenimage has the same unique corresponding finite eigenvalue λ. The template t ⊞ (−λ) is definite, and all finite eigenimages of t lie in the eigenspace of t ⊞ (−λ). The non-equivalent fundamental eigenimages which generate this space have the property that no one of them is linearly dependent on (any subset of) the others.

The unique scalar λ in Theorem 3.54, when it exists, is called the principal eigenvalue of t.

We call a bounded l-group F_∞ radicable if for each a ∈ F and integer k ≥ 1, there exists a unique f ∈ F such that f^k = a.

Some examples of radicable bounded l-groups are R_∞, Q_∞, and R₊∞. However, Z_∞ is not radicable. Choosing a = 12 and k = 5, solving for f in the equation

f⁵ = 12

is just solving for f in (using regular arithmetic)

5f = 12,

which, of course, has no integral solution.

Let F_∞ be a radicable bounded l-group, and t ∈ (F_∞^X)^X. Let α = y_0, y_1, ..., y_m be a circuit in A(t). We define the length of α to be m. For each circuit α in A(t), of length l(α) and having circuit product p(α), we define a circuit mean μ(α) ∈ F_∞ by

[μ(α)]^{l(α)} = p(α).

We also define

λ(t) = ∨ { μ(α) : α is a simple circuit in A(t) }.

For the template and associated graph A(t) in Figure 8, we have the following computations.

Simple Circuit α    p(α)    l(α)    μ(α)
(1,1)                 4       1       4
(2,2)                −1       1      −1
(3,3)                 7       1       7
(1,2,1)               5       2      5/2
(2,3,2)              −∞       2      −∞
(3,1,3)              −∞       2      −∞
(1,2,3,1)             1       3      1/3
(3,2,1,3)            −∞       3      −∞

Figure 8. Computation of the Circuit Mean μ(α).

In this example, λ(t) = 7.
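The computation in Figure 8 can be reproduced by brute force. The template values below are a hypothetical reconstruction (the printed entries are not fully recoverable) chosen to match the tabulated circuit products; enumerating all simple circuits then yields λ(t) = 7.

```python
from itertools import permutations

NEG = float('-inf')

# Hypothetical template values consistent with the circuit products
# tabulated in Figure 8.
t = [[4, 3, NEG],
     [2, -1, 0],
     [-2, NEG, 7]]

def lambda_t(t):
    """lambda(t): the largest circuit mean mu(alpha) = p(alpha)/l(alpha)
    over all simple circuits alpha in A(t).  Each circuit is visited
    once per starting vertex, which does not change the maximum."""
    n = len(t)
    best = NEG
    for L in range(1, n + 1):
        for cyc in permutations(range(n), L):
            p = sum(t[cyc[i]][cyc[(i + 1) % L]] for i in range(L))
            best = max(best, p / L)
    return best
```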

3.4.3. Solutions to the Eigenproblem

We now present the relation between the parameter λ(t) and the principal eigenvalue for t.

Theorem 3.55. Let F_∞ be a radicable bounded l-group and let t ∈ (F_∞^X)^X be given. If the eigenproblem for t is finitely soluble then λ(t) is finite and, in this case, λ(t) is the only possible value for the eigenvalue in any finite solution to the eigenproblem for t. That is, λ(t) is the principal eigenvalue of t.

Theorem 3.56. Let F₋∞ be a radicable sub-bounded-l-group of R₋∞ and let t ∈ (F₋∞^X)^X be given. Then the eigenproblem for t is finitely soluble if and only if λ(t) is finite and the template B(A) is doubly F-astic, where A = { [Γ(t ⊞ −λ(t))]'_{x_{j_1}}, ..., [Γ(t ⊞ −λ(t))]'_{x_{j_k}} } is a maximal set of non-equivalent fundamental eigenimages for the definite template t ⊞ −λ(t).

The Computational Task. If |X| is large and t ∈ (F_∞^X)^X, then to directly evaluate the circuit product for all simple circuits in t is very time consuming. We now state a theorem which makes the task more manageable for the case where the bounded l-group is R_∞.

Theorem 3.57. Let t ∈ (F_∞^X)^X be given. If the eigenproblem for t is finitely soluble, then λ(t) is the optimal value of λ in the following linear programming problem in the n+1 real variables λ, x_1, ..., x_n:

Minimize λ subject to λ + x_i − x_j ≥ t_{x_i}(x_j),

where the inequality constraint is taken over all pairs i,j for which t_{x_i}(x_j) is finite.
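In practice λ(t) can also be obtained without setting up the linear program explicitly: for F = R it is the maximum circuit mean of A(t), which Karp's cycle-mean algorithm computes in O(n·|E|) time. A sketch on the same hypothetical matrix used for the Figure 8 example (Karp's recurrence, as written here, assumes the graph is strongly connected):

```python
NEG = float('-inf')

# Hypothetical matrix from the Figure 8 example (the thesis' actual
# entries are not fully recoverable from the print).
t = [[4, 3, NEG],
     [2, -1, 0],
     [-2, NEG, 7]]
n = len(t)

# D[k][v] = greatest weight of a walk with exactly k edges ending at v
D = [[0.0] * n]
for _ in range(n):
    prev = D[-1]
    D.append([max(prev[u] + t[u][v] for u in range(n)) for v in range(n)])

# Karp: the maximum circuit mean equals
#   max_v  min_{0 <= k < n}  (D[n][v] - D[k][v]) / (n - k)
lam = max(min((D[n][v] - D[k][v]) / (n - k) for k in range(n))
          for v in range(n))
```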

In Theorem 3.54, we noted the linear independence of the fundamental eigenimages which generate an eigenspace. We are now able to prove a stronger result which has applications to R_∞ and R₊∞.

Theorem 3.58. Let F_∞ be a radicable bounded l-group other than F₃, and let t ∈ (F_∞^X)^X have a finitely soluble eigenproblem. Then the fundamental eigenimages of −λ(t) ⊞ t corresponding to a maximal set of non-equivalent eigennodes in A[−λ(t) ⊞ t] are SLI.

We now present a result relating λ(t) and Inv.

Theorem 3.59. Let F_∞ be a bounded l-group and t ∈ (F_∞^X)^X be such that λ(t) < 0. Then

Inv(1 ∨ t) = 1 ∨ t ∨ t² ∨ ⋯ ∨ t^K

for arbitrarily large K. Here, 1 denotes the identity template of (F_∞^X)^X.


Up until the mid 1960's, the theoretical tools of quantitative microscopy as applied to image analysis were not based on any cohesive mathematical foundation. It was G. Matheron and J. Serra at the École des Mines de Paris who first pioneered the theory of mathematical morphology as a first attempt to unify the underlying mathematical concepts being used for image analysis in microbiology, petrography, and metallography [16,53,54]. Initially its main use was to describe Boolean image processing in the plane, but Sternberg [55] extended the concepts of mathematical morphology to include gray-valued images via the cumbersome notion of an umbra. While others, including Serra [56,57], also extended morphology to gray-valued images in different manners, Sternberg's definitions have been used more regularly and, in fact, are used by Serra in his book [16].

The basis on which morphological theory lies is the two classical operations of Minkowski addition and Minkowski subtraction from integral geometry [13,14]. For any two sets A ⊆ Rⁿ and B ⊆ Rⁿ, Minkowski addition and subtraction are defined as

A ⊕ B = ∪_{b∈B} A_b   and   A ⊖ B = ∩_{b∈B'} A_b,

respectively, where A_b = {a + b : a ∈ A} and B' = {−b : b ∈ B}. We have used the original notation as found in Hadwiger's book [14]. It can be shown that

A ⊖ B = (A^c ⊕ B')^c,

where A^c denotes the complement of A in Rⁿ. From these definitions are constructed the two morphological operations of dilation and erosion. As used by Serra and Maragos [16,21], the dilation of a set A ⊆ Rⁿ by a structuring element B ⊆ Rⁿ is denoted by A ⊕ B' and defined by

A ⊕ B' = ∪_{b∈B'} A_b,

while the erosion of A by B is

A ⊖ B' = ∩_{b∈B'} A_b = (A^c ⊕ B')^c.

We remark that the actual symbols used in Serra's and Maragos' papers for the dilation and erosion differ from those used here; to avoid confusion with the image algebra operations, we have substituted the symbols ⊕ and ⊖ for them.
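The duality A ⊖ B = (A^c ⊕ B')^c can be checked directly on finite sets. A one-dimensional sketch, with the complement taken inside a finite universe U chosen large enough that no boundary effects intrude:

```python
def mink_add(A, B):
    """Minkowski addition of finite sets of integers (1-D for brevity)."""
    return {a + b for a in A for b in B}

def mink_sub(A, B):
    """Minkowski subtraction: the intersection of the translates A_b
    taken over b in B' = {-b : b in B}."""
    return set.intersection(*[{a - b for a in A} for b in B])

# Duality check A (-) B = (A^c (+) B')^c inside a finite universe U.
U = set(range(-2, 9))
A, B = {1, 2, 3, 4}, {0, 1}
Bp = {-b for b in B}
lhs = mink_sub(A, B)
rhs = U - mink_add(U - A, Bp)
```

Both sides come out to {1, 2, 3}: the points of A whose translate by every element of B stays inside A.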

To avoid anomalies without practical interest, the structuring element B is assumed to include the origin 0 ∈ Rⁿ, and both A and B are assumed to be compact. Unfortunately, the definitions of dilation and erosion given by Serra are not the same as the Minkowski operations. In addition, while Maragos uses the same definitions as Serra for dilation and erosion, Maragos [21] uses the identical symbols when defining Minkowski addition and subtraction. To add to the confusion, Sternberg defines an erosion and dilation, using the same symbols, which are exactly the Minkowski operations [58]. The following table lists the three definitions. In all cases, A_b = {a + b : a ∈ A}, B' = {−b : b ∈ B}, and A^c denotes the complement of A in Rⁿ.

Table 2.

Thus we see that while Sternberg's dilation of A by B is exactly Minkowski's addition of A and B, Serra's dilation of A by B is Minkowski's addition of A and B'. Although both definitions of erosion of A by B are equivalent to Minkowski's subtraction of A and B, Serra uses the symbol B' while Sternberg uses simply B. For the remainder of this chapter we will use Sternberg's definitions of dilation and erosion.

All morphological transformations are combinations of dilations and erosions, such as the opening of A by B, denoted by A ∘ B,

A ∘ B = (A ⊖ B) ⊕ B,

and the closing of A by B, denoted by A • B,

A • B = (A ⊕ B) ⊖ B.

However, a more general image transform in mathematical morphology is the Hit or Miss transform [54,53]. Since an erosion, and hence a dilation, is a special case of the Hit or Miss

             addition                       subtraction

Minkowski    A ⊕ B = ∪_{b∈B} A_b           A ⊖ B = ∩_{b∈B'} A_b = (A^c ⊕ B')^c

             dilation of A by B            erosion of A by B

Serra        A ⊕ B' = ∪_{b∈B'} A_b         A ⊖ B' = ∩_{b∈B'} A_b = (A^c ⊕ B')^c

             dilation of A by B            erosion of A by B

Sternberg    A ⊕ B = ∪_{b∈B} A_b           A ⊖ B = ∩_{b∈B'} A_b = (A^c ⊕ B')^c

transform, this transform is often viewed as the universal morphological transformation upon which the theory of mathematical morphology is based. Let B = (D,E) be a pair of structuring elements. Then the Hit or Miss transform of the set A is given by the expression

A ⊛ B = { a : D_a ⊆ A, E_a ⊆ A^c }.

For practical applications it is assumed that D ∩ E = ∅. The erosion of A by D is obtained by simply letting E = ∅, in which case we have A ⊛ B = A ⊖ D.
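A set-level sketch of the Hit or Miss transform on Z², with the complement taken within a finite universe, since a computer cannot form the complement of A in all of Z²:

```python
def translate(S, a):
    """Translate of the structuring element S by the point a."""
    return {(s[0] + a[0], s[1] + a[1]) for s in S}

def hit_or_miss(A, D, E, universe):
    """A (*) B for B = (D, E): keep the points a for which the translate
    of D lies inside A and the translate of E lies inside the complement
    of A, the complement being taken within the finite universe."""
    A = set(A)
    comp = set(universe) - A
    return {a for a in universe
            if translate(D, a) <= A and translate(E, a) <= comp}
```

With E = ∅ the condition on E is vacuous and the transform reduces to the erosion of A by D, as noted above.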

While there have been several extensions of the Boolean dilation to the gray level case, Sternberg's formulas for computing the gray value erosion and dilation are the most straightforward, although the underlying theory introduces the somewhat extraneous concept of an umbra. Let f: Rⁿ → R be a function. Then the umbra of f, denoted by U(f), is the set U(f) ⊆ R^{n+1} defined by

U(f) = { p = (x,z) ∈ R^{n+1} : z ≤ f(x) }.

Again, the notion of an unbounded set is exhibited in this definition, for in general the value z can approach −∞. Since U(f) ⊆ R^{n+1}, the dilation of two functions f and g is defined through the dilation of their umbras,

U(f ⊕ g) = U(f) ⊕ U(g),

and similarly the erosion of f by g,

U(f ⊖ g) = U(f) ⊖ U(g).

Any function d: Rⁿ → R has the property that d(x) = max { z ∈ R : (x,z) ∈ U(d) }, and thus the set U(f ⊕ g) well-defines the function f ⊕ g. However, when actually calculating the new functions d = f ⊕ g and e = f ⊖ g, Sternberg gives the following formulae for the two-dimensional dilation and erosion, respectively:

d(x,y) = max_{(i,j)} [ f(x − i, y − j) + g(i,j) ]   (4-1)

e(x,y) = min_{(i,j)} [ f(x − i, y − j) − g(−i,−j) ]   (4-2)

The function f represents the image, and g represents the structuring element. Both f and g are assumed to have finite support, with values of −∞ outside. Also, in general the support of g is much smaller than the coordinate set X, and g(0) ≠ −∞. So in practice, the notion of an umbra need not be introduced at all.
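Formulas (4-1) and (4-2) translate almost verbatim into code. A one-dimensional sketch with images stored as dicts on their finite supports (the substitution i → −i puts (4-2) in the equivalent form e(x) = min_i [f(x + i) − g(i)]):

```python
NEG = float('-inf')

def gray_dilate(f, g):
    """Formula (4-1) in one dimension: d(x) = max_i [f(x - i) + g(i)],
    with f and g stored as dicts on their finite supports."""
    d = {}
    for x in f:
        for i in g:
            d[x + i] = max(d.get(x + i, NEG), f[x] + g[i])
    return d

def gray_erode(f, g):
    """Formula (4-2) in one dimension, rewritten with i -> -i as
    e(x) = min_i [f(x + i) - g(i)]; f is -inf outside its support."""
    locations = {u - i for u in f for i in g}
    return {x: min(f.get(x + i, NEG) - g[i] for i in g) for x in locations}
```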

Note that when applying these transforms to real data, we cannot simply substitute an image a for the set A, as the expression A^c becomes meaningless to a computer. What is actually assumed is that A corresponds to the black pixels in a Boolean image a; that is, given A ⊆ Rⁿ, a coordinate set X ⊆ Rⁿ is chosen and a two-valued image a on X is formed, where 1 and 0 represent the two values:

a(x) = 1 if x ∈ A ⊆ X, and a(x) = 0 otherwise.

For the two-dimensional gray value case, Sternberg's formulas (4-1) and (4-2) are easily written in computer code, and this is, in fact, close to the image algebra definition of dilation. In short, when implementing a problem which is posed in morphological terms, the solution must be reposed in a setting which more closely represents the computing environment. On the other hand, it has been established that the image algebra comes very close to ideally modeling a large number of important image processing problems, such as the mapping of transforms to sequential and parallel architectures ([44] and this dissertation) and the expression of sequential algorithms in a parallel manner [59].

The next part of this chapter is devoted to establishing an isomorphism between the morphological algebra and the image algebra. We will show that performing a dilation is equivalent to calculating

a ⊞ t

for the appropriate a and t, and performing an erosion is equivalent to calculating

a ⊟ t*

for appropriate a and t.

Let A, B be finite subsets of Zⁿ, where B is a structuring element. Let X = Zⁿ, or choose X ⊆ Zⁿ to be a finite set such that A ⊕ B ⊆ X. Let F₄ denote the value set { −∞, 0, 1, +∞ }. Define ξ: 2^{Zⁿ} → F₄^X by ξ(A) = a, where

a(x) = 1 if x ∈ A, and a(x) = 0 otherwise.

Let B = { B ⊆ Zⁿ : |B| < ∞ and 0 ∈ B }, and let T be the set of all F₄-valued translation invariant templates from X to X such that y ∈ S(t_y). Define η: B → T by η(B) = t, where

t_y(x) = 0 if x ∈ B'_y, and t_y(x) = −∞ otherwise.

Lemma 4.1. Let ξ, η be as above. Let A ⊆ Zⁿ, and B ∈ B a structuring element. Then

ξ(A ⊕ B) = ξ(A) ⊞ η(B).

Proof: Choose X large enough such that A ⊕ B ⊆ X. Let D = A ⊕ B and f = ξ(A) ⊞ η(B). We must show that y ∈ D if and only if f(y) = 1. To this end, we note that

y ∈ A ⊕ B ⟺ y ∈ A_b for some b ∈ B ⟺ y = x + b for some x ∈ A, b ∈ B

⟺ x = (−b) + y, −b ∈ B', x ∈ A ⟺ x ∈ A and x = (−b) + y ∈ B'_y

⟺ a(x) = 1 and t_y(x) = 0 ⟺ ∨_{z∈X} [ a(z) + t_y(z) ] = f(y) = 1. ∎

We call a the image corresponding to A, and t the template corresponding to the structuring element B.
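Lemma 4.1 can be exercised numerically: build the characteristic image of A, apply the 0/−∞ template derived from B, and compare with the Minkowski sum. A sketch on Z² (the template value t_y(x) is 0 exactly when x ∈ B'_y, i.e. when y − x ∈ B):

```python
NEG = float('-inf')

def minkowski_add(A, B):
    """A (+) B for finite subsets of Z^2."""
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def dilate_by_template(A, B, X):
    """xi(A) [+] eta(B) evaluated on X: for each y, the maximum of
    a(x) + t_y(x), where a is the 0/1 characteristic image of A and
    t_y(x) = 0 exactly when y - x lies in B (and -inf otherwise)."""
    a = {x: (1 if x in A else 0) for x in X}
    out = {}
    for y in X:
        vals = [a[x]  # a(x) + 0, the only finite template values
                for b in B
                for x in [(y[0] - b[0], y[1] - b[1])]
                if x in a]
        out[y] = max(vals) if vals else NEG
    return out
```

The pixels where the result equals 1 reproduce ξ(A ⊕ B), as the lemma asserts, provided X is chosen large enough to contain A ⊕ B.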

The next lemma shows the correspondence between the ⊟ operation and erosion.

Lemma 4.2. Let ξ, η be as above. Let A ⊆ Zⁿ, and B ∈ B a structuring element. Then

ξ(A ⊖ B) = ξ(A) ⊟ [η(B)]*.

Proof: Let D = A ⊖ B and let c = ξ(A) ⊟ [η(B)]*. We must show that y ∈ D if and only if c(y) = 1.

y ∈ D ⟺ y ∈ A_p ∀ p ∈ B' ⟺ y = x_p + p ∀ p ∈ B',

where the choice of x_p ∈ A depends on p. Let a = ξ(A) and t = η(B). Then c = a ⊟ t* and

c(y) = ∧_{x∈X} [ a(x) +' t*_y(x) ] = ∧_{x∈S₊∞(t*_y)} [ a(x) + t*_y(x) ].

We have

t*_y(x) = −[t_x(y)] = 0 if y ∈ B'_x, and t*_y(x) = +∞ otherwise.

We claim that S₊∞(t*_y) = B_y. To show this, note that

x ∈ S₊∞(t*_y) ⟺ t*_y(x) = 0 ⟺ y ∈ B'_x ⟺

y = p + x for some p ∈ B' ⟺ x = b + y for some b ∈ B ⟺ x ∈ B_y.

Thus

y ∈ D ⟺ y = x_p + p ∀ p ∈ B' ⟺ x_b = b + y ∀ b ∈ B, for some x_b ∈ A

⟺ b + y = x ∈ A ∀ b ∈ B ⟺ B_y = S₊∞(t*_y) ⊆ A (by definition of B_y) ⟺

a(x) = 1 ∀ x ∈ B_y ⊆ A and t*_y(x) = 0 ∀ x ∈ B_y = S₊∞(t*_y) ⟺

∧_{x∈S₊∞(t*_y)} [ a(x) + t*_y(x) ] = 1 = c(y). ∎