Optima
Mathematical Programming Society Newsletter
No. 59, October 1998
Published by the Mathematical Programming Society and the University of Florida, Gainesville, Fla.
Permanent Link: http://ufdc.ufl.edu/UF00090046/00059






Although pattern search methods were introduced
forty years ago, they have recently been the subject
of much renewed interest within the nonlinear
programming community. For those of us who are
new to the recent developments in the convergence
theory for these methods, Virginia Torczon,
Michael Lewis and Michael Trosset have prepared
an overview on why these methods work.
MARY BETH HRIBAR


Why Pattern Search Works


Robert Michael Lewis, NASA Langley Research Center
Virginia Torczon and Michael W. Trosset, College of William & Mary


1 Introduction

Pattern search methods are a class of direct search methods for nonlinear optimization. Since the introduction of the original pattern search methods in the late 1950s and early 1960s [2, 5], they have remained popular with users due to their simplicity and the fact that they work well in practice on a variety of problems. More recently, the fact that they are provably convergent has generated renewed interest in the nonlinear programming community.
The purpose of this article is to describe what pattern search methods are and why they work. Much of our past work on pattern search methods was guided by a desire to unify a variety of existing algorithms and provide them with a common convergence theory. Unfortunately, the unification of this broad class of algorithms requires a technical framework that obscures the features that distinguish pattern search algorithms and make them work. We hope here to give a clearer explanation of these ideas. Space does not allow us to do justice to the history of these methods and all the work relating to them; this will be the subject of a lengthier review elsewhere; for a historical perspective, see [17].*


*This research was supported by the National Aeronautics and Space Administration under NASA Contract No. NAS1-97046 while the authors were in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA 23681-2199.


Contents: out & in chairs 8; conference notes 10; reviews 12; gallimaufry 14








Figure 1: A simple instance of pattern search



2 A Simple Example of Pattern Search

We begin our discussion with a simple instance of a pattern search algorithm for unconstrained minimization: minimize $f(x)$. At iteration $k$, we have an iterate $x_k \in \mathbb{R}^n$ and a step-length parameter $\Delta_k > 0$. Let $e_i$, $i = 1, \dots, n$, denote the standard unit basis vectors. We successively look at the points $x_+ = x_k \pm \Delta_k e_i$, $i = 1, \dots, n$, until we find $x_+$ for which $f(x_+) < f(x_k)$. Fig. 1 illustrates the set of points among which we search for $x_+$ for $n = 2$. This set of points is an instance of what we call a pattern, from which pattern search takes its name. If we find no $x_+$ such that $f(x_+) < f(x_k)$, then we reduce $\Delta_k$ by a half and continue; otherwise, we leave the step-length parameter alone, setting $\Delta_{k+1} = \Delta_k$ and $x_{k+1} = x_+$. In the latter case we can also increase the step-length parameter, say, by a factor of 2, if we feel a longer step might be justified. We repeat the iteration just described until $\Delta_k$ is deemed sufficiently small.
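For concreteness, the following minimal Python sketch implements the iteration just described. It is our illustration rather than code from the article; the function name, the stopping tolerance, and the optional expansion flag are our own choices.

```python
import numpy as np

def compass_search(f, x0, delta=1.0, delta_tol=1e-6, expand=False):
    """Pattern search over the points x +/- delta * e_i, as in Section 2:
    accept the first point giving simple decrease; halve delta when
    no point in the pattern improves on f(x)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while delta > delta_tol:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                f_trial = f(trial)
                if f_trial < fx:          # simple decrease suffices
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                  # no success: shrink the pattern
        elif expand:
            delta *= 2.0                  # optional: try longer steps
    return x, fx

# Example: minimize a smooth quadratic from a poor starting point.
x_star, f_star = compass_search(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2,
                                x0=[5.0, 5.0])
```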
This simple example illustrates two attractive features of pattern search algorithms:
- They can be extremely simple to specify and implement.
- No explicit estimate of the derivative nor anything like a Taylor's series appears in the algorithm. This makes these algorithms useful in situations where derivatives are not available and finite-difference derivatives are unreliable, such as when $f(x)$ is noisy.

These qualities have made pattern search algorithms popular with users. Yet, despite their seeming simplicity and heuristic nature and the fact that they do not have explicit recourse to the derivatives of $f(x)$, pattern search algorithms possess global convergence properties that are almost as strong as those of comparable line-search and trust-region algorithms. In this article we will attempt to explain this perhaps surprising fact.
Before turning to the discussion of how this can be, we note some further features of pattern search which are manifest in this simple example.
- We require only simple decrease in $f(x)$. In fact, we do not even need to know $f(x)$ as a numerical value, provided we can make the assessment that $f(x_+)$ is an improvement on $f(x_k)$.
- If we are lucky, we need only a single evaluation of $f(x)$ in any given iteration. Once we find an $x_+$ for which $f(x_+) < f(x_k)$, we can accept it and proceed. On the other hand, in the worst case we will look in quite a few directions ($2n$, for this example) before we try shorter steps.
- The steps that are allowed are restricted in direction and length. In this example, the steps must lie parallel to the coordinate axes and the length of any step has the form $\Delta_0 / 2^N$ for some integer $N$.

This simple example also suggests that there is a great deal of flexibility in pattern search algorithms, depending on how one specifies the pattern of points to be searched for the next iterate. These features will be recurring themes in our discussion.


3 The General Pattern Search Algorithm

For simplicity, our discussion will focus primarily on the case of unconstrained minimization,
$$\text{minimize } f(x).$$
We assume that $f$ is continuously differentiable, but that information about the gradient of $f$ is either unavailable or unreliable. Since the inception of pattern search methods, various techniques have also been used to apply them to solve the general nonlinear programming problem
$$\text{minimize } f(x) \quad \text{subject to } c(x) \le 0,\ \ell \le x \le u.$$
More recently, pattern search methods specifically designed for constrained problems with an attendant convergence theory have been developed in [6, 9, 8].
The form of a general pattern search algorithm is quite simple and not all that different from any other nonlinear minimization algorithm: first, find a step $s_k$ from the current iterate $x_k$; second, determine if that step is acceptable; and finally, update the critical components of the algorithm. At iteration $k$, pattern search methods will consider steps in directions denoted by $d_k$. We require $d_k$ to be a column of $D_k$, where $D_k$ is an $n \times p_k$ matrix (i.e., $D_k$ represents the set of directions under consideration).

Generalized pattern search:
Given $x_0 \in \mathbb{R}^n$, $f(x_0)$, $D_0 \in \mathbb{R}^{n \times p_0}$, and $\Delta_0 > 0$,
for $k = 0, 1, \dots$ until done do {
  1. Find a step $s_k = \Delta_k d_k$ using the procedure ExploratoryMoves($D_k$, $\Delta_k$).
  2. If $f(x_k + \Delta_k d_k) < f(x_k)$, then $x_{k+1} = x_k + \Delta_k d_k$; otherwise, $x_{k+1} = x_k$.
  3. Update($D_k$, $\Delta_k$).
}


In order to establish convergence results for this class of algorithms, we will, by and by, place additional conditions on $D_k$, the step calculation procedure ExploratoryMoves(), and the update procedure Update(). The analysis reveals that we do not need to explicitly define ExploratoryMoves() or Update(); for the purposes of ensuring convergence it suffices to specify conditions on the results they produce. We refer the interested reader to [16] for specific examples of ExploratoryMoves() and Update() used for some of the more traditional pattern search methods.
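To make the division of labor concrete, here is a sketch of the loop above in Python, with the two procedures passed in as callables. This is our rendering of the framework, under the stated contract on results only; the argument lists and the termination test are our own simplifications.

```python
import numpy as np

def generalized_pattern_search(f, x0, D0, delta0,
                               exploratory_moves, update, delta_tol=1e-6):
    """Generalized pattern search loop from Section 3.
    exploratory_moves(f, x, fx, D, delta) must return a step s drawn
    from the scaled directions delta*D, or the zero step;
    update(D, delta, success) returns the next (D, delta)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    D, delta = D0, delta0
    while delta > delta_tol:
        s = exploratory_moves(f, x, fx, D, delta)   # step 1
        f_trial = f(x + s)
        success = f_trial < fx
        if success:                                 # step 2: simple decrease
            x, fx = x + s, f_trial
        D, delta = update(D, delta, success)        # step 3
    return x, fx
```

The compass search of Section 2 is recovered by polling the directions $\pm e_i$ in ExploratoryMoves() and halving $\Delta_k$ in Update() after an unsuccessful poll.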


4 Global Convergence Analysis

Here we will use global convergence of an optimization algorithm to mean convergence to a stationary point of at least one subsequence of the sequence of iterates produced by the algorithm. A slightly weaker assertion is
$$\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0;$$
this is equivalent to the previous property if the iterates $\{x_k\}$ remain in a bounded set.
Classical analyses of such methods as steepest descent and globalized Newton methods rely in a fundamental way on $\nabla f(x)$ to prove global convergence. Moreover, the technical conditions that make the proof of global convergence for these algorithms possible, such as the Armijo-Goldstein-Wolfe conditions for line-search methods, are actually built into the specification of gradient-based algorithms.
On the other hand, no such technical conditions appear in the description of pattern search algorithms (witness the example in §2). The philosophy of pattern search algorithms (and direct search methods in general) is best described by Hooke and Jeeves [5]:


We use the phrase "direct search" to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time together with a strategy for determining (as a function of earlier results) what the next trial solution will be. The phrase implies our preference, based on experience, for straightforward search strategies which employ no techniques of classical analysis except where there is a demonstrable advantage in doing so.

This passage captures the basic philosophy of the original work on direct search algorithms: an avoidance of the explicit use or approximation of derivatives. Instead, the developers of the original direct search algorithms relied on heuristics to obtain what they considered promising search directions.
Nonetheless, we can prove global convergence results for pattern search methods, even though this class of algorithms was not originally developed with convergence analysis in mind. The analysis does ultimately rely on $\nabla f(x)$; hence the assumption that $f$ is continuously differentiable. But because pattern search methods do not compute or approximate $\nabla f(x)$, the relationship between these algorithms and their convergence analysis is less direct than that for gradient-based algorithms.


4.1 The Ingredients of Global Convergence Analysis

We will now review the ideas that underlie the global convergence analysis of line-search methods for unconstrained minimization in order to compare them with those for pattern search. We focus on line-search methods rather than trust-region methods since the comparisons and contrasts with pattern search are simpler for line-search methods.




It might strike the modern reader as odd that Hooke and Jeeves would question the advantages of employing techniques of "classical analysis" (meaning calculus), given the success of quasi-Newton algorithms. However, direct search methods appeared in the late 1950s and early 1960s, a time at which derivative-based methods were not as efficient as today, and no general convergence analysis existed for any practical optimization algorithm, derivative-based or not. The Armijo-Goldstein-Wolfe conditions [1, 4, 19], which form the basis for designing and analyzing what we now consider to be practical line-search algorithms, were several years in the future; trust-region algorithms [14] were further still.


In order to prove global convergence of a line-search algorithm, at the very least one must show that if the current iterate $x_k$ is not a stationary point, then the algorithm will eventually find an iterate $x_{k+1}$ such that $f(x_{k+1}) < f(x_k)$. This unavoidably leads to the contemplation of the gradient, since the gradient ensures that a direction of descent can be identified: if $x_k$ is not a stationary point of $f$, then any direction within $90°$ of $-\nabla f(x_k)$ is a descent direction. For our purposes, this will prove a crucial, if elementary, observation: one does not need to know the negative gradient in order to improve $f(x)$; one only needs a direction of descent. Then, if one takes a short enough step in that direction, one is guaranteed to find a point $x_{k+1}$ such that $f(x_{k+1}) < f(x_k)$.

Figure 2: Decrease is too small relative to the length of the step

However, descent is not sufficient to ensure convergence: one must also rule out the possibility that the algorithm can simply grind to a halt, converging to a point that is not a stationary point. One begins by requiring at least one search direction to be uniformly bounded away from orthogonality with $-\nabla f(x_k)$. This ensures that the sequence of iterates cannot degenerate into steps along directions that become ever more orthogonal to the gradient while producing an ever diminishing improvement in $f(x)$.
This restriction on the search directions is still not sufficient to prevent the iterates from converging to points that are not stationary points. This unhappy situation can occur in two ways. First, there is the pathology depicted in Fig. 2. The ellipse represents a level set of $f(x)$, which in this case is a convex quadratic. The steps taken are too long relative to the amount of decrease between successive iterates. While the sequence of iterates $\{x_k\}$ produces a strictly decreasing sequence of objective values $\{f(x_k)\}$, the sequence of iterates converges to two nonstationary points.
Figure 3: Decrease is too small relative to the norm of the gradient

The other pathology, depicted in Fig. 3, occurs when the amount of decrease between successive iterates is too small relative to the amount of decrease initially seen in the direction from one iterate to the next. This time the steps between successive iterates become excessively short. This sequence converges to a single point which again is not a stationary point.
These pathologies lead to the second standard element of global convergence analysis: a mechanism that controls the length of the step. Both of the preceding pathologies can be avoided, for instance, by requiring that the amount of decrease in $f(x)$ between successive iterates be "sufficient," where sufficient relates the amount of decrease, the length of the step, and the gradient $\nabla f(x)$. This is the purpose of the Armijo-Goldstein-Wolfe conditions for line-search algorithms: given a suitable descent direction $d_k$, we choose a step length $\Delta_k > 0$ such that for some fixed $\alpha \in (0,1)$ and fixed $\beta \in (\alpha,1)$, $x_{k+1} = x_k + \Delta_k d_k$ satisfies both
$$f(x_{k+1}) \le f(x_k) + \alpha \Delta_k \nabla f(x_k)^T d_k \qquad (1)$$
and
$$\nabla f(x_{k+1})^T d_k \ge \beta\, \nabla f(x_k)^T d_k. \qquad (2)$$
The first condition precludes steps that are too long; the second condition precludes steps that are too short.
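As a small illustration of how (1) and (2) act as guards, here is one way to test a candidate step in Python. This is our sketch, not code from the article; the parameter defaults are conventional choices.

```python
import numpy as np

def satisfies_conditions_1_and_2(f, grad, x, d, step, alpha=1e-4, beta=0.9):
    """Check the sufficient-decrease condition (1) and the curvature
    condition (2) for x_new = x + step * d, with 0 < alpha < beta < 1.
    d is assumed to be a descent direction, so slope < 0."""
    slope = grad(x) @ d                 # directional derivative at x
    x_new = x + step * d
    too_long = f(x_new) > f(x) + alpha * step * slope      # violates (1)
    too_short = grad(x_new) @ d < beta * slope             # violates (2)
    return not too_long and not too_short
```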


5 How Pattern Search Does Its Thing

We can summarize the devices that ensure the global convergence of line-search methods for unconstrained minimization as follows:
1. The choice of a suitably good descent direction.
2. Step-length control:
   (a) a mechanism to avoid steps that are too long, and
   (b) a mechanism to avoid steps that are too short, where long and short refer to the sufficient decrease conditions (1) and (2), respectively.




Figure 4: Examples of a minimal and a maximal positive basis for $\mathbb{R}^2$


These mechanisms, which are explicitly built into line-search algorithms, all depend on information about the gradient. However, pattern search algorithms do not assume such information, and thus do not and cannot enforce such conditions. What, then, ensures the global convergence of pattern search algorithms?
The answer resembles the classical arguments for establishing the global convergence of line-search methods, but necessarily with novel elements. As we shall see, pattern search algorithms are globally convergent because:
1. At each iteration, they look in enough directions to ensure that a suitably good descent direction will ultimately be considered.
2. They possess a reasonable back-tracking strategy that avoids unnecessarily short steps.
3. They otherwise avoid unsuitable steps by restricting the nature of the step allowed between successive iterates, rather than by placing requirements on the amount of decrease realized between successive iterates.
At the heart of the argument lies an unusual twist: we relax the requirement of sufficient decrease and require only simple decrease ($f(x_{k+1}) < f(x_k)$), but we impose stronger conditions on the form the step $s_k$ may take. Furthermore, this trade-off is more than just a theoretical innovation: in practice, it permits useful search strategies that are precluded by the condition of sufficient decrease.


5.1 Pattern Search as a Crypto-Gradient Method

The analysis begins by demonstrating that a search direction not too far from the negative gradient is always available. This is accomplished by considering a set of step directions $D_k$ sufficiently rich that it necessarily includes at least one acceptable descent direction. In the absence of any estimate of $\nabla f(x)$, pattern search algorithms hedge against the fact that $-\nabla f(x)$ could point in any direction.
For the example in §2 the set of directions $D_k$ is $\{\pm e_i,\ i = 1, \dots, n\}$, so the set of prospective next iterates has the simple form $\{x_k \pm \Delta_k e_i,\ i = 1, \dots, n\}$. If a step $s_k = \pm \Delta_k e_i$ producing simple decrease on $f(x_k)$ is found, then $x_{k+1} = x_k \pm \Delta_k e_i$; otherwise, reduce $\Delta_k$ and try again. Other of the original pattern search methods, such as the method of Hooke and Jeeves [5] or coordinate search [13], also include in $D_k$ the directions $\{\pm e_i,\ i = 1, \dots, n\}$.
The analysis in [16] allows for more general conditions on the set of directions. In particular, $D_k$ must contain a set of the form $\{\pm p_i,\ i = 1, \dots, n\}$, where $p_1, \dots, p_n$ is any linearly independent set of vectors. One can allow this set to vary with $k$, so long as one restricts attention to a finite collection of such sets.
The discussion in [18] brought to our attention that even less is required: it suffices that the set of directions $D_k$ contain a positive basis $P_k$ for $\mathbb{R}^n$ [7]. In terms of the theory of positive linear dependence [3], the positive span of a set of vectors $\{a_1, \dots, a_r\}$ is the cone
$$\{a \in \mathbb{R}^n \mid a = c_1 a_1 + \cdots + c_r a_r,\ c_i \ge 0 \text{ for all } i\}.$$
The set $\{a_1, \dots, a_r\}$ is called positively dependent if one of the $a_i$'s is a nonnegative combination of the others; otherwise the set is positively independent. A positive basis is a positively independent set whose positive span is $\mathbb{R}^n$, i.e., a set of generators for a cone that happens to be a vector space. A positive basis must contain at least $n+1$ vectors and can contain no more than $2n$ vectors [3]; we refer to the former as minimal and the latter as maximal; Fig. 4 demonstrates examples of both for $\mathbb{R}^2$.
How do we know that at least one of the directions in $D_k$ is not too orthogonal to the direction of steepest descent, regardless of what $-\nabla f(x_k)$ might be? A proof by picture is given in Fig. 5; see [7] for details. Consider the minimal positive basis $\{(1,1), (1,-1), (-1,0)\}$ depicted in Fig. 5 as directions emanating from $x_k$. Notice that the angles between these vectors are $90°$, $135°$, and $135°$. For any continuously differentiable function $f: \mathbb{R}^2 \to \mathbb{R}$, if $x_k$ is not a stationary point, then $-\nabla f(x_k)$ can be no more than $67.5°$ from one of the vectors in the positive basis, as shown in Fig. 5. Thus, including a positive basis $P_k$ in the set of directions $D_k$ guarantees that we can approximate the negative gradient in a way that cannot be arbitrarily bad. This is the first step towards establishing global convergence.
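The $67.5°$ bound can be checked numerically. The short sketch below (ours, with hypothetical helper names) samples unit vectors in $\mathbb{R}^2$ and records the worst-case angle to the nearest direction of a positive basis.

```python
import numpy as np

def worst_angle_deg(basis, samples=7200):
    """Largest angle (degrees) from any unit vector in R^2 to the
    nearest direction in `basis`, estimated by dense angular sampling."""
    worst = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        g = np.array([np.cos(t), np.sin(t)])          # a possible -grad f
        best_cos = max(g @ b / np.linalg.norm(b) for b in basis)
        worst = max(worst, np.degrees(np.arccos(np.clip(best_cos, -1.0, 1.0))))
    return worst

minimal = [np.array(v) for v in [(1.0, 1.0), (1.0, -1.0), (-1.0, 0.0)]]
maximal = [np.array(v) for v in [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]]
print(worst_angle_deg(minimal))   # approx 67.5, matching Fig. 5
print(worst_angle_deg(maximal))   # approx 45.0 for the maximal basis of Fig. 4
```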


5.2 The Underlying Lattice Structure

As it happens (as it was meant to happen), pattern search methods are restricted in the nature of the steps they take. This ultimately turns out to be the reason pattern search methods can avoid the pathologies illustrated in Fig. 2 and Fig. 3 without enforcing a sufficient decrease condition.


Figure 5: A minimal positive basis for $\mathbb{R}^2$ and the two worst cases for $-\nabla f(x_k)$


Figure 6: Some possible patterns




5.3 Putting It All Together

We return to the general pattern search algorithm:

Generalized pattern search:
Given $x_0 \in \mathbb{R}^n$, $f(x_0)$, $D_0 \in \mathbb{R}^{n \times p_0}$, and $\Delta_0 > 0$,
for $k = 0, 1, \dots$ until done do {
  1. Find a step $s_k = \Delta_k d_k$ using the procedure ExploratoryMoves($D_k$, $\Delta_k$).
  2. If $f(x_k + \Delta_k d_k) < f(x_k)$, then $x_{k+1} = x_k + \Delta_k d_k$; otherwise, $x_{k+1} = x_k$.
  3. Update($D_k$, $\Delta_k$).
}


Let $P_k$ denote the set of candidates for the next iterate (i.e., $P_k = x_k + \Delta_k D_k$, by abuse of notation). We call $P_k$ the pattern, from which pattern search takes its name. Several traditional patterns are depicted in Fig. 6.
Though it does not appear to have been by any conscious design on the part of the original developers of pattern search algorithms, these algorithms produce iterates that lie on a finitely generated rational lattice, as indicated in Fig. 6. More precisely, there exists a set of generators $g_1, \dots, g_r$, independent of $k$, such that any iterate $x_k$ can be written as
$$x_k = x_0 + \Delta_0 \sum_{i=1}^{r} c_i^{(k)} g_i, \qquad (3)$$
where each $c_i^{(k)}$ is rational.
Note that this structural feature means that the set of possible steps, and thus the set of possible iterates, is known in advance and is independent of the actual objective $f(x)$. This is in obvious contrast to gradient-based methods.
Furthermore (and this is significant to the convergence analysis of pattern search), by judicious (but not especially restrictive) choice of the factors by which $\Delta_k$ can be increased or decreased, we can establish the following behavior of these algorithms. Suppose the set $\{x \mid f(x) \le f(x_0)\}$ is bounded. Given any $\Delta_* > 0$, there exists a finite subset (that depends on $\Delta_*$) of the lattice of all possible iterates such that $x_k$ must belong to this subset until $\Delta_k < \Delta_*$. That is, there is only a finite number of distinct values $x_k$ can possibly have until such time as $\Delta_k < \Delta_*$. This means that the only way to obtain an infinite sequence of distinct $x_k$ is to reduce the step-length parameter $\Delta_k$ infinitely often, so that $\liminf_{k \to \infty} \Delta_k = 0$.
This also reveals another role played by the parameter $\Delta_k$: reducing $\Delta_k$ enlarges the set of candidate iterates by allowing us to search over a finer subset of the rational lattice of all possible iterates. This is shown in Fig. 6 and Fig. 7: in these pictures, halving $\Delta_k$ refines the grid over which we are tacitly searching for a minimizer of $f(x)$, while halving the minimum length a step is allowed to have.


Figure 7: The same patterns on a refinement of the grid






The step $s_k$ returned by the ExploratoryMoves() procedure must satisfy two simple conditions:
1. The step returned must be an element of $\Delta_k D_k$.
2. The step $s_k$ must satisfy either $f(x_k + s_k) < f(x_k)$ or $s_k = 0$. Furthermore, $s_k$ may be 0 only if none of the steps in $\Delta_k P_k$ yielded decrease on $f(x_k)$.
The first condition prevents arbitrary steps along arbitrary directions; the second condition is a back-tracking control mechanism that prevents us from taking shorter steps unless it is truly necessary.
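In the skeleton from Section 3, one valid ExploratoryMoves() is a simple poll of the pattern directions; the sketch below (our code, not the article's) returns a step satisfying both conditions.

```python
import numpy as np

def poll_exploratory_moves(f, x, fx, D, delta):
    """Poll the columns of D in order; return the first step delta*d
    giving simple decrease, and the zero step only after every
    direction in the pattern has failed to improve f."""
    for d in D.T:                       # columns of D are the directions
        s = delta * d
        if f(x + s) < fx:
            return s
    return np.zeros_like(x)
```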
As for the procedure Update() for $D_k$ and $\Delta_k$, we are free to make modifications to $D_k$ before the next iteration. Classical pattern search methods typically specify a single $D_k = D$ for all $k$. Others make substantive changes in response to the outcome of the exploratory moves. This is just one of many options to consider when designing a pattern search method, and it leads to a great deal of flexibility in this class of algorithms. There are conditions that must be satisfied to preserve the lattice structure, but these are straightforward to satisfy in practice. The interested reader is referred to [7] for a complete discussion of the technical conditions, and to [16] for a description of some traditional choices.
The rules for updating $\Delta_k$ are also restricted by the need to preserve the algebraic structure of the possible iterates. Historically, the popular choices have been to halve $\Delta_k$ at unsuccessful iterations, and to either leave $\Delta_k$ alone at successful iterations or possibly double it. The convergence analysis leads to other possibilities: we can rescale $\Delta_k$ by $\tau^{w}$, $w \in \{w_0, \dots, w_L\}$, where $\tau$ is a rational number, $w_0, \dots, w_L$ are integers, $L \ge 2$, $w_0 < 0$, and $w_L \ge 0$. This provides at least one option for reducing $\Delta_k$ when back-tracking is called for, and at least one option that does not reduce $\Delta_k$.
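A lattice-preserving Update() is then a one-liner. The sketch below (ours) implements the classical special case $\tau = 2$, $w \in \{-1, 0\}$: halve after an unsuccessful poll, leave $\Delta_k$ alone otherwise.

```python
def update(D, delta, success, tau=2.0, w_fail=-1, w_success=0):
    """Rescale delta by an integer power of a fixed rational tau, the
    restriction that keeps all iterates on the lattice of Eq. (3)."""
    w = w_success if success else w_fail
    return D, delta * tau**w
```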
The proof of convergence now goes like this. Suppose $x_k$ is not a stationary point of $f(x)$. Because at least one of the directions $d$ in $P_k$ is necessarily a descent direction, we can always find an acceptable step $\Delta_k d$ once we reduce $\Delta_k$ sufficiently. Thus, we can always find $x_{k+1}$ with $f(x_{k+1}) < f(x_k)$ for $k$ in some subsequence $K$.
Now, if $\liminf_{k \to \infty} \|\nabla f(x_k)\| \ne 0$, then for some $\varepsilon > 0$, $\|\nabla f(x_k)\| > \varepsilon$ for all $k$. Under this assumption we can show that once $\Delta_k$ is sufficiently small relative to $\varepsilon$ it will no longer be reduced. This is so because one of the directions $d$ in $P_k$ is sufficiently close to $-\nabla f(x_k)$ to be a uniformly good descent direction, and $\|\nabla f(x_k)\|$ is uniformly not too small, so we will have $f(x_k + \Delta_k d) < f(x_k)$ without having to drive $\Delta_k$ to zero. However, if $\liminf_{k \to \infty} \Delta_k = \Delta_* > 0$, then due to the lattice structure of the iterates, there can be only finitely many possible $x_k$, contradicting the fact that we have an infinite subsequence $K$ with $f(x_{k+1}) < f(x_k)$ for all $k \in K$ (assuming $\{x \mid f(x) \le f(x_0)\}$ is bounded). Hence $\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0$.
The correlation between the fineness of the grid of possible iterates and the size of $\Delta_k$ also explains why long steps are not a problem. We have argued above that if $\liminf_{k \to \infty} \Delta_k = 0$, then $\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0$. Now, unless $\liminf_{k \to \infty} \Delta_k = 0$, there can be only a finite number of distinct iterates, and hence only a finite number of long steps (or any type of step, for that matter). Thus even if an infinite number of "bad" long steps are taken (i.e., steps that decrease $f(x)$ but that violate (1)), the mere fact that there are infinitely many distinct iterates means that $\liminf_{k \to \infty} \Delta_k = 0$, and hence $\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0$.


5.4 Observations

This analysis might suggest an interpretation of pattern search as a search over successively finer finite grids. If the finite set of candidates is exhausted without finding a point that improves $f(x)$, then the grid is refined by reducing $\Delta_k$ and the process is repeated.
However, this interpretation is misleading insofar as it suggests that pattern search algorithms are exceedingly inefficient. In practice, pattern search algorithms do not resort to searching over all the points in increasingly fine grids but instead behave more like a steepest descent method. In this sense, the analysis does not reflect the actual behavior of the algorithm. This should not be entirely surprising since, unlike gradient-based methods, the specification of pattern search algorithms does not obviously contain a mechanism designed to guarantee convergence.
The situation is analogous to that of the simplex method in linear programming. Once one establishes that the simplex method cannot cycle, the convergence of the algorithm follows from the fact that there is only a finite number of vertices that the simplex method can visit in its search for a solution. This means that the simplex method could and does have a theoretical worst-case complexity that is exponential, but in practice the simplex method has proven much more efficient than that.
Moreover, the actual behavior of pattern search in any single iteration can be very different than the proof of convergence might be thought to suggest. The search can accept as the next iterate any point in $P_k$ that satisfies the simple decrease condition $f(x_{k+1}) < f(x_k)$. In particular, the algorithm does not necessarily need to examine every point in $\Delta_k P_k$; it need only do so before deciding to reduce $\Delta_k$, which is the worst case.
In the best case, we may need only a single evaluation of $f(x)$ to find an acceptable step. In contrast, in a forward-difference gradient-based method one needs at least $n+1$ evaluations of $f(x)$ (in addition to $f(x_k)$) to find a new iterate: $n$ additional values of $f(x)$ to approximate $\nabla f(x_k)$, and at least one more evaluation of $f(x)$ to decide whether or not to accept a new iterate.
In order to make progress, pattern search requires the eventual reduction of $\Delta_k$. The cost of discovering the necessity of this step is one evaluation of $f(x)$ for each direction defined by the positive basis $P_k$. For a minimal positive basis of $n+1$ elements, this cost is the same as the cost of an unsuccessful quasi-Newton step using a forward-difference approximation of the gradient: $n$ evaluations of $f(x)$ to form the finite-difference approximation to $\nabla f(x_k)$, and the evaluation of $f(x)$ at the rejected $x_+$. On the other hand, following an unsuccessful step in the latter algorithm, one gets to reuse the gradient approximation; it is not clear how best to reuse information from unsuccessful iterations of pattern search in subsequent iterations.


5.5 The Resulting Convergence Results

Let $\mathcal{L} = \{x \mid f(x) \le f(x_0)\}$, and suppose $f$ is $C^1$ on a neighborhood of $\mathcal{L}$.

Theorem. If $\mathcal{L}$ is bounded, then the iterates produced by a pattern search algorithm satisfy
$$\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0.$$
If, in addition, $\lim_{k \to \infty} \Delta_k = 0$; we require $f(x_{k+1}) \le f(x_k + s)$ for all $s \in \Delta_k P_k$, the steps associated with the positive basis; and the columns of $D_k$ are bounded in norm uniformly in $k$, then we have
$$\lim_{k \to \infty} \|\nabla f(x_k)\| = 0.$$




By way of comparison, we obtain the result $\lim_{k \to \infty} \|\nabla f(x_k)\| = 0$ for line-search methods without the assumption that $\mathcal{L}$ is bounded [12]. However, we must also require sufficient decrease between iterates according to (1)-(2), rather than just simple decrease, that is, $f(x_{k+1}) < f(x_k)$.
For trust-region methods, with the assumption that $\nabla f(x)$ is uniformly continuous (but again, without the assumption that $\mathcal{L}$ is bounded), requiring only simple decrease $f(x_{k+1}) < f(x_k)$ suffices to prove that $\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0$, provided the approximation of the Hessian does not grow too rapidly in norm [15]. With a sufficient decrease condition, one obtains the stronger result [11], $\lim_{k \to \infty} \|\nabla f(x_k)\| = 0$. However, for either result $\nabla f(x)$ is used in both the fraction of Cauchy decrease condition on the step and the update of the trust radius.


Thus, under the hypothesis that $\mathcal{L}$ is bounded, the global convergence results for pattern search algorithms are as strong as those for gradient-based methods. This might seem surprising, but it simply reflects just how little one needs to establish global convergence. Pattern search is sufficiently like steepest descent that it works.
This leads to one caveat for users: like steepest descent, pattern search methods are good at improving an initial guess and finding a neighborhood of a local solution, but fast local convergence should not be expected. In general, one can expect only a linear rate of local convergence.


6 Concluding Remarks

We have tried to explain how and why pattern search works while refraining from a detailed description of the convergence analysis. Once one understands the essential ideas, the proof of global convergence is reasonably straightforward, if sometimes tedious. Precisely because pattern search methods have so little analytical information explicitly built into them, it takes some effort to extract an assurance that they actually do work. However, as we have tried to indicate, many of the ideas are familiar from standard analysis of nonlinear programming algorithms. The novelty lies in the restriction of the iterates to a lattice, which allows us to relax the conditions on accepting steps.
The ideas discussed here also appear in the analysis of pattern search methods for constrained minimization [6, 9, 8]. For readers who would like to explore the connections between pattern search methods and gradient-based algorithms in greater detail, we particularly recommend [10].


References

[1] L. ARMIJO, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific Journal of Mathematics, 16 (1966), pp. 1-3.
[2] G.E.P. BOX, Evolutionary operation: A method for increasing industrial productivity, Applied Statistics, 6 (1957), pp. 81-101.
[3] C. DAVIS, Theory of positive linear dependence, American Journal of Mathematics, 76 (1954), pp. 733-746.
[4] A.A. GOLDSTEIN, Constructive Real Analysis, Harper & Row, New York, NY, 1967.
[5] R. HOOKE AND T.A. JEEVES, Direct search solution of numerical and statistical problems, Journal of the Association for Computing Machinery, 8 (1961), pp. 212-229.
[6] R.M. LEWIS AND V.J. TORCZON, Pattern search algorithms for bound constrained minimization, Tech. Rep. 96-20, Institute for Computer Applications in Science and Engineering, Mail Stop 403, NASA Langley Research Center, Hampton, Virginia 23681-2199, March 1996. To appear in SIAM Journal on Optimization.
[7] R.M. LEWIS AND V.J. TORCZON, Rank ordering and positive bases in pattern search algorithms, Tech. Rep. 96-71, Institute for Computer Applications in Science and Engineering, Mail Stop 403, NASA Langley Research Center, Hampton, Virginia 23681-2199, 1996. In revision for Mathematical Programming.
[8] R.M. LEWIS AND V.J. TORCZON, A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds, Tech. Rep. 98-31, Institute for Computer Applications in Science and Engineering, Mail Stop 403, NASA Langley Research Center, Hampton, VA 23681-2199, July 1998. Submitted to SIAM Journal on Optimization.
[9] R.M. LEWIS AND V.J. TORCZON, Pattern search methods for linearly constrained minimization, Tech. Rep. 98-3, Institute for Computer Applications in Science and Engineering, Mail Stop 403, NASA Langley Research Center, Hampton, Virginia 23681-2199, January 1998. To appear in SIAM Journal on Optimization.
[10] S. LUCIDI AND M. SCIANDRONE, On the global convergence of derivative free methods for unconstrained optimization, Tech. Rep. 18-96, DIS, Universita di Roma "La Sapienza," 1996. Submitted to SIAM Journal on Optimization.
[11] J.J. MORÉ, The Levenberg-Marquardt algorithm: implementation and theory, in Numerical Analysis: Proceedings of the biennial conference, Dundee 1977, Lecture Notes in Mathematics no. 630, G.A. Watson, ed., Springer-Verlag, Berlin, 1978, pp. 105-116.
[12] S.G. NASH AND A. SOFER, Linear and Nonlinear Programming, McGraw-Hill, New York, 1996.
[13] E. POLAK, Computational Methods in Optimization: A Unified Approach, Academic Press, New York, 1971.
[14] M.J.D. POWELL, A new algorithm for unconstrained optimization, in Nonlinear Programming, J.B. Rosen, O.L. Mangasarian, and K. Ritter, eds., Academic Press, New York, NY, 1970, pp. 31-65.
[15] M.J.D. POWELL, Convergence properties of a class of minimization algorithms, in Nonlinear Programming 2, O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds., Academic Press, New York, NY, 1975, pp. 1-27.
[16] V. TORCZON, On the convergence of pattern search algorithms, SIAM Journal on Optimization, 7 (1997), pp. 1-25.
[17] V. TORCZON AND M.W. TROSSET, From evolutionary operation to parallel direct search: Pattern search algorithms for numerical optimization, Computing Science and Statistics, 29 (1998), pp. 396-401.
[18] Y. WEN-CI, Positive basis and a class of direct search techniques, Scientia Sinica, Special Issue of Mathematics, 1 (1979), pp. 53-67.
[19] P. WOLFE, Convergence conditions for ascent methods, SIAM Review, 11 (1969), pp. 226-235.

Robert Michael Lewis, Institute for Computer Applications in Science and Engineering (ICASE), Mail Stop 403, NASA Langley Research Center, Hampton, Virginia 23681-2199, U.S.A., buckaroo@icase.edu
Virginia Torczon, Department of Computer Science, College of William & Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795, U.S.A., va@cs.wm.edu
Michael W. Trosset, College of William & Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795, U.S.A., trosset@math.wm.edu

October 12, 1998




Farewell Remarks by the Outgoing MPS Chair

In August 1998, my term as Chair of the Mathematical Programming Society ended. I confess to some relief and some nostalgia. It is the greatest honor of my career to have won election as Chair of the major professional organization for my primary research interest, and I have enjoyed the opportunity to make a difference. It has been a busy three years. My officers and I have made some significant changes that will benefit the Society, and Tom Liebling and his Lausanne Organizing Committee put on a magnificent Symposium during our watch.
My successor, Jean-Philippe Vial, inherits a financially sound society in the process of digesting these changes, but he also inherits a society whose membership has declined by about 10% over the last three years. This problem was the result of an oversight in the registration process for the Lausanne meeting: nonmembers were not offered membership at a reduced rate. Please support the upcoming membership drive by renewing your membership. If you are not a member, then you must be reading this on the MPS web site, and you can join at this site.


We have made two major changes. In a move that I believe is crucial to the continued health of the MPS, we have changed publishers for the journals, MP A&B. The key to this change was our desire to reduce the library subscription price of the journals while keeping member subscription as a membership benefit.
Many of you have told us of battles with cost-conscious librarians to keep our journals in your libraries. The price will roughly halve next year. Springer will publish the series A&B under the same titles and with only slightly modified cover art. Our feeling is that allowing Springer to make a slight change in the cover signals the changes without leading to any confusion that might lead a librarian to think that this is a new journal.
The second move is that SIAM is now handling our member services. We are receiving more services and more reliable services with no increase in cost. For a while, it may not have seemed that way because many renewals went awry. This happened because many of you seemed not to have gotten the cover letter with the ISI mailing of renewal invoices, asking you to renew through SIAM. Still, SIAM seems to have straightened out this problem, even though they had no hand in causing it. In addition, the SIAM staff made a real effort to make our directory accurate.


Of course, we all knew that the previous directories were unusable, but I did not realize, until the new directory came out and I saw the volume of mail generated to correct errors, that ISI seems not to have made changes to the directory even when members notified them. I am confident that the current directory is useful and that next year our directory will be very accurate. We can help this process by providing accurate e-mail addresses on our renewal forms.
Elsewhere in this issue of OPTIMA, Jean-Philippe Vial will write the column that signals the beginning of his term. He will do a fine job, and he will do it with his usual sense of style. Let us resolve to help him all we can.
Finally, let me thank you again for the opportunity to serve the MPS in this capacity, and let me also thank the Council and the Executive Committee, chaired heroically by Steve Wright, for being partners this term. Vice-Chairs Jan Karel Lenstra and Jean-Philippe Vial, as well as Treasurer Clyde Monma, were all I could have wished and more.
JOHN DENNIS




The New MPS Chair






[...] optimization and OR, I was motivated by the belief that quantitative decision-making in business and organizations was a field full of promise. The belief was naive, or at least not based on proper information. The decades that followed did not fulfill these great expectations. Our field certainly has shown much vitality and creativity, and some spectacular applications were achieved, but we must also acknowledge certain failures: unrealistic models, insufficient databases, lack of appropriate software, and drastic hardware limitations. As a result, the business community developed much skepticism about our profession. Optimization and OR requirements diminished or disappeared from MBA curricula at many institutions.
Those bad times seem to be over. I am happy to be starting my duties as Chair at a time of bright prospects for our profession. It is commonplace to credit the hardware revolution as a major cause, but in addition we should note that algorithms progressed at an even greater rate. A third factor may, however, be the one that brings us real support from the business community: spreadsheets and modeling languages nowadays provide an environment in which the users can think and express their problem. They free users from most of the mathematical intricacies and subtleties that we cherish and thereby encourage them to explore new ways of thinking based on models and optimization.
It should be one of our goals to regain our popularity in the business community through a new approach to teaching and consulting. The potential for applications is enormous. In some areas, decision-makers have no alternative other than the best methodology we can provide. This is certainly true of combinatorial problems that are so easy to formulate and so difficult to solve. But I also have in mind some strategic issues, such as measuring the economic impact of an effort to reduce greenhouse gas emission, estimating traffic congestion, or evaluating oligopolistic situations in a deregulated world. In these examples, the concept of equilibrium is probably the only one that gives a grasp to the analysis, though computing equilibria still remains a challenge in many instances.
Equally important for the future of our Society is our ability to promote new applications in engineering. As our former chair pointed out three years ago, engineers often ignore the availability of powerful optimization tools that would improve the design and the control of engineering systems. Optimization is still not fully part of the engineering culture, even though the minimum energy principle is so basic and omnipresent in physics. The failure may be due to insufficient performance of nonlinear solvers in the past. It may also be that engineers are more interested in the design of reliable and robust systems than in obtaining the ultimate with respect to some criterion. We should publicize optimization not as the way to get "the" solution, but rather as an intelligent simulation tool. Optimization often leads to novel and sometimes surprising solutions that contribute to a better knowledge and mastering of engineering systems. The new field of robust optimization also offers great opportunities.
Having a positive attitude towards applications does not mean that we should neglect the research-oriented character of our Society. Development of new theories and new algorithms is essential to the continued vitality of the field, and remains the primary goal of the Society's communications media, in particular, our journals and symposia. Although most of us are not practitioners, making optimization a more and more operational tool should also become our concern. To maintain a proper balance between those two facets of our activity is our challenge. We may be helped in this mission by cooperation with the sister societies SIAM, INFORMS, and IFORS, which have related but different foci.
The previous team initiated some major management innovations, including a new publisher for Mathematical Programming with a lower institutional subscription rate, and a new membership services provider. The change of publishers to Springer will take effect next January, and everyone should feel responsible for making their own organization's library aware of the new lower rate and encouraging them to subscribe. Electronic distribution is also part of the new publisher's program. Membership services have been provided by the SIAM office since the start of 1998. They have been working hard to update the database of members (which, unfortunately, had not been maintained well for some years) and to enhance our Internet presence. Electronic searching of the database is now possible through the MPS web site (http://www.mathprog.org/). In fact, most of your contacts with MPS, including correcting your membership information, renewing membership, perusing OPTIMA online, and gathering information on MPS prizes and upcoming symposia, can be performed through the web site.
The election took place this year and brought in a new Council. Steve Wright (wright@mcs.anl.gov) continues to hold the appointed post of Executive Committee Chair, while the Publications Committee continues to be chaired by Bob Bixby. Our immediate task is to achieve larger diffusion of our journal and to increase membership. Above all, we should maintain the unique character of the Society, most notably its high scientific level and its genuine international flavor. We are particularly looking forward to the meeting in Atlanta in the year 2000, and the Search Committee is actively looking for an attractive and exciting place in 2003.
Please contact me (at chair@mathprog.org or jean-philippe.vial@hec.unige.ch), Steve Wright, or the Executive Committee (at xcom@mathprog.org) with any comments on Society business. The low periodicity of our gatherings favors their quality, but unfortunately, it gives us limited opportunities to exchange views on MPS. Electronic mail partially compensates for this lack of personal contact, so please don't hesitate to use it and let us know what you think.
JEAN-PHILIPPE VIAL
University of Geneva, 102 Bd Carl Vogt, CH-1211 Geneva 4, Switzerland;
Office: +41 22 705 81 24, Fax: +41 22 705 81 04;
e-mail: jean-philippe.vial@hec.unige.ch










































Conference Calendar


) International Conference on Nonlinear Programming and Variational Inequalities
December 15-18, 1998, Hong Kong
URL: http://www.cityu.edu.hk/ma/conference/icnpvi/icnpvi.html
) DIMACS Conference on Semidefinite Programming and Large Scale Discrete Optimization
January 7-9, 1999, Princeton University
URL: http://dimacs.rutgers.edu/Workshops/SemidefiniteProg/index.html
) DIMACS Conference on Algorithm Engineering and Experimentation
January 15-16, 1999, Baltimore, MD
URL: http://dimacs.rutgers.edu/Workshops/Algorithm/
) Tenth Annual ACM-SIAM Symposium on Discrete Algorithms
January 17-19, 1999, Baltimore, Maryland
URL: http://www.siam.org/meetings/da99/
) DIMACS Conference on Large Scale Discrete Optimization in Manufacturing and Transportation
February 8-10, 1999, DIMACS Center, Rutgers
) DIMACS Conference on Mobile Networks and Computing
March 24-26, 1999, DIMACS Center, Rutgers
URL: http://dimacs.rutgers.edu/Workshops/Mobile/index.html
) INFORMS National Meeting
May 2-5, 1999, Cincinnati, OH
URL: http://www.cba.uc.edu/dept/qa/cinforms/
) Sixth SIAM Conference on Optimization
May 10-12, 1999, Atlanta, GA
URL: http://www.siam.org/meetings/op99/index.htm
) 1999 SIAM Annual Meeting
May 12-15, 1999, Atlanta, GA
URL: http://www.siam.org/meetings/an99/index.htm
) Workshop on Continuous Optimization
June 21-26, 1999, Rio de Janeiro
URL: http://www.impa.br/~opt/
) Fourth International Conference on Industrial and Applied Mathematics
July 5-9, 1999, Edinburgh, Scotland
URL: http://www.ma.hw.ac.uk/iciam99/
) 19th IFIP TC7 Conference on System Modelling and Optimization
July 12-16, 1999, Cambridge, England
URL: http://www.damtp.cam.ac.uk/user/na/tc7con




Call for Papers

Seventh Conference on Integer Programming and Combinatorial Optimization
IPCO '99
June 9-11, 1999
TU Graz, Graz, Austria

Conference Approach
This meeting, the seventh in the series of IPCO conferences held every year in which no MPS International Symposium takes place, is a forum for researchers and practitioners working on various aspects of integer programming and combinatorial optimization. The aim is to present recent developments in theory, computation, and applications of integer programming and combinatorial optimization. Topics include, but are not limited to: polyhedral combinatorics; integer programming; cutting planes; branch and bound; geometry of numbers; semidefinite relaxations; matroids and submodular functions; computational complexity; graph and network algorithms; approximation algorithms; on-line algorithms; and scheduling theory and algorithms.
In all these areas, we welcome structural and algorithmic results, revealing computational studies, and novel applications of these techniques to practical problems. The algorithms studied may be sequential or parallel, deterministic or randomized.
During the three days, approximately 36 papers will be presented in a series of sequential (non-parallel) sessions. Each lecture will be 30 minutes long. The Program Committee will select the papers to be presented on the basis of extended abstracts to be submitted as described below.
The proceedings of the conference will be published in the Springer Lecture Notes in Computer Science series and will contain full texts of all presented papers. Copies will be provided to all participants at registration time.

Paper Submission
An extended abstract (up to 10 pages) must be submitted by November 15, 1998. Electronic submissions (in PostScript) are strongly encouraged. Please refer to the Conference web site for further submission instructions.









If an electronic submission is not possible, submit eight copies by regular mail by November 1, 1998. Your submission must include the author's name, affiliation, and e-mail address. Authors will be notified of acceptance or rejection by January 20, 1999. The final full version of the accepted paper, for inclusion in the conference proceedings, is due by March 7, 1999.

Contact Address
Gerhard J. Woeginger, Department of Mathematics, TU Graz, Steyrergasse 30, A-8010 Graz, AUSTRIA.
Fax: (0043) 316 873 5369;
E-mail: ipco99@opt.math.tu-graz.ac.at;
URL: http://www.opt.math.tu-graz.ac.at/ipco99.

Program Committee
Chair: Gerard P. Cornuejols (Carnegie Mellon University); Rainer E. Burkard (TU Graz); Ravi Kannan (Yale University); Rolf H. Moehring (TU Berlin); Manfred Padberg (New York University); David B. Shmoys (Cornell University); Paolo Toth (University of Bologna); and Gerhard J. Woeginger (TU Graz).

Important Dates
Extended abstracts due: November 1, 1998 (hard copy), November 15, 1998 (electronic); Authors notified: January 20, 1999; Final versions received: March 7, 1999; IPCO '99 Graz: June 9-11, 1999.




First Announcement & Call for Papers

Fourth Workshop on Models and Algorithms for Planning and Scheduling Problems
MAPSP '99
June 14-18, 1999

Following three successful workshops at Lake Como, Italy, in 1993, in Wernigerode, Germany, in 1995, and in Cambridge, England, in 1997, the Fourth Workshop on Models and Algorithms for Planning and Scheduling Problems is to be held in Renesse, The Netherlands, June 14-18, 1999. The conference hotel, 'De Zeeuwsche Stromen,' is located in the dunes of Renesse, a beach resort in the province of Zeeland.
The workshop aims to provide a forum for scientific exchange and cooperation in the field of planning, scheduling, and related areas. To maintain the informality of the previous workshops and to encourage discussion and cooperation, there will be a limit of 100 participants and a single stream of presentations.


Contributions on any aspect of scheduling and related fields are welcome.

Conference Organizers
Emile Aarts, Philips Research Laboratories, Eindhoven; Han Hoogeveen, Eindhoven University of Technology; Cor Hurkens, Eindhoven University of Technology; Jan Karel Lenstra, Eindhoven University of Technology; Leen Stougie, Eindhoven University of Technology; and Steef van de Velde, Erasmus University, Rotterdam.

Invited Speakers
Michel Goemans, CORE, Louvain-la-Neuve, Belgium; Martin Grotschel, ZIB, Berlin, Germany; Michael Pinedo, New York University, New York, USA; Lex Schrijver, CWI, Amsterdam, The Netherlands; Eric Taillard, IDSIA, Lugano, Switzerland; Richard Weber, Cambridge University, Cambridge, England; Joel Wein, Polytechnic University, Brooklyn, USA; and Gerhard Woeginger, Technische Universitaet Graz, Austria.

Preregistration
If you are interested in participating, please send an e-mail to mapsp99@win.tue.nl. You will be included in our e-mail list for further notifications. Preregistration does not bear any obligations, but helps us to plan the schedule and keep you informed. In your e-mail please include: last name, first name, affiliation, e-mail address, and whether or not you intend to give a talk.
Presentations will be selected on the basis of a one-page abstract to be submitted no later than March 31, 1999.

Important Dates
July 1, 1998: Announcement and first call for papers; November 1, 1998: Second announcement; March 1, 1999: Deadline for abstract submission; April 1, 1999: Last date of notification of acceptance; May 1, 1999: Last date for early registration.
Registration costs include fee and accommodation, based on double room occupancy. Prices mentioned are tentative. Early registration fee: NLG 800; Late registration fee: NLG 900; Supplement for single room: NLG 125; Beach party: to be announced.
The deadline for early registration is May 1, 1999. To register, please consult the conference web site.

Information Sources
For up-to-date information, consult the conference web site (http://www.win.tue.nl/~mapsp99).


First Announcement

6th Twente Workshop on Graphs and Combinatorial Optimization
26-28 May, 1999
University of Twente, Enschede, The Netherlands

The Twente Workshop on Graphs and Combinatorial Optimization is organized biennially at the Faculty of Mathematical Sciences at the University of Twente. Topics are: graph theory and discrete algorithms (both deterministic and random) and their applications in operations research and computer science.
We try to keep a 'workshop atmosphere' as much as possible, and so far have succeeded in scheduling no more than two presentations in parallel. We also try to keep the costs as low as possible in order to make the workshop particularly accessible to young researchers.
Prospective speakers are asked to submit an extended abstract of their presentation, which will be refereed by a program committee. Your extended abstract should be at least three but not more than four pages and should reach the organizers before March 12, 1999.
The accepted extended abstracts will be collected into a conference volume available at the workshop and published in a volume of Electronic Notes in Discrete Mathematics (ENDM).
The external program committee members include: J.A. Bondy (Lyon); R.H. Mohring (Berlin); R.E. Burkard (Graz); B. Reed (Paris); W.J. Jackson (London); R. Schrader (Cologne); F. Maffioli (Milano); and C. Thomassen (Copenhagen).
A normally refereed special issue of Discrete Applied Mathematics will be devoted to the proceedings of the workshop.


If you are interested in participating in the 6th Twente Workshop, please pre-register now informally. Give your complete postal as well as your e-mail address and indicate whether you would like to give a presentation (ca. 30 min.). If you know the subject and/or title of your presentation, please include that also. You should receive a definite registration form and more detailed information by December 1998.
Further information on the workshop will be available at the web site (http://www.math.utwente.nl/~tw6).
H.J. BROERSMA, U. FAIGLE, C. HOEDE, J.L. HURINK
Faculty of Mathematical Sciences, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands (e-mail: tw6@math.utwente.nl)








Addendum to the Book Review from OPTIMA No. 57

Theory and Algorithms for Linear Optimization: An Interior Point Approach
by C. Roos, T. Terlaky and J.-Ph. Vial
Wiley, Chichester, 1997
ISBN 0-471-95676


Even though the book was altogether a pleasure to read, I complained in this review that "the most prominent topic not addressed is infeasible point methods."
I would like to add that while this is formally true, the authors spend a large section of the book on the skew-symmetric model, which is one possibility to avoid the need for a feasible interior starting point.
From a practical point of view, it is argued that the slightly larger skew-symmetric model is computationally not significantly more expensive than standard feasible methods, but it enjoys additional theoretical properties.
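For readers who have not seen it, here is a minimal sketch of the kind of skew-symmetric system meant (my notation, not necessarily the book's exact formulation). The primal-dual pair $\min\{c^T x : Ax \ge b,\ x \ge 0\}$ and $\max\{b^T y : A^T y \le c,\ y \ge 0\}$ can be folded into the self-dual feasibility problem

\[
\begin{pmatrix} 0 & A & -b \\ -A^T & 0 & c \\ b^T & -c^T & 0 \end{pmatrix}
\begin{pmatrix} y \\ x \\ \tau \end{pmatrix} \ge 0,
\qquad (y, x, \tau) \ge 0.
\]

The coefficient matrix is skew-symmetric, so the problem is its own dual; after embedding it with suitable artificial variables, one obtains a model for which a strictly feasible interior starting point is available by construction, and any solution with $\tau > 0$ yields optimal solutions $x/\tau$ and $y/\tau$ of the original pair.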
In view of all this, my above complaint is somewhat mitigated, and I would like to add that the book in fact handles the issue of starting points for IP methods quite elegantly through the extended skew-symmetric model.
FRANZ RENDL


Gröbner Bases and Convex Polytopes

by Bernd Sturmfels

University Lecture Series, Vol. 8,
American Mathematical Society,
Providence, RI, 1995
ISBN 0-8218-0487-1


This book is a state-of-the-art account of the rich interplay between the combinatorics and geometry of convex polytopes and computational commutative algebra, via the tool of Gröbner bases. It is an essential introduction for those who wish to perform research in this fast-developing, interdisciplinary field. For the math programmer, this book could be viewed as an exposition of the interactions between integer programming and Gröbner bases.
Gröbner bases of polynomial ideals are special generating sets that depend on certain cost vectors. The discovery of an algorithm for their computation in 1965 by Buchberger catapulted Gröbner bases into a central role in computational commutative algebra and algebraic geometry. (Buchberger named them after his thesis advisor, Wolfgang Gröbner.) An implementation can be found in any of the major computer algebra packages: Macaulay, Reduce, Singular, CoCoA, Maple and Mathematica, to name a few.
The book assumes a working knowledge of the basics of Gröbner bases, polyhedral theory and linear programming. The link between polytopes and algebra is made via a special class of ideals called toric ideals, which are prime ideals generated by differences of monomials. The material is organized into fourteen chapters, each of which is followed by exercises (some are in fact research projects) as well as brief historical and bibliographic notes. A number of open problems are posed throughout the book, and the reader is quickly brought from basic definitions to the forefront of current research. There is a great deal of emphasis on computational issues, as is evident from the many algorithms and examples included in the book. Many of these computations challenge the ability of current computer algebra packages (and the user's imagination and ingenuity).
The first three chapters treat general polynomial ideals and introduce most of the tools used in this book. The highlights are the notions of universal Gröbner bases, weight vectors, state polytopes and Gröbner fans. As seen later, these notions play a crucial role in integer programming. The treatment is custom-tailored to the purposes of this book and unique in that it differs from usual presentations of this material. A universal Gröbner basis of an ideal $I$ is a finite subset of the ideal that is a Gröbner basis of $I$ with respect to all weight (cost) vectors. The distinct Gröbner bases of $I$ are in bijection with the vertices of a state polytope of $I$. The normal fan of a state polytope is the Gröbner fan of $I$. Chapter 3 presents algorithms for computing state polytopes, Gröbner fans and universal Gröbner bases.
Chapters 4-9 form the heart of the book and are devoted to toric ideals. In Chapter 4, the reader is introduced to toric ideals, their algebraic properties and complexity results for their Gröbner bases. Given a matrix $A \in \mathbb{Z}^{d \times n}$ of rank $d$, the toric ideal $I_A$ of $A$ is the ideal generated by all polynomials (binomials) of the form $x_1^{u_1} x_2^{u_2} \cdots x_n^{u_n} - x_1^{v_1} x_2^{v_2} \cdots x_n^{v_n}$, where $u, v \in \mathbb{N}^n$ and $Au = Av$. A construction of Graver (1975) provides a useful universal Gröbner basis for a toric ideal, called the Graver basis.
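As a small illustration (my example, not one taken from the book): for the $1 \times 3$ matrix below, each binomial records two non-negative integer vectors with the same image under $A$,

\[
A = (1 \;\; 2 \;\; 3), \qquad
x_1^2 - x_2 \in I_A \;\; (u = (2,0,0),\ v = (0,1,0)), \qquad
x_1 x_2 - x_3 \in I_A \;\; (u = (1,1,0),\ v = (0,0,1)).
\]

Membership follows directly from the definition; nothing is claimed here about these two binomials generating $I_A$.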
Chapter 5 describes three natural problems that can be associated to the linear map $\pi : \mathbb{N}^n \to \mathbb{N}^d$, $x \mapsto Ax$, and its fibers $\pi^{-1}(b)$. (The matrix $A$ is assumed to be in $\mathbb{N}^{d \times n}$ of rank $d$, where $\mathbb{N}$ is the set of non-negative integers.) The first is that of enumerating $\pi^{-1}(b) = \{x \in \mathbb{N}^n : Ax = b\}$, which amounts to finding all non-negative integer solutions to a system of linear equations. The second is that of randomly generating an element of $\pi^{-1}(b)$, or sampling, and the last is to solve the integer program minimize $\{c \cdot x : x \in \pi^{-1}(b)\}$. It is shown how all three of these problems can be solved using the toric ideal of $A$ and its Gröbner bases. In particular, it is shown that the Gröbner basis of the toric ideal of $A$ with respect to the cost vector $c$ is the unique minimal test set for the family of integer programs minimize $\{c \cdot x : x \in \pi^{-1}(b)\}$ obtained by varying $b$.
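To make the test-set statement concrete, here is a minimal Python sketch (my illustration, not code or notation from the book) of how such a set is used. Orient each Gröbner-basis element $g = u - v$ so that $c \cdot g > 0$; each $g$ then satisfies $Ag = 0$, a feasible point is optimal exactly when no $g$ can be subtracted from it without leaving the non-negative orthant, and repeated subtraction drives any feasible point to the optimum.

def augment(x, G, c):
    """Drive a feasible point x to optimality using test-set directions G."""
    improved = True
    while improved:
        improved = False
        for g in G:
            # Each g satisfies A g = 0, so x - g stays in the same fiber,
            # and c . g > 0, so the move strictly lowers the cost.
            assert sum(ci * gi for ci, gi in zip(c, g)) > 0
            y = [xi - gi for xi, gi in zip(x, g)]
            if all(yi >= 0 for yi in y):  # still non-negative, hence feasible
                x, improved = y, True
    return x

# Hypothetical toy data for the knapsack matrix A = (1 2 3) and c = (1, 1, 1);
# each direction satisfies A g = 0 and c . g > 0, but I have not verified
# that this list is the reduced Groebner basis for this A and c.
G = [(2, -1, 0), (3, 0, -1), (1, 1, -1), (0, 3, -2)]
x = [11, 0, 0]                   # feasible for b = 11, with cost 11
print(augment(x, G, (1, 1, 1)))  # prints [0, 1, 3], the optimum of cost 4

The point of the chapter is that Buchberger's algorithm produces such a set once and for all, after which every right-hand side $b$ is handled by this trivial loop.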
Chapter 6 treats the case where $A$ has only one row, which amounts to studying knapsack problems. In this case, the elements of the Graver basis are primitive partition identities.
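For instance (my illustration again, for the hypothetical one-row matrix $A = (1 \;\, 2 \;\, 3)$): the Graver-basis element $(1, -2, 1)$ encodes the identity

\[
1 + 3 = 2 + 2 \quad \longleftrightarrow \quad x_1 x_3 - x_2^2,
\]

and primitivity means that no sub-identity can be split off from the two sides.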
The geometry of the universal Gröbner basis of a toric ideal is discussed in Chapter 7. It is shown that the universal Gröbner basis is precisely the set of all edge directions in the convex hulls of the fibers $\pi^{-1}(b)$ as $b$ varies. The effect of varying the cost function $c$ in the integer programs minimize $\{c \cdot x : x \in \pi^{-1}(b)\}$ ($b$ varies) while keeping $A$ fixed is completely captured by the Gröbner fan of the toric ideal of $A$, making the state polytope of a toric ideal a model for sensitivity analysis in integer programming. Algorithms for computing all of these entities are provided.
Chapter 8 treats the regular triangulations of the point configuration given by the columns of $A$. These are simplicial complexes on this configuration that depend again on cost vectors. The author shows that there is a many-to-one map from the set of all Gröbner bases of the toric ideal of $A$ onto the regular triangulations of the configuration. Regular triangulations are in fact the analogs in linear programming of Gröbner bases in integer programming. If $\Delta_c$ denotes the regular triangulation induced by the cost vector $c$, then the maximal simplices in $\Delta_c$ are precisely the optimal LP bases of the family of linear programs minimize $\{c \cdot x : Ax = b,\ x \ge 0\}$ as $b$ varies. This approach allows a natural view of integer programming as an arithmetic refinement of linear programming.
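A degenerate but possibly helpful illustration (mine, not the book's): for $A = (1 \;\, 2 \;\, 3)$ and $c = (1,1,1)$, the per-unit cost $c_i / a_i$ is smallest for the third column, so the family of linear programs

\[
\min \{\, x_1 + x_2 + x_3 : x_1 + 2x_2 + 3x_3 = b,\; x \ge 0 \,\}
\]

has optimal LP basis $\{3\}$ for every $b > 0$, and the regular triangulation $\Delta_c$ of the cone $[0, \infty)$ consists of the single maximal simplex $\{3\}$. The integer optimum $(0,1,3)$ for $b = 11$ computed in the sketch above is exactly the arithmetic refinement of this LP answer.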
Many of the theoretical notions from the previous chapters are illustrated in Chapter 9 using the node-edge incidence matrices of complete graphs (b-matching problems in integer programming).
The last five chapters of the book deal with advanced topics. Chapter 10 generalizes the notion of initial ideals of toric ideals to $A$-graded algebras. This amounts to abstract integer programming, where one is allowed to declare a unique random point in each fiber as being optimal, as long as the optimal points form an order ideal in $\mathbb{N}^n$. Chapter 11 discusses the role of toric ideals in canonical subalgebra bases. Chapter 12 treats certain advanced computational aspects of toric ideals that employ tools from algebra. In particular, localizations of initial ideals (of a toric ideal) are related to group relaxations of integer programs. Chapter 13 relates toric ideals as defined in this book to those usually found in the algebraic geometry literature. The book concludes in Chapter 14 with three sophisticated point configurations and properties of their toric ideals.
REKHA THOMAS


Local Search in Combinatorial Optimization

Edited by Emile Aarts and Jan Karel Lenstra

Wiley, Chichester
ISBN 0-471-94822-5

"Local Search in Combinatorial Optimization" is the first book I know of that covers under one head many of the interesting aspects of this topic in considerable depth and breadth. At the same time, it finds a fair balance between general theory, methodologies, and applications. The book consists of 13 chapters altogether, each written by leading experts of the respective theme. Fortunately, it is not simply a collection of rather unrelated articles. Due to the editors' editorial experience, their knowledge of the field, their apparent effort in preparing this book, and their careful choice of both the topics and the authors, a rather unique and mostly up-to-date source of theoretical results, different viewpoints, and empirical observations came into being.
Only a formal indication of the interplay between the different chapters is the fair amount of cross-references, the common list of references at the end, as well as the joint author and subject indexes. I have followed with great excitement the sometimes different historical perception of different authors and especially the more or less implicit dispute between them, e.g., between the "advertisers" of some methodology like (artificial) neural networks and the potential users of it. This brings me to one of the big pluses of the book at hand. Without any prejudice (but with some humor), the editors gave room for the description and thorough discussion of algorithmic paradigms which only some years ago caused great irritation between, say, some "pure" combinatorial optimizers on the one side and engineers, practitioners, or scientists from the artificial intelligence community on the other side. In fact, the world of local search has changed dramatically in the last decade, and Aarts and Lenstra's book is a tribute to this development. For one thing, incredible changes in computer technology facilitated testing many algorithmic variants and parameter settings on several large problem instances. On the other hand, the development and the recognition of the importance of new algorithmic concepts like simulated annealing, tabu search, and genetic algorithms have changed the landscape significantly. Local search is no longer synonymous with iterative improvement. It is part of the main intention of the editors and authors of this book to present, review, and discuss the current state and the mathematical foundation of these relatively new concepts, as well as their usefulness for solving typical combinatorial problems.
The book is organized in three parts: the complexity of finding locally optimal solutions; algorithmic concepts to compute local optima that are as good as possible; and the application and refinement of local search methods to diverse combinatorial optimization problems. An introductory chapter written by the editors complements the three parts. It gives a first overview of the scene and lays the notational foundation for the rest of the book. However, these suggestions are not always taken up in the subsequent chapters.
The complexity of finding locally optimal solutions is still not known for quite a few combinatorial problems and associated neighborhoods. In response to this fact, Johnson, Papadimitriou, and Yannakakis introduced in 1988 the complexity class PLS and the concept of a PLS-reduction, which relates the difficulty of finding local optima between different problems. The second chapter, by Mihalis Yannakakis, is at the same time a brilliant, pleasant-to-read introduction to, and a rather up-to-date, in-depth survey of, the complexity class PLS and PLS-complete problems. In particular, he proves a generic problem to be PLS-complete and then presents several illuminating PLS-reductions to popular problems like the graph partitioning problem under the Kernighan-Lin or the swap neighborhood.
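As a reference point for what "finding a local optimum" means here, the following is a minimal Python sketch (my illustration, not code from the book) of iterative improvement for graph partitioning under the swap neighborhood: starting from a balanced bipartition, swap a pair of nodes across the cut whenever this lowers the cut weight, and stop at a swap-local optimum. PLS asks how hard it is to find such a point by any means.

import itertools

def cut_weight(w, A, B):
    """Total weight of the edges crossing the bipartition (A, B)."""
    return sum(w.get((min(u, v), max(u, v)), 0) for u in A for v in B)

def swap_local_opt(w, A, B):
    """Iterative improvement: swap node pairs across the cut while it helps."""
    A, B = set(A), set(B)
    improved = True
    while improved:
        improved = False
        for u, v in itertools.product(tuple(A), tuple(B)):
            A2, B2 = (A - {u}) | {v}, (B - {v}) | {u}
            if cut_weight(w, A2, B2) < cut_weight(w, A, B):
                A, B, improved = A2, B2, True
                break  # restart the sweep from the improved partition
    return A, B

# Tiny hypothetical instance: 4 nodes, edge weights keyed by sorted pairs.
w = {(0, 1): 3, (2, 3): 3, (0, 2): 1, (1, 3): 1}
print(swap_local_opt(w, {0, 2}, {1, 3}))  # a swap-local optimum, cut weight 2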
Yannakakis' chapter on the computational complexity of finding a local optimum by any means (not necessarily by a local search algorithm) is followed by a chapter on the worst- and average-case complexity of a certain class of algorithms, written by Craig Tovey. In contrast to the previous chapter, Tovey does not consider specific combinatorial problems and associated neighborhoods, but rather works in the abstract setting of graphs reflecting (data-independent) neighborhood functions. Consequently, the algorithms are assumed to draw information from the neighborhood graph and from an evaluation oracle for the objective function only. In essentially this setting, the following main results are reviewed and proven. For almost all neighborhood functions, any algorithm to find a local optimum must examine at least a constant fraction of the set of all feasible solutions, in the worst case. In the average case, however, even the standard iterative improvement algorithm visits at most a polylogarithmic number of solutions, as long as the degree of the neighborhood graph is sufficiently small. Note, however, that both the lower bound on the worst-case performance and the upper bound on the average-case behavior live to a good part from the freedom to choose arbitrary objective functions. For more structured functions, our knowledge remains limited.
At this place it is perhaps suitable to briefly remind the reader of Aarts and Lenstra's book of at least some (uncovered) results that actually complement some of the results, remarks, or questions raised in the first three chapters: the adjacency neighborhood on the 0/1-polytope associated with a linear combinatorial optimization problem (LCOP) is the unique minimal exact neighborhood for local search (Savage 1976); the diameter of any d-dimensional 0/1-polytope is at most d (Naddef 1989); if P is an NP-hard LCOP and N is a neighborhood function such that (P, N) is in PLS, then N cannot be exact, unless P = NP (Grötschel & Lovász 1996, Schulz, Weismantel & Ziegler 1995); and testing the optimality of a given solution is NP-hard for any NP-hard LCOP.
However, let us come back to the contents of Aarts and Lenstra's book. Each of Chapters 4 to 7 is devoted to a certain algorithmic paradigm. Chapter 4 is a relatively technical discussion of simulated annealing, with a clear emphasis on the stimulating modeling of simulated annealing algorithms by Markov chains. First, the basics of the theory of Markov chains are carefully introduced. Then, the authors elaborate on the results on the probabilistic convergence of simulated annealing algorithms to the set of optimal solutions. Practical issues like the choice of the cooling schedule, the design of parallel algorithms together with the use of neural network models, and the combination of different approaches are covered as well.
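Since the chapter's Markov-chain analysis starts from the basic algorithm, a minimal sketch may help readers who have not seen it (mine, with a hypothetical geometric cooling schedule; the book treats the choice of schedule with far more care):

import math
import random

def simulated_annealing(cost, neighbor, x, t=10.0, alpha=0.999, steps=5000):
    """Metropolis-style annealing: accept uphill moves with prob exp(-delta/t)."""
    best = x
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y                  # accept the move
            if cost(x) < cost(best):
                best = x
        t *= alpha                 # geometric cooling, one schedule among many
    return best

# Toy usage: minimize a bumpy function over the integers.
f = lambda k: (k - 7) ** 2 + 5 * (k % 3)
step = lambda k: k + random.choice((-1, 1))
print(simulated_annealing(f, step, 50))  # usually finds 6, where f(6) = 1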
Alain Hertz, Eric Taillard, and Dominique de Werra survey tabu search in a very concise manner in Chapter 5. The technical details of perhaps the most interesting mathematical result, on probabilistic convergence, are given only in the introductory section and the conclusion. The main part of the text focuses on practical issues that are illustrated by a few selected examples.
In contrast to the two previous chapters, the sixth chapter, on genetic algorithms, contains a wealth of proofs. In a truly remarkable, unusually personal style, Heinz Mühlenbein responds to other, earlier explanations of why and how genetic algorithms work. His main message is to use mathematical methods that were previously developed to explain phenomena in population genetics. Eventually, we come to perhaps the most controversial technique, artificial neural networks. Carsten Peterson and Bo Söderberg garnish their introduction to this area with quite a few examples and some computational results. In any case, I recommend also reading Section 7 of Chapter 8, which helps to put the results obtained by this method into a better perspective.
Each of the six remaining chapters studies the application of one or more of the previously introduced algorithmic techniques to a specific class of combinatorial optimization problems. David Johnson and Lyle McGeoch investigate the traveling salesman problem; in the broader context of vehicle routing, Michel Gendreau, Gilbert Laporte, and Jean-Yves Potvin also consider all concepts introduced in the second part of the book, whereas extensions of edge-exchange neighborhoods are discussed by Gerard Kindervater and Martin Savelsbergh; Edward Anderson, Celia Glass, and Chris Potts review local search algorithms in the wide field of machine scheduling; applications of simulated annealing, tabu search, and genetic algorithms to the different phases in VLSI layout are discussed by Emile Aarts, Peter van Laarhoven, C. Liu, and Peichen Pan; finally, Iiro Honkala and Patric Östergård describe the use of local search methods to design good error-correcting and covering codes. It would be almost unfair to highlight any of these chapters, as all of them are thoroughly and thoughtfully prepared. However, three chapters nevertheless deserve a short special mention. First, the chapter on the TSP is a shining example of a carefully designed comparison of the performance of different algorithmic techniques and their implementations. Second, the chapter on machine scheduling captivates by its organization; the authors first nicely extract the common features from the several algorithmic approaches before they actually start considering particular scheduling problems. Third, the really self-contained chapter on VLSI layout is convincing with its scholarly introduction to both the area of layout problems and the local search methods employed to solve them.
I am pleased to report that the 512 pages which make up the book contain only relatively few errors and typos, and only a few of them are annoying (wrong running times, wrong dates, wrong variable names or indices). Another nice and important feature of this book is that although the authors' own contributions have helped to shape the field in the last decades, most chapters are not merely a summary of the authors' own research. Still, some readers might miss a (more detailed) discussion of one or the other quite related topic, like Kalai's bound on the diameter of polytopes, Amenta and Ziegler's deformed products, abstract objective functions, or test sets in integer programming. However, as an editor, one has to make a choice.
In summary, this book is a very useful source for researchers and graduate students of quite a range of fields. It gives local search, and especially the modern concepts at which some people still smile, the right (mathematical) standing.
-ANDREAS S. SCHULZ


gallimaufry


The editorial board has been very mobile lately, both in terms of addresses and activities.

* Karen Aardal is spending the fall at the Department of CAAM, Rice University.

* Sebastian Ceria has started a new company called Dash Optimization, Inc., but continues to hold a position at Columbia University.

* Mary Beth Hribar moved to Seattle and is now working for Tera Computer Company. Her new address appears on p.16.

* Robert Weismantel got a position as professor at the University of Magdeburg. His new address appears on p.16 as well.

Deadline for the next issue of OPTIMA is November 30, 1998.


http://www.ise.ufl.edu/~optima










Application for Membership


I wish to enroll as a member of the Society.


Mail to:
Mathematical Programming Society
3600 University City Sciences Center
Philadelphia PA 19104-2688 USA

My subscription is for my personal use and not for the benefit of any library or institution.

[ ] I will pay my membership dues on receipt of your invoice.
[ ] I wish to pay by credit card (Master/Euro or Visa).


CREDIT CARD NO.


EXPIRY DATE


FAMILY NAME


MAILING ADDRESS


TELEPHONE NO. TELEFAX NO.

EMAIL

SIGNATURE


Cheques or money orders should be made payable to The Mathematical Programming Society, Inc. Dues for 1998, including subscription to the journal Mathematical Programming, are US $70.
Student applications: Dues are one-half the above rate. Have a faculty member verify your student status and send application with dues to above address.


Faculty verifying status


Institution





O P T I M A
MATHEMATICAL PROGRAMMING SOCIETY
UNIVERSITY OF FLORIDA


Center for Applied Optimization
371 Weil
PO Box 116595
Gainesville FL 32611-6595 USA


FIRST CLASS MAIL


EDITOR:
Karen Aardal
Department of Computer Science
Utrecht University
PO Box 80089
3508 TB Utrecht
The Netherlands
e-mail: aardal@cs.ruu.nl
URL: http://www.cs.ruu.nl/staff/aardal.html

AREA EDITOR, CONTINUOUS OPTIMIZATION:
Mary Beth Hribar
Tera Computer Company
2815 Eastlake Ave. E.
Seattle, WA 98102
USA
e-mail: marybeth@tera.com


AREA EDITOR, DISCRETE OPTIMIZATION:
Sebastian Ceria
417 Uris Hall
Graduate School of Business
Columbia University
New York, NY 10027-7004
USA
e-mail: sebas@cumparsita.gsb.columbia.edu
URL: http://www.columbia.edu/~sc244/

BOOK REVIEW EDITOR:
Robert Weismantel
Universität Magdeburg
Fakultät für Mathematik
Universitätsplatz 2
D-39106 Magdeburg
Germany
e-mail: weismant@math.uni-magdeburg.de


Donald W. Hearn, FOUNDING EDITOR
Elsa Drake, DESIGNER
PUBLISHED BY THE
MATHEMATICAL PROGRAMMING SOCIETY &
GATOR ENGINEERING PUBLICATION SERVICES
University of Florida

Journal contents are subject to change by the publisher.



